openstacksdk-0.11.3/0000775000175100017510000000000013236151501014306 5ustar zuulzuul00000000000000openstacksdk-0.11.3/PKG-INFO0000664000175100017510000002143213236151501015405 0ustar zuulzuul00000000000000
Metadata-Version: 1.1
Name: openstacksdk
Version: 0.11.3
Summary: An SDK for building applications to work with OpenStack
Home-page: http://developer.openstack.org/sdks/python/openstacksdk/
Author: OpenStack
Author-email: openstack-dev@lists.openstack.org
License: UNKNOWN
Description-Content-Type: UNKNOWN
Description:
openstacksdk
============

openstacksdk is a client library for building applications to work with
OpenStack clouds. The project aims to provide a consistent and complete set
of interactions with OpenStack's many services, along with complete
documentation, examples, and tools.

It also contains an abstraction interface layer. Clouds can do many things,
but there are probably only about 10 of them that most people care about with
any regularity. If you want to do complicated things, the per-service
oriented portions of the SDK are for you. However, if what you want is to be
able to write an application that talks to clouds no matter what crazy
choices the deployer has made in an attempt to be more hipster than their
self-entitled narcissist peers, then the Cloud Abstraction layer is for you.

A Brief History
---------------

.. TODO(shade) This history section should move to the docs. We can put a
   link to the published URL here in the README, but it's too long.

openstacksdk started its life as three different libraries: shade,
os-client-config and python-openstacksdk.

``shade`` started its life as some code inside of OpenStack Infra's
`nodepool`_ project, and as some code inside of the
`Ansible OpenStack Modules`_. Ansible had a bunch of different OpenStack
related modules, and there was a ton of duplicated code.
Eventually, between refactoring that duplication into an internal library,
and adding the logic and features that the OpenStack Infra team had developed
to run client applications at scale, it turned out that we'd written
nine-tenths of what we'd need to have a standalone library.

Because of its background from nodepool, shade contained abstractions to work
around deployment differences and is resource oriented rather than service
oriented. This allows a user to think about Security Groups without having to
know whether Security Groups are provided by Nova or Neutron on a given
cloud. On the other hand, as an interface that provides an abstraction, it
deviates from the published OpenStack REST API and adds its own opinions,
which may get in the way of more advanced users with specific needs.

``os-client-config`` was a library for collecting client configuration for
using an OpenStack cloud in a consistent and comprehensive manner, which
introduced the ``clouds.yaml`` file for expressing named cloud
configurations.

``python-openstacksdk`` was a library that exposed the OpenStack APIs to
developers in a consistent and predictable manner.

After a while it became clear that there was value in both the high-level
layer that contains additional business logic and the lower-level SDK that
exposes services and their resources faithfully and consistently as Python
objects. Even with both of those layers, it is still beneficial at times to
be able to make direct REST calls and to do so with the same properly
configured `Session`_ from `python-requests`_.

This led to the merge of the three projects. The original contents of the
shade library have been moved into ``openstack.cloud`` and os-client-config
has been moved into ``openstack.config``. Future releases of shade will
provide a thin compatibility layer that subclasses the objects from
``openstack.cloud`` and provides different argument defaults where needed
for compatibility.
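The compatibility-layer pattern just described (a thin subclass that changes
only argument defaults while reusing the new implementation) can be sketched
roughly as follows. The class names here are illustrative stand-ins, not the
real shade or ``openstack.cloud`` API:

```python
# Sketch of a thin compatibility shim: the legacy-named class subclasses
# the new implementation and overrides only the argument defaults that
# differed in the old library. NewCloud/LegacyCloud are hypothetical
# names for illustration, not actual SDK classes.


class NewCloud(object):
    """Stands in for an object in the openstack.cloud namespace."""

    def __init__(self, strict=True):
        self.strict = strict


class LegacyCloud(NewCloud):
    """Stands in for the shade compatibility subclass."""

    def __init__(self, strict=False):
        # The old library defaulted to non-strict behavior; the shim
        # preserves that default for existing callers while delegating
        # all actual behavior to the new class.
        super(LegacyCloud, self).__init__(strict=strict)
```

Existing callers of the old class keep their old defaults, while new code
using the new class gets the new behavior from the same implementation.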
Similarly, future releases of os-client-config will provide a compatibility
shim around ``openstack.config``.

.. note::
   The ``openstack.cloud.OpenStackCloud`` object and the
   ``openstack.connection.Connection`` object are going to be merged. It is
   recommended not to write any new code which consumes objects from the
   ``openstack.cloud`` namespace until that merge is complete.

.. _nodepool: https://docs.openstack.org/infra/nodepool/
.. _Ansible OpenStack Modules: http://docs.ansible.com/ansible/latest/list_of_cloud_modules.html#openstack
.. _Session: http://docs.python-requests.org/en/master/user/advanced/#session-objects
.. _python-requests: http://docs.python-requests.org/en/master/

openstack
=========

List servers using objects configured with the ``clouds.yaml`` file:

.. code-block:: python

    import openstack

    # Initialize and turn on debug logging
    openstack.enable_logging(debug=True)

    # Initialize cloud
    conn = openstack.connect(cloud='mordred')

    for server in conn.compute.servers():
        print(server.to_dict())

openstack.config
================

``openstack.config`` will find cloud configuration for as few as one cloud
or as many as you want to put in a config file. It will read environment
variables and config files, and it also contains some vendor-specific
default values so that you don't have to know extra info to use OpenStack:

* If you have a config file, you will get the clouds listed in it
* If you have environment variables, you will get a cloud named `envvars`
* If you have neither, you will get a cloud named `defaults` with base
  defaults

Sometimes an example is nice. Create a ``clouds.yaml`` file:

.. code-block:: yaml

    clouds:
      mordred:
        region_name: Dallas
        auth:
          username: 'mordred'
          password: XXXXXXX
          project_name: 'shade'
          auth_url: 'https://identity.example.com'

Please note: ``openstack.config`` will look for a file called
``clouds.yaml`` in the following locations:

* Current Directory
* ``~/.config/openstack``
* ``/etc/openstack``

More information at
https://developer.openstack.org/sdks/python/openstacksdk/users/config

openstack.cloud
===============

Create a server using objects configured with the ``clouds.yaml`` file:

.. code-block:: python

    import openstack.cloud

    # Initialize and turn on debug logging
    openstack.enable_logging(debug=True)

    # Initialize cloud
    # Cloud configs are read with openstack.config
    cloud = openstack.cloud.openstack_cloud(cloud='mordred')

    # Upload an image to the cloud
    image = cloud.create_image(
        'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

    # Find a flavor with at least 512M of RAM
    flavor = cloud.get_flavor_by_ram(512)

    # Boot a server, wait for it to boot, and then do whatever is needed
    # to get a public ip for it.
    cloud.create_server(
        'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)

Links
=====

* `Issue Tracker `_
* `Code Review `_
* `Documentation `_
* `PyPI `_
* `Mailing list `_
* `Bugs `_

Platform: UNKNOWN
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
openstacksdk-0.11.3/setup.cfg0000666000175100017510000000250613236151501016134 0ustar zuulzuul00000000000000
[metadata]
name = openstacksdk
summary = An SDK for building applications to work with OpenStack
description-file = README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://developer.openstack.org/sdks/python/openstacksdk/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.5

[files]
packages =
    openstack

[entry_points]
console_scripts =
    openstack-inventory = openstack.cloud.cmd.inventory:main

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
warning-is-error = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = openstack/locale
domain = python-openstacksdk

[update_catalog]
domain = python-openstacksdk
output_dir = openstack/locale
input_file = openstack/locale/python-openstacksdk.pot

[extract_messages]
keywords = _ gettext ngettext l_
lazy_gettext mapping_file = babel.cfg output_file = openstack/locale/python-openstacksdk.pot [wheel] universal = 1 [egg_info] tag_build = tag_date = 0 openstacksdk-0.11.3/tools/0000775000175100017510000000000013236151501015446 5ustar zuulzuul00000000000000openstacksdk-0.11.3/tools/keystone_version.py0000666000175100017510000000542513236151340021437 0ustar zuulzuul00000000000000# Copyright (c) 2017 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import openstack.config import pprint import sys import urlparse def print_versions(r): if 'version' in r: for version in r['version']: print_version(version) if 'values' in r: for version in r['values']: print_version(version) if isinstance(r, list): for version in r: print_version(version) def print_version(version): if version['status'] in ('CURRENT', 'stable'): print( "\tVersion ID: {id} updated {updated}".format( id=version.get('id'), updated=version.get('updated'))) verbose = '-v' in sys.argv ran = [] for cloud in openstack.config.OpenStackConfig().get_all_clouds(): if cloud.name in ran: continue ran.append(cloud.name) # We don't actually need a compute client - but we'll be getting full urls # anyway. Without this SSL cert info becomes wrong. 
c = cloud.get_session_client('compute') endpoint = cloud.config['auth']['auth_url'] try: print(endpoint) r = c.get(endpoint).json() if verbose: pprint.pprint(r) except Exception as e: print("Error with {cloud}: {e}".format(cloud=cloud.name, e=str(e))) continue if 'version' in r: print_version(r['version']) url = urlparse.urlparse(endpoint) parts = url.path.split(':') if len(parts) == 2: path, port = parts else: path = url.path port = None stripped = path.rsplit('/', 2)[0] if port: stripped = '{stripped}:{port}'.format(stripped=stripped, port=port) endpoint = urlparse.urlunsplit( (url.scheme, url.netloc, stripped, url.params, url.query)) print(" also {endpoint}".format(endpoint=endpoint)) try: r = c.get(endpoint).json() if verbose: pprint.pprint(r) except Exception: print("\tUnauthorized") continue if 'version' in r: print_version(r) elif 'versions' in r: print_versions(r['versions']) else: print("\n\nUNKNOWN\n\n{r}".format(r=r)) else: print_versions(r['versions']) openstacksdk-0.11.3/tools/nova_version.py0000666000175100017510000000410713236151340020535 0ustar zuulzuul00000000000000# Copyright (c) 2017 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import openstack.config ran = [] for cloud in openstack.config.OpenStackConfig().get_all_clouds(): if cloud.name in ran: continue ran.append(cloud.name) c = cloud.get_session_client('compute') try: raw_endpoint = c.get_endpoint() have_current = False endpoint = raw_endpoint.rsplit('/', 2)[0] print(endpoint) r = c.get(endpoint).json() except Exception: print("Error with %s" % cloud.name) continue for version in r['versions']: if version['status'] == 'CURRENT': have_current = True print( "\tVersion ID: {id} updated {updated}".format( id=version.get('id'), updated=version.get('updated'))) print( "\tVersion Max: {max}".format(max=version.get('version'))) print( "\tVersion Min: {min}".format(min=version.get('min_version'))) if not have_current: for version in r['versions']: if version['status'] == 'SUPPORTED': have_current = True print( "\tVersion ID: {id} updated {updated}".format( id=version.get('id'), updated=version.get('updated'))) print( "\tVersion Max: {max}".format(max=version.get('version'))) print( "\tVersion Min: {min}".format( min=version.get('min_version'))) openstacksdk-0.11.3/extras/0000775000175100017510000000000013236151501015614 5ustar zuulzuul00000000000000openstacksdk-0.11.3/extras/delete-network.sh0000666000175100017510000000107413236151340021106 0ustar zuulzuul00000000000000neutron router-gateway-clear router1 neutron router-interface-delete router1 for subnet in private-subnet ipv6-private-subnet ; do neutron router-interface-delete router1 $subnet subnet_id=$(neutron subnet-show $subnet -f value -c id) neutron port-list | grep $subnet_id | awk '{print $2}' | xargs -n1 neutron port-delete neutron subnet-delete $subnet done neutron router-delete router1 neutron net-delete private # Make the public network directly consumable neutron subnet-update public-subnet --enable-dhcp=True neutron net-update public --shared=True openstacksdk-0.11.3/extras/run-ansible-tests.sh0000777000175100017510000000513413236151340021540 0ustar 
zuulzuul00000000000000#!/bin/bash ############################################################################# # run-ansible-tests.sh # # Script used to setup a tox environment for running Ansible. This is meant # to be called by tox (via tox.ini). To run the Ansible tests, use: # # tox -e ansible [TAG ...] # or # tox -e ansible -- -c cloudX [TAG ...] # or to use the development version of Ansible: # tox -e ansible -- -d -c cloudX [TAG ...] # # USAGE: # run-ansible-tests.sh -e ENVDIR [-d] [-c CLOUD] [TAG ...] # # PARAMETERS: # -d Use Ansible source repo development branch. # -e ENVDIR Directory of the tox environment to use for testing. # -c CLOUD Name of the cloud to use for testing. # Defaults to "devstack-admin". # [TAG ...] Optional list of space-separated tags to control which # modules are tested. # # EXAMPLES: # # Run all Ansible tests # run-ansible-tests.sh -e ansible # # # Run auth, keypair, and network tests against cloudX # run-ansible-tests.sh -e ansible -c cloudX auth keypair network ############################################################################# CLOUD="devstack-admin" ENVDIR= USE_DEV=0 while getopts "c:de:" opt do case $opt in d) USE_DEV=1 ;; c) CLOUD=${OPTARG} ;; e) ENVDIR=${OPTARG} ;; ?) echo "Invalid option: -${OPTARG}" exit 1;; esac done if [ -z ${ENVDIR} ] then echo "Option -e is required" exit 1 fi shift $((OPTIND-1)) TAGS=$( echo "$*" | tr ' ' , ) # We need to source the current tox environment so that Ansible will # be setup for the correct python environment. source $ENVDIR/bin/activate if [ ${USE_DEV} -eq 1 ] then if [ -d ${ENVDIR}/ansible ] then echo "Using existing Ansible source repo" else echo "Installing Ansible source repo at $ENVDIR" git clone --recursive https://github.com/ansible/ansible.git ${ENVDIR}/ansible fi source $ENVDIR/ansible/hacking/env-setup else echo "Installing Ansible from pip" pip install ansible fi # Run the shade Ansible tests tag_opt="" if [ ! 
-z ${TAGS} ] then tag_opt="--tags ${TAGS}" fi # Until we have a module that lets us determine the image we want from # within a playbook, we have to find the image here and pass it in. # We use the openstack client instead of nova client since it can use clouds.yaml. IMAGE=`openstack --os-cloud=${CLOUD} image list -f value -c Name | grep cirros | grep -v -e ramdisk -e kernel` if [ $? -ne 0 ] then echo "Failed to find Cirros image" exit 1 fi ansible-playbook -vvv ./openstack/tests/ansible/run.yml -e "cloud=${CLOUD} image=${IMAGE}" ${tag_opt} openstacksdk-0.11.3/setup.py0000666000175100017510000000200613236151340016021 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. 
# solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) openstacksdk-0.11.3/openstack/0000775000175100017510000000000013236151501016275 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/version.py0000666000175100017510000000120113236151340020331 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pbr.version __version__ = pbr.version.VersionInfo('openstacksdk').version_string() openstacksdk-0.11.3/openstack/network/0000775000175100017510000000000013236151501017766 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/network/version.py0000666000175100017510000000172613236151340022036 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network import network_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = network_service.NetworkService( version=network_service.NetworkService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') openstacksdk-0.11.3/openstack/network/network_service.py0000666000175100017510000000166013236151340023557 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class NetworkService(service_filter.ServiceFilter): """The network service.""" valid_versions = [service_filter.ValidVersion('v2', 'v2.0')] def __init__(self, version=None): """Create a network service.""" super(NetworkService, self).__init__(service_type='network', version=version) openstacksdk-0.11.3/openstack/network/v2/0000775000175100017510000000000013236151501020315 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/network/v2/security_group_rule.py0000666000175100017510000000726313236151340025014 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class SecurityGroupRule(resource.Resource): resource_key = 'security_group_rule' resources_key = 'security_group_rules' base_path = '/security-group-rules' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = False allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'description', 'direction', 'protocol', 'remote_group_id', 'security_group_id', ether_type='ethertype', project_id='tenant_id', ) # Properties #: Timestamp when the security group rule was created. created_at = resource.Body('created_at') #: The security group rule description. description = resource.Body('description') #: ``ingress`` or ``egress``: The direction in which the security group #: rule is applied. For a compute instance, an ingress security group #: rule is applied to incoming ingress traffic for that instance. #: An egress rule is applied to traffic leaving the instance. direction = resource.Body('direction') #: Must be IPv4 or IPv6, and addresses represented in CIDR must match #: the ingress or egress rules. ether_type = resource.Body('ethertype') #: The maximum port number in the range that is matched by the #: security group rule. The port_range_min attribute constrains #: the port_range_max attribute. If the protocol is ICMP, this #: value must be an ICMP type. port_range_max = resource.Body('port_range_max', type=int) #: The minimum port number in the range that is matched by the #: security group rule. 
If the protocol is TCP or UDP, this value #: must be less than or equal to the value of the port_range_max #: attribute. If the protocol is ICMP, this value must be an ICMP type. port_range_min = resource.Body('port_range_min', type=int) #: The ID of the project this security group rule is associated with. project_id = resource.Body('tenant_id') #: The protocol that is matched by the security group rule. #: Valid values are ``null``, ``tcp``, ``udp``, and ``icmp``. protocol = resource.Body('protocol') #: The remote security group ID to be associated with this security #: group rule. You can specify either ``remote_group_id`` or #: ``remote_ip_prefix`` in the request body. remote_group_id = resource.Body('remote_group_id') #: The remote IP prefix to be associated with this security group rule. #: You can specify either ``remote_group_id`` or ``remote_ip_prefix`` #: in the request body. This attribute matches the specified IP prefix #: as the source IP address of the IP packet. remote_ip_prefix = resource.Body('remote_ip_prefix') #: Revision number of the security group rule. *Type: int* revision_number = resource.Body('revision_number', type=int) #: The security group ID to associate with this security group rule. security_group_id = resource.Body('security_group_id') #: Timestamp when the security group rule was last updated. updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/network/v2/qos_minimum_bandwidth_rule.py0000666000175100017510000000246513236151340026311 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class QoSMinimumBandwidthRule(resource.Resource): resource_key = 'minimum_bandwidth_rule' resources_key = 'minimum_bandwidth_rules' base_path = '/qos/policies/%(qos_policy_id)s/minimum_bandwidth_rules' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: The ID of the QoS policy who owns rule. qos_policy_id = resource.URI('qos_policy_id') #: Minimum bandwidth in kbps. min_kbps = resource.Body('min_kbps') #: Traffic direction from the tenant point of view. Valid values: 'egress' direction = resource.Body('direction') openstacksdk-0.11.3/openstack/network/v2/floating_ip.py0000666000175100017510000000635613236151340023177 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network import network_service from openstack import resource class FloatingIP(resource.Resource): name_attribute = "floating_ip_address" resource_name = "floating ip" resource_key = 'floatingip' resources_key = 'floatingips' base_path = '/floatingips' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'description', 'fixed_ip_address', 'floating_ip_address', 'floating_network_id', 'port_id', 'router_id', 'status', 'subnet_id', project_id='tenant_id') # Properties #: Timestamp at which the floating IP was created. created_at = resource.Body('created_at') #: The floating IP description. description = resource.Body('description') #: The fixed IP address associated with the floating IP. If you #: intend to associate the floating IP with a fixed IP at creation #: time, then you must indicate the identifier of the internal port. #: If an internal port has multiple associated IP addresses, the #: service chooses the first IP unless you explicitly specify the #: parameter fixed_ip_address to select a specific IP. fixed_ip_address = resource.Body('fixed_ip_address') #: The floating IP address. floating_ip_address = resource.Body('floating_ip_address') #: Floating IP object doesn't have name attribute, set ip address to name #: so that user could find floating IP by UUID or IP address using find_ip name = floating_ip_address #: The ID of the network associated with the floating IP. floating_network_id = resource.Body('floating_network_id') #: The port ID. port_id = resource.Body('port_id') #: The ID of the QoS policy attached to the floating IP. qos_policy_id = resource.Body('qos_policy_id') #: The ID of the project this floating IP is associated with. project_id = resource.Body('tenant_id') #: Revision number of the floating IP. 
*Type: int* revision_number = resource.Body('revision_number', type=int) #: The ID of an associated router. router_id = resource.Body('router_id') #: The floating IP status. Value is ``ACTIVE`` or ``DOWN``. status = resource.Body('status') #: Timestamp at which the floating IP was last updated. updated_at = resource.Body('updated_at') #: The Subnet ID associated with the floating IP. subnet_id = resource.Body('subnet_id') @classmethod def find_available(cls, session): info = cls.list(session, port_id='') try: return next(info) except StopIteration: return None openstacksdk-0.11.3/openstack/network/v2/qos_rule_type.py0000666000175100017510000000225213236151340023565 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class QoSRuleType(resource.Resource): resource_key = 'rule_type' resources_key = 'rule_types' base_path = '/qos/rule-types' service = network_service.NetworkService() # capabilities allow_create = False allow_get = True allow_update = False allow_delete = False allow_list = True _query_mapping = resource.QueryParameters('type', 'drivers') # Properties #: QoS rule type name. 
    type = resource.Body('type')
    #: List of QoS backend drivers supporting this QoS rule type
    drivers = resource.Body('drivers')


# openstacksdk-0.11.3/openstack/network/v2/quota.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.network import network_service
from openstack import resource


class Quota(resource.Resource):
    resource_key = 'quota'
    resources_key = 'quotas'
    base_path = '/quotas'
    service = network_service.NetworkService()

    # capabilities
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    # Properties
    #: The maximum amount of floating IPs you can have. *Type: int*
    floating_ips = resource.Body('floatingip', type=int)
    #: The maximum amount of health monitors you can create. *Type: int*
    health_monitors = resource.Body('healthmonitor', type=int)
    #: The maximum amount of listeners you can create. *Type: int*
    listeners = resource.Body('listener', type=int)
    #: The maximum amount of load balancers you can create. *Type: int*
    load_balancers = resource.Body('loadbalancer', type=int)
    #: The maximum amount of L7 policies you can create. *Type: int*
    l7_policies = resource.Body('l7policy', type=int)
    #: The maximum amount of networks you can create. *Type: int*
    networks = resource.Body('network', type=int)
    #: The maximum amount of pools you can create. *Type: int*
    pools = resource.Body('pool', type=int)
    #: The maximum amount of ports you can create. *Type: int*
    ports = resource.Body('port', type=int)
    #: The ID of the project these quota values are for.
    project_id = resource.Body('tenant_id', alternate_id=True)
    #: The maximum amount of RBAC policies you can create. *Type: int*
    rbac_policies = resource.Body('rbac_policy', type=int)
    #: The maximum amount of routers you can create. *Type: int*
    routers = resource.Body('router', type=int)
    #: The maximum amount of subnets you can create. *Type: int*
    subnets = resource.Body('subnet', type=int)
    #: The maximum amount of subnet pools you can create. *Type: int*
    subnet_pools = resource.Body('subnetpool', type=int)
    #: The maximum amount of security group rules you can create. *Type: int*
    security_group_rules = resource.Body('security_group_rule', type=int)
    #: The maximum amount of security groups you can create. *Type: int*
    security_groups = resource.Body('security_group', type=int)

    def _prepare_request(self, requires_id=True, prepend_key=False):
        _request = super(Quota, self)._prepare_request(requires_id,
                                                       prepend_key)
        if self.resource_key in _request.body:
            _body = _request.body[self.resource_key]
        else:
            _body = _request.body
        if 'id' in _body:
            del _body['id']
        return _request


class QuotaDefault(Quota):
    base_path = '/quotas/%(project)s/default'

    # capabilities
    allow_retrieve = True
    allow_update = False
    allow_delete = False
    allow_list = False

    # Properties
    #: The ID of the project.
    project = resource.URI('project')


class QuotaDetails(Quota):
    base_path = '/quotas/%(project)s/details'

    # capabilities
    allow_retrieve = True
    allow_update = False
    allow_delete = False
    allow_list = False

    # Properties
    #: The ID of the project.
    project = resource.URI('project')
    #: The maximum amount of floating IPs you can have. *Type: dict*
    floating_ips = resource.Body('floatingip', type=dict)
    #: The maximum amount of health monitors you can create. *Type: dict*
    health_monitors = resource.Body('healthmonitor', type=dict)
    #: The maximum amount of listeners you can create. *Type: dict*
    listeners = resource.Body('listener', type=dict)
    #: The maximum amount of load balancers you can create. *Type: dict*
    load_balancers = resource.Body('loadbalancer', type=dict)
    #: The maximum amount of L7 policies you can create. *Type: dict*
    l7_policies = resource.Body('l7policy', type=dict)
    #: The maximum amount of networks you can create. *Type: dict*
    networks = resource.Body('network', type=dict)
    #: The maximum amount of pools you can create. *Type: dict*
    pools = resource.Body('pool', type=dict)
    #: The maximum amount of ports you can create. *Type: dict*
    ports = resource.Body('port', type=dict)
    #: The ID of the project these quota values are for.
    project_id = resource.Body('tenant_id', alternate_id=True)
    #: The maximum amount of RBAC policies you can create. *Type: dict*
    rbac_policies = resource.Body('rbac_policy', type=dict)
    #: The maximum amount of routers you can create. *Type: dict*
    routers = resource.Body('router', type=dict)
    #: The maximum amount of subnets you can create. *Type: dict*
    subnets = resource.Body('subnet', type=dict)
    #: The maximum amount of subnet pools you can create. *Type: dict*
    subnet_pools = resource.Body('subnetpool', type=dict)
    #: The maximum amount of security group rules you can create. *Type: dict*
    security_group_rules = resource.Body('security_group_rule', type=dict)
    #: The maximum amount of security groups you can create. *Type: dict*
    security_groups = resource.Body('security_group', type=dict)
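The `_prepare_request` override above exists because the Networking API rejects quota update bodies that carry the read-only `id` key (the quota is identified by `tenant_id` instead). A standalone sketch of that pattern, with hypothetical helper names and no openstacksdk dependency:

```python
# Standalone sketch (assumed names, not the SDK implementation) of what
# Quota._prepare_request does: drop the read-only 'id' key, whether the
# payload is wrapped in the 'quota' resource key or sent bare.

def prepare_quota_request(body, resource_key='quota'):
    """Return a copy of ``body`` with the read-only ``id`` removed."""
    body = {k: (dict(v) if isinstance(v, dict) else v)
            for k, v in body.items()}
    # Work on the wrapped body if present, otherwise the bare body.
    inner = body.get(resource_key, body)
    inner.pop('id', None)
    return body

wrapped = prepare_quota_request(
    {'quota': {'id': 'abc', 'floatingip': 10, 'network': 5}})
bare = prepare_quota_request({'id': 'abc', 'port': 50})
```

Either way, the `id` never reaches the wire, which is exactly what the override guarantees for both `PUT /quotas/{project}` shapes.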
# openstacksdk-0.11.3/openstack/network/v2/address_scope.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class AddressScope(resource.Resource):
    """Address scope extension."""
    resource_key = 'address_scope'
    resources_key = 'address_scopes'
    base_path = '/address-scopes'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'name', 'ip_version',
        project_id='tenant_id',
        is_shared='shared',
    )

    # Properties
    #: The address scope name.
    name = resource.Body('name')
    #: The ID of the project that owns the address scope.
    project_id = resource.Body('tenant_id')
    #: The IP address family of the address scope.
    #: *Type: int*
    ip_version = resource.Body('ip_version', type=int)
    #: Indicates whether this address scope is shared across all projects.
    #: *Type: bool*
    is_shared = resource.Body('shared', type=bool)
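Note how `QueryParameters` above maps SDK-side attribute names to the wire names the Networking API expects (`is_shared` becomes `shared`, `project_id` becomes `tenant_id`). A minimal sketch of that translation, using an assumed mapping table rather than the SDK's internals:

```python
# Minimal sketch (assumed names, not the SDK implementation) of the
# query-name translation that resource.QueryParameters performs for
# AddressScope: SDK attribute names in, API query-string keys out.

ADDRESS_SCOPE_QUERY_MAP = {
    'name': 'name',
    'ip_version': 'ip_version',
    'project_id': 'tenant_id',  # SDK name -> wire name
    'is_shared': 'shared',
}

def translate_query(params, mapping):
    """Rewrite SDK-side filter names into API query-string keys."""
    return {mapping[key]: value for key, value in params.items()
            if key in mapping}

query = translate_query({'is_shared': True, 'project_id': 'p1'},
                        ADDRESS_SCOPE_QUERY_MAP)
```

The resulting dict is what ends up appended to `GET /address-scopes` as `?shared=True&tenant_id=p1`.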
# openstacksdk-0.11.3/openstack/network/v2/tag.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack import utils


class TagMixin(object):
    _tag_query_parameters = {
        'tags': 'tags',
        'any_tags': 'tags-any',
        'not_tags': 'not-tags',
        'not_any_tags': 'not-tags-any',
    }

    def set_tags(self, session, tags):
        url = utils.urljoin(self.base_path, self.id, 'tags')
        session.put(url, json={'tags': tags})
        self._body.attributes.update({'tags': tags})
        return self


# openstacksdk-0.11.3/openstack/network/v2/metering_label.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class MeteringLabel(resource.Resource):
    resource_key = 'metering_label'
    resources_key = 'metering_labels'
    base_path = '/metering/metering-labels'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'description', 'name',
        is_shared='shared',
        project_id='tenant_id'
    )

    # Properties
    #: Description of the metering label.
    description = resource.Body('description')
    #: Name of the metering label.
    name = resource.Body('name')
    #: The ID of the project this metering label is associated with.
    project_id = resource.Body('tenant_id')
    #: Indicates whether this label is shared across all tenants.
    #: *Type: bool*
    is_shared = resource.Body('shared', type=bool)


# openstacksdk-0.11.3/openstack/network/v2/security_group.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class SecurityGroup(resource.Resource):
    resource_key = 'security_group'
    resources_key = 'security_groups'
    base_path = '/security-groups'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'description', 'name',
        project_id='tenant_id',
    )

    # Properties
    #: Timestamp when the security group was created.
    created_at = resource.Body('created_at')
    #: The security group description.
    description = resource.Body('description')
    #: The security group name.
    name = resource.Body('name')
    #: The ID of the project this security group is associated with.
    project_id = resource.Body('tenant_id')
    #: Revision number of the security group. *Type: int*
    revision_number = resource.Body('revision_number', type=int)
    #: A list of
    #: :class:`~openstack.network.v2.security_group_rule.SecurityGroupRule`
    #: objects. *Type: list*
    security_group_rules = resource.Body('security_group_rules', type=list)
    #: Timestamp when the security group was last updated.
    updated_at = resource.Body('updated_at')


# openstacksdk-0.11.3/openstack/network/v2/listener.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class Listener(resource.Resource):
    resource_key = 'listener'
    resources_key = 'listeners'
    base_path = '/lbaas/listeners'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'connection_limit', 'default_pool_id', 'default_tls_container_ref',
        'description', 'name', 'project_id', 'protocol', 'protocol_port',
        is_admin_state_up='admin_state_up'
    )

    # Properties
    #: The maximum number of connections permitted for this load balancer.
    #: Default is infinite.
    connection_limit = resource.Body('connection_limit')
    #: ID of default pool. Must have compatible protocol with listener.
    default_pool_id = resource.Body('default_pool_id')
    #: A reference to a container of TLS secrets.
    default_tls_container_ref = resource.Body('default_tls_container_ref')
    #: Description for the listener.
    description = resource.Body('description')
    #: The administrative state of the listener, which is up
    #: ``True`` or down ``False``. *Type: bool*
    is_admin_state_up = resource.Body('admin_state_up', type=bool)
    #: List of load balancers associated with this listener.
    #: *Type: list of dicts which contain the load balancer IDs*
    load_balancer_ids = resource.Body('loadbalancers')
    #: The ID of the load balancer associated with this listener.
    load_balancer_id = resource.Body('loadbalancer_id')
    #: Name of the listener.
    name = resource.Body('name')
    #: The ID of the project this listener is associated with.
    project_id = resource.Body('project_id')
    #: The protocol of the listener, which is TCP, HTTP, HTTPS
    #: or TERMINATED_HTTPS.
    protocol = resource.Body('protocol')
    #: Port the listener will listen to, e.g. 80.
    protocol_port = resource.Body('protocol_port')
    #: A list of references to TLS secrets.
    #: *Type: list*
    sni_container_refs = resource.Body('sni_container_refs')
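The `TagMixin.set_tags` helper defined in tag.py above replaces a resource's entire tag set with one `PUT` to `<base_path>/<id>/tags`, then mirrors the new list locally. A runnable sketch of that request shape, using stub classes (`FakeSession`, `TaggedResource`) that stand in for the real session and resource:

```python
# Runnable sketch of how TagMixin.set_tags drives the tagging API:
# a PUT of the complete tag list to ``<base_path>/<id>/tags`` replaces
# whatever tags the resource had. FakeSession merely records the call.

class FakeSession:
    def __init__(self):
        self.calls = []

    def put(self, url, json=None):
        self.calls.append(('PUT', url, json))

class TaggedResource:
    base_path = '/networks'

    def __init__(self, id):
        self.id = id
        self.tags = []

    def set_tags(self, session, tags):
        url = '%s/%s/tags' % (self.base_path, self.id)
        session.put(url, json={'tags': tags})
        self.tags = list(tags)  # mirror the server-side state locally
        return self

session = FakeSession()
net = TaggedResource('n1').set_tags(session, ['red', 'blue'])
```

Because the full list is sent, passing `[]` is how tags get cleared; there is no per-tag delete in this helper.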
# openstacksdk-0.11.3/openstack/network/v2/subnet_pool.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack.network.v2 import tag
from openstack import resource


class SubnetPool(resource.Resource, tag.TagMixin):
    resource_key = 'subnetpool'
    resources_key = 'subnetpools'
    base_path = '/subnetpools'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'address_scope_id', 'description', 'ip_version', 'is_default',
        'name',
        is_shared='shared',
        project_id='tenant_id',
        **tag.TagMixin._tag_query_parameters
    )

    # Properties
    #: The ID of the address scope associated with the subnet pool.
    address_scope_id = resource.Body('address_scope_id')
    #: Timestamp when the subnet pool was created.
    created_at = resource.Body('created_at')
    #: The length of the prefix to allocate when the cidr or prefixlen
    #: attributes are omitted when creating a subnet. *Type: int*
    default_prefix_length = resource.Body('default_prefixlen', type=int)
    #: A per-project quota on the prefix space that can be allocated
    #: from the subnet pool for project subnets. For IPv4 subnet pools,
    #: default_quota is measured in units of /32. For IPv6 subnet pools,
    #: default_quota is measured in units of /64. All projects that use the
    #: subnet pool have the same prefix quota applied. *Type: int*
    default_quota = resource.Body('default_quota', type=int)
    #: The subnet pool description.
    description = resource.Body('description')
    #: Read-only. The IP address family of the list of prefixes.
    #: *Type: int*
    ip_version = resource.Body('ip_version', type=int)
    #: Whether or not this is the default subnet pool.
    #: *Type: bool*
    is_default = resource.Body('is_default', type=bool)
    #: Indicates whether this subnet pool is shared across all projects.
    #: *Type: bool*
    is_shared = resource.Body('shared', type=bool)
    #: The maximum prefix length that can be allocated from the
    #: subnet pool. *Type: int*
    maximum_prefix_length = resource.Body('max_prefixlen', type=int)
    #: The minimum prefix length that can be allocated from the
    #: subnet pool. *Type: int*
    minimum_prefix_length = resource.Body('min_prefixlen', type=int)
    #: The subnet pool name.
    name = resource.Body('name')
    #: The ID of the project that owns the subnet pool.
    project_id = resource.Body('tenant_id')
    #: A list of subnet prefixes that are assigned to the subnet pool.
    #: The adjacent prefixes are merged and treated as a single prefix.
    #: *Type: list*
    prefixes = resource.Body('prefixes', type=list)
    #: Revision number of the subnet pool. *Type: int*
    revision_number = resource.Body('revision_number', type=int)
    #: Timestamp when the subnet pool was last updated.
    updated_at = resource.Body('updated_at')
    #: A list of associated tags
    #: *Type: list of tag strings*
    tags = resource.Body('tags', type=list, default=[])
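The note on `prefixes` above, that adjacent prefixes are merged and treated as a single prefix, can be illustrated with the standard library alone; `ipaddress.collapse_addresses` performs the same kind of merge Neutron applies to a subnet pool's prefix list:

```python
# Illustration of the "adjacent prefixes are merged" behaviour noted
# above: two adjacent /25s collapse into a single /24, while the
# non-adjacent prefix is left alone.
import ipaddress

prefixes = ['10.0.0.0/25', '10.0.0.128/25', '192.168.1.0/24']
merged = [str(net) for net in ipaddress.collapse_addresses(
    ipaddress.ip_network(p) for p in prefixes)]
```

This is why reading back a subnet pool can return fewer prefixes than were submitted on create.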
# openstacksdk-0.11.3/openstack/network/v2/flavor.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource
from openstack import utils


class Flavor(resource.Resource):
    resource_key = 'flavor'
    resources_key = 'flavors'
    base_path = '/flavors'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'description', 'name', 'service_type',
        is_enabled='enabled')

    # properties
    #: description for the flavor
    description = resource.Body('description')
    #: Sets enabled flag
    is_enabled = resource.Body('enabled', type=bool)
    #: The name of the flavor
    name = resource.Body('name')
    #: Service type to which the flavor applies
    service_type = resource.Body('service_type')
    #: IDs of service profiles associated with this flavor
    service_profile_ids = resource.Body('service_profiles', type=list)

    def associate_flavor_with_service_profile(
            self, session, service_profile_id=None):
        flavor_id = self.id
        url = utils.urljoin(self.base_path, flavor_id, 'service_profiles')
        body = {"service_profile": {"id": service_profile_id}}
        resp = session.post(url, json=body)
        return resp.json()

    def disassociate_flavor_from_service_profile(
            self, session, service_profile_id=None):
        flavor_id = self.id
        url = utils.urljoin(
            self.base_path, flavor_id, 'service_profiles',
            service_profile_id)
        session.delete(url)
        return None
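`associate_flavor_with_service_profile` above issues a `POST` of `{"service_profile": {"id": ...}}` to the flavor's `service_profiles` sub-resource. A sketch of that request shape with a stub session (the `FakeSession` and `associate` helper are illustrative, not SDK names):

```python
# Sketch of the request issued by associate_flavor_with_service_profile:
# POST {"service_profile": {"id": ...}} to /flavors/<id>/service_profiles.
# FakeSession is a stub standing in for a real authenticated session.

class FakeSession:
    def __init__(self):
        self.calls = []

    def post(self, url, json=None):
        self.calls.append(('POST', url, json))

def associate(session, flavor_id, service_profile_id):
    url = '/flavors/%s/service_profiles' % flavor_id
    session.post(url, json={'service_profile': {'id': service_profile_id}})

session = FakeSession()
associate(session, 'f1', 'sp1')
```

Disassociation is the mirror image: a `DELETE` to the same sub-resource path with the profile ID appended, as the method above shows.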
# openstacksdk-0.11.3/openstack/network/v2/segment.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class Segment(resource.Resource):
    resource_key = 'segment'
    resources_key = 'segments'
    base_path = '/segments'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'description', 'name', 'network_id', 'network_type',
        'physical_network', 'segmentation_id',
    )

    # Properties
    #: The segment description.
    description = resource.Body('description')
    #: The segment name.
    name = resource.Body('name')
    #: The ID of the network associated with this segment.
    network_id = resource.Body('network_id')
    #: The type of network associated with this segment, such as
    #: ``flat``, ``geneve``, ``gre``, ``local``, ``vlan`` or ``vxlan``.
    network_type = resource.Body('network_type')
    #: The name of the physical network associated with this segment.
    physical_network = resource.Body('physical_network')
    #: The segmentation ID for this segment. The network type
    #: defines the segmentation model, VLAN ID for ``vlan`` network type
    #: and tunnel ID for ``geneve``, ``gre`` and ``vxlan`` network types.
    #: *Type: int*
    segmentation_id = resource.Body('segmentation_id', type=int)
# openstacksdk-0.11.3/openstack/network/v2/router.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack.network.v2 import tag
from openstack import resource
from openstack import utils


class Router(resource.Resource, tag.TagMixin):
    resource_key = 'router'
    resources_key = 'routers'
    base_path = '/routers'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    # NOTE: We don't support query on datetime, list or dict fields
    _query_mapping = resource.QueryParameters(
        'description', 'flavor_id', 'name', 'status',
        is_admin_state_up='admin_state_up',
        is_distributed='distributed',
        is_ha='ha',
        project_id='tenant_id',
        **tag.TagMixin._tag_query_parameters
    )

    # Properties
    #: Availability zone hints to use when scheduling the router.
    #: *Type: list of availability zone names*
    availability_zone_hints = resource.Body('availability_zone_hints',
                                            type=list)
    #: Availability zones for the router.
    #: *Type: list of availability zone names*
    availability_zones = resource.Body('availability_zones', type=list)
    #: Timestamp when the router was created.
    created_at = resource.Body('created_at')
    #: The router description.
    description = resource.Body('description')
    #: The ``network_id``, for the external gateway. *Type: dict*
    external_gateway_info = resource.Body('external_gateway_info', type=dict)
    #: The ID of the flavor.
    flavor_id = resource.Body('flavor_id')
    #: The administrative state of the router, which is up ``True``
    #: or down ``False``. *Type: bool*
    is_admin_state_up = resource.Body('admin_state_up', type=bool)
    #: The distributed state of the router, which is distributed ``True``
    #: or not ``False``. *Type: bool* *Default: False*
    is_distributed = resource.Body('distributed', type=bool, default=False)
    #: The highly-available state of the router, which is highly available
    #: ``True`` or not ``False``. *Type: bool* *Default: False*
    is_ha = resource.Body('ha', type=bool, default=False)
    #: The router name.
    name = resource.Body('name')
    #: The ID of the project this router is associated with.
    project_id = resource.Body('tenant_id')
    #: Revision number of the router. *Type: int*
    revision_number = resource.Body('revision', type=int)
    #: The extra routes configuration for the router.
    routes = resource.Body('routes', type=list)
    #: The router status.
    status = resource.Body('status')
    #: Timestamp when the router was last updated.
    updated_at = resource.Body('updated_at')
    #: A list of associated tags
    #: *Type: list of tag strings*
    tags = resource.Body('tags', type=list, default=[])

    def add_interface(self, session, **body):
        """Add an internal interface to a logical router.

        :param session: The session to communicate through.
        :type session: :class:`~keystoneauth1.adapter.Adapter`
        :param dict body: The body requested to be updated on the router

        :returns: The body of the response as a dictionary.
        """
        url = utils.urljoin(self.base_path, self.id, 'add_router_interface')
        resp = session.put(url, json=body)
        return resp.json()

    def remove_interface(self, session, **body):
        """Remove an internal interface from a logical router.

        :param session: The session to communicate through.
        :type session: :class:`~keystoneauth1.adapter.Adapter`
        :param dict body: The body requested to be updated on the router

        :returns: The body of the response as a dictionary.
        """
        url = utils.urljoin(self.base_path, self.id, 'remove_router_interface')
        resp = session.put(url, json=body)
        return resp.json()

    def add_gateway(self, session, **body):
        """Add an external gateway to a logical router.

        :param session: The session to communicate through.
        :type session: :class:`~keystoneauth1.adapter.Adapter`
        :param dict body: The body requested to be updated on the router

        :returns: The body of the response as a dictionary.
        """
        url = utils.urljoin(self.base_path, self.id, 'add_gateway_router')
        resp = session.put(url, json=body)
        return resp.json()

    def remove_gateway(self, session, **body):
        """Remove an external gateway from a logical router.

        :param session: The session to communicate through.
        :type session: :class:`~keystoneauth1.adapter.Adapter`
        :param dict body: The body requested to be updated on the router

        :returns: The body of the response as a dictionary.
        """
        url = utils.urljoin(self.base_path, self.id, 'remove_gateway_router')
        resp = session.put(url, json=body)
        return resp.json()


class L3AgentRouter(Router):
    resource_key = 'router'
    resources_key = 'routers'
    base_path = '/agents/%(agent_id)s/l3-routers'
    resource_name = 'l3-router'
    service = network_service.NetworkService()

    # capabilities
    allow_create = False
    allow_retrieve = True
    allow_update = False
    allow_delete = False
    allow_list = True

    # NOTE: No query parameter is supported
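`Router.add_interface` above is a `PUT` to `/routers/<id>/add_router_interface` whose body names either a `subnet_id` or a `port_id`. A runnable usage sketch with stub classes (`FakeSession`/`FakeResponse` stand in for the real session; the canned response body is invented for illustration):

```python
# Usage sketch for Router.add_interface: PUT a body naming a subnet_id
# (or port_id) to /routers/<id>/add_router_interface. The stubs below
# only demonstrate the request/response shape, not real API behaviour.

class FakeResponse:
    def __init__(self, data):
        self._data = data

    def json(self):
        return self._data

class FakeSession:
    def __init__(self):
        self.calls = []

    def put(self, url, json=None):
        self.calls.append(('PUT', url, json))
        # Echo back a plausible interface-info body (invented values).
        return FakeResponse({'port_id': 'p1',
                             'subnet_id': json.get('subnet_id')})

def add_interface(session, router_id, **body):
    url = '/routers/%s/add_router_interface' % router_id
    return session.put(url, json=body).json()

session = FakeSession()
result = add_interface(session, 'r1', subnet_id='s1')
```

The gateway calls follow the same pattern against `add_gateway_router` / `remove_gateway_router`, with an `external_gateway_info`-style body instead.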
# openstacksdk-0.11.3/openstack/network/v2/pool_member.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class PoolMember(resource.Resource):
    resource_key = 'member'
    resources_key = 'members'
    base_path = '/lbaas/pools/%(pool_id)s/members'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'address', 'name', 'protocol_port', 'subnet_id', 'weight',
        is_admin_state_up='admin_state_up',
        project_id='tenant_id',
    )

    # Properties
    #: The ID of the owning pool
    pool_id = resource.URI('pool_id')
    #: The IP address of the pool member.
    address = resource.Body('address')
    #: The administrative state of the pool member, which is up ``True`` or
    #: down ``False``. *Type: bool*
    is_admin_state_up = resource.Body('admin_state_up', type=bool)
    #: Name of the pool member.
    name = resource.Body('name')
    #: The ID of the project this pool member is associated with.
    project_id = resource.Body('tenant_id')
    #: The port on which the application is hosted.
    protocol_port = resource.Body('protocol_port', type=int)
    #: Subnet ID in which to access this pool member.
    subnet_id = resource.Body('subnet_id')
    #: A positive integer value that indicates the relative portion of traffic
    #: that this member should receive from the pool. For example, a member
    #: with a weight of 10 receives five times as much traffic as a member
    #: with weight of 2.
    weight = resource.Body('weight', type=int)
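The `weight` semantics documented above (a weight of 10 receives five times the traffic of a weight of 2) amount to simple proportional distribution, which a few lines make concrete (`traffic_share` is an illustrative helper, not part of the SDK):

```python
# The pool-member ``weight`` is proportional: each member's share of
# traffic is its weight divided by the sum of all weights in the pool.

def traffic_share(weights):
    """Fraction of pool traffic each member receives, by weight."""
    total = sum(weights.values())
    return {member: w / total for member, w in weights.items()}

shares = traffic_share({'member-a': 10, 'member-b': 2})
```

Here member-a gets 10/12 of the traffic and member-b gets 2/12, preserving the documented 5:1 ratio.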
# openstacksdk-0.11.3/openstack/network/v2/service_profile.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class ServiceProfile(resource.Resource):
    resource_key = 'service_profile'
    resources_key = 'service_profiles'
    base_path = '/service_profiles'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'description', 'driver',
        is_enabled='enabled',
        project_id='tenant_id'
    )

    # Properties
    #: Description of the service flavor profile.
    description = resource.Body('description')
    #: Provider driver for the service flavor profile
    driver = resource.Body('driver')
    #: Sets enabled flag
    is_enabled = resource.Body('enabled', type=bool)
    #: Metainformation of the service flavor profile
    meta_info = resource.Body('metainfo')
    #: The owner project ID
    project_id = resource.Body('tenant_id')
# openstacksdk-0.11.3/openstack/network/v2/metering_label_rule.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class MeteringLabelRule(resource.Resource):
    resource_key = 'metering_label_rule'
    resources_key = 'metering_label_rules'
    base_path = '/metering/metering-label-rules'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'direction', 'metering_label_id', 'remote_ip_prefix',
        project_id='tenant_id',
    )

    # Properties
    #: ingress or egress: The direction in which the metering label rule is
    #: applied. Default: ``"ingress"``
    direction = resource.Body('direction')
    #: Specify whether the ``remote_ip_prefix`` will be excluded or not
    #: from traffic counters of the metering label, i.e. to not count the
    #: traffic of a specific IP address or range. Default: ``False``.
    #: *Type: bool*
    is_excluded = resource.Body('excluded', type=bool)
    #: The metering label ID to associate with this metering label rule.
    metering_label_id = resource.Body('metering_label_id')
    #: The ID of the project this metering label rule is associated with.
    project_id = resource.Body('tenant_id')
    #: The remote IP prefix to be associated with this metering label rule.
    remote_ip_prefix = resource.Body('remote_ip_prefix')
# openstacksdk-0.11.3/openstack/network/v2/extension.py
# (Apache License 2.0 header omitted; identical to quota.py above.)

from openstack.network import network_service
from openstack import resource


class Extension(resource.Resource):
    resource_key = 'extension'
    resources_key = 'extensions'
    base_path = '/extensions'
    service = network_service.NetworkService()

    # capabilities
    allow_get = True
    allow_list = True

    # NOTE: No query parameters supported

    # Properties
    #: An alias the extension is known under.
    alias = resource.Body('alias', alternate_id=True)
    #: Text describing what the extension does.
    description = resource.Body('description')
    #: Links pertaining to this extension.
    links = resource.Body('links')
    #: The name of this extension.
    name = resource.Body('name')
    #: Timestamp when the extension was last updated.
    updated_at = resource.Body('updated')
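`Extension` above marks `alias` with `alternate_id=True` because extensions have no `id` field of their own; the alias doubles as the identifier the SDK uses to look the resource up. A simplified sketch of that fallback (the `FakeExtension` class is an assumption about the mechanism, not the SDK's actual resource machinery):

```python
# Simplified sketch of what alternate_id=True means: when the body has
# no 'id', the alternate-id field (here 'alias') serves as the ID.

class FakeExtension:
    def __init__(self, body):
        self._body = body

    @property
    def id(self):
        # Fall back to the alternate-id field when 'id' is absent.
        return self._body.get('id') or self._body.get('alias')

ext = FakeExtension({'alias': 'security-group', 'name': 'security-group'})
```

This is why `network.get_extension('security-group')`-style lookups work by alias rather than by a UUID.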
from openstack.network import network_service from openstack.network.v2 import tag from openstack import resource class Port(resource.Resource, tag.TagMixin): resource_key = 'port' resources_key = 'ports' base_path = '/ports' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # NOTE: we skip query on list or datetime fields for now _query_mapping = resource.QueryParameters( 'description', 'device_id', 'device_owner', 'fixed_ips', 'ip_address', 'mac_address', 'name', 'network_id', 'status', 'subnet_id', is_admin_state_up='admin_state_up', is_port_security_enabled='port_security_enabled', project_id='tenant_id', **tag.TagMixin._tag_query_parameters ) # Properties #: Allowed address pairs. allowed_address_pairs = resource.Body('allowed_address_pairs', type=list) #: The ID of the host where the port is allocated. In some cases, #: different implementations can run on different hosts. binding_host_id = resource.Body('binding:host_id') #: A dictionary the enables the application running on the specified #: host to pass and receive vif port-specific information to the plug-in. #: *Type: dict* binding_profile = resource.Body('binding:profile', type=dict) #: Read-only. A dictionary that enables the application to pass #: information about functions that the Networking API provides. #: To enable or disable port filtering features such as security group #: and anti-MAC/IP spoofing, specify ``port_filter: True`` or #: ``port_filter: False``. *Type: dict* binding_vif_details = resource.Body('binding:vif_details', type=dict) #: Read-only. The vif type for the specified port. binding_vif_type = resource.Body('binding:vif_type') #: The vnic type that is bound to the neutron port. #: #: In POST and PUT operations, specify a value of ``normal`` (virtual nic), #: ``direct`` (pci passthrough), or ``macvtap`` #: (virtual interface with a tap-like software interface). 
#: These values support SR-IOV PCI passthrough networking. #: The ML2 plug-in supports the vnic_type. #: #: In GET operations, the binding:vnic_type extended attribute is #: visible to only port owners and administrative users. binding_vnic_type = resource.Body('binding:vnic_type') #: Timestamp when the port was created. created_at = resource.Body('created_at') #: Underlying data plane status of this port. data_plane_status = resource.Body('data_plane_status') #: The port description. description = resource.Body('description') #: Device ID of this port. device_id = resource.Body('device_id') #: Device owner of this port (e.g. ``network:dhcp``). device_owner = resource.Body('device_owner') #: DNS assignment for the port. dns_assignment = resource.Body('dns_assignment') #: DNS name for the port. dns_name = resource.Body('dns_name') #: Extra DHCP options. extra_dhcp_opts = resource.Body('extra_dhcp_opts', type=list) #: IP addresses of an allowed address pair. ip_address = resource.Body('ip_address') #: IP addresses for the port. Includes the IP address and subnet ID. fixed_ips = resource.Body('fixed_ips', type=list) #: The administrative state of the port, which is up ``True`` or #: down ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: The port security status, which is enabled ``True`` or disabled #: ``False``. *Type: bool* *Default: False* is_port_security_enabled = resource.Body('port_security_enabled', type=bool, default=False) #: The MAC address of an allowed address pair. mac_address = resource.Body('mac_address') #: The port name. name = resource.Body('name') #: The ID of the attached network. network_id = resource.Body('network_id') #: The ID of the project who owns the network. Only administrative #: users can specify a project ID other than their own. project_id = resource.Body('tenant_id') #: The extra DHCP option name. option_name = resource.Body('opt_name') #: The extra DHCP option value. 
    option_value = resource.Body('opt_value')
    #: The ID of the QoS policy attached to the port.
    qos_policy_id = resource.Body('qos_policy_id')
    #: Revision number of the port. *Type: int*
    revision_number = resource.Body('revision_number', type=int)
    #: The IDs of any attached security groups.
    #: *Type: list of strs of the security group IDs*
    security_group_ids = resource.Body('security_groups', type=list)
    #: The port status. Value is ``ACTIVE`` or ``DOWN``.
    status = resource.Body('status')
    #: The ID of the subnet. If you specify only a subnet UUID, OpenStack
    #: networking allocates an available IP from that subnet to the port.
    #: If you specify both a subnet ID and an IP address, OpenStack networking
    #: tries to allocate the address to the port.
    subnet_id = resource.Body('subnet_id')
    #: Read-only. The trunk referring to this parent port and its subports.
    #: Present for trunk parent ports if ``trunk-details`` extension is loaded.
    #: *Type: dict with keys: trunk_id, sub_ports.
    #: sub_ports is a list of dicts with keys:
    #: port_id, segmentation_type, segmentation_id, mac_address*
    trunk_details = resource.Body('trunk_details', type=dict)
    #: Timestamp when the port was last updated.
    updated_at = resource.Body('updated_at')
    #: A list of associated tags
    #: *Type: list of tag strings*
    tags = resource.Body('tags', type=list, default=[])
openstacksdk-0.11.3/openstack/network/v2/health_monitor.py0000666000175100017510000000524313236151340023712 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.network import network_service
from openstack import resource


class HealthMonitor(resource.Resource):
    resource_key = 'healthmonitor'
    resources_key = 'healthmonitors'
    base_path = '/lbaas/healthmonitors'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    _query_mapping = resource.QueryParameters(
        'delay', 'expected_codes', 'http_method',
        'max_retries', 'timeout', 'type', 'url_path',
        is_admin_state_up='admin_state_up',
        project_id='tenant_id',
    )

    # Properties
    #: The time, in seconds, between sending probes to members.
    delay = resource.Body('delay')
    #: Expected HTTP codes for a passing HTTP(S) monitor.
    expected_codes = resource.Body('expected_codes')
    #: The HTTP method that the monitor uses for requests.
    http_method = resource.Body('http_method')
    #: The administrative state of the health monitor, which is up
    #: ``True`` or down ``False``. *Type: bool*
    is_admin_state_up = resource.Body('admin_state_up', type=bool)
    #: Maximum consecutive health probe tries.
    max_retries = resource.Body('max_retries')
    #: Name of the health monitor.
    name = resource.Body('name')
    #: List of pools associated with this health monitor
    #: *Type: list of dicts which contain the pool IDs*
    pool_ids = resource.Body('pools', type=list)
    #: The ID of the pool associated with this health monitor
    pool_id = resource.Body('pool_id')
    #: The ID of the project this health monitor is associated with.
    project_id = resource.Body('tenant_id')
    #: The maximum number of seconds for a monitor to wait for a
    #: connection to be established before it times out. This value must
    #: be less than the delay value.
    timeout = resource.Body('timeout')
    #: The type of probe sent by the load balancer to verify the member
    #: state, which is PING, TCP, HTTP, or HTTPS.
type = resource.Body('type') #: Path portion of URI that will be probed if type is HTTP(S). url_path = resource.Body('url_path') openstacksdk-0.11.3/openstack/network/v2/agent.py0000666000175100017510000001061413236151340021772 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource from openstack import utils class Agent(resource.Resource): """Neutron agent extension.""" resource_key = 'agent' resources_key = 'agents' base_path = '/agents' service = network_service.NetworkService() # capabilities allow_create = False allow_get = True allow_update = True allow_delete = True allow_list = True # NOTE: We skip query for JSON fields and datetime fields _query_mapping = resource.QueryParameters( 'agent_type', 'availability_zone', 'binary', 'description', 'host', 'topic', is_admin_state_up='admin_state_up', is_alive='alive', ) # Properties #: The type of network agent. agent_type = resource.Body('agent_type') #: Availability zone for the network agent. availability_zone = resource.Body('availability_zone') #: The name of the network agent's application binary. binary = resource.Body('binary') #: Network agent configuration data specific to the agent_type. configuration = resource.Body('configurations') #: Timestamp when the network agent was created. created_at = resource.Body('created_at') #: The network agent description. 
description = resource.Body('description') #: Timestamp when the network agent's heartbeat was last seen. last_heartbeat_at = resource.Body('heartbeat_timestamp') #: The host the agent is running on. host = resource.Body('host') #: The administrative state of the network agent, which is up #: ``True`` or down ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: Whether or not the network agent is alive. #: *Type: bool* is_alive = resource.Body('alive', type=bool) #: Timestamp when the network agent was last started. started_at = resource.Body('started_at') #: The messaging queue topic the network agent subscribes to. topic = resource.Body('topic') #: The HA state of the L3 agent. This is one of 'active', 'standby' or #: 'fault' for HA routers, or None for other types of routers. ha_state = resource.Body('ha_state') def add_agent_to_network(self, session, network_id): body = {'network_id': network_id} url = utils.urljoin(self.base_path, self.id, 'dhcp-networks') resp = session.post(url, json=body) return resp.json() def remove_agent_from_network(self, session, network_id): body = {'network_id': network_id} url = utils.urljoin(self.base_path, self.id, 'dhcp-networks', network_id) session.delete(url, json=body) def add_router_to_agent(self, session, router): body = {'router_id': router} url = utils.urljoin(self.base_path, self.id, 'l3-routers') resp = session.post(url, json=body) return resp.json() def remove_router_from_agent(self, session, router): body = {'router_id': router} url = utils.urljoin(self.base_path, self.id, 'l3-routers', router) session.delete(url, json=body) class NetworkHostingDHCPAgent(Agent): resource_key = 'agent' resources_key = 'agents' resource_name = 'dhcp-agent' base_path = '/networks/%(network_id)s/dhcp-agents' service = network_service.NetworkService() # capabilities allow_create = False allow_get = True allow_update = False allow_delete = False allow_list = True # NOTE: Doesn't support query yet. 
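The ``add_agent_to_network``, ``remove_agent_from_network``, and router scheduling helpers above all build sub-resource URLs under ``/agents/<id>``. A minimal stdlib-only sketch of that URL construction, assuming ``openstack.utils.urljoin`` simply joins its arguments with single slashes (``join`` below is a stand-in for it, not the SDK function):

```python
# Sketch of how the Agent helper methods build their endpoint URLs.
# join() is a stand-in for openstack.utils.urljoin, assumed here to
# strip surrounding slashes and join the segments with '/'.

def join(*parts):
    """Join URL segments with single slashes."""
    return '/'.join(str(p).strip('/') for p in parts)

BASE_PATH = '/agents'  # Agent.base_path

def dhcp_network_url(agent_id):
    # POST here with {'network_id': ...} schedules a network on the agent.
    return join(BASE_PATH, agent_id, 'dhcp-networks')

def l3_router_url(agent_id, router_id=None):
    # POST to the collection adds a router to the agent;
    # DELETE on the member URL removes it.
    parts = [BASE_PATH, agent_id, 'l3-routers']
    if router_id is not None:
        parts.append(router_id)
    return join(*parts)

print(dhcp_network_url('a1'))      # agents/a1/dhcp-networks
print(l3_router_url('a1', 'r1'))   # agents/a1/l3-routers/r1
```

The resulting paths are relative; the SDK's session prepends the network service endpoint when it issues the request.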
class RouterL3Agent(Agent):
    resource_key = 'agent'
    resources_key = 'agents'
    base_path = '/routers/%(router_id)s/l3-agents'
    resource_name = 'l3-agent'
    service = network_service.NetworkService()

    # capabilities
    allow_create = False
    allow_get = True
    allow_update = False
    allow_delete = False
    allow_list = True

    # NOTE: No query parameter is supported
openstacksdk-0.11.3/openstack/network/v2/auto_allocated_topology.py0000666000175100017510000000345013236151340025610 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.network import network_service
from openstack import resource


class AutoAllocatedTopology(resource.Resource):
    resource_name = 'auto_allocated_topology'
    resource_key = 'auto_allocated_topology'
    base_path = '/auto-allocated-topology'
    service = network_service.NetworkService()

    # Capabilities
    allow_create = False
    allow_get = True
    allow_update = False
    allow_delete = True
    allow_list = False

    # NOTE: this resource doesn't support list or query

    # Properties
    #: Project ID.
    #: If no project is specified, the topology will be created for the
    #: project the user is authenticated against.
    #: Returns an error if the resources have not been configured correctly.
    #: To use this feature, the auto-allocated-topology, subnet_allocation,
    #: external-net and router extensions must be enabled and set up.
project_id = resource.Body('tenant_id') class ValidateTopology(AutoAllocatedTopology): base_path = '/auto-allocated-topology/%(project)s?fields=dry-run' #: Validate requirements before running (Does not return topology) #: Will return "Deployment error:" if the resources required have not #: been correctly set up. dry_run = resource.Body('dry_run') project = resource.URI('project') openstacksdk-0.11.3/openstack/network/v2/qos_bandwidth_limit_rule.py0000666000175100017510000000261113236151340025745 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class QoSBandwidthLimitRule(resource.Resource): resource_key = 'bandwidth_limit_rule' resources_key = 'bandwidth_limit_rules' base_path = '/qos/policies/%(qos_policy_id)s/bandwidth_limit_rules' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: The ID of the QoS policy who owns rule. qos_policy_id = resource.URI('qos_policy_id') #: Maximum bandwidth in kbps. max_kbps = resource.Body('max_kbps') #: Maximum burst bandwidth in kbps. max_burst_kbps = resource.Body('max_burst_kbps') #: Traffic direction from the tenant point of view ('egress', 'ingress'). 
direction = resource.Body('direction') openstacksdk-0.11.3/openstack/network/v2/__init__.py0000666000175100017510000000000013236151340022417 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/network/v2/qos_policy.py0000666000175100017510000000343413236151340023057 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class QoSPolicy(resource.Resource): resource_key = 'policy' resources_key = 'policies' base_path = '/qos/policies' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'name', 'description', 'is_default', project_id='tenant_id', is_shared='shared', ) # Properties #: QoS policy name. name = resource.Body('name') #: The ID of the project who owns the network. Only administrative #: users can specify a project ID other than their own. project_id = resource.Body('tenant_id') #: The QoS policy description. description = resource.Body('description') #: Indicates whether this QoS policy is the default policy for this #: project. #: *Type: bool* is_default = resource.Body('is_default', type=bool) #: Indicates whether this QoS policy is shared across all projects. #: *Type: bool* is_shared = resource.Body('shared', type=bool) #: List of QoS rules applied to this QoS policy. 
rules = resource.Body('rules') openstacksdk-0.11.3/openstack/network/v2/availability_zone.py0000666000175100017510000000300413236151340024374 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource as _resource class AvailabilityZone(_resource.Resource): resource_key = 'availability_zone' resources_key = 'availability_zones' base_path = '/availability_zones' service = network_service.NetworkService() # capabilities allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True # NOTE: We don't support query by state yet because there is a mapping # at neutron side difficult to map. _query_mapping = _resource.QueryParameters( name='availability_zone', resource='agent_type') # Properties #: Name of the availability zone. name = _resource.Body('name') #: Type of resource for the availability zone, such as ``network``. resource = _resource.Body('resource') #: State of the availability zone, either ``available`` or #: ``unavailable``. state = _resource.Body('state') openstacksdk-0.11.3/openstack/network/v2/network.py0000666000175100017510000001342113236151340022364 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack.network.v2 import tag from openstack import resource class Network(resource.Resource, tag.TagMixin): resource_key = 'network' resources_key = 'networks' base_path = '/networks' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # NOTE: We don't support query on list or datetime fields yet _query_mapping = resource.QueryParameters( 'description', 'name', 'status', ipv4_address_scope_id='ipv4_address_scope', ipv6_address_scope_id='ipv6_address_scope', is_admin_state_up='admin_state_up', is_port_security_enabled='port_security_enabled', is_router_external='router:external', is_shared='shared', project_id='tenant_id', provider_network_type='provider:network_type', provider_physical_network='provider:physical_network', provider_segmentation_id='provider:segmentation_id', **tag.TagMixin._tag_query_parameters ) # Properties #: Availability zone hints to use when scheduling the network. #: *Type: list of availability zone names* availability_zone_hints = resource.Body('availability_zone_hints') #: Availability zones for the network. #: *Type: list of availability zone names* availability_zones = resource.Body('availability_zones') #: Timestamp when the network was created. created_at = resource.Body('created_at') #: The network description. description = resource.Body('description') #: The DNS domain associated. dns_domain = resource.Body('dns_domain') #: The ID of the IPv4 address scope for the network. 
ipv4_address_scope_id = resource.Body('ipv4_address_scope') #: The ID of the IPv6 address scope for the network. ipv6_address_scope_id = resource.Body('ipv6_address_scope') #: The administrative state of the network, which is up ``True`` or #: down ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: Whether or not this is the default external network. #: *Type: bool* is_default = resource.Body('is_default', type=bool) #: The port security status, which is enabled ``True`` or disabled #: ``False``. *Type: bool* *Default: False* #: Available for multiple provider extensions. is_port_security_enabled = resource.Body('port_security_enabled', type=bool, default=False) #: Whether or not the router is external. #: *Type: bool* *Default: False* is_router_external = resource.Body('router:external', type=bool, default=False) #: Indicates whether this network is shared across all tenants. #: By default, only administrative users can change this value. #: *Type: bool* is_shared = resource.Body('shared', type=bool) #: Read-only. The maximum transmission unit (MTU) of the network resource. mtu = resource.Body('mtu', type=int) #: The network name. name = resource.Body('name') #: The ID of the project this network is associated with. project_id = resource.Body('project_id') #: The type of physical network that maps to this network resource. #: For example, ``flat``, ``vlan``, ``vxlan``, or ``gre``. #: Available for multiple provider extensions. provider_network_type = resource.Body('provider:network_type') #: The physical network where this network object is implemented. #: Available for multiple provider extensions. provider_physical_network = resource.Body('provider:physical_network') #: An isolated segment ID on the physical network. The provider #: network type defines the segmentation model. #: Available for multiple provider extensions. 
    provider_segmentation_id = resource.Body('provider:segmentation_id')
    #: The ID of the QoS policy attached to the network.
    qos_policy_id = resource.Body('qos_policy_id')
    #: Revision number of the network. *Type: int*
    revision_number = resource.Body('revision_number', type=int)
    #: A list of provider segment objects.
    #: Available for multiple provider extensions.
    segments = resource.Body('segments')
    #: The network status.
    status = resource.Body('status')
    #: The associated subnet IDs.
    #: *Type: list of strs of the subnet IDs*
    subnet_ids = resource.Body('subnets', type=list)
    #: Timestamp when the network was last updated.
    updated_at = resource.Body('updated_at')
    #: Indicates the VLAN transparency mode of the network.
    is_vlan_transparent = resource.Body('vlan_transparent', type=bool)
    #: A list of associated tags
    #: *Type: list of tag strings*
    tags = resource.Body('tags', type=list, default=[])


class DHCPAgentHostingNetwork(Network):
    resource_key = 'network'
    resources_key = 'networks'
    base_path = '/agents/%(agent_id)s/dhcp-networks'
    resource_name = 'dhcp-network'
    service = network_service.NetworkService()

    # capabilities
    allow_create = False
    allow_get = True
    allow_update = False
    allow_delete = False
    allow_list = True

    # NOTE: No query parameter is supported
openstacksdk-0.11.3/openstack/network/v2/vpn_service.py0000666000175100017510000000373013236151340023220 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.network import network_service
from openstack import resource


# NOTE: The VPN service is unmaintained; consider removing it.
class VPNService(resource.Resource):
    resource_key = 'vpnservice'
    resources_key = 'vpnservices'
    base_path = '/vpn/vpnservices'
    service = network_service.NetworkService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    # Properties
    #: Human-readable description for the vpnservice.
    description = resource.Body('description')
    #: The external IPv4 address that is used for the VPN service.
    external_v4_ip = resource.Body('external_v4_ip')
    #: The external IPv6 address that is used for the VPN service.
    external_v6_ip = resource.Body('external_v6_ip')
    #: The administrative state of the vpnservice, which is up ``True`` or
    #: down ``False``. *Type: bool*
    is_admin_state_up = resource.Body('admin_state_up', type=bool)
    #: The vpnservice name.
    name = resource.Body('name')
    #: ID of the router into which the VPN service is inserted.
    router_id = resource.Body('router_id')
    #: The ID of the project this vpnservice is associated with.
    project_id = resource.Body('tenant_id')
    #: The vpnservice status.
    status = resource.Body('status')
    #: The ID of the subnet on which the tenant wants the vpnservice.
    subnet_id = resource.Body('subnet_id')
openstacksdk-0.11.3/openstack/network/v2/load_balancer.py0000666000175100017510000000450313236151340023442 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class LoadBalancer(resource.Resource): resource_key = 'loadbalancer' resources_key = 'loadbalancers' base_path = '/lbaas/loadbalancers' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: Description for the load balancer. description = resource.Body('description') #: The administrative state of the load balancer, which is up #: ``True`` or down ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: List of listeners associated with this load balancer. #: *Type: list of dicts which contain the listener IDs* listener_ids = resource.Body('listeners', type=list) #: Name of the load balancer name = resource.Body('name') #: Status of load_balancer operating, e.g. ONLINE, OFFLINE. operating_status = resource.Body('operating_status') #: List of pools associated with this load balancer. #: *Type: list of dicts which contain the pool IDs* pool_ids = resource.Body('pools', type=list) #: The ID of the project this load balancer is associated with. project_id = resource.Body('tenant_id') #: The name of the provider. provider = resource.Body('provider') #: Status of load balancer provisioning, e.g. ACTIVE, INACTIVE. provisioning_status = resource.Body('provisioning_status') #: The IP address of the VIP. vip_address = resource.Body('vip_address') #: The ID of the port for the VIP. vip_port_id = resource.Body('vip_port_id') #: The ID of the subnet on which to allocate the VIP address. 
vip_subnet_id = resource.Body('vip_subnet_id') openstacksdk-0.11.3/openstack/network/v2/rbac_policy.py0000666000175100017510000000303613236151340023162 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class RBACPolicy(resource.Resource): resource_key = 'rbac_policy' resources_key = 'rbac_policies' base_path = '/rbac-policies' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'action', 'object_id', 'object_type', 'project_id', 'target_project_id', ) # Properties #: ID of the object that this RBAC policy affects. object_id = resource.Body('object_id') #: The ID of the project this RBAC will be enforced. target_project_id = resource.Body('target_tenant') #: The owner project ID. project_id = resource.Body('tenant_id') #: Type of the object that this RBAC policy affects. object_type = resource.Body('object_type') #: Action for the RBAC policy. action = resource.Body('action') openstacksdk-0.11.3/openstack/network/v2/service_provider.py0000666000175100017510000000246513236151340024253 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class ServiceProvider(resource.Resource): resources_key = 'service_providers' base_path = '/service-providers' service = network_service.NetworkService() # Capabilities allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True _query_mapping = resource.QueryParameters( 'service_type', 'name', is_default='default' ) # Properties #: Service type (FIREWALL, FLAVORS, METERING, QOS, etc..) service_type = resource.Body('service_type') #: Name of the service type name = resource.Body('name') #: The default value of service type is_default = resource.Body('default', type=bool) openstacksdk-0.11.3/openstack/network/v2/network_ip_availability.py0000666000175100017510000000357213236151340025614 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network import network_service from openstack import resource class NetworkIPAvailability(resource.Resource): resource_key = 'network_ip_availability' resources_key = 'network_ip_availabilities' base_path = '/network-ip-availabilities' name_attribute = 'network_name' service = network_service.NetworkService() # capabilities allow_create = False allow_get = True allow_update = False allow_delete = False allow_list = True _query_mapping = resource.QueryParameters( 'ip_version', 'network_id', 'network_name', project_id='tenant_id' ) # Properties #: Network ID to use when listing network IP availability. network_id = resource.Body('network_id') #: Network Name for the particular network IP availability. network_name = resource.Body('network_name') #: The Subnet IP Availability of all subnets of a network. #: *Type: list* subnet_ip_availability = resource.Body('subnet_ip_availability', type=list) #: The ID of the project this network IP availability is associated with. project_id = resource.Body('tenant_id') #: The total ips of a network. #: *Type: int* total_ips = resource.Body('total_ips', type=int) #: The used or consumed ip of a network #: *Type: int* used_ips = resource.Body('used_ips', type=int) openstacksdk-0.11.3/openstack/network/v2/pool.py0000666000175100017510000000710113236151340021642 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network import network_service from openstack import resource class Pool(resource.Resource): resource_key = 'pool' resources_key = 'pools' base_path = '/lbaas/pools' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'description', 'lb_algorithm', 'name', 'protocol', 'provider', 'subnet_id', 'virtual_ip_id', 'listener_id', is_admin_state_up='admin_state_up', project_id='tenant_id', load_balancer_id='loadbalancer_id', ) # Properties #: Description for the pool. description = resource.Body('description') #: The ID of the associated health monitors. health_monitor_ids = resource.Body('health_monitors', type=list) #: The statuses of the associated health monitors. health_monitor_status = resource.Body('health_monitor_status', type=list) #: The administrative state of the pool, which is up ``True`` or down #: ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: The load-balancer algorithm, which is round-robin, least-connections, #: and so on. This value, which must be supported, is dependent on the #: load-balancer provider. Round-robin must be supported. lb_algorithm = resource.Body('lb_algorithm') #: List of associated listeners. #: *Type: list of dicts which contain the listener IDs* listener_ids = resource.Body('listeners', type=list) #: ID of listener associated with this pool listener_id = resource.Body('listener_id') #: List of associated load balancers. #: *Type: list of dicts which contain the load balancer IDs* load_balancer_ids = resource.Body('loadbalancers', type=list) #: ID of load balancer associated with this pool load_balancer_id = resource.Body('loadbalancer_id') #: List of members that belong to the pool. #: *Type: list of dicts which contain the member IDs* member_ids = resource.Body('members', type=list) #: Pool name. Does not have to be unique. 
name = resource.Body('name') #: The ID of the project this pool is associated with. project_id = resource.Body('tenant_id') #: The protocol of the pool, which is TCP, HTTP, or HTTPS. protocol = resource.Body('protocol') #: The provider name of the load balancer service. provider = resource.Body('provider') #: The status of the pool. status = resource.Body('status') #: Human readable description of the status. status_description = resource.Body('status_description') #: The subnet on which the members of the pool will be located. subnet_id = resource.Body('subnet_id') #: Session persistence algorithm that should be used (if any). #: *Type: dict with keys ``type`` and ``cookie_name``* session_persistence = resource.Body('session_persistence') #: The ID of the virtual IP (VIP) address. virtual_ip_id = resource.Body('vip_id') openstacksdk-0.11.3/openstack/network/v2/subnet.py0000666000175100017510000000731113236151340022174 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack.network.v2 import tag from openstack import resource class Subnet(resource.Resource, tag.TagMixin): resource_key = 'subnet' resources_key = 'subnets' base_path = '/subnets' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # NOTE: Queries on list or datetime fields are currently not supported.
_query_mapping = resource.QueryParameters( 'cidr', 'description', 'gateway_ip', 'ip_version', 'ipv6_address_mode', 'ipv6_ra_mode', 'name', 'network_id', 'segment_id', is_dhcp_enabled='enable_dhcp', project_id='tenant_id', subnet_pool_id='subnetpool_id', use_default_subnet_pool='use_default_subnetpool', **tag.TagMixin._tag_query_parameters ) # Properties #: List of allocation pools each of which has a start and an end address #: for this subnet allocation_pools = resource.Body('allocation_pools', type=list) #: The CIDR. cidr = resource.Body('cidr') #: Timestamp when the subnet was created. created_at = resource.Body('created_at') #: The subnet description. description = resource.Body('description') #: A list of DNS nameservers. dns_nameservers = resource.Body('dns_nameservers', type=list) #: The gateway IP address. gateway_ip = resource.Body('gateway_ip') #: A list of host routes. host_routes = resource.Body('host_routes', type=list) #: The IP version, which is 4 or 6. #: *Type: int* ip_version = resource.Body('ip_version', type=int) #: The IPv6 address modes which are 'dhcpv6-stateful', 'dhcpv6-stateless' #: or 'slaac'. ipv6_address_mode = resource.Body('ipv6_address_mode') #: The IPv6 router advertisement modes which can be 'slaac', #: 'dhcpv6-stateful', 'dhcpv6-stateless'. ipv6_ra_mode = resource.Body('ipv6_ra_mode') #: Set to ``True`` if DHCP is enabled and ``False`` if DHCP is disabled. #: *Type: bool* is_dhcp_enabled = resource.Body('enable_dhcp', type=bool) #: The subnet name. name = resource.Body('name') #: The ID of the attached network. network_id = resource.Body('network_id') #: The ID of the project this subnet is associated with. project_id = resource.Body('tenant_id') #: Revision number of the subnet. *Type: int* revision_number = resource.Body('revision_number', type=int) #: The ID of the segment this subnet is associated with.
segment_id = resource.Body('segment_id') #: Service types for this subnet service_types = resource.Body('service_types', type=list) #: The subnet pool ID from which to obtain a CIDR. subnet_pool_id = resource.Body('subnetpool_id') #: Timestamp when the subnet was last updated. updated_at = resource.Body('updated_at') #: Whether to use the default subnet pool to obtain a CIDR. use_default_subnet_pool = resource.Body( 'use_default_subnetpool', type=bool ) #: A list of associated tags #: *Type: list of tag strings* tags = resource.Body('tags', type=list, default=[]) openstacksdk-0.11.3/openstack/network/v2/qos_dscp_marking_rule.py0000666000175100017510000000223713236151340025250 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network import network_service from openstack import resource class QoSDSCPMarkingRule(resource.Resource): resource_key = 'dscp_marking_rule' resources_key = 'dscp_marking_rules' base_path = '/qos/policies/%(qos_policy_id)s/dscp_marking_rules' service = network_service.NetworkService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: The ID of the QoS policy that owns this rule. qos_policy_id = resource.URI('qos_policy_id') #: DSCP mark field.
dscp_mark = resource.Body('dscp_mark') openstacksdk-0.11.3/openstack/network/v2/_proxy.py0000666000175100017510000042646413236151340022232 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import exceptions from openstack.network.v2 import address_scope as _address_scope from openstack.network.v2 import agent as _agent from openstack.network.v2 import auto_allocated_topology as \ _auto_allocated_topology from openstack.network.v2 import availability_zone from openstack.network.v2 import extension from openstack.network.v2 import flavor as _flavor from openstack.network.v2 import floating_ip as _floating_ip from openstack.network.v2 import health_monitor as _health_monitor from openstack.network.v2 import listener as _listener from openstack.network.v2 import load_balancer as _load_balancer from openstack.network.v2 import metering_label as _metering_label from openstack.network.v2 import metering_label_rule as _metering_label_rule from openstack.network.v2 import network as _network from openstack.network.v2 import network_ip_availability from openstack.network.v2 import pool as _pool from openstack.network.v2 import pool_member as _pool_member from openstack.network.v2 import port as _port from openstack.network.v2 import qos_bandwidth_limit_rule as \ _qos_bandwidth_limit_rule from openstack.network.v2 import qos_dscp_marking_rule as \ _qos_dscp_marking_rule from openstack.network.v2 import qos_minimum_bandwidth_rule as \ 
_qos_minimum_bandwidth_rule from openstack.network.v2 import qos_policy as _qos_policy from openstack.network.v2 import qos_rule_type as _qos_rule_type from openstack.network.v2 import quota as _quota from openstack.network.v2 import rbac_policy as _rbac_policy from openstack.network.v2 import router as _router from openstack.network.v2 import security_group as _security_group from openstack.network.v2 import security_group_rule as _security_group_rule from openstack.network.v2 import segment as _segment from openstack.network.v2 import service_profile as _service_profile from openstack.network.v2 import service_provider as _service_provider from openstack.network.v2 import subnet as _subnet from openstack.network.v2 import subnet_pool as _subnet_pool from openstack.network.v2 import vpn_service as _vpn_service from openstack import proxy from openstack import utils class Proxy(proxy.BaseProxy): def create_address_scope(self, **attrs): """Create a new address scope from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.address_scope.AddressScope`, comprised of the properties on the AddressScope class. :returns: The results of address scope creation :rtype: :class:`~openstack.network.v2.address_scope.AddressScope` """ return self._create(_address_scope.AddressScope, **attrs) def delete_address_scope(self, address_scope, ignore_missing=True): """Delete an address scope :param address_scope: The value can be either the ID of an address scope or a :class:`~openstack.network.v2.address_scope.AddressScope` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the address scope does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent address scope. 
:returns: ``None`` """ self._delete(_address_scope.AddressScope, address_scope, ignore_missing=ignore_missing) def find_address_scope(self, name_or_id, ignore_missing=True, **args): """Find a single address scope :param name_or_id: The name or ID of an address scope. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods. such as query filters. :returns: One :class:`~openstack.network.v2.address_scope.AddressScope` or None """ return self._find(_address_scope.AddressScope, name_or_id, ignore_missing=ignore_missing, **args) def get_address_scope(self, address_scope): """Get a single address scope :param address_scope: The value can be the ID of an address scope or a :class:`~openstack.network.v2.address_scope.AddressScope` instance. :returns: One :class:`~openstack.network.v2.address_scope.AddressScope` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_address_scope.AddressScope, address_scope) def address_scopes(self, **query): """Return a generator of address scopes :param dict query: Optional query parameters to be sent to limit the resources being returned. * ``name``: Address scope name * ``ip_version``: Address scope IP address version * ``tenant_id``: Owner tenant ID * ``shared``: Address scope is shared (boolean) :returns: A generator of address scope objects :rtype: :class:`~openstack.network.v2.address_scope.AddressScope` """ return self._list(_address_scope.AddressScope, paginated=False, **query) def update_address_scope(self, address_scope, **attrs): """Update an address scope :param address_scope: Either the ID of an address scope or a :class:`~openstack.network.v2.address_scope.AddressScope` instance. 
:param dict attrs: The attributes to update on the address scope represented by ``value``. :returns: The updated address scope :rtype: :class:`~openstack.network.v2.address_scope.AddressScope` """ return self._update(_address_scope.AddressScope, address_scope, **attrs) def agents(self, **query): """Return a generator of network agents :param dict query: Optional query parameters to be sent to limit the resources being returned. * ``agent_type``: Agent type. * ``availability_zone``: The availability zone for an agent. * ``binary``: The name of the agent's application binary. * ``description``: The description of the agent. * ``host``: The host (host name or host address) the agent is running on. * ``topic``: The message queue topic used. * ``is_admin_state_up``: The administrative state of the agent. * ``is_alive``: Whether the agent is alive. :returns: A generator of agents :rtype: :class:`~openstack.network.v2.agent.Agent` """ return self._list(_agent.Agent, paginated=False, **query) def delete_agent(self, agent, ignore_missing=True): """Delete a network agent :param agent: The value can be the ID of an agent or a :class:`~openstack.network.v2.agent.Agent` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the agent does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent agent. :returns: ``None`` """ self._delete(_agent.Agent, agent, ignore_missing=ignore_missing) def get_agent(self, agent): """Get a single network agent :param agent: The value can be the ID of an agent or a :class:`~openstack.network.v2.agent.Agent` instance. :returns: One :class:`~openstack.network.v2.agent.Agent` :rtype: :class:`~openstack.network.v2.agent.Agent` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found.
""" return self._get(_agent.Agent, agent) def update_agent(self, agent, **attrs): """Update a network agent :param agent: The value can be the ID of a agent or a :class:`~openstack.network.v2.agent.Agent` instance. :param dict attrs: The attributes to update on the agent represented by ``value``. :returns: One :class:`~openstack.network.v2.agent.Agent` :rtype: :class:`~openstack.network.v2.agent.Agent` """ return self._update(_agent.Agent, agent, **attrs) def dhcp_agent_hosting_networks(self, agent, **query): """A generator of networks hosted by a DHCP agent. :param agent: Either the agent id of an instance of :class:`~openstack.network.v2.network_agent.Agent` :param query: kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :return: A generator of networks """ agent_obj = self._get_resource(_agent.Agent, agent) return self._list(_network.DHCPAgentHostingNetwork, paginated=False, agent_id=agent_obj.id, **query) def add_dhcp_agent_to_network(self, agent, network): """Add a DHCP Agent to a network :param agent: Either the agent id of an instance of :class:`~openstack.network.v2.network_agent.Agent` :param network: Network instance :return: """ network = self._get_resource(_network.Network, network) agent = self._get_resource(_agent.Agent, agent) return agent.add_agent_to_network(self, network.id) def remove_dhcp_agent_from_network(self, agent, network): """Remove a DHCP Agent from a network :param agent: Either the agent id of an instance of :class:`~openstack.network.v2.network_agent.Agent` :param network: Network instance :return: """ network = self._get_resource(_network.Network, network) agent = self._get_resource(_agent.Agent, agent) return agent.remove_agent_from_network(self, network.id) def network_hosting_dhcp_agents(self, network, **query): """A generator of DHCP agents hosted on a network. 
:param network: The instance of :class:`~openstack.network.v2.network.Network` :param dict query: Optional query parameters to be sent to limit the resources returned. :return: A generator of hosted DHCP agents """ net = self._get_resource(_network.Network, network) return self._list(_agent.NetworkHostingDHCPAgent, paginated=False, network_id=net.id, **query) def get_auto_allocated_topology(self, project=None): """Get the auto-allocated topology of a given tenant :param project: The value is the ID or name of a project :returns: The auto-allocated topology :rtype: :class:`~openstack.network.v2.\ auto_allocated_topology.AutoAllocatedTopology` """ # If project option is not given, grab project id from session if project is None: project = self.get_project_id() return self._get(_auto_allocated_topology.AutoAllocatedTopology, project) def delete_auto_allocated_topology(self, project=None, ignore_missing=False): """Delete auto-allocated topology :param project: The value is the ID or name of a project :param ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the topology does not exist. 
When set to ``True``, no exception will be raised when attempting to delete a nonexistent topology :returns: ``None`` """ # If project option is not given, grab project id from session if project is None: project = self.get_project_id() self._delete(_auto_allocated_topology.AutoAllocatedTopology, project, ignore_missing=ignore_missing) def validate_auto_allocated_topology(self, project=None): """Validate the resources for auto allocation :param project: The value is the ID or name of a project :returns: Whether all resources are correctly configured or not :rtype: :class:`~openstack.network.v2.\ auto_allocated_topology.ValidateTopology` """ # If project option is not given, grab project id from session if project is None: project = self.get_project_id() return self._get(_auto_allocated_topology.ValidateTopology, project=project, requires_id=False) def availability_zones(self, **query): """Return a generator of availability zones :param dict query: optional query parameters to be set to limit the returned resources. Valid parameters include: * ``name``: The name of an availability zone. * ``resource``: The type of resource for the availability zone. :returns: A generator of availability zone objects :rtype: :class:`~openstack.network.v2.availability_zone.AvailabilityZone` """ return self._list(availability_zone.AvailabilityZone, paginated=False) def find_extension(self, name_or_id, ignore_missing=True, **args): """Find a single extension :param name_or_id: The name or ID of an extension. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters.
:returns: One :class:`~openstack.network.v2.extension.Extension` or None """ return self._find(extension.Extension, name_or_id, ignore_missing=ignore_missing, **args) def extensions(self, **query): """Return a generator of extensions :param dict query: Optional query parameters to be sent to limit the resources being returned. Currently no parameter is supported. :returns: A generator of extension objects :rtype: :class:`~openstack.network.v2.extension.Extension` """ return self._list(extension.Extension, paginated=False, **query) def create_flavor(self, **attrs): """Create a new network service flavor from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.flavor.Flavor`, comprised of the properties on the Flavor class. :returns: The results of flavor creation :rtype: :class:`~openstack.network.v2.flavor.Flavor` """ return self._create(_flavor.Flavor, **attrs) def delete_flavor(self, flavor, ignore_missing=True): """Delete a network service flavor :param flavor: The value can be either the ID of a flavor or a :class:`~openstack.network.v2.flavor.Flavor` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the flavor does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent flavor. :returns: ``None`` """ self._delete(_flavor.Flavor, flavor, ignore_missing=ignore_missing) def find_flavor(self, name_or_id, ignore_missing=True, **args): """Find a single network service flavor :param name_or_id: The name or ID of a flavor. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods. such as query filters. 
:returns: One :class:`~openstack.network.v2.flavor.Flavor` or None """ return self._find(_flavor.Flavor, name_or_id, ignore_missing=ignore_missing, **args) def get_flavor(self, flavor): """Get a single network service flavor :param flavor: The value can be the ID of a flavor or a :class:`~openstack.network.v2.flavor.Flavor` instance. :returns: One :class:`~openstack.network.v2.flavor.Flavor` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_flavor.Flavor, flavor) def update_flavor(self, flavor, **attrs): """Update a network service flavor :param flavor: Either the id of a flavor or a :class:`~openstack.network.v2.flavor.Flavor` instance. :attrs kwargs: The attributes to update on the flavor represented by ``value``. :returns: The updated flavor :rtype: :class:`~openstack.network.v2.flavor.Flavor` """ return self._update(_flavor.Flavor, flavor, **attrs) def flavors(self, **query): """Return a generator of network service flavors :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters include: * ``description``: The description of a flavor. * ``is_enabled``: Whether a flavor is enabled. * ``name``: The name of a flavor. * ``service_type``: The service type to which a flavor applies. :returns: A generator of flavor objects :rtype: :class:`~openstack.network.v2.flavor.Flavor` """ return self._list(_flavor.Flavor, paginated=True, **query) def associate_flavor_with_service_profile(self, flavor, service_profile): """Associate network flavor with service profile. :param flavor: Either the id of a flavor or a :class:`~openstack.network.v2.flavor.Flavor` instance. :param service_profile: The value can be either the ID of a service profile or a :class:`~openstack.network.v2.service_profile.ServiceProfile` instance.
:return: """ flavor = self._get_resource(_flavor.Flavor, flavor) service_profile = self._get_resource( _service_profile.ServiceProfile, service_profile) return flavor.associate_flavor_with_service_profile( self, service_profile.id) def disassociate_flavor_from_service_profile( self, flavor, service_profile): """Disassociate network flavor from service profile. :param flavor: Either the id of a flavor or a :class:`~openstack.network.v2.flavor.Flavor` instance. :param service_profile: The value can be either the ID of a service profile or a :class:`~openstack.network.v2.service_profile.ServiceProfile` instance. :return: """ flavor = self._get_resource(_flavor.Flavor, flavor) service_profile = self._get_resource( _service_profile.ServiceProfile, service_profile) return flavor.disassociate_flavor_from_service_profile( self, service_profile.id) def create_ip(self, **attrs): """Create a new floating ip from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.floating_ip.FloatingIP`, comprised of the properties on the FloatingIP class. :returns: The results of floating ip creation :rtype: :class:`~openstack.network.v2.floating_ip.FloatingIP` """ return self._create(_floating_ip.FloatingIP, **attrs) def delete_ip(self, floating_ip, ignore_missing=True): """Delete a floating ip :param floating_ip: The value can be either the ID of a floating ip or a :class:`~openstack.network.v2.floating_ip.FloatingIP` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the floating ip does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent ip. 
:returns: ``None`` """ self._delete(_floating_ip.FloatingIP, floating_ip, ignore_missing=ignore_missing) def find_available_ip(self): """Find an available IP :returns: One :class:`~openstack.network.v2.floating_ip.FloatingIP` or None """ return _floating_ip.FloatingIP.find_available(self) def find_ip(self, name_or_id, ignore_missing=True, **args): """Find a single IP :param name_or_id: The name or ID of an IP. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods. such as query filters. :returns: One :class:`~openstack.network.v2.floating_ip.FloatingIP` or None """ return self._find(_floating_ip.FloatingIP, name_or_id, ignore_missing=ignore_missing, **args) def get_ip(self, floating_ip): """Get a single floating ip :param floating_ip: The value can be the ID of a floating ip or a :class:`~openstack.network.v2.floating_ip.FloatingIP` instance. :returns: One :class:`~openstack.network.v2.floating_ip.FloatingIP` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_floating_ip.FloatingIP, floating_ip) def ips(self, **query): """Return a generator of ips :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``description``: The description of a floating IP. * ``fixed_ip_address``: The fixed IP address associated with a floating IP address. * ``floating_ip_address``: The IP address of a floating IP. * ``floating_network_id``: The ID of the network associated with a floating IP. * ``port_id``: The ID of the port to which a floating IP is associated. * ``project_id``: The ID of the project a floating IP is associated with. * ``router_id``: The ID of an associated router. 
* ``status``: The status of a floating IP, which can be ``ACTIVE`` or ``DOWN``. :returns: A generator of floating IP objects :rtype: :class:`~openstack.network.v2.floating_ip.FloatingIP` """ return self._list(_floating_ip.FloatingIP, paginated=False, **query) def update_ip(self, floating_ip, **attrs): """Update a ip :param floating_ip: Either the id of a ip or a :class:`~openstack.network.v2.floating_ip.FloatingIP` instance. :param dict attrs: The attributes to update on the ip represented by ``value``. :returns: The updated ip :rtype: :class:`~openstack.network.v2.floating_ip.FloatingIP` """ return self._update(_floating_ip.FloatingIP, floating_ip, **attrs) def create_health_monitor(self, **attrs): """Create a new health monitor from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.health_monitor.HealthMonitor`, comprised of the properties on the HealthMonitor class. :returns: The results of health monitor creation :rtype: :class:`~openstack.network.v2.health_monitor.HealthMonitor` """ return self._create(_health_monitor.HealthMonitor, **attrs) def delete_health_monitor(self, health_monitor, ignore_missing=True): """Delete a health monitor :param health_monitor: The value can be either the ID of a health monitor or a :class:`~openstack.network.v2.health_monitor.HealthMonitor` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the health monitor does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent health monitor. :returns: ``None`` """ self._delete(_health_monitor.HealthMonitor, health_monitor, ignore_missing=ignore_missing) def find_health_monitor(self, name_or_id, ignore_missing=True, **args): """Find a single health monitor :param name_or_id: The name or ID of a health monitor. 
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.health_monitor. HealthMonitor` or None """ return self._find(_health_monitor.HealthMonitor, name_or_id, ignore_missing=ignore_missing, **args) def get_health_monitor(self, health_monitor): """Get a single health monitor :param health_monitor: The value can be the ID of a health monitor or a :class:`~openstack.network.v2.health_monitor.HealthMonitor` instance. :returns: One :class:`~openstack.network.v2.health_monitor.HealthMonitor` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_health_monitor.HealthMonitor, health_monitor) def health_monitors(self, **query): """Return a generator of health monitors :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``delay``: The time in milliseconds between sending probes. * ``expected_codes``: The expected HTTP codes for a passing HTTP(S) monitor. * ``http_method``: The HTTP method a monitor uses for requests. * ``is_admin_state_up``: The administrative state of a health monitor. * ``max_retries``: The maximum consecutive health probe attempts. * ``project_id``: The ID of the project this health monitor is associated with. * ``timeout``: The maximum number of milliseconds for a monitor to wait for a connection to be established before it times out. * ``type``: The type of probe sent by the load balancer for health checks, which can be ``PING``, ``TCP``, ``HTTP`` or ``HTTPS``. * ``url_path``: The path portion of a URI that will be probed.
:returns: A generator of health monitor objects :rtype: :class:`~openstack.network.v2.health_monitor.HealthMonitor` """ return self._list(_health_monitor.HealthMonitor, paginated=False, **query) def update_health_monitor(self, health_monitor, **attrs): """Update a health monitor :param health_monitor: Either the id of a health monitor or a :class:`~openstack.network.v2.health_monitor. HealthMonitor` instance. :param dict attrs: The attributes to update on the health monitor represented by ``value``. :returns: The updated health monitor :rtype: :class:`~openstack.network.v2.health_monitor.HealthMonitor` """ return self._update(_health_monitor.HealthMonitor, health_monitor, **attrs) def create_listener(self, **attrs): """Create a new listener from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.listener.Listener`, comprised of the properties on the Listener class. :returns: The results of listener creation :rtype: :class:`~openstack.network.v2.listener.Listener` """ return self._create(_listener.Listener, **attrs) def delete_listener(self, listener, ignore_missing=True): """Delete a listener :param listener: The value can be either the ID of a listener or a :class:`~openstack.network.v2.listener.Listener` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the listener does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent listener. :returns: ``None`` """ self._delete(_listener.Listener, listener, ignore_missing=ignore_missing) def find_listener(self, name_or_id, ignore_missing=True, **args): """Find a single listener :param name_or_id: The name or ID of a listener. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist.
When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.listener.Listener` or None """ return self._find(_listener.Listener, name_or_id, ignore_missing=ignore_missing, **args) def get_listener(self, listener): """Get a single listener :param listener: The value can be the ID of a listener or a :class:`~openstack.network.v2.listener.Listener` instance. :returns: One :class:`~openstack.network.v2.listener.Listener` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_listener.Listener, listener) def listeners(self, **query): """Return a generator of listeners :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``connection_limit``: The maximum number of connections permitted for the load-balancer. * ``default_pool_id``: The ID of the default pool. * ``default_tls_container_ref``: A reference to a container of TLS secrets. * ``description``: The description of a listener. * ``is_admin_state_up``: The administrative state of the listener. * ``name``: The name of a listener. * ``project_id``: The ID of the project associated with a listener. * ``protocol``: The protocol of the listener. * ``protocol_port``: Port the listener will listen to. :returns: A generator of listener objects :rtype: :class:`~openstack.network.v2.listener.Listener` """ return self._list(_listener.Listener, paginated=False, **query) def update_listener(self, listener, **attrs): """Update a listener :param listener: Either the id of a listener or a :class:`~openstack.network.v2.listener.Listener` instance. :param dict attrs: The attributes to update on the listener represented by ``listener``. 
:returns: The updated listener :rtype: :class:`~openstack.network.v2.listener.Listener` """ return self._update(_listener.Listener, listener, **attrs) def create_load_balancer(self, **attrs): """Create a new load balancer from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.load_balancer.LoadBalancer`, comprised of the properties on the LoadBalancer class. :returns: The results of load balancer creation :rtype: :class:`~openstack.network.v2.load_balancer.LoadBalancer` """ return self._create(_load_balancer.LoadBalancer, **attrs) def delete_load_balancer(self, load_balancer, ignore_missing=True): """Delete a load balancer :param load_balancer: The value can be the ID of a load balancer or a :class:`~openstack.network.v2.load_balancer.LoadBalancer` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the load balancer does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent load balancer. :returns: ``None`` """ self._delete(_load_balancer.LoadBalancer, load_balancer, ignore_missing=ignore_missing) def find_load_balancer(self, name_or_id, ignore_missing=True, **args): """Find a single load balancer :param name_or_id: The name or ID of a load balancer. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. 
:returns: One :class:`~openstack.network.v2.load_balancer.LoadBalancer` or None """ return self._find(_load_balancer.LoadBalancer, name_or_id, ignore_missing=ignore_missing, **args) def get_load_balancer(self, load_balancer): """Get a single load balancer :param load_balancer: The value can be the ID of a load balancer or a :class:`~openstack.network.v2.load_balancer.LoadBalancer` instance. :returns: One :class:`~openstack.network.v2.load_balancer.LoadBalancer` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_load_balancer.LoadBalancer, load_balancer) def load_balancers(self, **query): """Return a generator of load balancers :param dict query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of load balancer objects :rtype: :class:`~openstack.network.v2.load_balancer.LoadBalancer` """ return self._list(_load_balancer.LoadBalancer, paginated=False, **query) def update_load_balancer(self, load_balancer, **attrs): """Update a load balancer :param load_balancer: Either the id of a load balancer or a :class:`~openstack.network.v2.load_balancer.LoadBalancer` instance. :param dict attrs: The attributes to update on the load balancer represented by ``load_balancer``. :returns: The updated load balancer :rtype: :class:`~openstack.network.v2.load_balancer.LoadBalancer` """ return self._update(_load_balancer.LoadBalancer, load_balancer, **attrs) def create_metering_label(self, **attrs): """Create a new metering label from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.metering_label.MeteringLabel`, comprised of the properties on the MeteringLabel class. 
:returns: The results of metering label creation :rtype: :class:`~openstack.network.v2.metering_label.MeteringLabel` """ return self._create(_metering_label.MeteringLabel, **attrs) def delete_metering_label(self, metering_label, ignore_missing=True): """Delete a metering label :param metering_label: The value can be either the ID of a metering label or a :class:`~openstack.network.v2.metering_label.MeteringLabel` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the metering label does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent metering label. :returns: ``None`` """ self._delete(_metering_label.MeteringLabel, metering_label, ignore_missing=ignore_missing) def find_metering_label(self, name_or_id, ignore_missing=True, **args): """Find a single metering label :param name_or_id: The name or ID of a metering label. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.metering_label. MeteringLabel` or None """ return self._find(_metering_label.MeteringLabel, name_or_id, ignore_missing=ignore_missing, **args) def get_metering_label(self, metering_label): """Get a single metering label :param metering_label: The value can be the ID of a metering label or a :class:`~openstack.network.v2.metering_label.MeteringLabel` instance. :returns: One :class:`~openstack.network.v2.metering_label.MeteringLabel` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. 
""" return self._get(_metering_label.MeteringLabel, metering_label) def metering_labels(self, **query): """Return a generator of metering labels :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``description``: Description of a metering label. * ``name``: Name of a metering label. * ``is_shared``: Boolean indicating whether a metering label is shared. * ``project_id``: The ID of the project a metering label is associated with. :returns: A generator of metering label objects :rtype: :class:`~openstack.network.v2.metering_label.MeteringLabel` """ return self._list(_metering_label.MeteringLabel, paginated=False, **query) def update_metering_label(self, metering_label, **attrs): """Update a metering label :param metering_label: Either the id of a metering label or a :class:`~openstack.network.v2.metering_label. MeteringLabel` instance. :param dict attrs: The attributes to update on the metering label represented by ``metering_label``. :returns: The updated metering label :rtype: :class:`~openstack.network.v2.metering_label.MeteringLabel` """ return self._update(_metering_label.MeteringLabel, metering_label, **attrs) def create_metering_label_rule(self, **attrs): """Create a new metering label rule from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.metering_label_rule.\ MeteringLabelRule`, comprised of the properties on the MeteringLabelRule class. :returns: The results of metering label rule creation :rtype: :class:`~openstack.network.v2.metering_label_rule.\ MeteringLabelRule` """ return self._create(_metering_label_rule.MeteringLabelRule, **attrs) def delete_metering_label_rule(self, metering_label_rule, ignore_missing=True): """Delete a metering label rule :param metering_label_rule: The value can be either the ID of a metering label rule or a :class:`~openstack.network.v2.metering_label_rule.\ MeteringLabelRule` instance. 
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the metering label rule does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent metering label rule. :returns: ``None`` """ self._delete(_metering_label_rule.MeteringLabelRule, metering_label_rule, ignore_missing=ignore_missing) def find_metering_label_rule(self, name_or_id, ignore_missing=True, **args): """Find a single metering label rule :param name_or_id: The name or ID of a metering label rule. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.metering_label_rule. MeteringLabelRule` or None """ return self._find(_metering_label_rule.MeteringLabelRule, name_or_id, ignore_missing=ignore_missing, **args) def get_metering_label_rule(self, metering_label_rule): """Get a single metering label rule :param metering_label_rule: The value can be the ID of a metering label rule or a :class:`~openstack.network.v2.metering_label_rule.\ MeteringLabelRule` instance. :returns: One :class:`~openstack.network.v2.metering_label_rule.\ MeteringLabelRule` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_metering_label_rule.MeteringLabelRule, metering_label_rule) def metering_label_rules(self, **query): """Return a generator of metering label rules :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``direction``: The direction in which the metering label rule is applied. * ``metering_label_id``: The ID of a metering label this rule is associated with. 
* ``project_id``: The ID of the project the metering label rule is associated with. * ``remote_ip_prefix``: The remote IP prefix to be associated with this metering label rule. :returns: A generator of metering label rule objects :rtype: :class:`~openstack.network.v2.metering_label_rule. MeteringLabelRule` """ return self._list(_metering_label_rule.MeteringLabelRule, paginated=False, **query) def update_metering_label_rule(self, metering_label_rule, **attrs): """Update a metering label rule :param metering_label_rule: Either the id of a metering label rule or a :class:`~openstack.network.v2.metering_label_rule. MeteringLabelRule` instance. :param dict attrs: The attributes to update on the metering label rule represented by ``metering_label_rule``. :returns: The updated metering label rule :rtype: :class:`~openstack.network.v2.metering_label_rule. MeteringLabelRule` """ return self._update(_metering_label_rule.MeteringLabelRule, metering_label_rule, **attrs) def create_network(self, **attrs): """Create a new network from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.network.Network`, comprised of the properties on the Network class. :returns: The results of network creation :rtype: :class:`~openstack.network.v2.network.Network` """ return self._create(_network.Network, **attrs) def delete_network(self, network, ignore_missing=True): """Delete a network :param network: The value can be either the ID of a network or a :class:`~openstack.network.v2.network.Network` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the network does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent network. 
:returns: ``None`` """ self._delete(_network.Network, network, ignore_missing=ignore_missing) def find_network(self, name_or_id, ignore_missing=True, **args): """Find a single network :param name_or_id: The name or ID of a network. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.network.Network` or None """ return self._find(_network.Network, name_or_id, ignore_missing=ignore_missing, **args) def get_network(self, network): """Get a single network :param network: The value can be the ID of a network or a :class:`~openstack.network.v2.network.Network` instance. :returns: One :class:`~openstack.network.v2.network.Network` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_network.Network, network) def networks(self, **query): """Return a generator of networks :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. Available parameters include: * ``description``: The network description. * ``ipv4_address_scope_id``: The ID of the IPv4 address scope for the network. * ``ipv6_address_scope_id``: The ID of the IPv6 address scope for the network. * ``is_admin_state_up``: Network administrative state * ``is_port_security_enabled``: The port security status. * ``is_router_external``: Network is external or not. * ``is_shared``: Whether the network is shared across projects. * ``name``: The name of the network. 
* ``status``: Network status * ``project_id``: Owner tenant ID * ``provider_network_type``: Network physical mechanism * ``provider_physical_network``: Physical network * ``provider_segmentation_id``: VLAN ID for VLAN networks or Tunnel ID for GENEVE/GRE/VXLAN networks :returns: A generator of network objects :rtype: :class:`~openstack.network.v2.network.Network` """ return self._list(_network.Network, paginated=False, **query) def update_network(self, network, **attrs): """Update a network :param network: Either the id of a network or an instance of type :class:`~openstack.network.v2.network.Network`. :param dict attrs: The attributes to update on the network represented by ``network``. :returns: The updated network :rtype: :class:`~openstack.network.v2.network.Network` """ return self._update(_network.Network, network, **attrs) def find_network_ip_availability(self, name_or_id, ignore_missing=True, **args): """Find IP availability of a network :param name_or_id: The name or ID of a network. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.network_ip_availability. NetworkIPAvailability` or None """ return self._find(network_ip_availability.NetworkIPAvailability, name_or_id, ignore_missing=ignore_missing, **args) def get_network_ip_availability(self, network): """Get IP availability of a network :param network: The value can be the ID of a network or a :class:`~openstack.network.v2.network.Network` instance. :returns: One :class:`~openstack.network.v2.network_ip_availability. NetworkIPAvailability` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. 
""" return self._get(network_ip_availability.NetworkIPAvailability, network) def network_ip_availabilities(self, **query): """Return a generator of network ip availabilities :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. Available parameters include: * ``ip_version``: IP version of the network * ``network_id``: ID of network to use when listening network IP availability. * ``network_name``: The name of the network for the particular network IP availability. * ``project_id``: Owner tenant ID :returns: A generator of network ip availability objects :rtype: :class:`~openstack.network.v2.network_ip_availability. NetworkIPAvailability` """ return self._list(network_ip_availability.NetworkIPAvailability, paginated=False, **query) def create_pool(self, **attrs): """Create a new pool from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.pool.Pool`, comprised of the properties on the Pool class. :returns: The results of pool creation :rtype: :class:`~openstack.network.v2.pool.Pool` """ return self._create(_pool.Pool, **attrs) def delete_pool(self, pool, ignore_missing=True): """Delete a pool :param pool: The value can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the pool does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent pool. :returns: ``None`` """ self._delete(_pool.Pool, pool, ignore_missing=ignore_missing) def find_pool(self, name_or_id, ignore_missing=True, **args): """Find a single pool :param name_or_id: The name or ID of a pool. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. 
When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.pool.Pool` or None """ return self._find(_pool.Pool, name_or_id, ignore_missing=ignore_missing, **args) def get_pool(self, pool): """Get a single pool :param pool: The value can be the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance. :returns: One :class:`~openstack.network.v2.pool.Pool` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_pool.Pool, pool) def pools(self, **query): """Return a generator of pools :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``description``: The description for the pool. * ``is_admin_state_up``: The administrative state of the pool. * ``lb_algorithm``: The load-balancer algorithm used, which is one of ``round-robin``, ``least-connections`` and so on. * ``name``: The name of the pool. * ``project_id``: The ID of the project the pool is associated with. * ``protocol``: The protocol used by the pool, which is one of ``TCP``, ``HTTP`` or ``HTTPS``. * ``provider``: The name of the provider of the load balancer service. * ``subnet_id``: The subnet on which the members of the pool are located. * ``virtual_ip_id``: The ID of the virtual IP used. :returns: A generator of pool objects :rtype: :class:`~openstack.network.v2.pool.Pool` """ return self._list(_pool.Pool, paginated=False, **query) def update_pool(self, pool, **attrs): """Update a pool :param pool: Either the id of a pool or a :class:`~openstack.network.v2.pool.Pool` instance. :param dict attrs: The attributes to update on the pool represented by ``pool``. 
:returns: The updated pool :rtype: :class:`~openstack.network.v2.pool.Pool` """ return self._update(_pool.Pool, pool, **attrs) def create_pool_member(self, pool, **attrs): """Create a new pool member from attributes :param pool: The pool can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance that the member will be created in. :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.pool_member.PoolMember`, comprised of the properties on the PoolMember class. :returns: The results of pool member creation :rtype: :class:`~openstack.network.v2.pool_member.PoolMember` """ poolobj = self._get_resource(_pool.Pool, pool) return self._create(_pool_member.PoolMember, pool_id=poolobj.id, **attrs) def delete_pool_member(self, pool_member, pool, ignore_missing=True): """Delete a pool member :param pool_member: The member can be either the ID of a pool member or a :class:`~openstack.network.v2.pool_member.PoolMember` instance. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance that the member belongs to. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the pool member does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent pool member. :returns: ``None`` """ poolobj = self._get_resource(_pool.Pool, pool) self._delete(_pool_member.PoolMember, pool_member, ignore_missing=ignore_missing, pool_id=poolobj.id) def find_pool_member(self, name_or_id, pool, ignore_missing=True, **args): """Find a single pool member :param str name_or_id: The name or ID of a pool member. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance that the member belongs to. 
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.pool_member.PoolMember` or None """ poolobj = self._get_resource(_pool.Pool, pool) return self._find(_pool_member.PoolMember, name_or_id, ignore_missing=ignore_missing, pool_id=poolobj.id, **args) def get_pool_member(self, pool_member, pool): """Get a single pool member :param pool_member: The member can be the ID of a pool member or a :class:`~openstack.network.v2.pool_member.PoolMember` instance. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance that the member belongs to. :returns: One :class:`~openstack.network.v2.pool_member.PoolMember` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ poolobj = self._get_resource(_pool.Pool, pool) return self._get(_pool_member.PoolMember, pool_member, pool_id=poolobj.id) def pool_members(self, pool, **query): """Return a generator of pool members :param pool: The pool can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance that the member belongs to. :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``address``: The IP address of the pool member. * ``is_admin_state_up``: The administrative state of the pool member. * ``name``: Name of the pool member. * ``project_id``: The ID of the project this pool member is associated with. * ``protocol_port``: The port on which the application is hosted. * ``subnet_id``: Subnet ID in which to access this pool member. 
* ``weight``: A positive integer value that indicates the relative portion of traffic that this member should receive from the pool. :returns: A generator of pool member objects :rtype: :class:`~openstack.network.v2.pool_member.PoolMember` """ poolobj = self._get_resource(_pool.Pool, pool) return self._list(_pool_member.PoolMember, paginated=False, pool_id=poolobj.id, **query) def update_pool_member(self, pool_member, pool, **attrs): """Update a pool member :param pool_member: Either the ID of a pool member or a :class:`~openstack.network.v2.pool_member.PoolMember` instance. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.network.v2.pool.Pool` instance that the member belongs to. :param dict attrs: The attributes to update on the pool member represented by ``pool_member``. :returns: The updated pool member :rtype: :class:`~openstack.network.v2.pool_member.PoolMember` """ poolobj = self._get_resource(_pool.Pool, pool) return self._update(_pool_member.PoolMember, pool_member, pool_id=poolobj.id, **attrs) def create_port(self, **attrs): """Create a new port from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.port.Port`, comprised of the properties on the Port class. :returns: The results of port creation :rtype: :class:`~openstack.network.v2.port.Port` """ return self._create(_port.Port, **attrs) def delete_port(self, port, ignore_missing=True): """Delete a port :param port: The value can be either the ID of a port or a :class:`~openstack.network.v2.port.Port` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent port. 
:returns: ``None`` """ self._delete(_port.Port, port, ignore_missing=ignore_missing) def find_port(self, name_or_id, ignore_missing=True, **args): """Find a single port :param name_or_id: The name or ID of a port. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.port.Port` or None """ return self._find(_port.Port, name_or_id, ignore_missing=ignore_missing, **args) def get_port(self, port): """Get a single port :param port: The value can be the ID of a port or a :class:`~openstack.network.v2.port.Port` instance. :returns: One :class:`~openstack.network.v2.port.Port` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_port.Port, port) def ports(self, **query): """Return a generator of ports :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. Available parameters include: * ``description``: The port description. * ``device_id``: Port device ID. * ``device_owner``: Port device owner (e.g. ``network:dhcp``). * ``ip_address``: IP addresses of an allowed address pair. * ``is_admin_state_up``: The administrative state of the port. * ``is_port_security_enabled``: The port security status. * ``mac_address``: Port MAC address. * ``name``: The port name. * ``network_id``: ID of network that owns the ports. * ``project_id``: The ID of the project that owns the network. * ``status``: The port status. Value is ``ACTIVE`` or ``DOWN``. * ``subnet_id``: The ID of the subnet. 
:returns: A generator of port objects :rtype: :class:`~openstack.network.v2.port.Port` """ return self._list(_port.Port, paginated=False, **query) def update_port(self, port, **attrs): """Update a port :param port: Either the id of a port or a :class:`~openstack.network.v2.port.Port` instance. :param dict attrs: The attributes to update on the port represented by ``port``. :returns: The updated port :rtype: :class:`~openstack.network.v2.port.Port` """ return self._update(_port.Port, port, **attrs) def add_ip_to_port(self, port, ip): """Attach an IP to a port by setting the IP's ``port_id``.""" ip['port_id'] = port.id return ip.update(self) def remove_ip_from_port(self, ip): """Detach an IP from the port it is currently bound to.""" ip['port_id'] = None return ip.update(self) def get_subnet_ports(self, subnet_id): """Return the ports that have a fixed IP on the given subnet.""" result = [] ports = self.ports() for port in ports: for fixed_ip in port.fixed_ips: if fixed_ip['subnet_id'] == subnet_id: result.append(port) return result def create_qos_bandwidth_limit_rule(self, qos_policy, **attrs): """Create a new bandwidth limit rule :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2. qos_bandwidth_limit_rule.QoSBandwidthLimitRule`, comprised of the properties on the QoSBandwidthLimitRule class. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :returns: The results of resource creation :rtype: :class:`~openstack.network.v2.qos_bandwidth_limit_rule. QoSBandwidthLimitRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._create(_qos_bandwidth_limit_rule.QoSBandwidthLimitRule, qos_policy_id=policy.id, **attrs) def delete_qos_bandwidth_limit_rule(self, qos_rule, qos_policy, ignore_missing=True): """Delete a bandwidth limit rule :param qos_rule: The value can be either the ID of a bandwidth limit rule or a :class:`~openstack.network.v2. qos_bandwidth_limit_rule.QoSBandwidthLimitRule` instance. 
:param qos_policy: The value can be the ID of the QoS policy that the rule belongs to or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent bandwidth limit rule. :returns: ``None`` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) self._delete(_qos_bandwidth_limit_rule.QoSBandwidthLimitRule, qos_rule, ignore_missing=ignore_missing, qos_policy_id=policy.id) def find_qos_bandwidth_limit_rule(self, qos_rule_id, qos_policy, ignore_missing=True, **args): """Find a bandwidth limit rule :param qos_rule_id: The ID of a bandwidth limit rule. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.qos_bandwidth_limit_rule. QoSBandwidthLimitRule` or None """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._find(_qos_bandwidth_limit_rule.QoSBandwidthLimitRule, qos_rule_id, ignore_missing=ignore_missing, qos_policy_id=policy.id, **args) def get_qos_bandwidth_limit_rule(self, qos_rule, qos_policy): """Get a single bandwidth limit rule :param qos_rule: The value can be the ID of a bandwidth limit rule or a :class:`~openstack.network.v2. qos_bandwidth_limit_rule.QoSBandwidthLimitRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to or a :class:`~openstack.network.v2. 
qos_policy.QoSPolicy` instance. :returns: One :class:`~openstack.network.v2.qos_bandwidth_limit_rule. QoSBandwidthLimitRule` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._get(_qos_bandwidth_limit_rule.QoSBandwidthLimitRule, qos_rule, qos_policy_id=policy.id) def qos_bandwidth_limit_rules(self, qos_policy, **query): """Return a generator of bandwidth limit rules :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of bandwidth limit rule objects :rtype: :class:`~openstack.network.v2.qos_bandwidth_limit_rule. QoSBandwidthLimitRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._list(_qos_bandwidth_limit_rule.QoSBandwidthLimitRule, paginated=False, qos_policy_id=policy.id, **query) def update_qos_bandwidth_limit_rule(self, qos_rule, qos_policy, **attrs): """Update a bandwidth limit rule :param qos_rule: Either the id of a bandwidth limit rule or a :class:`~openstack.network.v2. qos_bandwidth_limit_rule.QoSBandwidthLimitRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :param dict attrs: The attributes to update on the bandwidth limit rule represented by ``qos_rule``. :returns: The updated bandwidth limit rule :rtype: :class:`~openstack.network.v2.qos_bandwidth_limit_rule. 
QoSBandwidthLimitRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._update(_qos_bandwidth_limit_rule.QoSBandwidthLimitRule, qos_rule, qos_policy_id=policy.id, **attrs) def create_qos_dscp_marking_rule(self, qos_policy, **attrs): """Create a new QoS DSCP marking rule :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule`, comprised of the properties on the QoSDSCPMarkingRule class. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :returns: The results of rule creation :rtype: :class:`~openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._create(_qos_dscp_marking_rule.QoSDSCPMarkingRule, qos_policy_id=policy.id, **attrs) def delete_qos_dscp_marking_rule(self, qos_rule, qos_policy, ignore_missing=True): """Delete a QoS DSCP marking rule :param qos_rule: The value can be either the ID of a DSCP marking rule or a :class:`~openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent DSCP marking rule.
:returns: ``None`` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) self._delete(_qos_dscp_marking_rule.QoSDSCPMarkingRule, qos_rule, ignore_missing=ignore_missing, qos_policy_id=policy.id) def find_qos_dscp_marking_rule(self, qos_rule_id, qos_policy, ignore_missing=True, **args): """Find a QoS DSCP marking rule :param qos_rule_id: The ID of a QoS DSCP marking rule. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule` or None """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._find(_qos_dscp_marking_rule.QoSDSCPMarkingRule, qos_rule_id, ignore_missing=ignore_missing, qos_policy_id=policy.id, **args) def get_qos_dscp_marking_rule(self, qos_rule, qos_policy): """Get a single QoS DSCP marking rule :param qos_rule: The value can be the ID of a DSCP marking rule or a :class:`~openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :returns: One :class:`~openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found.
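The ``find_*`` and ``get_*`` methods above follow one consistent contract: ``get`` raises ``ResourceNotFound`` for a missing resource, while ``find`` returns ``None`` unless called with ``ignore_missing=False``. A minimal stand-alone sketch of that contract (the exception class and lookup functions here are simplified stand-ins, not the real SDK objects):

```python
class ResourceNotFound(Exception):
    """Stand-in for openstack.exceptions.ResourceNotFound."""


def get_rule(rules, rule_id):
    # get_* always raises when the resource does not exist.
    try:
        return rules[rule_id]
    except KeyError:
        raise ResourceNotFound(rule_id)


def find_rule(rules, rule_id, ignore_missing=True):
    # find_* returns None for a missing resource, unless the caller
    # passed ignore_missing=False, in which case it raises like get_*.
    if rule_id in rules:
        return rules[rule_id]
    if ignore_missing:
        return None
    raise ResourceNotFound(rule_id)
```

Callers that treat "not there" as a normal outcome use ``find``; callers that require the resource to exist use ``get`` and let the exception propagate.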
""" policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._get(_qos_dscp_marking_rule.QoSDSCPMarkingRule, qos_rule, qos_policy_id=policy.id) def qos_dscp_marking_rules(self, qos_policy, **query): """Return a generator of QoS DSCP marking rules :param qos_policy: The value can be the ID of the QoS policy that the rule belongs or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of QoS DSCP marking rule objects :rtype: :class:`~openstack.network.v2.qos_dscp_marking_rule. QoSDSCPMarkingRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._list(_qos_dscp_marking_rule.QoSDSCPMarkingRule, paginated=False, qos_policy_id=policy.id, **query) def update_qos_dscp_marking_rule(self, qos_rule, qos_policy, **attrs): """Update a QoS DSCP marking rule :param qos_rule: Either the id of a minimum bandwidth rule or a :class:`~openstack.network.v2.qos_dscp_marking_rule. QoSDSCPMarkingRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs or a :class:`~openstack.network.v2. qos_policy.QoSPolicy` instance. :attrs kwargs: The attributes to update on the QoS DSCP marking rule represented by ``value``. :returns: The updated QoS DSCP marking rule :rtype: :class:`~openstack.network.v2.qos_dscp_marking_rule. QoSDSCPMarkingRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._update(_qos_dscp_marking_rule.QoSDSCPMarkingRule, qos_rule, qos_policy_id=policy.id, **attrs) def create_qos_minimum_bandwidth_rule(self, qos_policy, **attrs): """Create a new minimum bandwidth rule :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2. qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule`, comprised of the properties on the QoSMinimumBandwidthRule class. 
:param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :returns: The results of resource creation :rtype: :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._create( _qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule, qos_policy_id=policy.id, **attrs) def delete_qos_minimum_bandwidth_rule(self, qos_rule, qos_policy, ignore_missing=True): """Delete a minimum bandwidth rule :param qos_rule: The value can be either the ID of a minimum bandwidth rule or a :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent minimum bandwidth rule. :returns: ``None`` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) self._delete(_qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule, qos_rule, ignore_missing=ignore_missing, qos_policy_id=policy.id) def find_qos_minimum_bandwidth_rule(self, qos_rule_id, qos_policy, ignore_missing=True, **args): """Find a minimum bandwidth rule :param qos_rule_id: The ID of a minimum bandwidth rule. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource.
:param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` or None """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._find(_qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule, qos_rule_id, ignore_missing=ignore_missing, qos_policy_id=policy.id, **args) def get_qos_minimum_bandwidth_rule(self, qos_rule, qos_policy): """Get a single minimum bandwidth rule :param qos_rule: The value can be the ID of a minimum bandwidth rule or a :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :returns: One :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._get(_qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule, qos_rule, qos_policy_id=policy.id) def qos_minimum_bandwidth_rules(self, qos_policy, **query): """Return a generator of minimum bandwidth rules :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of minimum bandwidth rule objects :rtype: :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.
QoSMinimumBandwidthRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._list(_qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule, paginated=False, qos_policy_id=policy.id, **query) def update_qos_minimum_bandwidth_rule(self, qos_rule, qos_policy, **attrs): """Update a minimum bandwidth rule :param qos_rule: Either the ID of a minimum bandwidth rule or a :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` instance. :param qos_policy: The value can be the ID of the QoS policy that the rule belongs to, or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param dict attrs: The attributes to update on the minimum bandwidth rule represented by ``qos_rule``. :returns: The updated minimum bandwidth rule :rtype: :class:`~openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule` """ policy = self._get_resource(_qos_policy.QoSPolicy, qos_policy) return self._update(_qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule, qos_rule, qos_policy_id=policy.id, **attrs) def create_qos_policy(self, **attrs): """Create a new QoS policy from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.qos_policy.QoSPolicy`, comprised of the properties on the QoSPolicy class. :returns: The results of QoS policy creation :rtype: :class:`~openstack.network.v2.qos_policy.QoSPolicy` """ return self._create(_qos_policy.QoSPolicy, **attrs) def delete_qos_policy(self, qos_policy, ignore_missing=True): """Delete a QoS policy :param qos_policy: The value can be either the ID of a QoS policy or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the QoS policy does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent QoS policy.
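Every rule method above accepts its ``qos_policy`` argument as either a bare ID or a ``QoSPolicy`` instance, then normalizes it (``policy = self._get_resource(...)``; ``policy.id``) to scope the request. A hedged sketch of that value-or-instance convention, using a simplified stand-in class rather than the real SDK resource:

```python
class QoSPolicy:
    """Simplified stand-in for openstack.network.v2.qos_policy.QoSPolicy."""

    def __init__(self, id):
        self.id = id


def resolve_policy_id(qos_policy):
    # Accept either a bare ID string or a QoSPolicy instance, as the
    # qos_policy parameters above do, and return the ID used to scope
    # the rule request (qos_policy_id=...).
    if isinstance(qos_policy, QoSPolicy):
        return qos_policy.id
    return qos_policy
```

This is why callers can pass whatever they already have on hand, a policy object from an earlier call or just its ID string, without converting first.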
:returns: ``None`` """ self._delete(_qos_policy.QoSPolicy, qos_policy, ignore_missing=ignore_missing) def find_qos_policy(self, name_or_id, ignore_missing=True, **args): """Find a single QoS policy :param name_or_id: The name or ID of a QoS policy. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.qos_policy.QoSPolicy` or None """ return self._find(_qos_policy.QoSPolicy, name_or_id, ignore_missing=ignore_missing, **args) def get_qos_policy(self, qos_policy): """Get a single QoS policy :param qos_policy: The value can be the ID of a QoS policy or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance. :returns: One :class:`~openstack.network.v2.qos_policy.QoSPolicy` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_qos_policy.QoSPolicy, qos_policy) def qos_policies(self, **query): """Return a generator of QoS policies :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``description``: The description of a QoS policy. * ``is_shared``: Whether the policy is shared among projects. * ``name``: The name of a QoS policy. * ``project_id``: The ID of the project that owns the network. :returns: A generator of QoS policy objects :rtype: :class:`~openstack.network.v2.qos_policy.QoSPolicy` """ return self._list(_qos_policy.QoSPolicy, paginated=False, **query) def update_qos_policy(self, qos_policy, **attrs): """Update a QoS policy :param qos_policy: Either the ID of a QoS policy or a :class:`~openstack.network.v2.qos_policy.QoSPolicy` instance.
:param dict attrs: The attributes to update on the QoS policy represented by ``qos_policy``. :returns: The updated QoS policy :rtype: :class:`~openstack.network.v2.qos_policy.QoSPolicy` """ return self._update(_qos_policy.QoSPolicy, qos_policy, **attrs) def find_qos_rule_type(self, rule_type_name, ignore_missing=True): """Find a single QoS rule type :param rule_type_name: The name of a QoS rule type. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.network.v2.qos_rule_type.QoSRuleType` or None """ return self._find(_qos_rule_type.QoSRuleType, rule_type_name, ignore_missing=ignore_missing) def get_qos_rule_type(self, qos_rule_type): """Get details about a single QoS rule type :param qos_rule_type: The value can be the name of a QoS policy rule type or a :class:`~openstack.network.v2.qos_rule_type.QoSRuleType` instance. :returns: One :class:`~openstack.network.v2.qos_rule_type.QoSRuleType` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_qos_rule_type.QoSRuleType, qos_rule_type) def qos_rule_types(self, **query): """Return a generator of QoS rule types :param dict query: Optional query parameters to be sent to limit the resources returned. Valid parameters include: * ``type``: The type of the QoS rule type. :returns: A generator of QoS rule type objects :rtype: :class:`~openstack.network.v2.qos_rule_type.QoSRuleType` """ return self._list(_qos_rule_type.QoSRuleType, paginated=False, **query) def delete_quota(self, quota, ignore_missing=True): """Delete a quota (i.e. reset to the default quota) :param quota: The value can be either the ID of a quota or a :class:`~openstack.network.v2.quota.Quota` instance. The ID of a quota is the same as the project ID for the quota.
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the quota does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent quota. :returns: ``None`` """ self._delete(_quota.Quota, quota, ignore_missing=ignore_missing) def get_quota(self, quota, details=False): """Get a quota :param quota: The value can be the ID of a quota or a :class:`~openstack.network.v2.quota.Quota` instance. The ID of a quota is the same as the project ID for the quota. :param details: If set to True, details about quota usage will be returned. :returns: One :class:`~openstack.network.v2.quota.Quota` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ if details: quota_obj = self._get_resource(_quota.Quota, quota) quota = self._get(_quota.QuotaDetails, project=quota_obj.id, requires_id=False) else: quota = self._get(_quota.Quota, quota) return quota def get_quota_default(self, quota): """Get a default quota :param quota: The value can be the ID of a default quota or a :class:`~openstack.network.v2.quota.QuotaDefault` instance. The ID of a default quota is the same as the project ID for the default quota. :returns: One :class:`~openstack.network.v2.quota.QuotaDefault` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ quota_obj = self._get_resource(_quota.Quota, quota) return self._get(_quota.QuotaDefault, project=quota_obj.id, requires_id=False) def quotas(self, **query): """Return a generator of quotas :param dict query: Optional query parameters to be sent to limit the resources being returned. Currently no query parameter is supported.
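The ``details`` flag in ``get_quota`` above selects between two lookups: the plain quota limits, or a details view that also carries usage. A hedged sketch of that branching (the dict shapes are hypothetical, for illustration only; the real method returns Quota/QuotaDetails resources):

```python
def get_quota(limits, usage, project_id, details=False):
    # Mirrors the details branch in get_quota above: with details=False
    # only the limits are returned; with details=True usage information
    # is included as well. limits/usage are hypothetical dicts keyed by
    # project ID, standing in for the Quota and QuotaDetails resources.
    if details:
        return {'limits': limits[project_id], 'usage': usage[project_id]}
    return limits[project_id]
```

Note that in both branches the quota is addressed by project ID, matching the docstring's remark that a quota's ID is the same as its project ID.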
:returns: A generator of quota objects :rtype: :class:`~openstack.network.v2.quota.Quota` """ return self._list(_quota.Quota, paginated=False, **query) def update_quota(self, quota, **attrs): """Update a quota :param quota: Either the ID of a quota or a :class:`~openstack.network.v2.quota.Quota` instance. The ID of a quota is the same as the project ID for the quota. :param dict attrs: The attributes to update on the quota represented by ``quota``. :returns: The updated quota :rtype: :class:`~openstack.network.v2.quota.Quota` """ return self._update(_quota.Quota, quota, **attrs) def create_rbac_policy(self, **attrs): """Create a new RBAC policy from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.rbac_policy.RBACPolicy`, comprised of the properties on the RBACPolicy class. :returns: The results of RBAC policy creation :rtype: :class:`~openstack.network.v2.rbac_policy.RBACPolicy` """ return self._create(_rbac_policy.RBACPolicy, **attrs) def delete_rbac_policy(self, rbac_policy, ignore_missing=True): """Delete an RBAC policy :param rbac_policy: The value can be either the ID of an RBAC policy or a :class:`~openstack.network.v2.rbac_policy.RBACPolicy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the RBAC policy does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent RBAC policy. :returns: ``None`` """ self._delete(_rbac_policy.RBACPolicy, rbac_policy, ignore_missing=ignore_missing) def find_rbac_policy(self, rbac_policy, ignore_missing=True, **args): """Find a single RBAC policy :param rbac_policy: The ID of an RBAC policy. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource.
:param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.rbac_policy.RBACPolicy` or None """ return self._find(_rbac_policy.RBACPolicy, rbac_policy, ignore_missing=ignore_missing, **args) def get_rbac_policy(self, rbac_policy): """Get a single RBAC policy :param rbac_policy: The value can be the ID of an RBAC policy or a :class:`~openstack.network.v2.rbac_policy.RBACPolicy` instance. :returns: One :class:`~openstack.network.v2.rbac_policy.RBACPolicy` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_rbac_policy.RBACPolicy, rbac_policy) def rbac_policies(self, **query): """Return a generator of RBAC policies :param dict query: Optional query parameters to be sent to limit the resources being returned. Available parameters include: * ``action``: RBAC policy action * ``object_type``: Type of the object that the RBAC policy affects * ``target_project_id``: ID of the tenant that the RBAC policy affects * ``project_id``: Owner tenant ID :returns: A generator of RBAC policy objects :rtype: :class:`~openstack.network.v2.rbac_policy.RBACPolicy` """ return self._list(_rbac_policy.RBACPolicy, paginated=False, **query) def update_rbac_policy(self, rbac_policy, **attrs): """Update an RBAC policy :param rbac_policy: Either the ID of an RBAC policy or a :class:`~openstack.network.v2.rbac_policy.RBACPolicy` instance. :param dict attrs: The attributes to update on the RBAC policy represented by ``rbac_policy``. :returns: The updated RBAC policy :rtype: :class:`~openstack.network.v2.rbac_policy.RBACPolicy` """ return self._update(_rbac_policy.RBACPolicy, rbac_policy, **attrs) def create_router(self, **attrs): """Create a new router from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.router.Router`, comprised of the properties on the Router class.
:returns: The results of router creation :rtype: :class:`~openstack.network.v2.router.Router` """ return self._create(_router.Router, **attrs) def delete_router(self, router, ignore_missing=True): """Delete a router :param router: The value can be either the ID of a router or a :class:`~openstack.network.v2.router.Router` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the router does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent router. :returns: ``None`` """ self._delete(_router.Router, router, ignore_missing=ignore_missing) def find_router(self, name_or_id, ignore_missing=True, **args): """Find a single router :param name_or_id: The name or ID of a router. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.router.Router` or None """ return self._find(_router.Router, name_or_id, ignore_missing=ignore_missing, **args) def get_router(self, router): """Get a single router :param router: The value can be the ID of a router or a :class:`~openstack.network.v2.router.Router` instance. :returns: One :class:`~openstack.network.v2.router.Router` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_router.Router, router) def routers(self, **query): """Return a generator of routers :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``description``: The description of a router. * ``flavor_id``: The ID of the flavor.
* ``is_admin_state_up``: Router administrative state is up or not * ``is_distributed``: The distributed state of a router * ``is_ha``: The highly-available state of a router * ``name``: Router name * ``project_id``: The ID of the project this router is associated with. * ``status``: The status of the router. :returns: A generator of router objects :rtype: :class:`~openstack.network.v2.router.Router` """ return self._list(_router.Router, paginated=False, **query) def update_router(self, router, **attrs): """Update a router :param router: Either the ID of a router or a :class:`~openstack.network.v2.router.Router` instance. :param dict attrs: The attributes to update on the router represented by ``router``. :returns: The updated router :rtype: :class:`~openstack.network.v2.router.Router` """ return self._update(_router.Router, router, **attrs) def add_interface_to_router(self, router, subnet_id=None, port_id=None): """Add an interface to a router :param router: Either the router ID or an instance of :class:`~openstack.network.v2.router.Router` :param subnet_id: ID of the subnet :param port_id: ID of the port :returns: Router with updated interface :rtype: :class:`~openstack.network.v2.router.Router` """ body = {} if port_id: body = {'port_id': port_id} else: body = {'subnet_id': subnet_id} router = self._get_resource(_router.Router, router) return router.add_interface(self, **body) def remove_interface_from_router(self, router, subnet_id=None, port_id=None): """Remove an interface from a router :param router: Either the router ID or an instance of :class:`~openstack.network.v2.router.Router` :param subnet_id: ID of the subnet :param port_id: ID of the port :returns: Router with updated interface :rtype: :class:`~openstack.network.v2.router.Router` """ body = {} if port_id: body = {'port_id': port_id} else: body = {'subnet_id': subnet_id} router = self._get_resource(_router.Router, router) return router.remove_interface(self, **body) def add_gateway_to_router(self, router,
**body): """Add a gateway to a router :param router: Either the router ID or an instance of :class:`~openstack.network.v2.router.Router` :param body: Body with the gateway information :returns: Router with updated interface :rtype: :class:`~openstack.network.v2.router.Router` """ router = self._get_resource(_router.Router, router) return router.add_gateway(self, **body) def remove_gateway_from_router(self, router, **body): """Remove a gateway from a router :param router: Either the router ID or an instance of :class:`~openstack.network.v2.router.Router` :param body: Body with the gateway information :returns: Router with updated interface :rtype: :class:`~openstack.network.v2.router.Router` """ router = self._get_resource(_router.Router, router) return router.remove_gateway(self, **body) def routers_hosting_l3_agents(self, router, **query): """Return a generator of L3 agents hosting a router :param router: Either the router ID or an instance of :class:`~openstack.network.v2.router.Router` :param kwargs \*\*query: Optional query parameters to be sent to limit the resources returned :returns: A generator of router L3 agents :rtype: :class:`~openstack.network.v2.agent.RouterL3Agent` """ router = self._get_resource(_router.Router, router) return self._list(_agent.RouterL3Agent, paginated=False, router_id=router.id, **query) def agent_hosted_routers(self, agent, **query): """Return a generator of routers hosted by an L3 agent :param agent: Either the agent ID or an instance of :class:`~openstack.network.v2.agent.Agent` :param kwargs \*\*query: Optional query parameters to be sent to limit the resources returned :returns: A generator of routers :rtype: :class:`~openstack.network.v2.router.L3AgentRouter` """ agent = self._get_resource(_agent.Agent, agent) return self._list(_router.L3AgentRouter, paginated=False, agent_id=agent.id, **query) def add_router_to_agent(self, agent, router): """Add a router to an L3 agent :param agent: Either the ID of an agent or an
:class:`~openstack.network.v2.agent.Agent` instance :param router: A router instance :returns: Agent with attached router :rtype: :class:`~openstack.network.v2.agent.Agent` """ agent = self._get_resource(_agent.Agent, agent) router = self._get_resource(_router.Router, router) return agent.add_router_to_agent(self, router.id) def remove_router_from_agent(self, agent, router): """Remove a router from an L3 agent :param agent: Either the ID of an agent or an :class:`~openstack.network.v2.agent.Agent` instance :param router: A router instance :returns: Agent with removed router :rtype: :class:`~openstack.network.v2.agent.Agent` """ agent = self._get_resource(_agent.Agent, agent) router = self._get_resource(_router.Router, router) return agent.remove_router_from_agent(self, router.id) def create_security_group(self, **attrs): """Create a new security group from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.security_group.SecurityGroup`, comprised of the properties on the SecurityGroup class. :returns: The results of security group creation :rtype: :class:`~openstack.network.v2.security_group.SecurityGroup` """ return self._create(_security_group.SecurityGroup, **attrs) def delete_security_group(self, security_group, ignore_missing=True): """Delete a security group :param security_group: The value can be either the ID of a security group or a :class:`~openstack.network.v2.security_group.SecurityGroup` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the security group does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent security group.
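The ``add_interface_to_router`` and ``remove_interface_from_router`` methods above build their request body from whichever identifier the caller supplied, with ``port_id`` taking precedence over ``subnet_id``. A stand-alone sketch of that selection logic (a simplified helper, not the real SDK code path):

```python
def interface_body(subnet_id=None, port_id=None):
    # Mirrors the body selection in add_interface_to_router /
    # remove_interface_from_router above: a port_id, when given, takes
    # precedence over a subnet_id; otherwise the subnet_id is used.
    if port_id:
        return {'port_id': port_id}
    return {'subnet_id': subnet_id}
```

Passing both identifiers therefore silently ignores ``subnet_id``, which is worth knowing when wiring up callers.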
:returns: ``None`` """ self._delete(_security_group.SecurityGroup, security_group, ignore_missing=ignore_missing) def find_security_group(self, name_or_id, ignore_missing=True, **args): """Find a single security group :param name_or_id: The name or ID of a security group. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.security_group.SecurityGroup` or None """ return self._find(_security_group.SecurityGroup, name_or_id, ignore_missing=ignore_missing, **args) def get_security_group(self, security_group): """Get a single security group :param security_group: The value can be the ID of a security group or a :class:`~openstack.network.v2.security_group.SecurityGroup` instance. :returns: One :class:`~openstack.network.v2.security_group.SecurityGroup` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_security_group.SecurityGroup, security_group) def security_groups(self, **query): """Return a generator of security groups :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: * ``description``: Security group description * ``name``: The name of a security group * ``project_id``: The ID of the project this security group is associated with.
:returns: A generator of security group objects :rtype: :class:`~openstack.network.v2.security_group.SecurityGroup` """ return self._list(_security_group.SecurityGroup, paginated=False, **query) def update_security_group(self, security_group, **attrs): """Update a security group :param security_group: Either the ID of a security group or a :class:`~openstack.network.v2.security_group.SecurityGroup` instance. :param dict attrs: The attributes to update on the security group represented by ``security_group``. :returns: The updated security group :rtype: :class:`~openstack.network.v2.security_group.SecurityGroup` """ return self._update(_security_group.SecurityGroup, security_group, **attrs) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="See the Network user guide for an example") def security_group_open_port(self, sgid, port, protocol='tcp'): rule = { 'direction': 'ingress', 'remote_ip_prefix': '0.0.0.0/0', 'protocol': protocol, 'port_range_max': port, 'port_range_min': port, 'security_group_id': sgid, 'ethertype': 'IPv4' } return self.create_security_group_rule(**rule) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="See the Network user guide for an example") def security_group_allow_ping(self, sgid): rule = { 'direction': 'ingress', 'remote_ip_prefix': '0.0.0.0/0', 'protocol': 'icmp', 'port_range_max': None, 'port_range_min': None, 'security_group_id': sgid, 'ethertype': 'IPv4' } return self.create_security_group_rule(**rule) def create_security_group_rule(self, **attrs): """Create a new security group rule from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.security_group_rule.SecurityGroupRule`, comprised of the properties on the SecurityGroupRule class.
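The deprecated ``security_group_open_port`` helper above is nothing more than a canned attribute dict handed to ``create_security_group_rule``. Extracting that dict construction as a plain function makes the shape of the rule explicit (a sketch of what the helper builds, not an SDK API):

```python
def open_port_rule(sgid, port, protocol='tcp'):
    # The same ingress rule dict the deprecated security_group_open_port
    # helper above assembles before passing it to
    # create_security_group_rule(**rule).
    return {
        'direction': 'ingress',
        'remote_ip_prefix': '0.0.0.0/0',  # open to all IPv4 sources
        'protocol': protocol,
        'port_range_max': port,           # single port: min == max
        'port_range_min': port,
        'security_group_id': sgid,
        'ethertype': 'IPv4',
    }
```

Since the helper is deprecated, new code should build such a dict itself and call ``create_security_group_rule(**rule)`` directly, as the deprecation note's pointer to the Network user guide suggests.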
:returns: The results of security group rule creation :rtype: :class:`~openstack.network.v2.security_group_rule.SecurityGroupRule` """ return self._create(_security_group_rule.SecurityGroupRule, **attrs) def delete_security_group_rule(self, security_group_rule, ignore_missing=True): """Delete a security group rule :param security_group_rule: The value can be either the ID of a security group rule or a :class:`~openstack.network.v2.security_group_rule.SecurityGroupRule` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the security group rule does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent security group rule. :returns: ``None`` """ self._delete(_security_group_rule.SecurityGroupRule, security_group_rule, ignore_missing=ignore_missing) def find_security_group_rule(self, name_or_id, ignore_missing=True, **args): """Find a single security group rule :param str name_or_id: The ID of a security group rule. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict args: Any additional parameters to be passed into underlying methods, such as query filters. :returns: One :class:`~openstack.network.v2.security_group_rule.SecurityGroupRule` or None """ return self._find(_security_group_rule.SecurityGroupRule, name_or_id, ignore_missing=ignore_missing, **args) def get_security_group_rule(self, security_group_rule): """Get a single security group rule :param security_group_rule: The value can be the ID of a security group rule or a :class:`~openstack.network.v2.security_group_rule.SecurityGroupRule` instance.
:returns: :class:`~openstack.network.v2.security_group_rule.\ SecurityGroupRule` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_security_group_rule.SecurityGroupRule, security_group_rule) def security_group_rules(self, **query): """Return a generator of security group rules :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. Available parameters include: * ``description``: The security group rule description * ``direction``: Security group rule direction * ``ether_type``: Must be IPv4 or IPv6, and addresses represented in CIDR must match the ingress or egress rule. * ``project_id``: The ID of the project this security group rule is associated with. * ``protocol``: Security group rule protocol * ``remote_group_id``: ID of a remote security group * ``security_group_id``: ID of security group that owns the rules :returns: A generator of security group rule objects :rtype: :class:`~openstack.network.v2.security_group_rule. SecurityGroupRule` """ return self._list(_security_group_rule.SecurityGroupRule, paginated=False, **query) def create_segment(self, **attrs): """Create a new segment from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.segment.Segment`, comprised of the properties on the Segment class. :returns: The results of segment creation :rtype: :class:`~openstack.network.v2.segment.Segment` """ return self._create(_segment.Segment, **attrs) def delete_segment(self, segment, ignore_missing=True): """Delete a segment :param segment: The value can be either the ID of a segment or a :class:`~openstack.network.v2.segment.Segment` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the segment does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent segment. 
        :returns: ``None``
        """
        self._delete(_segment.Segment, segment, ignore_missing=ignore_missing)

    def find_segment(self, name_or_id, ignore_missing=True, **args):
        """Find a single segment

        :param name_or_id: The name or ID of a segment.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None
            will be returned when attempting to find a nonexistent resource.
        :param dict args: Any additional parameters to be passed into
            underlying methods, such as query filters.
        :returns: One :class:`~openstack.network.v2.segment.Segment` or None
        """
        return self._find(_segment.Segment, name_or_id,
                          ignore_missing=ignore_missing, **args)

    def get_segment(self, segment):
        """Get a single segment

        :param segment: The value can be the ID of a segment or a
            :class:`~openstack.network.v2.segment.Segment` instance.

        :returns: One :class:`~openstack.network.v2.segment.Segment`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_segment.Segment, segment)

    def segments(self, **query):
        """Return a generator of segments

        :param kwargs \*\*query: Optional query parameters to be sent to limit
            the resources being returned. Available parameters include:

            * ``description``: The segment description
            * ``name``: Name of the segments
            * ``network_id``: ID of the network that owns the segments
            * ``network_type``: Network type for the segments
            * ``physical_network``: Physical network name for the segments
            * ``segmentation_id``: Segmentation ID for the segments

        :returns: A generator of segment objects
        :rtype: :class:`~openstack.network.v2.segment.Segment`
        """
        return self._list(_segment.Segment, paginated=False, **query)

    def update_segment(self, segment, **attrs):
        """Update a segment

        :param segment: Either the id of a segment or a
            :class:`~openstack.network.v2.segment.Segment` instance.
        :param dict attrs: The attributes to update on the segment
            represented by ``segment``.

        :returns: The updated segment
        :rtype: :class:`~openstack.network.v2.segment.Segment`
        """
        return self._update(_segment.Segment, segment, **attrs)

    def service_providers(self, **query):
        """Return a generator of service providers

        :param kwargs \*\*query: Optional query parameters to be sent to limit
            the resources being returned.

        :returns: A generator of service provider objects
        :rtype: :class:`~openstack.network.v2.service_provider.ServiceProvider`
        """
        return self._list(_service_provider.ServiceProvider,
                          paginated=False, **query)

    def create_service_profile(self, **attrs):
        """Create a new network service flavor profile from attributes

        :param dict attrs: Keyword arguments which will be used to create
            a :class:`~openstack.network.v2.service_profile.ServiceProfile`,
            comprised of the properties on the ServiceProfile class.

        :returns: The results of service profile creation
        :rtype: :class:`~openstack.network.v2.service_profile.ServiceProfile`
        """
        return self._create(_service_profile.ServiceProfile, **attrs)

    def delete_service_profile(self, service_profile, ignore_missing=True):
        """Delete a network service flavor profile

        :param service_profile: The value can be either the ID of a service
            profile or a
            :class:`~openstack.network.v2.service_profile.ServiceProfile`
            instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the service profile does not exist. When set to ``True``,
            no exception will be raised when attempting to delete a
            nonexistent service profile.

        :returns: ``None``
        """
        self._delete(_service_profile.ServiceProfile, service_profile,
                     ignore_missing=ignore_missing)

    def find_service_profile(self, name_or_id, ignore_missing=True, **args):
        """Find a single network service flavor profile

        :param name_or_id: The name or ID of a service profile.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None
            will be returned when attempting to find a nonexistent resource.
        :param dict args: Any additional parameters to be passed into
            underlying methods, such as query filters.
        :returns: One
            :class:`~openstack.network.v2.service_profile.ServiceProfile`
            or None
        """
        return self._find(_service_profile.ServiceProfile, name_or_id,
                          ignore_missing=ignore_missing, **args)

    def get_service_profile(self, service_profile):
        """Get a single network service flavor profile

        :param service_profile: The value can be the ID of a service_profile
            or a
            :class:`~openstack.network.v2.service_profile.ServiceProfile`
            instance.

        :returns: One
            :class:`~openstack.network.v2.service_profile.ServiceProfile`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_service_profile.ServiceProfile, service_profile)

    def service_profiles(self, **query):
        """Return a generator of network service flavor profiles

        :param dict query: Optional query parameters to be sent to limit the
            resources returned. Available parameters include:

            * ``description``: The description of the service flavor profile
            * ``driver``: Provider driver for the service flavor profile
            * ``is_enabled``: Whether the profile is enabled
            * ``project_id``: The owner project ID

        :returns: A generator of service profile objects
        :rtype: :class:`~openstack.network.v2.service_profile.ServiceProfile`
        """
        return self._list(_service_profile.ServiceProfile, paginated=True,
                          **query)

    def update_service_profile(self, service_profile, **attrs):
        """Update a network flavor service profile

        :param service_profile: Either the id of a service profile or a
            :class:`~openstack.network.v2.service_profile.ServiceProfile`
            instance.
        :param dict attrs: The attributes to update on the service profile
            represented by ``service_profile``.
        :returns: The updated service profile
        :rtype: :class:`~openstack.network.v2.service_profile.ServiceProfile`
        """
        return self._update(_service_profile.ServiceProfile, service_profile,
                            **attrs)

    def create_subnet(self, **attrs):
        """Create a new subnet from attributes

        :param dict attrs: Keyword arguments which will be used to create
            a :class:`~openstack.network.v2.subnet.Subnet`,
            comprised of the properties on the Subnet class.

        :returns: The results of subnet creation
        :rtype: :class:`~openstack.network.v2.subnet.Subnet`
        """
        return self._create(_subnet.Subnet, **attrs)

    def delete_subnet(self, subnet, ignore_missing=True):
        """Delete a subnet

        :param subnet: The value can be either the ID of a subnet or a
            :class:`~openstack.network.v2.subnet.Subnet` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the subnet does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            subnet.

        :returns: ``None``
        """
        self._delete(_subnet.Subnet, subnet, ignore_missing=ignore_missing)

    def find_subnet(self, name_or_id, ignore_missing=True, **args):
        """Find a single subnet

        :param name_or_id: The name or ID of a subnet.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None
            will be returned when attempting to find a nonexistent resource.
        :param dict args: Any additional parameters to be passed into
            underlying methods, such as query filters.
        :returns: One :class:`~openstack.network.v2.subnet.Subnet` or None
        """
        return self._find(_subnet.Subnet, name_or_id,
                          ignore_missing=ignore_missing, **args)

    def get_subnet(self, subnet):
        """Get a single subnet

        :param subnet: The value can be the ID of a subnet or a
            :class:`~openstack.network.v2.subnet.Subnet` instance.
:returns: One :class:`~openstack.network.v2.subnet.Subnet` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_subnet.Subnet, subnet) def subnets(self, **query): """Return a generator of subnets :param dict query: Optional query parameters to be sent to limit the resources being returned. Available parameters include: * ``cidr``: Subnet CIDR * ``description``: The subnet description * ``gateway_ip``: Subnet gateway IP address * ``ip_version``: Subnet IP address version * ``ipv6_address_mode``: The IPv6 address mode * ``ipv6_ra_mode``: The IPv6 router advertisement mode * ``is_dhcp_enabled``: Subnet has DHCP enabled (boolean) * ``name``: Subnet name * ``network_id``: ID of network that owns the subnets * ``project_id``: Owner tenant ID * ``subnet_pool_id``: The subnet pool ID from which to obtain a CIDR. :returns: A generator of subnet objects :rtype: :class:`~openstack.network.v2.subnet.Subnet` """ return self._list(_subnet.Subnet, paginated=False, **query) def update_subnet(self, subnet, **attrs): """Update a subnet :param subnet: Either the id of a subnet or a :class:`~openstack.network.v2.subnet.Subnet` instance. :param dict attrs: The attributes to update on the subnet represented by ``subnet``. :returns: The updated subnet :rtype: :class:`~openstack.network.v2.subnet.Subnet` """ return self._update(_subnet.Subnet, subnet, **attrs) def create_subnet_pool(self, **attrs): """Create a new subnet pool from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.network.v2.subnet_pool.SubnetPool`, comprised of the properties on the SubnetPool class. 
        :returns: The results of subnet pool creation
        :rtype: :class:`~openstack.network.v2.subnet_pool.SubnetPool`
        """
        return self._create(_subnet_pool.SubnetPool, **attrs)

    def delete_subnet_pool(self, subnet_pool, ignore_missing=True):
        """Delete a subnet pool

        :param subnet_pool: The value can be either the ID of a subnet pool
            or a :class:`~openstack.network.v2.subnet_pool.SubnetPool`
            instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the subnet pool does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            subnet pool.

        :returns: ``None``
        """
        self._delete(_subnet_pool.SubnetPool, subnet_pool,
                     ignore_missing=ignore_missing)

    def find_subnet_pool(self, name_or_id, ignore_missing=True, **args):
        """Find a single subnet pool

        :param name_or_id: The name or ID of a subnet pool.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None
            will be returned when attempting to find a nonexistent resource.
        :param dict args: Any additional parameters to be passed into
            underlying methods, such as query filters.
        :returns: One :class:`~openstack.network.v2.subnet_pool.SubnetPool`
            or None
        """
        return self._find(_subnet_pool.SubnetPool, name_or_id,
                          ignore_missing=ignore_missing, **args)

    def get_subnet_pool(self, subnet_pool):
        """Get a single subnet pool

        :param subnet_pool: The value can be the ID of a subnet pool or a
            :class:`~openstack.network.v2.subnet_pool.SubnetPool` instance.

        :returns: One :class:`~openstack.network.v2.subnet_pool.SubnetPool`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_subnet_pool.SubnetPool, subnet_pool)

    def subnet_pools(self, **query):
        """Return a generator of subnet pools

        :param kwargs \*\*query: Optional query parameters to be sent to limit
            the resources being returned.
            Available parameters include:

            * ``address_scope_id``: Subnet pool address scope ID
            * ``description``: The subnet pool description
            * ``ip_version``: The IP address family
            * ``is_default``: Subnet pool is the default (boolean)
            * ``is_shared``: Subnet pool is shared (boolean)
            * ``name``: Subnet pool name
            * ``project_id``: Owner tenant ID

        :returns: A generator of subnet pool objects
        :rtype: :class:`~openstack.network.v2.subnet_pool.SubnetPool`
        """
        return self._list(_subnet_pool.SubnetPool, paginated=False, **query)

    def update_subnet_pool(self, subnet_pool, **attrs):
        """Update a subnet pool

        :param subnet_pool: Either the ID of a subnet pool or a
            :class:`~openstack.network.v2.subnet_pool.SubnetPool` instance.
        :param dict attrs: The attributes to update on the subnet pool
            represented by ``subnet_pool``.

        :returns: The updated subnet pool
        :rtype: :class:`~openstack.network.v2.subnet_pool.SubnetPool`
        """
        return self._update(_subnet_pool.SubnetPool, subnet_pool, **attrs)

    @staticmethod
    def _check_tag_support(resource):
        try:
            # Check that the 'tags' attribute exists
            resource.tags
        except AttributeError:
            raise exceptions.InvalidRequest(
                '%s resource does not support tags'
                % resource.__class__.__name__)

    def set_tags(self, resource, tags):
        """Replace tags of a specified resource with specified tags

        :param resource: :class:`~openstack.resource.Resource` instance.
        :param tags: New tags to be set.
        :type tags: list

        :returns: The updated resource
        :rtype: :class:`~openstack.resource.Resource`
        """
        self._check_tag_support(resource)
        return resource.set_tags(self, tags)

    def create_vpn_service(self, **attrs):
        """Create a new vpn service from attributes

        :param dict attrs: Keyword arguments which will be used to create
            a :class:`~openstack.network.v2.vpn_service.VPNService`,
            comprised of the properties on the VPNService class.
        :returns: The results of vpn service creation
        :rtype: :class:`~openstack.network.v2.vpn_service.VPNService`
        """
        return self._create(_vpn_service.VPNService, **attrs)

    def delete_vpn_service(self, vpn_service, ignore_missing=True):
        """Delete a vpn service

        :param vpn_service: The value can be either the ID of a vpn service
            or a :class:`~openstack.network.v2.vpn_service.VPNService`
            instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the vpn service does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            vpn service.

        :returns: ``None``
        """
        self._delete(_vpn_service.VPNService, vpn_service,
                     ignore_missing=ignore_missing)

    def find_vpn_service(self, name_or_id, ignore_missing=True, **args):
        """Find a single vpn service

        :param name_or_id: The name or ID of a vpn service.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None
            will be returned when attempting to find a nonexistent resource.
        :param dict args: Any additional parameters to be passed into
            underlying methods, such as query filters.
        :returns: One :class:`~openstack.network.v2.vpn_service.VPNService`
            or None
        """
        return self._find(_vpn_service.VPNService, name_or_id,
                          ignore_missing=ignore_missing, **args)

    def get_vpn_service(self, vpn_service):
        """Get a single vpn service

        :param vpn_service: The value can be the ID of a vpn service or a
            :class:`~openstack.network.v2.vpn_service.VPNService` instance.

        :returns: One :class:`~openstack.network.v2.vpn_service.VPNService`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_vpn_service.VPNService, vpn_service)

    def vpn_services(self, **query):
        """Return a generator of vpn services

        :param dict query: Optional query parameters to be sent to limit
            the resources being returned.
        :returns: A generator of vpn service objects
        :rtype: :class:`~openstack.network.v2.vpn_service.VPNService`
        """
        return self._list(_vpn_service.VPNService, paginated=False, **query)

    def update_vpn_service(self, vpn_service, **attrs):
        """Update a vpn service

        :param vpn_service: Either the id of a vpn service or a
            :class:`~openstack.network.v2.vpn_service.VPNService` instance.
        :param dict attrs: The attributes to update on the VPN service
            represented by ``vpn_service``.

        :returns: The updated vpn service
        :rtype: :class:`~openstack.network.v2.vpn_service.VPNService`
        """
        return self._update(_vpn_service.VPNService, vpn_service, **attrs)

openstacksdk-0.11.3/openstack/network/__init__.py

openstacksdk-0.11.3/openstack/_meta.py

# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
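The ``_meta.py`` module below builds connection attributes from service types: dashes become underscores, and aliases whose name ends in a version digit are skipped. That naming rule can be illustrated standalone; the alias values in this sketch are stand-ins for illustration only (the real ones come from os-service-types):

```python
def attribute_names(service_type, aliases):
    # Mirrors the ConnectionMeta naming rule: dashes become underscores,
    # and purely version-suffixed aliases (e.g. 'volumev2') are skipped.
    names = {service_type.replace('-', '_')}
    for alias in aliases:
        if alias[-1].isdigit():
            continue
        names.add(alias.replace('-', '_'))
    return names


# Stub alias list for illustration; real data comes from os-service-types.
print(sorted(attribute_names('block-storage', ['volume', 'volumev2'])))
# ['block_storage', 'volume']
```

Each resulting name becomes a descriptor attribute on ``Connection``, so a deployer-facing alias and the official type both reach the same proxy.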
import importlib
import warnings

import os_service_types

from openstack import _log
from openstack import proxy
from openstack import service_description

_logger = _log.setup_logging('openstack')
_service_type_manager = os_service_types.ServiceTypes()

_DOC_TEMPLATE = (
    ":class:`{class_name}` for {service_type} aka {project}")
_PROXY_TEMPLATE = """Proxy for {service_type} aka {project}

This proxy object could be an instance of
{class_doc_strings}
depending on client configuration and which version of the service is
found remotely on the cloud.
"""


class ConnectionMeta(type):
    def __new__(meta, name, bases, dct):
        for service in _service_type_manager.services:
            service_type = service['service_type']
            if service_type == 'ec2-api':
                # NOTE(mordred) It doesn't make any sense to use ec2-api
                # from openstacksdk. The credentials API calls are all calls
                # on identity endpoints.
                continue
            desc_class = service_description.ServiceDescription
            service_filter_class = _find_service_filter_class(service_type)
            descriptor_args = {'service_type': service_type}
            if service_filter_class:
                desc_class = service_description.OpenStackServiceDescription
                descriptor_args['service_filter_class'] = service_filter_class
                class_names = service_filter_class._get_proxy_class_names()
                if len(class_names) == 1:
                    doc = _DOC_TEMPLATE.format(
                        class_name="{service_type} Proxy <{name}>".format(
                            service_type=service_type, name=class_names[0]),
                        **service)
                else:
                    class_doc_strings = "\n".join([
                        ":class:`{class_name}`".format(class_name=class_name)
                        for class_name in class_names])
                    doc = _PROXY_TEMPLATE.format(
                        class_doc_strings=class_doc_strings, **service)
            else:
                descriptor_args['proxy_class'] = proxy.BaseProxy
                doc = _DOC_TEMPLATE.format(
                    class_name='~openstack.proxy.BaseProxy', **service)
            descriptor = desc_class(**descriptor_args)
            descriptor.__doc__ = doc
            dct[service_type.replace('-', '_')] = descriptor

            # Register the descriptor class with every known alias. Don't
            # add doc strings though - although they are supported, we don't
            # want to give anybody any bad ideas. Making a second descriptor
            # does not introduce runtime cost as the descriptors all use
            # the same _proxies dict on the instance.
            for alias_name in _get_aliases(service_type):
                if alias_name[-1].isdigit():
                    continue
                alias_descriptor = desc_class(**descriptor_args)
                dct[alias_name.replace('-', '_')] = alias_descriptor
        return super(ConnectionMeta, meta).__new__(meta, name, bases, dct)


def _get_aliases(service_type, aliases=None):
    # We make connection attributes for all official real type names
    # and aliases. Three services have names they were called by in
    # openstacksdk that are not covered by Service Types Authority
    # aliases. Include them here - but take heed: no additional values
    # should ever be added to this list; these names were only used in
    # openstacksdk resource naming.
    LOCAL_ALIASES = {
        'baremetal': 'bare_metal',
        'block_storage': 'block_store',
        'clustering': 'cluster',
    }
    all_types = set(_service_type_manager.get_aliases(service_type))
    if aliases:
        all_types.update(aliases)
    if service_type in LOCAL_ALIASES:
        all_types.add(LOCAL_ALIASES[service_type])
    return all_types


def _find_service_filter_class(service_type):
    package_name = 'openstack.{service_type}'.format(
        service_type=service_type).replace('-', '_')
    module_name = service_type.replace('-', '_') + '_service'
    class_name = ''.join(
        [part.capitalize() for part in module_name.split('_')])
    try:
        import_name = '.'.join([package_name, module_name])
        service_filter_module = importlib.import_module(import_name)
    except ImportError as e:
        # ImportWarning is ignored by default. This warning is here
        # as an opt-in for people trying to figure out why something
        # didn't work.
        warnings.warn(
            "Could not import {service_type} service filter: {e}".format(
                service_type=service_type, e=str(e)),
            ImportWarning)
        return None
    # There are no cases in which we should have a module but not the class
    # inside it.
    service_filter_class = getattr(service_filter_module, class_name)
    return service_filter_class

openstacksdk-0.11.3/openstack/_adapter.py

# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

''' Wrapper around keystoneauth Adapter to wrap calls in TaskManager '''

import functools

try:
    import simplejson
    JSONDecodeError = simplejson.scanner.JSONDecodeError
except ImportError:
    JSONDecodeError = ValueError

from six.moves import urllib

from keystoneauth1 import adapter

from openstack import exceptions
from openstack import task_manager as _task_manager


def _extract_name(url, service_type=None):
    '''Produce a key name to use in logging/metrics from the URL path.

    We want to be able to log/metric sane general things, so we pull
    the url apart to generate names.
The function returns a list because there are two different ways in which the elements want to be combined below (one for logging, one for statsd) Some examples are likely useful: /servers -> ['servers'] /servers/{id} -> ['servers'] /servers/{id}/os-security-groups -> ['servers', 'os-security-groups'] /v2.0/networks.json -> ['networks'] ''' url_path = urllib.parse.urlparse(url).path.strip() # Remove / from the beginning to keep the list indexes of interesting # things consistent if url_path.startswith('/'): url_path = url_path[1:] # Special case for neutron, which puts .json on the end of urls if url_path.endswith('.json'): url_path = url_path[:-len('.json')] url_parts = url_path.split('/') if url_parts[-1] == 'detail': # Special case detail calls # GET /servers/detail # returns ['servers', 'detail'] name_parts = url_parts[-2:] else: # Strip leading version piece so that # GET /v2.0/networks # returns ['networks'] if url_parts[0] in ('v1', 'v2', 'v2.0'): url_parts = url_parts[1:] name_parts = [] # Pull out every other URL portion - so that # GET /servers/{id}/os-security-groups # returns ['servers', 'os-security-groups'] for idx in range(0, len(url_parts)): if not idx % 2 and url_parts[idx]: name_parts.append(url_parts[idx]) # Keystone Token fetching is a special case, so we name it "tokens" if url_path.endswith('tokens'): name_parts = ['tokens'] # Getting the root of an endpoint is doing version discovery if not name_parts: if service_type == 'object-store': name_parts = ['account'] else: name_parts = ['discovery'] # Strip out anything that's empty or None return [part for part in name_parts if part] def _json_response(response, result_key=None, error_message=None): """Temporary method to use to bridge from ShadeAdapter to SDK calls.""" exceptions.raise_from_response(response, error_message=error_message) if not response.content: # This doesn't have any content return response # Some REST calls do not return json content. Don't decode it. 
    # Default the header lookup to '' so a response without a Content-Type
    # header does not raise TypeError from the ``in`` check.
    if 'application/json' not in response.headers.get('Content-Type', ''):
        return response
    try:
        result_json = response.json()
    except JSONDecodeError:
        return response
    return result_json


class OpenStackSDKAdapter(adapter.Adapter):
    """Wrapper around keystoneauth1.adapter.Adapter.

    Uses task_manager to run tasks rather than executing them directly.
    This allows using the nodepool MultiThreaded Rate Limiting TaskManager.
    """

    def __init__(self, session=None, task_manager=None, *args, **kwargs):
        super(OpenStackSDKAdapter, self).__init__(
            session=session, *args, **kwargs)
        if not task_manager:
            task_manager = _task_manager.TaskManager(name=self.service_type)
        self.task_manager = task_manager

    def request(
            self, url, method, run_async=False, error_message=None,
            raise_exc=False, connect_retries=1, *args, **kwargs):
        name_parts = _extract_name(url, self.service_type)
        # TODO(mordred) This if is in service of unit tests that are making
        # calls without a service_type. It should be fixable once we shift
        # to requests-mock and stop mocking internals.
if self.service_type: name = '.'.join([self.service_type, method] + name_parts) else: name = '.'.join([method] + name_parts) request_method = functools.partial( super(OpenStackSDKAdapter, self).request, url, method) return self.task_manager.submit_function( request_method, run_async=run_async, name=name, connect_retries=connect_retries, raise_exc=raise_exc, **kwargs) def _version_matches(self, version): api_version = self.get_api_major_version() if api_version: return api_version[0] == version return False class ShadeAdapter(OpenStackSDKAdapter): """Wrapper for shade methods that expect json unpacking.""" def request(self, url, method, run_async=False, error_message=None, **kwargs): response = super(ShadeAdapter, self).request( url, method, run_async=run_async, **kwargs) if run_async: return response else: return _json_response(response, error_message=error_message) openstacksdk-0.11.3/openstack/block_storage/0000775000175100017510000000000013236151501021113 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/block_storage/block_storage_service.py0000666000175100017510000000166713236151340026040 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
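Since `_extract_name` above is pure string manipulation, its docstring examples can be checked standalone. The following is a trimmed restatement of its logic for illustration (the object-store special case is omitted; this sketch is not the module's actual export):

```python
from urllib.parse import urlparse


def extract_name(url):
    # Trimmed restatement of _adapter._extract_name, service_type
    # special-casing omitted.
    path = urlparse(url).path.strip().lstrip('/')
    if path.endswith('.json'):        # neutron appends .json to urls
        path = path[:-len('.json')]
    parts = path.split('/')
    if parts[-1] == 'detail':         # GET /servers/detail
        name_parts = parts[-2:]
    else:
        if parts[0] in ('v1', 'v2', 'v2.0'):
            parts = parts[1:]         # strip the leading version piece
        # keep every other segment: /servers/{id}/os-security-groups
        name_parts = [p for i, p in enumerate(parts) if i % 2 == 0 and p]
    if path.endswith('tokens'):       # keystone token fetch special case
        name_parts = ['tokens']
    # an empty result means we hit the endpoint root: version discovery
    return [p for p in name_parts if p] or ['discovery']


print(extract_name('/servers/123/os-security-groups'))
# ['servers', 'os-security-groups']
```

These derived names are then joined with the service type and HTTP method in `request()` to label each task, e.g. `compute.GET.servers.os-security-groups`.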
from openstack import service_filter class BlockStorageService(service_filter.ServiceFilter): """The block storage service.""" valid_versions = [service_filter.ValidVersion('v2')] def __init__(self, version=None): """Create a block storage service.""" super(BlockStorageService, self).__init__( service_type='volume', version=version, requires_project_id=True) openstacksdk-0.11.3/openstack/block_storage/v2/0000775000175100017510000000000013236151501021442 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/block_storage/v2/type.py0000666000175100017510000000225713236151340023006 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.block_storage import block_storage_service from openstack import resource class Type(resource.Resource): resource_key = "volume_type" resources_key = "volume_types" base_path = "/types" service = block_storage_service.BlockStorageService() # capabilities allow_get = True allow_create = True allow_delete = True allow_list = True # Properties #: A ID representing this type. id = resource.Body("id") #: Name of the type. name = resource.Body("name") #: A dict of extra specifications. "capabilities" is a usual key. 
extra_specs = resource.Body("extra_specs", type=dict) openstacksdk-0.11.3/openstack/block_storage/v2/volume.py0000666000175100017510000001062413236151340023331 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.block_storage import block_storage_service from openstack import format from openstack import resource class Volume(resource.Resource): resource_key = "volume" resources_key = "volumes" base_path = "/volumes" service = block_storage_service.BlockStorageService() _query_mapping = resource.QueryParameters( 'all_tenants', 'name', 'status', 'project_id') # capabilities allow_get = True allow_create = True allow_delete = True allow_update = True allow_list = True # Properties #: A ID representing this volume. id = resource.Body("id") #: The name of this volume. name = resource.Body("name") #: A list of links associated with this volume. *Type: list* links = resource.Body("links", type=list) #: The availability zone. availability_zone = resource.Body("availability_zone") #: To create a volume from an existing volume, specify the ID of #: the existing volume. If specified, the volume is created with #: same size of the source volume. source_volume_id = resource.Body("source_volid") #: The volume description. description = resource.Body("description") #: To create a volume from an existing snapshot, specify the ID of #: the existing volume snapshot. 
If specified, the volume is created #: in same availability zone and with same size of the snapshot. snapshot_id = resource.Body("snapshot_id") #: The size of the volume, in GBs. *Type: int* size = resource.Body("size", type=int) #: The ID of the image from which you want to create the volume. #: Required to create a bootable volume. image_id = resource.Body("imageRef") #: The name of the associated volume type. volume_type = resource.Body("volume_type") #: Enables or disables the bootable attribute. You can boot an #: instance from a bootable volume. *Type: bool* is_bootable = resource.Body("bootable", type=format.BoolStr) #: One or more metadata key and value pairs to associate with the volume. metadata = resource.Body("metadata") #: One or more metadata key and value pairs about image volume_image_metadata = resource.Body("volume_image_metadata") #: One of the following values: creating, available, attaching, in-use #: deleting, error, error_deleting, backing-up, restoring-backup, #: error_restoring. For details on these statuses, see the #: Block Storage API documentation. status = resource.Body("status") #: TODO(briancurtin): This is currently undocumented in the API. attachments = resource.Body("attachments") #: The timestamp of this volume creation. created_at = resource.Body("created_at") class VolumeDetail(Volume): base_path = "/volumes/detail" #: The volume's current back-end. host = resource.Body("os-vol-host-attr:host") #: The project ID associated with current back-end. project_id = resource.Body("os-vol-tenant-attr:tenant_id") #: The status of this volume's migration (None means that a migration #: is not currently in progress). migration_status = resource.Body("os-vol-mig-status-attr:migstat") #: The volume ID that this volume's name on the back-end is based on. migration_id = resource.Body("os-vol-mig-status-attr:name_id") #: Status of replication on this volume. 
replication_status = resource.Body("replication_status") #: Extended replication status on this volume. extended_replication_status = resource.Body( "os-volume-replication:extended_status") #: ID of the consistency group. consistency_group_id = resource.Body("consistencygroup_id") #: Data set by the replication driver replication_driver_data = resource.Body( "os-volume-replication:driver_data") #: ``True`` if this volume is encrypted, ``False`` if not. #: *Type: bool* is_encrypted = resource.Body("encrypted", type=format.BoolStr) openstacksdk-0.11.3/openstack/block_storage/v2/__init__.py0000666000175100017510000000000013236151340023544 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/block_storage/v2/stats.py0000666000175100017510000000220113236151340023150 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
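The Volume and VolumeDetail classes above declare their fields with `resource.Body`, which maps wire-format JSON keys (``imageRef``, ``bootable``, ``os-vol-host-attr:host``) onto snake_case Python attributes and coerces types. A minimal standalone sketch of that descriptor pattern — illustrative only, with a stand-in `Body` class, not the SDK's real `openstack.resource.Body`:

```python
# Sketch (not the SDK implementation) of the renaming pattern used by
# resource.Body: a descriptor maps a Python attribute name to the
# wire-format JSON key, optionally coercing the value's type.

class Body(object):
    """Descriptor mapping an attribute to a key in the resource body."""
    def __init__(self, key, type=None):
        self.key = key
        self.type = type

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = instance._body.get(self.key)
        if value is not None and self.type is not None:
            # Coerce to the declared type, mirroring type=int above
            value = self.type(value)
        return value

class Volume(object):
    size = Body("size", type=int)
    image_id = Body("imageRef")  # attribute name differs from JSON key

    def __init__(self, body):
        self._body = body

vol = Volume({"size": "10", "imageRef": "abc-123"})
```

Accessing `vol.size` returns the coerced ``int`` and `vol.image_id` reads the ``imageRef`` key, which is why callers never need to know the server-side spelling.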
from openstack.block_storage import block_storage_service from openstack import resource class Pools(resource.Resource): resource_key = "pool" resources_key = "pools" base_path = "/scheduler-stats/get_pools?detail=True" service = block_storage_service.BlockStorageService() # capabilities allow_get = False allow_create = False allow_delete = False allow_list = True # Properties #: The Cinder name for the pool name = resource.Body("name") #: returns a dict with information about the pool capabilities = resource.Body("capabilities", type=dict) openstacksdk-0.11.3/openstack/block_storage/v2/snapshot.py0000666000175100017510000000455613236151340023670 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.block_storage import block_storage_service from openstack import format from openstack import resource class Snapshot(resource.Resource): resource_key = "snapshot" resources_key = "snapshots" base_path = "/snapshots" service = block_storage_service.BlockStorageService() _query_mapping = resource.QueryParameters( 'all_tenants', 'name', 'status', 'volume_id') # capabilities allow_get = True allow_create = True allow_delete = True allow_update = True allow_list = True # Properties #: A ID representing this snapshot. id = resource.Body("id") #: Name of the snapshot. Default is None. name = resource.Body("name") #: The current status of this snapshot. Potential values are creating, #: available, deleting, error, and error_deleting. 
status = resource.Body("status") #: Description of snapshot. Default is None. description = resource.Body("description") #: The timestamp of this snapshot creation. created_at = resource.Body("created_at") #: Metadata associated with this snapshot. metadata = resource.Body("metadata", type=dict) #: The ID of the volume this snapshot was taken of. volume_id = resource.Body("volume_id") #: The size of the volume, in GBs. size = resource.Body("size", type=int) #: Indicate whether to create snapshot, even if the volume is attached. #: Default is ``False``. *Type: bool* is_forced = resource.Body("force", type=format.BoolStr) class SnapshotDetail(Snapshot): base_path = "/snapshots/detail" #: The percentage of completeness the snapshot is currently at. progress = resource.Body("os-extended-snapshot-attributes:progress") #: The project ID this snapshot is associated with. project_id = resource.Body("os-extended-snapshot-attributes:project_id") openstacksdk-0.11.3/openstack/block_storage/v2/_proxy.py0000666000175100017510000002011013236151340023331 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
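The `_query_mapping = resource.QueryParameters('all_tenants', 'name', 'status', 'volume_id')` declaration on Snapshot above whitelists which keyword filters may be forwarded to the server as query-string arguments. A simplified sketch of that filtering (the real logic lives in `openstack.resource.QueryParameters`; this stand-in only shows the whitelisting idea):

```python
# Simplified sketch of query-parameter whitelisting: only declared
# filter names are forwarded to the API; anything else is dropped.

ALLOWED = {'all_tenants', 'name', 'status', 'volume_id'}

def build_query(**kwargs):
    """Keep only recognized, non-None filters for the request URL."""
    return {k: v for k, v in kwargs.items()
            if k in ALLOWED and v is not None}

query = build_query(name='nightly', status='available', bogus='x')
```

Here ``bogus`` never reaches the server, so a typo in a filter name fails quietly rather than producing an unexpected API error.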
from openstack.block_storage.v2 import snapshot as _snapshot from openstack.block_storage.v2 import stats as _stats from openstack.block_storage.v2 import type as _type from openstack.block_storage.v2 import volume as _volume from openstack import proxy class Proxy(proxy.BaseProxy): def get_snapshot(self, snapshot): """Get a single snapshot :param snapshot: The value can be the ID of a snapshot or a :class:`~openstack.volume.v2.snapshot.Snapshot` instance. :returns: One :class:`~openstack.volume.v2.snapshot.Snapshot` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_snapshot.Snapshot, snapshot) def snapshots(self, details=True, **query): """Retrieve a generator of snapshots :param bool details: When set to ``False`` :class:`~openstack.block_storage.v2.snapshot.Snapshot` objects will be returned. The default, ``True``, will cause :class:`~openstack.block_storage.v2.snapshot.SnapshotDetail` objects to be returned. :param kwargs \*\*query: Optional query parameters to be sent to limit the snapshots being returned. Available parameters include: * name: Name of the snapshot as a string. * all_tenants: Whether return the snapshots of all tenants. * volume_id: volume id of a snapshot. * status: Value of the status of the snapshot so that you can filter on "available" for example. :returns: A generator of snapshot objects. """ snapshot = _snapshot.SnapshotDetail if details else _snapshot.Snapshot return self._list(snapshot, paginated=True, **query) def create_snapshot(self, **attrs): """Create a new snapshot from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.volume.v2.snapshot.Snapshot`, comprised of the properties on the Snapshot class. 
:returns: The results of snapshot creation :rtype: :class:`~openstack.volume.v2.snapshot.Snapshot` """ return self._create(_snapshot.Snapshot, **attrs) def delete_snapshot(self, snapshot, ignore_missing=True): """Delete a snapshot :param snapshot: The value can be either the ID of a snapshot or a :class:`~openstack.volume.v2.snapshot.Snapshot` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the snapshot does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent snapshot. :returns: ``None`` """ self._delete(_snapshot.Snapshot, snapshot, ignore_missing=ignore_missing) def get_type(self, type): """Get a single type :param type: The value can be the ID of a type or a :class:`~openstack.volume.v2.type.Type` instance. :returns: One :class:`~openstack.volume.v2.type.Type` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_type.Type, type) def types(self): """Retrieve a generator of volume types :returns: A generator of volume type objects. """ return self._list(_type.Type, paginated=False) def create_type(self, **attrs): """Create a new type from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.volume.v2.type.Type`, comprised of the properties on the Type class. :returns: The results of type creation :rtype: :class:`~openstack.volume.v2.type.Type` """ return self._create(_type.Type, **attrs) def delete_type(self, type, ignore_missing=True): """Delete a type :param type: The value can be either the ID of a type or a :class:`~openstack.volume.v2.type.Type` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the type does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent type. 
:returns: ``None`` """ self._delete(_type.Type, type, ignore_missing=ignore_missing) def get_volume(self, volume): """Get a single volume :param volume: The value can be the ID of a volume or a :class:`~openstack.volume.v2.volume.Volume` instance. :returns: One :class:`~openstack.volume.v2.volume.Volume` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_volume.Volume, volume) def volumes(self, details=True, **query): """Retrieve a generator of volumes :param bool details: When set to ``False`` :class:`~openstack.block_storage.v2.volume.Volume` objects will be returned. The default, ``True``, will cause :class:`~openstack.block_storage.v2.volume.VolumeDetail` objects to be returned. :param kwargs \*\*query: Optional query parameters to be sent to limit the volumes being returned. Available parameters include: * name: Name of the volume as a string. * all_tenants: Whether return the volumes of all tenants * status: Value of the status of the volume so that you can filter on "available" for example. :returns: A generator of volume objects. """ volume = _volume.VolumeDetail if details else _volume.Volume return self._list(volume, paginated=True, **query) def create_volume(self, **attrs): """Create a new volume from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.volume.v2.volume.Volume`, comprised of the properties on the Volume class. :returns: The results of volume creation :rtype: :class:`~openstack.volume.v2.volume.Volume` """ return self._create(_volume.Volume, **attrs) def delete_volume(self, volume, ignore_missing=True): """Delete a volume :param volume: The value can be either the ID of a volume or a :class:`~openstack.volume.v2.volume.Volume` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the volume does not exist. 
When set to ``True``, no exception will be set when attempting to delete a nonexistent volume. :returns: ``None`` """ self._delete(_volume.Volume, volume, ignore_missing=ignore_missing) def backend_pools(self): """Returns a generator of cinder Back-end storage pools :returns A generator of cinder Back-end storage pools objects """ return self._list(_stats.Pools, paginated=False) openstacksdk-0.11.3/openstack/block_storage/__init__.py0000666000175100017510000000000013236151340023215 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/resource2.py0000666000175100017510000000162213236151340020564 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import resource from openstack import utils class Resource(resource.Resource): @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="openstack.resource2 is now openstack.resource") def __init__(self, *args, **kwargs): super(Resource, self).__init__(*args, **kwargs) openstacksdk-0.11.3/openstack/service_description.py0000666000175100017510000001256613236151340022727 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. __all__ = [ 'OpenStackServiceDescription', 'ServiceDescription', ] import importlib import os_service_types from openstack import _log from openstack import proxy _logger = _log.setup_logging('openstack') _service_type_manager = os_service_types.ServiceTypes() class ServiceDescription(object): #: Proxy class for this service proxy_class = proxy.BaseProxy #: main service_type to use to find this service in the catalog service_type = None #: list of aliases this service might be registered as aliases = [] def __init__(self, service_type, proxy_class=None, aliases=None): """Class describing how to interact with a REST service. Each service in an OpenStack cloud needs to be found by looking for it in the catalog. Once the endpoint is found, REST calls can be made, but a Proxy class and some Resource objects are needed to provide an object interface. Instances of ServiceDescription can be passed to `openstack.connection.Connection.add_service`, or a list can be passed to the `openstack.connection.Connection` constructor in the ``extra_services`` argument. All three parameters can be provided at instantation time, or a service-specific subclass can be used that sets the attributes directly. :param string service_type: service_type to look for in the keystone catalog :param proxy.BaseProxy proxy_class: subclass of :class:`~openstack.proxy.BaseProxy` implementing an interface for this service. Defaults to :class:`~openstack.proxy.BaseProxy` which provides REST operations but no additional features. 
:param list aliases: Optional list of aliases, if there is more than one name that might be used to register the service in the catalog. """ self.service_type = service_type or self.service_type self.proxy_class = proxy_class or self.proxy_class if self.proxy_class: self._validate_proxy_class() self.aliases = aliases or self.aliases self.all_types = [service_type] + self.aliases self._proxy = None def _validate_proxy_class(self): if not issubclass(self.proxy_class, proxy.BaseProxy): raise TypeError( "{module}.{proxy_class} must inherit from BaseProxy".format( module=self.proxy_class.__module__, proxy_class=self.proxy_class.__name__)) def get_proxy_class(self, config): return self.proxy_class def __get__(self, instance, owner): if instance is None: return self if self.service_type not in instance._proxies: config = instance.config proxy_class = self.get_proxy_class(config) instance._proxies[self.service_type] = proxy_class( session=instance.config.get_session(), task_manager=instance.task_manager, allow_version_hack=True, service_type=config.get_service_type(self.service_type), service_name=config.get_service_name(self.service_type), interface=config.get_interface(self.service_type), region_name=config.region_name, version=config.get_api_version(self.service_type) ) return instance._proxies[self.service_type] def __set__(self, instance, value): raise AttributeError('Service Descriptors cannot be set') def __delete__(self, instance): raise AttributeError('Service Descriptors cannot be deleted') class OpenStackServiceDescription(ServiceDescription): def __init__(self, service_filter_class, *args, **kwargs): """Official OpenStack ServiceDescription. The OpenStackServiceDescription class is a helper class for services listed in Service Types Authority and that are directly supported by openstacksdk. It finds the proxy_class by looking in the openstacksdk tree for appropriately named modules. 
:param service_filter_class: A subclass of :class:`~openstack.service_filter.ServiceFilter` """ super(OpenStackServiceDescription, self).__init__(*args, **kwargs) self._service_filter_class = service_filter_class def get_proxy_class(self, config): # TODO(mordred) Replace this with proper discovery version_string = config.get_api_version(self.service_type) version = None if version_string: version = 'v{version}'.format(version=version_string[0]) service_filter = self._service_filter_class(version=version) module_name = service_filter.get_module() + "._proxy" module = importlib.import_module(module_name) return getattr(module, "Proxy") openstacksdk-0.11.3/openstack/cloud/0000775000175100017510000000000013236151501017403 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/_normalize.py0000666000175100017510000011214713236151364022133 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # Copyright (c) 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # TODO(shade) The normalize functions here should get merged in to # the sdk resource objects. 
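ServiceDescription above is a Python descriptor: the first attribute access on a connection builds the service proxy and caches it in the instance's `_proxies` dict, while `__set__`/`__delete__` reject mutation. The caching mechanics can be sketched in isolation — `FakeProxy` and `Connection` here are stand-ins, not the SDK's real classes:

```python
# Sketch of the descriptor protocol ServiceDescription relies on:
# lazy construction plus per-instance caching keyed by service_type.

class FakeProxy(object):
    def __init__(self, service_type):
        self.service_type = service_type

class Descriptor(object):
    def __init__(self, service_type):
        self.service_type = service_type

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # Build the proxy once, then reuse it for every later access
        if self.service_type not in instance._proxies:
            instance._proxies[self.service_type] = FakeProxy(
                self.service_type)
        return instance._proxies[self.service_type]

    def __set__(self, instance, value):
        raise AttributeError('Service Descriptors cannot be set')

class Connection(object):
    volume = Descriptor('volume')

    def __init__(self):
        self._proxies = {}

conn = Connection()
```

Because `__set__` is defined, the descriptor is a data descriptor, so `conn.volume` always routes through `__get__` and `conn.volume = ...` raises rather than shadowing the descriptor in the instance dict.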
import datetime import munch import six _IMAGE_FIELDS = ( 'checksum', 'container_format', 'direct_url', 'disk_format', 'file', 'id', 'name', 'owner', 'virtual_size', ) _SERVER_FIELDS = ( 'accessIPv4', 'accessIPv6', 'addresses', 'adminPass', 'created', 'key_name', 'metadata', 'networks', 'private_v4', 'public_v4', 'public_v6', 'status', 'updated', 'user_id', ) _KEYPAIR_FIELDS = ( 'fingerprint', 'name', 'private_key', 'public_key', 'user_id', ) _KEYPAIR_USELESS_FIELDS = ( 'deleted', 'deleted_at', 'id', 'updated_at', ) _COMPUTE_LIMITS_FIELDS = ( ('maxPersonality', 'max_personality'), ('maxPersonalitySize', 'max_personality_size'), ('maxServerGroupMembers', 'max_server_group_members'), ('maxServerGroups', 'max_server_groups'), ('maxServerMeta', 'max_server_meta'), ('maxTotalCores', 'max_total_cores'), ('maxTotalInstances', 'max_total_instances'), ('maxTotalKeypairs', 'max_total_keypairs'), ('maxTotalRAMSize', 'max_total_ram_size'), ('totalCoresUsed', 'total_cores_used'), ('totalInstancesUsed', 'total_instances_used'), ('totalRAMUsed', 'total_ram_used'), ('totalServerGroupsUsed', 'total_server_groups_used'), ) _pushdown_fields = { 'project': [ 'domain_id' ] } def _split_filters(obj_name='', filters=None, **kwargs): # Handle jmespath filters if not filters: filters = {} if not isinstance(filters, dict): return {}, filters # Filter out None values from extra kwargs, because those are # defaults.
If you want to search for things with None values, # they're going to need to go into the filters dict for (key, value) in kwargs.items(): if value is not None: filters[key] = value pushdown = {} client = {} for (key, value) in filters.items(): if key in _pushdown_fields.get(obj_name, {}): pushdown[key] = value else: client[key] = value return pushdown, client def _to_bool(value): if isinstance(value, six.string_types): if not value: return False prospective = value.lower().capitalize() return prospective == 'True' return bool(value) def _pop_int(resource, key): return int(resource.pop(key, 0) or 0) def _pop_float(resource, key): return float(resource.pop(key, 0) or 0) def _pop_or_get(resource, key, default, strict): if strict: return resource.pop(key, default) else: return resource.get(key, default) class Normalizer(object): '''Mix-in class to provide the normalization functions. This is in a separate class just for on-disk source code organization reasons. ''' def _normalize_compute_limits(self, limits, project_id=None): """ Normalize a limits object. Limits modified in this method and shouldn't be modified afterwards. 
""" # Copy incoming limits because of shared dicts in unittests limits = limits['absolute'].copy() new_limits = munch.Munch() new_limits['location'] = self._get_current_location( project_id=project_id) for field in _COMPUTE_LIMITS_FIELDS: new_limits[field[1]] = limits.pop(field[0], None) new_limits['properties'] = limits.copy() return new_limits def _remove_novaclient_artifacts(self, item): # Remove novaclient artifacts item.pop('links', None) item.pop('NAME_ATTR', None) item.pop('HUMAN_ID', None) item.pop('human_id', None) item.pop('request_ids', None) item.pop('x_openstack_request_ids', None) def _normalize_flavors(self, flavors): """ Normalize a list of flavor objects """ ret = [] for flavor in flavors: ret.append(self._normalize_flavor(flavor)) return ret def _normalize_flavor(self, flavor): """ Normalize a flavor object """ new_flavor = munch.Munch() # Copy incoming group because of shared dicts in unittests flavor = flavor.copy() # Discard noise self._remove_novaclient_artifacts(flavor) flavor.pop('links', None) ephemeral = int(_pop_or_get( flavor, 'OS-FLV-EXT-DATA:ephemeral', 0, self.strict_mode)) ephemeral = flavor.pop('ephemeral', ephemeral) is_public = _to_bool(_pop_or_get( flavor, 'os-flavor-access:is_public', True, self.strict_mode)) is_public = _to_bool(flavor.pop('is_public', is_public)) is_disabled = _to_bool(_pop_or_get( flavor, 'OS-FLV-DISABLED:disabled', False, self.strict_mode)) extra_specs = _pop_or_get( flavor, 'OS-FLV-WITH-EXT-SPECS:extra_specs', {}, self.strict_mode) extra_specs = flavor.pop('extra_specs', extra_specs) extra_specs = munch.Munch(extra_specs) new_flavor['location'] = self.current_location new_flavor['id'] = flavor.pop('id') new_flavor['name'] = flavor.pop('name') new_flavor['is_public'] = is_public new_flavor['is_disabled'] = is_disabled new_flavor['ram'] = _pop_int(flavor, 'ram') new_flavor['vcpus'] = _pop_int(flavor, 'vcpus') new_flavor['disk'] = _pop_int(flavor, 'disk') new_flavor['ephemeral'] = ephemeral new_flavor['swap'] 
= _pop_int(flavor, 'swap') new_flavor['rxtx_factor'] = _pop_float(flavor, 'rxtx_factor') new_flavor['properties'] = flavor.copy() new_flavor['extra_specs'] = extra_specs # Backwards compat with nova - passthrough values if not self.strict_mode: for (k, v) in new_flavor['properties'].items(): new_flavor.setdefault(k, v) return new_flavor def _normalize_keypairs(self, keypairs): """Normalize Nova Keypairs""" ret = [] for keypair in keypairs: ret.append(self._normalize_keypair(keypair)) return ret def _normalize_keypair(self, keypair): """Normalize a Nova Keypair""" new_keypair = munch.Munch() keypair = keypair.copy() # Discard noise self._remove_novaclient_artifacts(keypair) new_keypair['location'] = self.current_location for key in _KEYPAIR_FIELDS: new_keypair[key] = keypair.pop(key, None) # These are completely meaningless fields for key in _KEYPAIR_USELESS_FIELDS: keypair.pop(key, None) new_keypair['type'] = keypair.pop('type', 'ssh') # created_at isn't returned from the keypair creation. (what?) new_keypair['created_at'] = keypair.pop( 'created_at', datetime.datetime.now().isoformat()) # Don't even get me started on this new_keypair['id'] = new_keypair['name'] new_keypair['properties'] = keypair.copy() return new_keypair def _normalize_images(self, images): ret = [] for image in images: ret.append(self._normalize_image(image)) return ret def _normalize_image(self, image): new_image = munch.Munch( location=self._get_current_location(project_id=image.get('owner'))) # This copy is to keep things from getting epically weird in tests image = image.copy() # Discard noise self._remove_novaclient_artifacts(image) # If someone made a property called "properties" that contains a # string (this has happened at least one time in the wild), # the rest of the normalization here goes belly up.
properties = image.pop('properties', {}) if not isinstance(properties, dict): properties = {'properties': properties} visibility = image.pop('visibility', None) protected = _to_bool(image.pop('protected', False)) if visibility: is_public = (visibility == 'public') else: is_public = image.pop('is_public', False) visibility = 'public' if is_public else 'private' new_image['size'] = image.pop('OS-EXT-IMG-SIZE:size', 0) new_image['size'] = image.pop('size', new_image['size']) new_image['min_ram'] = image.pop('minRam', 0) new_image['min_ram'] = image.pop('min_ram', new_image['min_ram']) new_image['min_disk'] = image.pop('minDisk', 0) new_image['min_disk'] = image.pop('min_disk', new_image['min_disk']) new_image['created_at'] = image.pop('created', '') new_image['created_at'] = image.pop( 'created_at', new_image['created_at']) new_image['updated_at'] = image.pop('updated', '') new_image['updated_at'] = image.pop( 'updated_at', new_image['updated_at']) for field in _IMAGE_FIELDS: new_image[field] = image.pop(field, None) new_image['tags'] = image.pop('tags', []) new_image['status'] = image.pop('status').lower() for field in ('min_ram', 'min_disk', 'size', 'virtual_size'): new_image[field] = _pop_int(new_image, field) new_image['is_protected'] = protected new_image['locations'] = image.pop('locations', []) metadata = image.pop('metadata', {}) for key, val in metadata.items(): properties.setdefault(key, val) for key, val in image.items(): properties.setdefault(key, val) new_image['properties'] = properties new_image['is_public'] = is_public new_image['visibility'] = visibility # Backwards compat with glance if not self.strict_mode: for key, val in properties.items(): if key != 'properties': new_image[key] = val new_image['protected'] = protected new_image['metadata'] = properties new_image['created'] = new_image['created_at'] new_image['updated'] = new_image['updated_at'] new_image['minDisk'] = new_image['min_disk'] new_image['minRam'] = new_image['min_ram'] return 
new_image def _normalize_secgroups(self, groups): """Normalize the structure of security groups This makes security group dicts, as returned from nova, look like the security group dicts as returned from neutron. This does not make them look exactly the same, but it's pretty close. :param list groups: A list of security group dicts. :returns: A list of normalized dicts. """ ret = [] for group in groups: ret.append(self._normalize_secgroup(group)) return ret def _normalize_secgroup(self, group): ret = munch.Munch() # Copy incoming group because of shared dicts in unittests group = group.copy() # Discard noise self._remove_novaclient_artifacts(group) rules = self._normalize_secgroup_rules( group.pop('security_group_rules', group.pop('rules', []))) project_id = group.pop('tenant_id', '') project_id = group.pop('project_id', project_id) ret['location'] = self._get_current_location(project_id=project_id) ret['id'] = group.pop('id') ret['name'] = group.pop('name') ret['security_group_rules'] = rules ret['description'] = group.pop('description') ret['properties'] = group # Backwards compat with Neutron if not self.strict_mode: ret['tenant_id'] = project_id ret['project_id'] = project_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_secgroup_rules(self, rules): """Normalize the structure of nova security group rules Note that nova uses -1 for non-specific port values, but neutron represents these with None. :param list rules: A list of security group rule dicts. :returns: A list of normalized dicts. 
""" ret = [] for rule in rules: ret.append(self._normalize_secgroup_rule(rule)) return ret def _normalize_secgroup_rule(self, rule): ret = munch.Munch() # Copy incoming rule because of shared dicts in unittests rule = rule.copy() ret['id'] = rule.pop('id') ret['direction'] = rule.pop('direction', 'ingress') ret['ethertype'] = rule.pop('ethertype', 'IPv4') port_range_min = rule.get( 'port_range_min', rule.pop('from_port', None)) if port_range_min == -1: port_range_min = None if port_range_min is not None: port_range_min = int(port_range_min) ret['port_range_min'] = port_range_min port_range_max = rule.pop( 'port_range_max', rule.pop('to_port', None)) if port_range_max == -1: port_range_max = None if port_range_min is not None: port_range_min = int(port_range_min) ret['port_range_max'] = port_range_max ret['protocol'] = rule.pop('protocol', rule.pop('ip_protocol', None)) ret['remote_ip_prefix'] = rule.pop( 'remote_ip_prefix', rule.pop('ip_range', {}).get('cidr', None)) ret['security_group_id'] = rule.pop( 'security_group_id', rule.pop('parent_group_id', None)) ret['remote_group_id'] = rule.pop('remote_group_id', None) project_id = rule.pop('tenant_id', '') project_id = rule.pop('project_id', project_id) ret['location'] = self._get_current_location(project_id=project_id) ret['properties'] = rule # Backwards compat with Neutron if not self.strict_mode: ret['tenant_id'] = project_id ret['project_id'] = project_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_servers(self, servers): # Here instead of _utils because we need access to region and cloud # name from the cloud object ret = [] for server in servers: ret.append(self._normalize_server(server)) return ret def _normalize_server(self, server): ret = munch.Munch() # Copy incoming server because of shared dicts in unittests server = server.copy() self._remove_novaclient_artifacts(server) ret['id'] = server.pop('id') ret['name'] = server.pop('name') 
server['flavor'].pop('links', None) ret['flavor'] = server.pop('flavor') # OpenStack can return image as a string when you've booted # from volume if str(server['image']) != server['image']: server['image'].pop('links', None) ret['image'] = server.pop('image') project_id = server.pop('tenant_id', '') project_id = server.pop('project_id', project_id) az = _pop_or_get( server, 'OS-EXT-AZ:availability_zone', None, self.strict_mode) ret['location'] = self._get_current_location( project_id=project_id, zone=az) # Ensure volumes is always in the server dict, even if empty ret['volumes'] = _pop_or_get( server, 'os-extended-volumes:volumes_attached', [], self.strict_mode) config_drive = server.pop('config_drive', False) ret['has_config_drive'] = _to_bool(config_drive) host_id = server.pop('hostId', None) ret['host_id'] = host_id ret['progress'] = _pop_int(server, 'progress') # Leave these in so that the general properties handling works ret['disk_config'] = _pop_or_get( server, 'OS-DCF:diskConfig', None, self.strict_mode) for key in ( 'OS-EXT-STS:power_state', 'OS-EXT-STS:task_state', 'OS-EXT-STS:vm_state', 'OS-SRV-USG:launched_at', 'OS-SRV-USG:terminated_at'): short_key = key.split(':')[1] ret[short_key] = _pop_or_get(server, key, None, self.strict_mode) # Protect against security_groups being None ret['security_groups'] = server.pop('security_groups', None) or [] for field in _SERVER_FIELDS: ret[field] = server.pop(field, None) if not ret['networks']: ret['networks'] = {} ret['interface_ip'] = '' ret['properties'] = server.copy() # Backwards compat if not self.strict_mode: ret['hostId'] = host_id ret['config_drive'] = config_drive ret['project_id'] = project_id ret['tenant_id'] = project_id ret['region'] = self.region_name ret['cloud'] = self.name ret['az'] = az for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_floating_ips(self, ips): """Normalize the structure of floating IPs Unfortunately, not all the Neutron floating_ip 
        attributes are available with Nova and not all Nova floating_ip
        attributes are available with Neutron. This function extracts
        attributes that are common to the Nova and Neutron floating IP
        resources.

        If the whole structure is needed inside shade, shade provides private
        methods that return "original" objects
        (e.g. _neutron_allocate_floating_ip).

        :param list ips: A list of Neutron floating IPs.

        :returns:
            A list of normalized dicts with the following attributes::

              [
                {
                  "id": "this-is-a-floating-ip-id",
                  "fixed_ip_address": "192.0.2.10",
                  "floating_ip_address": "198.51.100.10",
                  "network": "this-is-a-net-or-pool-id",
                  "attached": True,
                  "status": "ACTIVE"
                }, ...
              ]

        """
        return [
            self._normalize_floating_ip(ip) for ip in ips
        ]

    def _normalize_floating_ip(self, ip):
        ret = munch.Munch()

        # Copy incoming floating ip because of shared dicts in unittests
        ip = ip.copy()

        fixed_ip_address = ip.pop('fixed_ip_address', ip.pop('fixed_ip', None))
        floating_ip_address = ip.pop('floating_ip_address', ip.pop('ip', None))
        network_id = ip.pop(
            'floating_network_id', ip.pop('network', ip.pop('pool', None)))
        project_id = ip.pop('tenant_id', '')
        project_id = ip.pop('project_id', project_id)

        instance_id = ip.pop('instance_id', None)
        router_id = ip.pop('router_id', None)
        id = ip.pop('id')
        port_id = ip.pop('port_id', None)
        created_at = ip.pop('created_at', None)
        updated_at = ip.pop('updated_at', None)
        # Note - description may not always be on the underlying cloud.
        # Normalizing it here is easy - what do we do when people want to
        # set a description?
description = ip.pop('description', '') revision_number = ip.pop('revision_number', None) if self._use_neutron_floating(): attached = bool(port_id) status = ip.pop('status', 'UNKNOWN') else: attached = bool(instance_id) # In neutron's terms, Nova floating IPs are always ACTIVE status = 'ACTIVE' ret = munch.Munch( attached=attached, fixed_ip_address=fixed_ip_address, floating_ip_address=floating_ip_address, id=id, location=self._get_current_location(project_id=project_id), network=network_id, port=port_id, router=router_id, status=status, created_at=created_at, updated_at=updated_at, description=description, revision_number=revision_number, properties=ip.copy(), ) # Backwards compat if not self.strict_mode: ret['port_id'] = port_id ret['router_id'] = router_id ret['project_id'] = project_id ret['tenant_id'] = project_id ret['floating_network_id'] = network_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_projects(self, projects): """Normalize the structure of projects This makes tenants from keystone v2 look like projects from v3. :param list projects: A list of projects to normalize :returns: A list of normalized dicts. 
""" ret = [] for project in projects: ret.append(self._normalize_project(project)) return ret def _normalize_project(self, project): # Copy incoming project because of shared dicts in unittests project = project.copy() # Discard noise self._remove_novaclient_artifacts(project) # In both v2 and v3 project_id = project.pop('id') name = project.pop('name', '') description = project.pop('description', '') is_enabled = project.pop('enabled', True) # v3 additions domain_id = project.pop('domain_id', 'default') parent_id = project.pop('parent_id', None) is_domain = project.pop('is_domain', False) # Projects have a special relationship with location location = self._get_identity_location() location['project']['domain_id'] = domain_id location['project']['id'] = parent_id ret = munch.Munch( location=location, id=project_id, name=name, description=description, is_enabled=is_enabled, is_domain=is_domain, domain_id=domain_id, properties=project.copy() ) # Backwards compat if not self.strict_mode: ret['enabled'] = is_enabled ret['parent_id'] = parent_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_volume_type_access(self, volume_type_access): volume_type_access = volume_type_access.copy() volume_type_id = volume_type_access.pop('volume_type_id') project_id = volume_type_access.pop('project_id') ret = munch.Munch( location=self.current_location, project_id=project_id, volume_type_id=volume_type_id, properties=volume_type_access.copy(), ) return ret def _normalize_volume_type_accesses(self, volume_type_accesses): ret = [] for volume_type_access in volume_type_accesses: ret.append(self._normalize_volume_type_access(volume_type_access)) return ret def _normalize_volume_type(self, volume_type): volume_type = volume_type.copy() volume_id = volume_type.pop('id') description = volume_type.pop('description', None) name = volume_type.pop('name', None) old_is_public = volume_type.pop('os-volume-type-access:is_public', False) is_public = 
volume_type.pop('is_public', old_is_public)
        qos_specs_id = volume_type.pop('qos_specs_id', None)
        extra_specs = volume_type.pop('extra_specs', {})
        ret = munch.Munch(
            location=self.current_location,
            is_public=is_public,
            id=volume_id,
            name=name,
            description=description,
            qos_specs_id=qos_specs_id,
            extra_specs=extra_specs,
            properties=volume_type.copy(),
        )
        return ret

    def _normalize_volume_types(self, volume_types):
        ret = []
        for volume in volume_types:
            ret.append(self._normalize_volume_type(volume))
        return ret

    def _normalize_volumes(self, volumes):
        """Normalize the structure of volumes

        This makes volumes from cinder v1 look like volumes from v2.

        :param list volumes: A list of volumes to normalize

        :returns: A list of normalized dicts.
        """
        ret = []
        for volume in volumes:
            ret.append(self._normalize_volume(volume))
        return ret

    def _normalize_volume(self, volume):
        volume = volume.copy()

        # Discard noise
        self._remove_novaclient_artifacts(volume)

        volume_id = volume.pop('id')

        name = volume.pop('display_name', None)
        name = volume.pop('name', name)

        description = volume.pop('display_description', None)
        description = volume.pop('description', description)

        is_bootable = _to_bool(volume.pop('bootable', True))
        is_encrypted = _to_bool(volume.pop('encrypted', False))
        can_multiattach = _to_bool(volume.pop('multiattach', False))

        project_id = _pop_or_get(
            volume, 'os-vol-tenant-attr:tenant_id', None, self.strict_mode)
        az = volume.pop('availability_zone', None)

        location = self._get_current_location(project_id=project_id, zone=az)

        host = _pop_or_get(
            volume, 'os-vol-host-attr:host', None, self.strict_mode)
        replication_extended_status = _pop_or_get(
            volume, 'os-volume-replication:extended_status',
            None, self.strict_mode)
        migration_status = _pop_or_get(
            volume, 'os-vol-mig-status-attr:migstat', None, self.strict_mode)
        migration_status = volume.pop('migration_status', migration_status)
        _pop_or_get(volume, 'user_id', None, self.strict_mode)
        source_volume_id = _pop_or_get(
            volume, 'source_volid',
            None, self.strict_mode)
        replication_driver = _pop_or_get(
            volume, 'os-volume-replication:driver_data',
            None, self.strict_mode)

        ret = munch.Munch(
            location=location,
            id=volume_id,
            name=name,
            description=description,
            size=_pop_int(volume, 'size'),
            attachments=volume.pop('attachments', []),
            status=volume.pop('status'),
            migration_status=migration_status,
            host=host,
            replication_driver=replication_driver,
            replication_status=volume.pop('replication_status', None),
            replication_extended_status=replication_extended_status,
            snapshot_id=volume.pop('snapshot_id', None),
            created_at=volume.pop('created_at'),
            updated_at=volume.pop('updated_at', None),
            source_volume_id=source_volume_id,
            consistencygroup_id=volume.pop('consistencygroup_id', None),
            volume_type=volume.pop('volume_type', None),
            metadata=volume.pop('metadata', {}),
            is_bootable=is_bootable,
            is_encrypted=is_encrypted,
            can_multiattach=can_multiattach,
            properties=volume.copy(),
        )

        # Backwards compat
        if not self.strict_mode:
            ret['display_name'] = name
            ret['display_description'] = description
            ret['bootable'] = is_bootable
            ret['encrypted'] = is_encrypted
            ret['multiattach'] = can_multiattach
            ret['availability_zone'] = az
            for key, val in ret['properties'].items():
                ret.setdefault(key, val)
        return ret

    def _normalize_volume_attachment(self, attachment):
        """Normalize a volume attachment object"""
        attachment = attachment.copy()

        # Discard noise
        self._remove_novaclient_artifacts(attachment)
        return munch.Munch(**attachment)

    def _normalize_volume_backups(self, backups):
        ret = []
        for backup in backups:
            ret.append(self._normalize_volume_backup(backup))
        return ret

    def _normalize_volume_backup(self, backup):
        """Normalize a volume backup object"""
        backup = backup.copy()
        # Discard noise
        self._remove_novaclient_artifacts(backup)
        return munch.Munch(**backup)

    def _normalize_compute_usage(self, usage):
        """Normalize a compute usage object"""
        usage = usage.copy()

        # Discard noise
        self._remove_novaclient_artifacts(usage)
        project_id = usage.pop('tenant_id', None)

        ret = munch.Munch(
            location=self._get_current_location(project_id=project_id),
        )
        for key in (
                'max_personality',
                'max_personality_size',
                'max_server_group_members',
                'max_server_groups',
                'max_server_meta',
                'max_total_cores',
                'max_total_instances',
                'max_total_keypairs',
                'max_total_ram_size',
                'total_cores_used',
                'total_hours',
                'total_instances_used',
                'total_local_gb_usage',
                'total_memory_mb_usage',
                'total_ram_used',
                'total_server_groups_used',
                'total_vcpus_usage'):
            ret[key] = usage.pop(key, 0)
        ret['started_at'] = usage.pop('start', None)
        ret['stopped_at'] = usage.pop('stop', None)
        ret['server_usages'] = self._normalize_server_usages(
            usage.pop('server_usages', []))
        ret['properties'] = usage
        return ret

    def _normalize_server_usage(self, server_usage):
        """Normalize a server usage object"""
        server_usage = server_usage.copy()
        # TODO(mordred) Right now there is already a location on the usage
        # object. Including one here seems verbose.
        server_usage.pop('tenant_id')
        ret = munch.Munch()

        ret['ended_at'] = server_usage.pop('ended_at', None)
        ret['started_at'] = server_usage.pop('started_at', None)
        for key in (
                'flavor',
                'instance_id',
                'name',
                'state'):
            ret[key] = server_usage.pop(key, '')
        for key in (
                'hours',
                'local_gb',
                'memory_mb',
                'uptime',
                'vcpus'):
            ret[key] = server_usage.pop(key, 0)
        ret['properties'] = server_usage
        return ret

    def _normalize_server_usages(self, server_usages):
        ret = []
        for server_usage in server_usages:
            ret.append(self._normalize_server_usage(server_usage))
        return ret

    def _normalize_cluster_templates(self, cluster_templates):
        ret = []
        for cluster_template in cluster_templates:
            ret.append(self._normalize_cluster_template(cluster_template))
        return ret

    def _normalize_cluster_template(self, cluster_template):
        """Normalize Magnum cluster_templates."""
        cluster_template = cluster_template.copy()

        # Discard noise
        cluster_template.pop('links', None)
        cluster_template.pop('human_id', None)
        # model_name is a magnumclient-ism
cluster_template.pop('model_name', None) ct_id = cluster_template.pop('uuid') ret = munch.Munch( id=ct_id, location=self._get_current_location(), ) ret['is_public'] = cluster_template.pop('public') ret['is_registry_enabled'] = cluster_template.pop('registry_enabled') ret['is_tls_disabled'] = cluster_template.pop('tls_disabled') # pop floating_ip_enabled since we want to hide it in a future patch fip_enabled = cluster_template.pop('floating_ip_enabled', None) if not self.strict_mode: ret['uuid'] = ct_id if fip_enabled is not None: ret['floating_ip_enabled'] = fip_enabled ret['public'] = ret['is_public'] ret['registry_enabled'] = ret['is_registry_enabled'] ret['tls_disabled'] = ret['is_tls_disabled'] # Optional keys for (key, default) in ( ('fixed_network', None), ('fixed_subnet', None), ('http_proxy', None), ('https_proxy', None), ('labels', {}), ('master_flavor_id', None), ('no_proxy', None)): if key in cluster_template: ret[key] = cluster_template.pop(key, default) for key in ( 'apiserver_port', 'cluster_distro', 'coe', 'created_at', 'dns_nameserver', 'docker_volume_size', 'external_network_id', 'flavor_id', 'image_id', 'insecure_registry', 'keypair_id', 'name', 'network_driver', 'server_type', 'updated_at', 'volume_driver'): ret[key] = cluster_template.pop(key) ret['properties'] = cluster_template return ret def _normalize_magnum_services(self, magnum_services): ret = [] for magnum_service in magnum_services: ret.append(self._normalize_magnum_service(magnum_service)) return ret def _normalize_magnum_service(self, magnum_service): """Normalize Magnum magnum_services.""" magnum_service = magnum_service.copy() # Discard noise magnum_service.pop('links', None) magnum_service.pop('human_id', None) # model_name is a magnumclient-ism magnum_service.pop('model_name', None) ret = munch.Munch(location=self._get_current_location()) for key in ( 'binary', 'created_at', 'disabled_reason', 'host', 'id', 'report_count', 'state', 'updated_at'): ret[key] = magnum_service.pop(key) 
        ret['properties'] = magnum_service
        return ret

    def _normalize_stacks(self, stacks):
        """Normalize Heat Stacks"""
        ret = []
        for stack in stacks:
            ret.append(self._normalize_stack(stack))
        return ret

    def _normalize_stack(self, stack):
        """Normalize Heat Stack"""
        stack = stack.copy()

        # Discard noise
        self._remove_novaclient_artifacts(stack)

        # Discard things heatclient adds that aren't in the REST
        stack.pop('action', None)
        stack.pop('status', None)
        stack.pop('identifier', None)

        stack_status = stack.pop('stack_status')
        (action, status) = stack_status.split('_', 1)

        ret = munch.Munch(
            id=stack.pop('id'),
            location=self._get_current_location(),
            action=action,
            status=status,
        )
        if not self.strict_mode:
            ret['stack_status'] = stack_status

        for (new_name, old_name) in (
                ('name', 'stack_name'),
                ('created_at', 'creation_time'),
                ('deleted_at', 'deletion_time'),
                ('updated_at', 'updated_time'),
                ('description', 'description'),
                ('is_rollback_enabled', 'disable_rollback'),
                ('parent', 'parent'),
                ('notification_topics', 'notification_topics'),
                ('parameters', 'parameters'),
                ('outputs', 'outputs'),
                ('owner', 'stack_owner'),
                ('status_reason', 'stack_status_reason'),
                ('stack_user_project_id', 'stack_user_project_id'),
                ('template_description', 'template_description'),
                ('timeout_mins', 'timeout_mins'),
                ('tags', 'tags')):
            value = stack.pop(old_name, None)
            ret[new_name] = value
            if not self.strict_mode:
                ret[old_name] = value

        ret['identifier'] = '{name}/{id}'.format(
            name=ret['name'], id=ret['id'])
        ret['properties'] = stack
        return ret

    def _normalize_machines(self, machines):
        """Normalize Ironic Machines"""
        ret = []
        for machine in machines:
            ret.append(self._normalize_machine(machine))
        return ret

    def _normalize_machine(self, machine):
        """Normalize Ironic Machine"""
        machine = machine.copy()

        # Discard noise
        self._remove_novaclient_artifacts(machine)

        # TODO(mordred) Normalize this resource

        return machine

    def _normalize_roles(self, roles):
        """Normalize Keystone roles"""
        ret = []
        for role in roles:
ret.append(self._normalize_role(role)) return ret def _normalize_role(self, role): """Normalize Identity roles.""" return munch.Munch( id=role.get('id'), name=role.get('name'), domain_id=role.get('domain_id'), location=self._get_identity_location(), properties={}, ) openstacksdk-0.11.3/openstack/cloud/cmd/0000775000175100017510000000000013236151501020146 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/cmd/__init__.py0000666000175100017510000000000013236151340022250 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/cmd/inventory.py0000777000175100017510000000475213236151340022573 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
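# The _normalize_* helpers above all follow one pattern: copy the incoming
# dict, pop the keys we recognize into a canonical shape, and keep whatever
# is left over under 'properties' instead of dropping it. A minimal
# standalone sketch of that pattern (illustrative only: it uses a plain dict
# in place of munch.Munch, and normalize_role here is a hypothetical free
# function, not the class method above):

```python
def normalize_role(role, location=None):
    """Sketch of the normalization pattern used by the _normalize_* helpers.

    Hypothetical standalone version: plain dict instead of munch.Munch,
    and a caller-supplied location instead of _get_identity_location().
    """
    # Copy so the caller's dict is never mutated (the real helpers copy
    # for the same reason - shared dicts in unittests).
    role = dict(role)
    return {
        'id': role.pop('id', None),
        'name': role.pop('name', None),
        'domain_id': role.pop('domain_id', None),
        'location': location,
        # Anything we did not recognize is preserved, not dropped.
        'properties': role,
    }


raw = {'id': 'r-1', 'name': 'admin', 'vendor:weight': 10}
norm = normalize_role(raw)
print(norm['id'], norm['properties'])  # -> r-1 {'vendor:weight': 10}
```

# This is why callers can rely on a fixed set of top-level keys while
# cloud-specific extras remain reachable under 'properties'.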
import argparse import json import sys import yaml import openstack.cloud import openstack.cloud.inventory def output_format_dict(data, use_yaml): if use_yaml: return yaml.safe_dump(data, default_flow_style=False) else: return json.dumps(data, sort_keys=True, indent=2) def parse_args(): parser = argparse.ArgumentParser(description='OpenStack Inventory Module') parser.add_argument('--refresh', action='store_true', help='Refresh cached information') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--list', action='store_true', help='List active servers') group.add_argument('--host', help='List details about the specific host') parser.add_argument('--private', action='store_true', default=False, help='Use private IPs for interface_ip') parser.add_argument('--cloud', default=None, help='Return data for one cloud only') parser.add_argument('--yaml', action='store_true', default=False, help='Output data in nicely readable yaml') parser.add_argument('--debug', action='store_true', default=False, help='Enable debug output') return parser.parse_args() def main(): args = parse_args() try: openstack.cloud.simple_logging(debug=args.debug) inventory = openstack.cloud.inventory.OpenStackInventory( refresh=args.refresh, private=args.private, cloud=args.cloud) if args.list: output = inventory.list_hosts() elif args.host: output = inventory.get_host(args.host) print(output_format_dict(output, args.yaml)) except openstack.cloud.OpenStackCloudException as e: sys.stderr.write(e.message + '\n') sys.exit(1) sys.exit(0) if __name__ == '__main__': main() openstacksdk-0.11.3/openstack/cloud/_utils.py0000666000175100017510000005610113236151340021262 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import fnmatch import inspect import jmespath import munch import netifaces import re import six import sre_constants import sys import time import uuid from decorator import decorator from openstack import _log from openstack.cloud import exc from openstack.cloud import meta _decorated_methods = [] def _exc_clear(): """Because sys.exc_clear is gone in py3 and is not in six.""" if sys.version_info[0] == 2: sys.exc_clear() def _make_unicode(input): """Turn an input into unicode unconditionally :param input: A unicode, string or other object """ try: if isinstance(input, unicode): return input if isinstance(input, str): return input.decode('utf-8') else: # int, for example return unicode(input) except NameError: # python3! return str(input) def _dictify_resource(resource): if isinstance(resource, list): return [_dictify_resource(r) for r in resource] else: if hasattr(resource, 'toDict'): return resource.toDict() else: return resource def _filter_list(data, name_or_id, filters): """Filter a list by name/ID and arbitrary meta data. :param list data: The list of dictionary data to filter. It is expected that each dictionary contains an 'id' and 'name' key if a value for name_or_id is given. :param string name_or_id: The name or ID of the entity being filtered. Can be a glob pattern, such as 'nb01*'. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. 
        Example::

            {
              'last_name': 'Smith',
              'other': {
                  'gender': 'Female'
              }
            }

        OR
        A string containing a jmespath expression for further filtering.
    """
    # The logger is openstack.fnmatch to allow a user/operator to
    # configure logging not to communicate about fnmatch misses
    # (they shouldn't be too spammy, but one never knows)
    log = _log.setup_logging('openstack.fnmatch')
    if name_or_id:
        # name_or_id might already be unicode
        name_or_id = _make_unicode(name_or_id)
        identifier_matches = []
        bad_pattern = False
        try:
            fn_reg = re.compile(fnmatch.translate(name_or_id))
        except sre_constants.error:
            # If the fnmatch re doesn't compile, then we don't care,
            # but log it in case the user DID pass a pattern but did
            # it poorly and wants to know what went wrong with their
            # search
            fn_reg = None
        for e in data:
            e_id = _make_unicode(e.get('id', None))
            e_name = _make_unicode(e.get('name', None))

            if ((e_id and e_id == name_or_id) or
                    (e_name and e_name == name_or_id)):
                identifier_matches.append(e)
            else:
                # Only try fnmatch if we don't match exactly
                if not fn_reg:
                    # If we don't have a pattern, skip this, but set the flag
                    # so that we log the bad pattern
                    bad_pattern = True
                    continue
                if ((e_id and fn_reg.match(e_id)) or
                        (e_name and fn_reg.match(e_name))):
                    identifier_matches.append(e)
        if not identifier_matches and bad_pattern:
            log.debug("Bad pattern passed to fnmatch", exc_info=True)
        data = identifier_matches

    if not filters:
        return data

    if isinstance(filters, six.string_types):
        return jmespath.search(filters, data)

    def _dict_filter(f, d):
        if not d:
            return False
        for key in f.keys():
            if isinstance(f[key], dict):
                if not _dict_filter(f[key], d.get(key, None)):
                    return False
            elif d.get(key, None) != f[key]:
                return False
        return True

    filtered = []
    for e in data:
        filtered.append(e)
        for key in filters.keys():
            if isinstance(filters[key], dict):
                if not _dict_filter(filters[key], e.get(key, None)):
                    filtered.pop()
                    break
            elif e.get(key, None) != filters[key]:
                filtered.pop()
                break
    return filtered


def _get_entity(cloud, resource, name_or_id, filters, **kwargs):
    """Return a single entity from the list returned by a given method.

    :param object cloud:
        The controller class (Example: the main OpenStackCloud object).
    :param string or callable resource:
        The string that identifies the resource to use to lookup the
        get_<>_by_id or search_<>s methods (Example: network)
        or a callable to invoke.
    :param string name_or_id:
        The name or ID of the entity being filtered or a dict.
    :param filters:
        A dictionary of meta data to use for further filtering.
        OR
        A string containing a jmespath expression for further filtering.
        Example:: "[?last_name==`Smith`] | [?other.gender==`Female`]"
    """

    # Sometimes in the control flow of shade, we already have an object
    # fetched. Rather than then needing to pull the name or id out of that
    # object, pass it in here and rely on caching to prevent us from making
    # an additional call, it's simple enough to test to see if we got an
    # object and just short-circuit return it.
    if hasattr(name_or_id, 'id'):
        return name_or_id

    # If a uuid is passed short-circuit it calling the
    # get_<resource>_by_id method
    if getattr(cloud, 'use_direct_get', False) and _is_uuid_like(name_or_id):
        get_resource = getattr(cloud, 'get_%s_by_id' % resource, None)
        if get_resource:
            return get_resource(name_or_id)

    search = resource if callable(resource) else getattr(
        cloud, 'search_%ss' % resource, None)
    if search:
        entities = search(name_or_id, filters, **kwargs)
        if entities:
            if len(entities) > 1:
                raise exc.OpenStackCloudException(
                    "Multiple matches found for %s" % name_or_id)
            return entities[0]
    return None


def normalize_keystone_services(services):
    """Normalize the structure of keystone services

    In keystone v2, there is a field called "service_type". In v3, it's
    "type". Just make the returned dict have both.

    :param list services: A list of keystone service dicts

    :returns: A list of normalized dicts.
""" ret = [] for service in services: service_type = service.get('type', service.get('service_type')) new_service = { 'id': service['id'], 'name': service['name'], 'description': service.get('description', None), 'type': service_type, 'service_type': service_type, 'enabled': service['enabled'] } ret.append(new_service) return meta.obj_list_to_munch(ret) def localhost_supports_ipv6(): """Determine whether the local host supports IPv6 We look for a default route that supports the IPv6 address family, and assume that if it is present, this host has globally routable IPv6 connectivity. """ try: return netifaces.AF_INET6 in netifaces.gateways()['default'] except AttributeError: return False def normalize_users(users): ret = [ dict( id=user.get('id'), email=user.get('email'), name=user.get('name'), username=user.get('username'), default_project_id=user.get('default_project_id', user.get('tenantId')), domain_id=user.get('domain_id'), enabled=user.get('enabled'), description=user.get('description') ) for user in users ] return meta.obj_list_to_munch(ret) def normalize_domains(domains): ret = [ dict( id=domain.get('id'), name=domain.get('name'), description=domain.get('description'), enabled=domain.get('enabled'), ) for domain in domains ] return meta.obj_list_to_munch(ret) def normalize_groups(domains): """Normalize Identity groups.""" ret = [ dict( id=domain.get('id'), name=domain.get('name'), description=domain.get('description'), domain_id=domain.get('domain_id'), ) for domain in domains ] return meta.obj_list_to_munch(ret) def normalize_role_assignments(assignments): """Put role_assignments into a form that works with search/get interface. Role assignments have the structure:: [ { "role": { "id": "--role-id--" }, "scope": { "domain": { "id": "--domain-id--" } }, "user": { "id": "--user-id--" } }, ] Which is hard to work with in the rest of our interface. 
Map this to be:: [ { "id": "--role-id--", "domain": "--domain-id--", "user": "--user-id--", } ] Scope can be "domain" or "project" and "user" can also be "group". :param list assignments: A list of dictionaries of role assignments. :returns: A list of flattened/normalized role assignment dicts. """ new_assignments = [] for assignment in assignments: new_val = munch.Munch({'id': assignment['role']['id']}) for scope in ('project', 'domain'): if scope in assignment['scope']: new_val[scope] = assignment['scope'][scope]['id'] for assignee in ('user', 'group'): if assignee in assignment: new_val[assignee] = assignment[assignee]['id'] new_assignments.append(new_val) return new_assignments def normalize_flavor_accesses(flavor_accesses): """Normalize Flavor access list.""" return [munch.Munch( dict( flavor_id=acl.get('flavor_id'), project_id=acl.get('project_id') or acl.get('tenant_id'), ) ) for acl in flavor_accesses ] def valid_kwargs(*valid_args): # This decorator checks if argument passed as **kwargs to a function are # present in valid_args. # # Typically, valid_kwargs is used when we want to distinguish between # None and omitted arguments and we still want to validate the argument # list. # # Example usage: # # @valid_kwargs('opt_arg1', 'opt_arg2') # def my_func(self, mandatory_arg1, mandatory_arg2, **kwargs): # ... 
# @decorator def func_wrapper(func, *args, **kwargs): argspec = inspect.getargspec(func) for k in kwargs: if k not in argspec.args[1:] and k not in valid_args: raise TypeError( "{f}() got an unexpected keyword argument " "'{arg}'".format(f=inspect.stack()[1][3], arg=k)) return func(*args, **kwargs) return func_wrapper def cache_on_arguments(*cache_on_args, **cache_on_kwargs): _cache_name = cache_on_kwargs.pop('resource', None) def _inner_cache_on_arguments(func): def _cache_decorator(obj, *args, **kwargs): the_method = obj._get_cache(_cache_name).cache_on_arguments( *cache_on_args, **cache_on_kwargs)( func.__get__(obj, type(obj))) return the_method(*args, **kwargs) def invalidate(obj, *args, **kwargs): return obj._get_cache( _cache_name).cache_on_arguments()(func).invalidate( *args, **kwargs) _cache_decorator.invalidate = invalidate _cache_decorator.func = func _decorated_methods.append(func.__name__) return _cache_decorator return _inner_cache_on_arguments @contextlib.contextmanager def shade_exceptions(error_message=None): """Context manager for dealing with shade exceptions. :param string error_message: String to use for the exception message content on non-OpenStackCloudExceptions. Useful for avoiding wrapping shade OpenStackCloudException exceptions within themselves. Code called from within the context may throw such exceptions without having to catch and reraise them. Non-OpenStackCloudException exceptions thrown within the context will be wrapped and the exception message will be appended to the given error message. """ try: yield except exc.OpenStackCloudException: raise except Exception as e: if error_message is None: error_message = str(e) raise exc.OpenStackCloudException(error_message) def safe_dict_min(key, data): """Safely find the minimum for a given key in a list of dict objects. This will find the minimum integer value for specific dictionary key across a list of dictionaries. 
    The values for the given key MUST be integers, or string
    representations of an integer. The dictionary key does not have to be
    present in all (or any) of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the minimum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
        the minimum value for the field otherwise.
    """
    min_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for minimum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (min_value is None) or (val < min_value):
                min_value = val
    return min_value


def safe_dict_max(key, data):
    """Safely find the maximum for a given key in a list of dict objects.

    This will find the maximum integer value for specific dictionary key
    across a list of dictionaries.

    The values for the given key MUST be integers, or string
    representations of an integer. The dictionary key does not have to be
    present in all (or any) of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the maximum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
        the maximum value for the field otherwise.
    """
    max_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for maximum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (max_value is None) or (val > max_value):
                max_value = val
    return max_value


def _call_client_and_retry(client, url, retry_on=None,
                           call_retries=3, retry_wait=2,
                           **kwargs):
    """Method to provide retry operations.
Some APIs utilize HTTP errors on certian operations to indicate that the resource is presently locked, and as such this mechanism provides the ability to retry upon known error codes. :param object client: The client method, such as: ``self.baremetal_client.post`` :param string url: The URL to perform the operation upon. :param integer retry_on: A list of error codes that can be retried on. The method also supports a single integer to be defined. :param integer call_retries: The number of times to retry the call upon the error code defined by the 'retry_on' parameter. Default: 3 :param integer retry_wait: The time in seconds to wait between retry attempts. Default: 2 :returns: The object returned by the client call. """ # NOTE(TheJulia): This method, as of this note, does not have direct # unit tests, although is fairly well tested by the tests checking # retry logic in test_baremetal_node.py. log = _log.setup_logging('shade.http') if isinstance(retry_on, int): retry_on = [retry_on] count = 0 while (count < call_retries): count += 1 try: ret_val = client(url, **kwargs) except exc.OpenStackCloudHTTPError as e: if (retry_on is not None and e.response.status_code in retry_on): log.debug('Received retryable error {err}, waiting ' '{wait} seconds to retry', { 'err': e.response.status_code, 'wait': retry_wait }) time.sleep(retry_wait) continue else: raise # Break out of the loop, since the loop should only continue # when we encounter a known connection error. return ret_val def parse_range(value): """Parse a numerical range string. Breakdown a range expression into its operater and numerical parts. This expression must be a string. 
    Valid values must be an integer string, optionally preceded by one of
    the following operators::

        - "<"  : Less than
        - ">"  : Greater than
        - "<=" : Less than or equal to
        - ">=" : Greater than or equal to

    Some examples of valid values and function return values::

        - "1024"  : returns (None, 1024)
        - "<5"    : returns ("<", 5)
        - ">=100" : returns (">=", 100)

    :param string value: The range expression to be parsed.

    :returns: A tuple with the operator string (or None if no operator
        was given) and the integer value. None is returned if parsing
        failed.
    """
    if value is None:
        return None

    range_exp = re.match(r'(<|>|<=|>=){0,1}(\d+)$', value)
    if range_exp is None:
        return None

    op = range_exp.group(1)
    num = int(range_exp.group(2))
    return (op, num)


def range_filter(data, key, range_exp):
    """Filter a list by a single range expression.

    :param list data: List of dictionaries to be searched.
    :param string key: Key name to search within the data set.
    :param string range_exp: The expression describing the range of values.

    :returns: A list subset of the original data set.
    :raises: OpenStackCloudException on invalid range expressions.
    """
    filtered = []
    range_exp = str(range_exp).upper()

    if range_exp == "MIN":
        key_min = safe_dict_min(key, data)
        if key_min is None:
            return []
        for d in data:
            if int(d[key]) == key_min:
                filtered.append(d)
        return filtered
    elif range_exp == "MAX":
        key_max = safe_dict_max(key, data)
        if key_max is None:
            return []
        for d in data:
            if int(d[key]) == key_max:
                filtered.append(d)
        return filtered

    # Not looking for a min or max, so a range or exact value must
    # have been supplied.
    val_range = parse_range(range_exp)

    # If parsing the range fails, it must be a bad value.
    if val_range is None:
        raise exc.OpenStackCloudException(
            "Invalid range value: {value}".format(value=range_exp))

    op = val_range[0]
    if op:
        # Range matching
        for d in data:
            d_val = int(d[key])
            if op == '<':
                if d_val < val_range[1]:
                    filtered.append(d)
            elif op == '>':
                if d_val > val_range[1]:
                    filtered.append(d)
            elif op == '<=':
                if d_val <= val_range[1]:
                    filtered.append(d)
            elif op == '>=':
                if d_val >= val_range[1]:
                    filtered.append(d)
        return filtered
    else:
        # Exact number match
        for d in data:
            if int(d[key]) == val_range[1]:
                filtered.append(d)
        return filtered


def generate_patches_from_kwargs(operation, **kwargs):
    """Given a set of parameters, returns a list with the valid
    patch values.

    :param string operation: The operation to perform.
    :param dict kwargs: Dict of parameters.

    :returns: A list with the right patch values.
    """
    patches = []
    for k, v in kwargs.items():
        patch = {'op': operation,
                 'value': v,
                 'path': '/%s' % k}
        patches.append(patch)
    # Sort by path so the result is deterministic; plain dicts are not
    # orderable on Python 3.
    return sorted(patches, key=lambda p: p['path'])


class FileSegment(object):
    """File-like object to pass to requests."""

    def __init__(self, filename, offset, length):
        self.filename = filename
        self.offset = offset
        self.length = length
        self.pos = 0
        self._file = open(filename, 'rb')
        self.seek(0)

    def tell(self):
        return self._file.tell() - self.offset

    def seek(self, offset, whence=0):
        if whence == 0:
            self._file.seek(self.offset + offset, whence)
        elif whence == 1:
            self._file.seek(offset, whence)
        elif whence == 2:
            self._file.seek(self.offset + self.length - offset, 0)

    def read(self, size=-1):
        remaining = self.length - self.pos
        if remaining <= 0:
            return b''
        to_read = remaining if size < 0 else min(size, remaining)
        chunk = self._file.read(to_read)
        self.pos += len(chunk)
        return chunk

    def reset(self):
        self._file.seek(self.offset, 0)


def _format_uuid_string(string):
    return (string.replace('urn:', '')
                  .replace('uuid:', '')
                  .strip('{}')
                  .replace('-', '')
                  .lower())


def _is_uuid_like(val):
    """Returns validation of a value as a UUID.
    :param val: Value to verify
    :type val: string
    :returns: bool

    .. versionchanged:: 1.1.1
       Support non-lowercase UUIDs.
    """
    try:
        return str(uuid.UUID(val)).replace('-', '') == \
            _format_uuid_string(val)
    except (TypeError, ValueError, AttributeError):
        return False
openstacksdk-0.11.3/openstack/cloud/meta.py
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import munch
import ipaddress
import six
import socket

from openstack import _log
from openstack.cloud import exc

NON_CALLABLES = (six.string_types, bool, dict, int, float, list, type(None))


def find_nova_interfaces(addresses, ext_tag=None, key_name=None, version=4,
                         mac_addr=None):
    ret = []
    for (k, v) in iter(addresses.items()):
        if key_name is not None and k != key_name:
            # key_name is specified and it doesn't match the current
            # network. Continue with the next one
            continue
        for interface_spec in v:
            if ext_tag is not None:
                if 'OS-EXT-IPS:type' not in interface_spec:
                    # ext_tag is specified, but this interface has no tag
                    # We could actually return right away as this means
                    # that this cloud doesn't support OS-EXT-IPS.
                    # Nevertheless, it would be better to perform an
                    # explicit check. e.g.:
                    #   cloud._has_nova_extension('OS-EXT-IPS')
                    # But this needs cloud to be passed to this function.
continue elif interface_spec['OS-EXT-IPS:type'] != ext_tag: # Type doesn't match, continue with next one continue if mac_addr is not None: if 'OS-EXT-IPS-MAC:mac_addr' not in interface_spec: # mac_addr is specified, but this interface has no mac_addr # We could actually return right away as this means that # this cloud doesn't support OS-EXT-IPS-MAC. Nevertheless, # it would be better to perform an explicit check. e.g.: # cloud._has_nova_extension('OS-EXT-IPS-MAC') # But this needs cloud to be passed to this function. continue elif interface_spec['OS-EXT-IPS-MAC:mac_addr'] != mac_addr: # MAC doesn't match, continue with next one continue if interface_spec['version'] == version: ret.append(interface_spec) return ret def find_nova_addresses(addresses, ext_tag=None, key_name=None, version=4, mac_addr=None): interfaces = find_nova_interfaces(addresses, ext_tag, key_name, version, mac_addr) floating_addrs = [] fixed_addrs = [] for i in interfaces: if i.get('OS-EXT-IPS:type') == 'floating': floating_addrs.append(i['addr']) else: fixed_addrs.append(i['addr']) return floating_addrs + fixed_addrs def get_server_ip(server, public=False, cloud_public=True, **kwargs): """Get an IP from the Nova addresses dict :param server: The server to pull the address from :param public: Whether the address we're looking for should be considered 'public' and therefore reachabiliity tests should be used. (defaults to False) :param cloud_public: Whether the cloud has been configured to use private IPs from servers as the interface_ip. This inverts the public reachability logic, as in this case it's the private ip we expect shade to be able to reach """ addrs = find_nova_addresses(server['addresses'], **kwargs) return find_best_address( addrs, public=public, cloud_public=cloud_public) def get_server_private_ip(server, cloud=None): """Find the private IP address If Neutron is available, search for a port on a network where `router:external` is False and `shared` is False. 
This combination indicates a private network with private IP addresses. This port should have the private IP. If Neutron is not available, or something goes wrong communicating with it, as a fallback, try the list of addresses associated with the server dict, looking for an IP type tagged as 'fixed' in the network named 'private'. Last resort, ignore the IP type and just look for an IP on the 'private' network (e.g., Rackspace). """ if cloud and not cloud.use_internal_network(): return None # Try to get a floating IP interface. If we have one then return the # private IP address associated with that floating IP for consistency. fip_ints = find_nova_interfaces(server['addresses'], ext_tag='floating') fip_mac = None if fip_ints: fip_mac = fip_ints[0].get('OS-EXT-IPS-MAC:mac_addr') # Short circuit the ports/networks search below with a heavily cached # and possibly pre-configured network name if cloud: int_nets = cloud.get_internal_ipv4_networks() for int_net in int_nets: int_ip = get_server_ip( server, key_name=int_net['name'], cloud_public=not cloud.private, mac_addr=fip_mac) if int_ip is not None: return int_ip ip = get_server_ip( server, ext_tag='fixed', key_name='private', mac_addr=fip_mac) if ip: return ip # Last resort, and Rackspace return get_server_ip( server, key_name='private') def get_server_external_ipv4(cloud, server): """Find an externally routable IP for the server. 
There are 5 different scenarios we have to account for: * Cloud has externally routable IP from neutron but neutron APIs don't work (only info available is in nova server record) (rackspace) * Cloud has externally routable IP from neutron (runabove, ovh) * Cloud has externally routable IP from neutron AND supports optional private tenant networks (vexxhost, unitedstack) * Cloud only has private tenant network provided by neutron and requires floating-ip for external routing (dreamhost, hp) * Cloud only has private tenant network provided by nova-network and requires floating-ip for external routing (auro) :param cloud: the cloud we're working with :param server: the server dict from which we want to get an IPv4 address :return: a string containing the IPv4 address or None """ if not cloud.use_external_network(): return None if server['accessIPv4']: return server['accessIPv4'] # Short circuit the ports/networks search below with a heavily cached # and possibly pre-configured network name ext_nets = cloud.get_external_ipv4_networks() for ext_net in ext_nets: ext_ip = get_server_ip( server, key_name=ext_net['name'], public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip # Try to get a floating IP address # Much as I might find floating IPs annoying, if it has one, that's # almost certainly the one that wants to be used ext_ip = get_server_ip( server, ext_tag='floating', public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip # The cloud doesn't support Neutron or Neutron can't be contacted. The # server might have fixed addresses that are reachable from outside the # cloud (e.g. 
Rax) or have plain ol' floating IPs # Try to get an address from a network named 'public' ext_ip = get_server_ip( server, key_name='public', public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip # Nothing else works, try to find a globally routable IP address for interfaces in server['addresses'].values(): for interface in interfaces: try: ip = ipaddress.ip_address(interface['addr']) except Exception: # Skip any error, we're looking for a working ip - if the # cloud returns garbage, it wouldn't be the first weird thing # but it still doesn't meet the requirement of "be a working # ip address" continue if ip.version == 4 and not ip.is_private: return str(ip) return None def find_best_address(addresses, public=False, cloud_public=True): do_check = public == cloud_public if not addresses: return None if len(addresses) == 1: return addresses[0] if len(addresses) > 1 and do_check: # We only want to do this check if the address is supposed to be # reachable. Otherwise we're just debug log spamming on every listing # of private ip addresses for address in addresses: # Return the first one that is reachable try: for res in socket.getaddrinfo( address, 22, socket.AF_UNSPEC, socket.SOCK_STREAM, 0): family, socktype, proto, _, sa = res connect_socket = socket.socket(family, socktype, proto) connect_socket.settimeout(1) connect_socket.connect(sa) return address except Exception: pass # Give up and return the first - none work as far as we can tell if do_check: log = _log.setup_logging('openstack') log.debug( 'The cloud returned multiple addresses, and none of them seem' ' to work. That might be what you wanted, but we have no clue' " what's going on, so we just picked one at random") return addresses[0] def get_server_external_ipv6(server): """ Get an IPv6 address reachable from outside the cloud. This function assumes that if a server has an IPv6 address, that address is reachable from outside the cloud. 
:param server: the server from which we want to get an IPv6 address :return: a string containing the IPv6 address or None """ if server['accessIPv6']: return server['accessIPv6'] addresses = find_nova_addresses(addresses=server['addresses'], version=6) return find_best_address(addresses, public=True) def get_server_default_ip(cloud, server): """ Get the configured 'default' address It is possible in clouds.yaml to configure for a cloud a network that is the 'default_interface'. This is the network that should be used to talk to instances on the network. :param cloud: the cloud we're working with :param server: the server dict from which we want to get the default IPv4 address :return: a string containing the IPv4 address or None """ ext_net = cloud.get_default_network() if ext_net: if (cloud._local_ipv6 and not cloud.force_ipv4): # try 6 first, fall back to four versions = [6, 4] else: versions = [4] for version in versions: ext_ip = get_server_ip( server, key_name=ext_net['name'], version=version, public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip return None def _get_interface_ip(cloud, server): """ Get the interface IP for the server Interface IP is the IP that should be used for communicating with the server. 
It is: - the IP on the configured default_interface network - if cloud.private, the private ip if it exists - if the server has a public ip, the public ip """ default_ip = get_server_default_ip(cloud, server) if default_ip: return default_ip if cloud.private and server['private_v4']: return server['private_v4'] if (server['public_v6'] and cloud._local_ipv6 and not cloud.force_ipv4): return server['public_v6'] else: return server['public_v4'] def get_groups_from_server(cloud, server, server_vars): groups = [] region = cloud.region_name cloud_name = cloud.name # Create a group for the cloud groups.append(cloud_name) # Create a group on region groups.append(region) # And one by cloud_region groups.append("%s_%s" % (cloud_name, region)) # Check if group metadata key in servers' metadata group = server['metadata'].get('group') if group: groups.append(group) for extra_group in server['metadata'].get('groups', '').split(','): if extra_group: groups.append(extra_group) groups.append('instance-%s' % server['id']) for key in ('flavor', 'image'): if 'name' in server_vars[key]: groups.append('%s-%s' % (key, server_vars[key]['name'])) for key, value in iter(server['metadata'].items()): groups.append('meta-%s_%s' % (key, value)) az = server_vars.get('az', None) if az: # Make groups for az, region_az and cloud_region_az groups.append(az) groups.append('%s_%s' % (region, az)) groups.append('%s_%s_%s' % (cloud.name, region, az)) return groups def expand_server_vars(cloud, server): """Backwards compatibility function.""" return add_server_interfaces(cloud, server) def _make_address_dict(fip, port): address = dict(version=4, addr=fip['floating_ip_address']) address['OS-EXT-IPS:type'] = 'floating' address['OS-EXT-IPS-MAC:mac_addr'] = port['mac_address'] return address def _get_supplemental_addresses(cloud, server): fixed_ip_mapping = {} for name, network in server['addresses'].items(): for address in network: if address['version'] == 6: continue if address.get('OS-EXT-IPS:type') == 
'floating': # We have a floating IP that nova knows about, do nothing return server['addresses'] fixed_ip_mapping[address['addr']] = name try: # Don't bother doing this before the server is active, it's a waste # of an API call while polling for a server to come up if (cloud.has_service('network') and cloud._has_floating_ips() and server['status'] == 'ACTIVE'): for port in cloud.search_ports( filters=dict(device_id=server['id'])): for fip in cloud.search_floating_ips( filters=dict(port_id=port['id'])): # This SHOULD return one and only one FIP - but doing # it as a search/list lets the logic work regardless if fip['fixed_ip_address'] not in fixed_ip_mapping: log = _log.setup_logging('openstack') log.debug( "The cloud returned floating ip %(fip)s attached" " to server %(server)s but the fixed ip associated" " with the floating ip in the neutron listing" " does not exist in the nova listing. Something" " is exceptionally broken.", dict(fip=fip['id'], server=server['id'])) fixed_net = fixed_ip_mapping[fip['fixed_ip_address']] server['addresses'][fixed_net].append( _make_address_dict(fip, port)) except exc.OpenStackCloudException: # If something goes wrong with a cloud call, that's cool - this is # an attempt to provide additional data and should not block forward # progress pass return server['addresses'] def add_server_interfaces(cloud, server): """Add network interface information to server. Query the cloud as necessary to add information to the server record about the network information needed to interface with the server. Ensures that public_v4, public_v6, private_v4, private_v6, interface_ip, accessIPv4 and accessIPv6 are always set. """ # First, add an IP address. 
Set it to '' rather than None if it does # not exist to remain consistent with the pre-existing missing values server['addresses'] = _get_supplemental_addresses(cloud, server) server['public_v4'] = get_server_external_ipv4(cloud, server) or '' server['public_v6'] = get_server_external_ipv6(server) or '' server['private_v4'] = get_server_private_ip(server, cloud) or '' server['interface_ip'] = _get_interface_ip(cloud, server) or '' # Some clouds do not set these, but they're a regular part of the Nova # server record. Since we know them, go ahead and set them. In the case # where they were set previous, we use the values, so this will not break # clouds that provide the information if cloud.private and server['private_v4']: server['accessIPv4'] = server['private_v4'] else: server['accessIPv4'] = server['public_v4'] server['accessIPv6'] = server['public_v6'] return server def expand_server_security_groups(cloud, server): try: groups = cloud.list_server_security_groups(server) except exc.OpenStackCloudException: groups = [] server['security_groups'] = groups or [] def get_hostvars_from_server(cloud, server, mounts=None): """Expand additional server information useful for ansible inventory. Variables in this function may make additional cloud queries to flesh out possibly interesting info, making it more expensive to call than expand_server_vars if caching is not set up. If caching is set up, the extra cost should be minimal. 
""" server_vars = add_server_interfaces(cloud, server) flavor_id = server['flavor']['id'] flavor_name = cloud.get_flavor_name(flavor_id) if flavor_name: server_vars['flavor']['name'] = flavor_name expand_server_security_groups(cloud, server) # OpenStack can return image as a string when you've booted from volume if str(server['image']) == server['image']: image_id = server['image'] server_vars['image'] = dict(id=image_id) else: image_id = server['image'].get('id', None) if image_id: image_name = cloud.get_image_name(image_id) if image_name: server_vars['image']['name'] = image_name volumes = [] if cloud.has_service('volume'): try: for volume in cloud.get_volumes(server): # Make things easier to consume elsewhere volume['device'] = volume['attachments'][0]['device'] volumes.append(volume) except exc.OpenStackCloudException: pass server_vars['volumes'] = volumes if mounts: for mount in mounts: for vol in server_vars['volumes']: if vol['display_name'] == mount['display_name']: if 'mount' in mount: vol['mount'] = mount['mount'] return server_vars def obj_to_munch(obj): """ Turn an object with attributes into a dict suitable for serializing. Some of the things that are returned in OpenStack are objects with attributes. That's awesome - except when you want to expose them as JSON structures. We use this as the basis of get_hostvars_from_server above so that we can just have a plain dict of all of the values that exist in the nova metadata for a server. """ if obj is None: return None elif isinstance(obj, munch.Munch) or hasattr(obj, 'mock_add_spec'): # If we obj_to_munch twice, don't fail, just return the munch # Also, don't try to modify Mock objects - that way lies madness return obj elif isinstance(obj, dict): # The new request-id tracking spec: # https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/log-request-id-mappings.html # adds a request-ids attribute to returned objects. It does this even # with dicts, which now become dict subclasses. 
        # So we want to convert the dict we get, but we also want it to
        # fall through to object attribute processing so that we can also
        # get the request_ids data into our resulting object.
        instance = munch.Munch(obj)
    else:
        instance = munch.Munch()

    for key in dir(obj):
        try:
            value = getattr(obj, key)
        # some attributes can be defined as a @property, so we can't
        # assure to have a valid value
        # e.g. id in python-novaclient/tree/novaclient/v2/quotas.py
        except AttributeError:
            continue
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            instance[key] = value
    return instance


obj_to_dict = obj_to_munch


def obj_list_to_munch(obj_list):
    """Enumerate through lists of objects and return lists of dictionaries.

    Some of the objects returned in OpenStack are actually lists of
    objects, and in order to expose the data structures as JSON, we need
    to facilitate the conversion to lists of dictionaries.
    """
    return [obj_to_munch(obj) for obj in obj_list]


obj_list_to_dict = obj_list_to_munch


def get_and_munchify(key, data):
    """Get the value associated to key and convert it.

    The value will be converted in a Munch object or a list of Munch
    objects based on the type
    """
    result = data.get(key, []) if key else data
    if isinstance(result, list):
        return obj_list_to_munch(result)
    elif isinstance(result, dict):
        return obj_to_munch(result)
    return result
openstacksdk-0.11.3/openstack/cloud/openstackcloud.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import base64
import collections
import copy
import datetime
import functools
import hashlib
import ipaddress
import iso8601
import json
import jsonpatch
import operator
import os
import six
import threading
import time
# import types so that we can reference ListType in sphinx param
# declarations. We can't just use list, because sphinx gets confused by
# openstack.resource.Resource.list and openstack.resource2.Resource.list
import types  # noqa
import warnings

import dogpile.cache
import munch
import requestsexceptions
from six.moves import urllib

import keystoneauth1.exceptions
import keystoneauth1.session

from openstack import version as openstack_version
from openstack import _adapter
from openstack import _log
from openstack.cloud.exc import *  # noqa
from openstack.cloud._heat import event_utils
from openstack.cloud._heat import template_utils
from openstack.cloud import _normalize
from openstack.cloud import meta
from openstack.cloud import _utils
import openstack.config
import openstack.config.defaults
import openstack.connection
from openstack import task_manager
from openstack import utils

# TODO(shade) shade keys were x-object-meta-x-sdk-md5 - we need to add
# those to freshness checks so that a shade->sdk transition doesn't
# result in a re-upload
OBJECT_MD5_KEY = 'x-object-meta-x-sdk-md5'
OBJECT_SHA256_KEY = 'x-object-meta-x-sdk-sha256'
OBJECT_AUTOCREATE_KEY = 'x-object-meta-x-sdk-autocreated'
OBJECT_AUTOCREATE_CONTAINER = 'images'
# TODO(shade) shade keys were owner_specified.shade.md5 - we need to add
# those to freshness checks so that a shade->sdk transition doesn't
# result in a re-upload
IMAGE_MD5_KEY = 'owner_specified.openstack.md5'
IMAGE_SHA256_KEY = 'owner_specified.openstack.sha256'
IMAGE_OBJECT_KEY = 'owner_specified.openstack.object'
# Rackspace returns this for intermittent import errors
IMAGE_ERROR_396 = "Image cannot be imported.
Error code: '396'" DEFAULT_OBJECT_SEGMENT_SIZE = 1073741824 # 1GB # This halves the current default for Swift DEFAULT_MAX_FILE_SIZE = (5 * 1024 * 1024 * 1024 + 2) / 2 DEFAULT_SERVER_AGE = 5 DEFAULT_PORT_AGE = 5 DEFAULT_FLOAT_AGE = 5 _OCC_DOC_URL = "https://docs.openstack.org/developer/os-client-config" OBJECT_CONTAINER_ACLS = { 'public': '.r:*,.rlistings', 'private': '', } def _no_pending_volumes(volumes): """If there are any volumes not in a steady state, don't cache""" for volume in volumes: if volume['status'] not in ('available', 'error', 'in-use'): return False return True def _no_pending_images(images): """If there are any images not in a steady state, don't cache""" for image in images: if image.status not in ('active', 'deleted', 'killed'): return False return True def _no_pending_stacks(stacks): """If there are any stacks not in a steady state, don't cache""" for stack in stacks: status = stack['stack_status'] if '_COMPLETE' not in status and '_FAILED' not in status: return False return True class OpenStackCloud(_normalize.Normalizer): """Represent a connection to an OpenStack Cloud. OpenStackCloud is the entry point for all cloud operations, regardless of which OpenStack service those operations may ultimately come from. The operations on an OpenStackCloud are resource oriented rather than REST API operation oriented. For instance, one will request a Floating IP and that Floating IP will be actualized either via neutron or via nova depending on how this particular cloud has decided to arrange itself. :param TaskManager manager: Optional task manager to use for running OpenStack API tasks. Unless you're doing rate limiting client side, you almost certainly don't need this. (optional) :param bool strict: Only return documented attributes for each resource as per the Data Model contract. (Default False) :param app_name: Name of the application to be appended to the user-agent string. Optional, defaults to None. 
:param app_version: Version of the application to be appended to the user-agent string. Optional, defaults to None. :param CloudRegion cloud_config: Cloud config object from os-client-config In the future, this will be the only way to pass in cloud configuration, but is being phased in currently. """ def __init__( self, cloud_config=None, manager=None, strict=False, app_name=None, app_version=None, use_direct_get=False, **kwargs): self.log = _log.setup_logging('openstack') if not cloud_config: config = openstack.config.OpenStackConfig( app_name=app_name, app_version=app_version) cloud_config = config.get_one(**kwargs) self.name = cloud_config.name self.auth = cloud_config.get_auth_args() self.region_name = cloud_config.region_name self.default_interface = cloud_config.get_interface() self.private = cloud_config.config.get('private', False) self.image_api_use_tasks = cloud_config.config['image_api_use_tasks'] self.secgroup_source = cloud_config.config['secgroup_source'] self.force_ipv4 = cloud_config.force_ipv4 self.strict_mode = strict # TODO(shade) The openstack.cloud default for get_flavor_extra_specs # should be changed and this should be removed completely self._extra_config = cloud_config._openstack_config.get_extra_config( 'shade', { 'get_flavor_extra_specs': True, }) if manager is not None: self.manager = manager else: self.manager = task_manager.TaskManager( name=':'.join([self.name, self.region_name])) self._external_ipv4_names = cloud_config.get_external_ipv4_networks() self._internal_ipv4_names = cloud_config.get_internal_ipv4_networks() self._external_ipv6_names = cloud_config.get_external_ipv6_networks() self._internal_ipv6_names = cloud_config.get_internal_ipv6_networks() self._nat_destination = cloud_config.get_nat_destination() self._default_network = cloud_config.get_default_network() self._floating_ip_source = cloud_config.config.get( 'floating_ip_source') if self._floating_ip_source: if self._floating_ip_source.lower() == 'none': 
            self._floating_ip_source = None
        else:
            self._floating_ip_source = self._floating_ip_source.lower()

        self._use_external_network = cloud_config.config.get(
            'use_external_network', True)
        self._use_internal_network = cloud_config.config.get(
            'use_internal_network', True)

        # Work around older TaskManager objects that don't have submit_task
        if not hasattr(self.manager, 'submit_task'):
            self.manager.submit_task = self.manager.submitTask

        (self.verify, self.cert) = cloud_config.get_requests_verify_args()
        # Turn off urllib3 warnings about insecure certs if we have
        # explicitly configured requests to tell it we do not want
        # cert verification
        if not self.verify:
            self.log.debug(
                "Turning off Insecure SSL warnings since verify=False")
            category = requestsexceptions.InsecureRequestWarning
            if category:
                # InsecureRequestWarning references a Warning class or is None
                warnings.filterwarnings('ignore', category=category)

        self._disable_warnings = {}

        self.use_direct_get = use_direct_get

        self._servers = None
        self._servers_time = 0
        self._servers_lock = threading.Lock()

        self._ports = None
        self._ports_time = 0
        self._ports_lock = threading.Lock()

        self._floating_ips = None
        self._floating_ips_time = 0
        self._floating_ips_lock = threading.Lock()

        self._floating_network_by_router = None
        self._floating_network_by_router_run = False
        self._floating_network_by_router_lock = threading.Lock()

        self._networks_lock = threading.Lock()
        self._reset_network_caches()

        cache_expiration_time = int(cloud_config.get_cache_expiration_time())
        cache_class = cloud_config.get_cache_class()
        cache_arguments = cloud_config.get_cache_arguments()

        self._resource_caches = {}

        if cache_class != 'dogpile.cache.null':
            self.cache_enabled = True
            self._cache = self._make_cache(
                cache_class, cache_expiration_time, cache_arguments)
            expirations = cloud_config.get_cache_expiration()
            for expire_key in expirations.keys():
                # Only build caches for things we have list operations for
                if getattr(
                        self, 'list_{0}'.format(expire_key), None):
                    self._resource_caches[expire_key] = self._make_cache(
                        cache_class, expirations[expire_key], cache_arguments)

            self._SERVER_AGE = DEFAULT_SERVER_AGE
            self._PORT_AGE = DEFAULT_PORT_AGE
            self._FLOAT_AGE = DEFAULT_FLOAT_AGE
        else:
            self.cache_enabled = False

            def _fake_invalidate(unused):
                pass

            class _FakeCache(object):
                def invalidate(self):
                    pass

            # Don't cache list_servers if we're not caching things.
            # Replace this with a more specific cache configuration
            # soon.
            self._SERVER_AGE = 0
            self._PORT_AGE = 0
            self._FLOAT_AGE = 0
            self._cache = _FakeCache()
            # Undecorate cache decorated methods. Otherwise the call stacks
            # wind up being stupidly long and hard to debug
            for method in _utils._decorated_methods:
                meth_obj = getattr(self, method, None)
                if not meth_obj:
                    continue
                if (hasattr(meth_obj, 'invalidate')
                        and hasattr(meth_obj, 'func')):
                    new_func = functools.partial(meth_obj.func, self)
                    new_func.invalidate = _fake_invalidate
                    setattr(self, method, new_func)

        # If server expiration time is set explicitly, use that. Otherwise
        # fall back to whatever it was before
        self._SERVER_AGE = cloud_config.get_cache_resource_expiration(
            'server', self._SERVER_AGE)
        self._PORT_AGE = cloud_config.get_cache_resource_expiration(
            'port', self._PORT_AGE)
        self._FLOAT_AGE = cloud_config.get_cache_resource_expiration(
            'floating_ip', self._FLOAT_AGE)

        self._container_cache = dict()
        self._file_hash_cache = dict()

        self._keystone_session = None

        self._raw_clients = {}

        self._local_ipv6 = (
            _utils.localhost_supports_ipv6()
            if not self.force_ipv4 else False)

        self.cloud_config = cloud_config
        self._conn_object = None

    @property
    def _conn(self):
        if not self._conn_object:
            self._conn_object = openstack.connection.Connection(
                config=self.cloud_config, session=self._keystone_session)
        return self._conn_object

    def connect_as(self, **kwargs):
        """Make a new OpenStackCloud object with new auth context.

        Take the existing settings from the current cloud and construct a new
        OpenStackCloud object with some of the auth settings overridden. This
        is useful for getting an object to perform tasks with as another user,
        or in the context of a different project.

        .. code-block:: python

          cloud = openstack.cloud.openstack_cloud(cloud='example')
          # Work normally
          servers = cloud.list_servers()
          cloud2 = cloud.connect_as(username='different-user', password='')
          # Work as different-user
          servers = cloud2.list_servers()

        :param kwargs: keyword arguments can contain anything that would
                       normally go in an auth dict. They will override the
                       same settings from the parent cloud as appropriate.
                       Entries that do not want to be overridden can be
                       omitted.
        """
        config = openstack.config.OpenStackConfig(
            app_name=self.cloud_config._app_name,
            app_version=self.cloud_config._app_version,
            load_yaml_config=False)
        params = copy.deepcopy(self.cloud_config.config)
        # Remove profile from current cloud so that overriding works
        params.pop('profile', None)

        # Utility function to help with the stripping below.
        def pop_keys(params, auth, name_key, id_key):
            if name_key in auth or id_key in auth:
                params['auth'].pop(name_key, None)
                params['auth'].pop(id_key, None)

        # If there are user, project or domain settings in the incoming auth
        # dict, strip out both id and name so that a user can say:
        #     cloud.connect_as(project_name='foo')
        # and have that work with clouds that have a project_id set in their
        # config.
        for prefix in ('user', 'project'):
            if prefix == 'user':
                name_key = 'username'
            else:
                name_key = 'project_name'
            id_key = '{prefix}_id'.format(prefix=prefix)
            pop_keys(params, kwargs, name_key, id_key)
            id_key = '{prefix}_domain_id'.format(prefix=prefix)
            name_key = '{prefix}_domain_name'.format(prefix=prefix)
            pop_keys(params, kwargs, name_key, id_key)

        for key, value in kwargs.items():
            params['auth'][key] = value

        # TODO(mordred) Replace this chunk with the next patch that allows
        # passing a Session to CloudRegion.
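        # Hedged usage sketch (illustrative comment only, not part of this
        # module's logic; 'example' and 'foo' are hypothetical names).
        # Because the stripping above removes both the name and id variants
        # of any user/project/domain setting supplied by the caller,
        #
        #     cloud = openstack.cloud.openstack_cloud(cloud='example')
        #     cloud2 = cloud.connect_as(project_name='foo')
        #
        # works even when the 'example' cloud config pins a project_id:
        # the stale project_id is popped so project_name='foo' takes effect.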
        # Closure to pass to OpenStackConfig to ensure the new cloud shares
        # the Session with the current cloud. This will ensure that the
        # version discovery cache will be re-used.
        def session_constructor(*args, **kwargs):
            # We need to pass our current keystone session to the Session
            # constructor, otherwise the new auth plugin doesn't get used.
            return keystoneauth1.session.Session(session=self.keystone_session)

        # Use cloud='defaults' so that we overlay settings properly
        cloud_config = config.get_one(
            cloud='defaults',
            session_constructor=session_constructor,
            **params)
        # Override the cloud name so that logging/location work right
        cloud_config.name = self.name
        cloud_config.config['profile'] = self.name
        # Use self.__class__ so that we return whatever this is, like if it's
        # a subclass in the case of shade wrapping sdk.
        return self.__class__(cloud_config=cloud_config)

    def connect_as_project(self, project):
        """Make a new OpenStackCloud object with a new project.

        Take the existing settings from the current cloud and construct a new
        OpenStackCloud object with the project settings overridden. This is
        useful for getting an object to perform tasks with as another user,
        or in the context of a different project.

        .. code-block:: python

          cloud = openstack.cloud.openstack_cloud(cloud='example')
          # Work normally
          servers = cloud.list_servers()
          cloud2 = cloud.connect_as_project('different-project')
          # Work in different-project
          servers = cloud2.list_servers()

        :param project: Either a project name or a project dict as returned by
                        `list_projects`.
        """
        auth = {}
        if isinstance(project, dict):
            auth['project_id'] = project.get('id')
            auth['project_name'] = project.get('name')
            if project.get('domain_id'):
                auth['project_domain_id'] = project['domain_id']
        else:
            auth['project_name'] = project
        return self.connect_as(**auth)

    def _make_cache(self, cache_class, expiration_time, arguments):
        return dogpile.cache.make_region(
            function_key_generator=self._make_cache_key
        ).configure(
            cache_class,
            expiration_time=expiration_time,
            arguments=arguments)

    def _make_cache_key(self, namespace, fn):
        fname = fn.__name__
        if namespace is None:
            name_key = self.name
        else:
            name_key = '%s:%s' % (self.name, namespace)

        def generate_key(*args, **kwargs):
            arg_key = ','.join(args)
            kw_keys = sorted(kwargs.keys())
            kwargs_key = ','.join(
                ['%s:%s' % (k, kwargs[k]) for k in kw_keys if k != 'cache'])
            ans = "_".join(
                [str(name_key), fname, arg_key, kwargs_key])
            return ans
        return generate_key

    def _get_cache(self, resource_name):
        if resource_name and resource_name in self._resource_caches:
            return self._resource_caches[resource_name]
        else:
            return self._cache

    def _get_major_version_id(self, version):
        if isinstance(version, int):
            return version
        elif isinstance(version, six.string_types + (tuple,)):
            return int(version[0])
        return version

    def _get_versioned_client(
            self, service_type, min_version=None, max_version=None):
        config_version = self.cloud_config.get_api_version(service_type)
        config_major = self._get_major_version_id(config_version)
        max_major = self._get_major_version_id(max_version)
        min_major = self._get_major_version_id(min_version)
        # TODO(shade) This should be replaced with use of Connection. However,
        #             we need to find a sane way to deal with this additional
        #             logic - or we need to give up on it. If we give up on
        #             it, we need to make sure we can still support it in the
        #             shade compat layer.
        # NOTE(mordred) This logic for versions is slightly different
        # than the ksa Adapter constructor logic. openstack.cloud knows the
        # versions it knows, and uses them when it detects them. However, if
        # a user requests a version, and it's not found, and a different one
        # openstack.cloud does know about is found, that's a warning in
        # openstack.cloud.
        if config_version:
            if min_major and config_major < min_major:
                raise OpenStackCloudException(
                    "Version {config_version} requested for {service_type}"
                    " but shade understands a minimum of {min_version}".format(
                        config_version=config_version,
                        service_type=service_type,
                        min_version=min_version))
            elif max_major and config_major > max_major:
                raise OpenStackCloudException(
                    "Version {config_version} requested for {service_type}"
                    " but openstack.cloud understands a maximum of"
                    " {max_version}".format(
                        config_version=config_version,
                        service_type=service_type,
                        max_version=max_version))
            request_min_version = config_version
            request_max_version = '{version}.latest'.format(
                version=config_major)
            adapter = _adapter.ShadeAdapter(
                session=self.keystone_session,
                task_manager=self.manager,
                service_type=self.cloud_config.get_service_type(service_type),
                service_name=self.cloud_config.get_service_name(service_type),
                interface=self.cloud_config.get_interface(service_type),
                endpoint_override=self.cloud_config.get_endpoint(service_type),
                region_name=self.cloud_config.region_name,
                min_version=request_min_version,
                max_version=request_max_version)
            if adapter.get_endpoint():
                return adapter

        adapter = _adapter.ShadeAdapter(
            session=self.keystone_session,
            task_manager=self.manager,
            service_type=self.cloud_config.get_service_type(service_type),
            service_name=self.cloud_config.get_service_name(service_type),
            interface=self.cloud_config.get_interface(service_type),
            endpoint_override=self.cloud_config.get_endpoint(service_type),
            region_name=self.cloud_config.region_name,
            min_version=min_version,
            max_version=max_version)

        # data.api_version can be None if no version was detected, such
        # as with neutron
        api_version = adapter.get_api_major_version(
            endpoint_override=self.cloud_config.get_endpoint(service_type))
        api_major = self._get_major_version_id(api_version)

        # If we detect a different version than was configured, warn the user.
        # shade still knows what to do - but if the user gave us an explicit
        # version and we couldn't find it, they may want to investigate.
        if api_version and (api_major != config_major):
            warning_msg = (
                '{service_type} is configured for {config_version}'
                ' but only {api_version} is available. shade is happy'
                ' with this version, but if you were trying to force an'
                ' override, that did not happen. You may want to check'
                ' your cloud, or remove the version specification from'
                ' your config.'.format(
                    service_type=service_type,
                    config_version=config_version,
                    api_version='.'.join([str(f) for f in api_version])))
            self.log.debug(warning_msg)
            warnings.warn(warning_msg)
        return adapter

    # TODO(shade) This should be replaced with using openstack Connection
    #             object.
    def _get_raw_client(
            self, service_type, api_version=None, endpoint_override=None):
        return _adapter.ShadeAdapter(
            session=self.keystone_session,
            task_manager=self.manager,
            service_type=self.cloud_config.get_service_type(service_type),
            service_name=self.cloud_config.get_service_name(service_type),
            interface=self.cloud_config.get_interface(service_type),
            endpoint_override=self.cloud_config.get_endpoint(
                service_type) or endpoint_override,
            region_name=self.cloud_config.region_name)

    def _is_client_version(self, client, version):
        client_name = '_{client}_client'.format(client=client)
        client = getattr(self, client_name)
        return client._version_matches(version)

    @property
    def _application_catalog_client(self):
        if 'application-catalog' not in self._raw_clients:
            self._raw_clients['application-catalog'] = self._get_raw_client(
                'application-catalog')
        return self._raw_clients['application-catalog']

    @property
    def _baremetal_client(self):
        if 'baremetal' not in self._raw_clients:
            client = self._get_raw_client('baremetal')
            # Do this to force version discovery. We need to do that, because
            # the endpoint-override trick we do for neutron (where
            # ironicclient just appends a /v1) won't work and will break
            # keystoneauth - because ironic's versioned discovery endpoint
            # is non-compliant and doesn't return an actual version dict.
            client = self._get_versioned_client(
                'baremetal', min_version=1, max_version='1.latest')
            self._raw_clients['baremetal'] = client
        return self._raw_clients['baremetal']

    @property
    def _container_infra_client(self):
        if 'container-infra' not in self._raw_clients:
            self._raw_clients['container-infra'] = self._get_raw_client(
                'container-infra')
        return self._raw_clients['container-infra']

    @property
    def _database_client(self):
        if 'database' not in self._raw_clients:
            self._raw_clients['database'] = self._get_raw_client('database')
        return self._raw_clients['database']

    @property
    def _dns_client(self):
        if 'dns' not in self._raw_clients:
            dns_client = self._get_versioned_client(
                'dns', min_version=2, max_version='2.latest')
            self._raw_clients['dns'] = dns_client
        return self._raw_clients['dns']

    @property
    def _identity_client(self):
        if 'identity' not in self._raw_clients:
            self._raw_clients['identity'] = self._get_versioned_client(
                'identity', min_version=2, max_version='3.latest')
        return self._raw_clients['identity']

    @property
    def _raw_image_client(self):
        if 'raw-image' not in self._raw_clients:
            image_client = self._get_raw_client('image')
            self._raw_clients['raw-image'] = image_client
        return self._raw_clients['raw-image']

    @property
    def _image_client(self):
        if 'image' not in self._raw_clients:
            self._raw_clients['image'] = self._get_versioned_client(
                'image', min_version=1, max_version='2.latest')
        return self._raw_clients['image']

    @property
    def _network_client(self):
        if 'network' not in self._raw_clients:
            client = self._get_raw_client('network')
            # TODO(mordred) I don't care if this is what neutronclient does,
            #               fix this.
            # Don't bother with version discovery - there is only one version
            # of neutron. This is what neutronclient does, fwiw.
            endpoint = client.get_endpoint()
            if not endpoint.rstrip().rsplit('/')[1] == 'v2.0':
                if not endpoint.endswith('/'):
                    endpoint += '/'
                endpoint = urllib.parse.urljoin(
                    endpoint, 'v2.0')
            client.endpoint_override = endpoint
            self._raw_clients['network'] = client
        return self._raw_clients['network']

    @property
    def _object_store_client(self):
        if 'object-store' not in self._raw_clients:
            raw_client = self._get_raw_client('object-store')
            self._raw_clients['object-store'] = raw_client
        return self._raw_clients['object-store']

    @property
    def _orchestration_client(self):
        if 'orchestration' not in self._raw_clients:
            raw_client = self._get_raw_client('orchestration')
            self._raw_clients['orchestration'] = raw_client
        return self._raw_clients['orchestration']

    @property
    def _volume_client(self):
        if 'volume' not in self._raw_clients:
            self._raw_clients['volume'] = self._get_raw_client('volume')
        return self._raw_clients['volume']

    def pprint(self, resource):
        """Wrapper around pprint that groks munch objects"""
        # import late since this is a utility function
        import pprint
        new_resource = _utils._dictify_resource(resource)
        pprint.pprint(new_resource)

    def pformat(self, resource):
        """Wrapper around pformat that groks munch objects"""
        # import late since this is a utility function
        import pprint
        new_resource = _utils._dictify_resource(resource)
        return pprint.pformat(new_resource)

    @property
    def keystone_session(self):
        if self._keystone_session is None:
            try:
                self._keystone_session = self.cloud_config.get_session()
                if hasattr(self._keystone_session, 'additional_user_agent'):
                    self._keystone_session.additional_user_agent.append(
                        ('openstacksdk', openstack_version.__version__))
            except Exception as e:
                raise OpenStackCloudException(
                    "Error authenticating to keystone: %s " % str(e))
        return self._keystone_session

    @property
    def _keystone_catalog(self):
        return self.keystone_session.auth.get_access(
            self.keystone_session).service_catalog

    @property
    def service_catalog(self):
        return self._keystone_catalog.catalog

    def endpoint_for(self, service_type, interface='public'):
        return self._keystone_catalog.url_for(
            service_type=service_type, interface=interface)

    @property
    def auth_token(self):
        # Keystone's session will reuse a token if it is still valid.
        # We don't need to track validity here, just get_token() each time.
        return self.keystone_session.get_token()

    @property
    def current_user_id(self):
        """Get the id of the currently logged-in user from the token."""
        return self.keystone_session.auth.get_access(
            self.keystone_session).user_id

    @property
    def current_project_id(self):
        """Get the current project ID.

        Returns the project_id of the current token scope. None means that
        the token is domain scoped or unscoped.

        :raises keystoneauth1.exceptions.auth.AuthorizationFailure:
            if a new token fetch fails.
        :raises keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin:
            if a plugin is not available.
        """
        return self.keystone_session.get_project_id()

    @property
    def current_project(self):
        """Return a ``munch.Munch`` describing the current project"""
        return self._get_project_info()

    def _get_project_info(self, project_id=None):
        project_info = munch.Munch(
            id=project_id,
            name=None,
            domain_id=None,
            domain_name=None,
        )
        if not project_id or project_id == self.current_project_id:
            # If we don't have a project_id parameter, it means a user is
            # directly asking what the current state is.
            # Alternately, if we have one, that means we're calling this
            # from within a normalize function, which means the object has
            # a project_id associated with it. If the project_id matches
            # the project_id of our current token, that means we can
            # supplement the info with human readable info about names
            # if we have them. If they don't match, that means we're an
            # admin who has pulled an object from a different project, so
            # adding info from the current token would be wrong.
            auth_args = self.cloud_config.config.get('auth', {})
            project_info['id'] = self.current_project_id
            project_info['name'] = auth_args.get('project_name')
            project_info['domain_id'] = auth_args.get('project_domain_id')
            project_info['domain_name'] = auth_args.get('project_domain_name')
        return project_info

    @property
    def current_location(self):
        """Return a ``munch.Munch`` explaining the current cloud location."""
        return self._get_current_location()

    def _get_current_location(self, project_id=None, zone=None):
        return munch.Munch(
            cloud=self.name,
            region_name=self.region_name,
            zone=zone,
            project=self._get_project_info(project_id),
        )

    def _get_identity_location(self):
        '''Identity resources do not exist inside of projects.'''
        return munch.Munch(
            cloud=self.name,
            region_name=None,
            zone=None,
            project=munch.Munch(
                id=None,
                name=None,
                domain_id=None,
                domain_name=None))

    def _get_project_id_param_dict(self, name_or_id):
        if name_or_id:
            project = self.get_project(name_or_id)
            if not project:
                return {}
            if self._is_client_version('identity', 3):
                return {'default_project_id': project['id']}
            else:
                return {'tenant_id': project['id']}
        else:
            return {}

    def _get_domain_id_param_dict(self, domain_id):
        """Get a usable domain."""

        # Keystone v3 requires domains for user and project creation. v2 does
        # not. However, keystone v2 does not allow user creation by non-admin
        # users, so we can throw an error to the user that does not need to
        # mention api versions
        if self._is_client_version('identity', 3):
            if not domain_id:
                raise OpenStackCloudException(
                    "User or project creation requires an explicit"
                    " domain_id argument.")
            else:
                return {'domain_id': domain_id}
        else:
            return {}

    def _get_identity_params(self, domain_id=None, project=None):
        """Get the domain and project/tenant parameters if needed.

        keystone v2 and v3 are divergent enough that we need to pass or not
        pass project or tenant_id or domain or nothing in a sane manner.
        """
        ret = {}
        ret.update(self._get_domain_id_param_dict(domain_id))
        ret.update(self._get_project_id_param_dict(project))
        return ret

    def range_search(self, data, filters):
        """Perform integer range searches across a list of dictionaries.

        Given a list of dictionaries, search across the list using the given
        dictionary keys and a range of integer values for each key. Only
        dictionaries that match ALL search filters across the entire original
        data set will be returned.

        It is not a requirement that each dictionary contain the key used
        for searching. Those without the key will be considered non-matching.

        The range values must be strings, each either a set of digits
        representing an integer for matching, or a range operator followed by
        a set of digits representing an integer for matching. If a range
        operator is not given, exact value matching will be used. Valid
        operators are one of: <,>,<=,>=

        :param data: List of dictionaries to be searched.
        :param filters: Dict describing the one or more range searches to
            perform. If more than one search is given, the result will be the
            members of the original data set that match ALL searches. An
            example of filtering by multiple ranges::

                {"vcpus": "<=5", "ram": "<=2048", "disk": "1"}

        :returns: A list subset of the original data set.
        :raises: OpenStackCloudException on invalid range expressions.
        """
        filtered = []

        for key, range_value in filters.items():
            # We always want to operate on the full data set so that
            # calculations for minimum and maximum are correct.
            results = _utils.range_filter(data, key, range_value)

            if not filtered:
                # First set of results
                filtered = results
            else:
                # The combination of all searches should be the intersection
                # of all result sets from each search. So adjust the current
                # set of filtered data by computing its intersection with the
                # latest result set.
                filtered = [r for r in results for f in filtered if r == f]

        return filtered

    def _get_and_munchify(self, key, data):
        """Wrapper around meta.get_and_munchify.

        Some of the methods expect a `meta` attribute to be passed in as
        part of the method signature. In those methods the meta param
        shadows the meta module, making the call to meta.get_and_munchify
        fail.
        """
        return meta.get_and_munchify(key, data)

    @_utils.cache_on_arguments()
    def list_projects(self, domain_id=None, name_or_id=None, filters=None):
        """List projects.

        With no parameters, returns a full listing of all visible projects.

        :param domain_id: domain ID to scope the searched projects.
        :param name_or_id: project name or ID.
        :param filters: a dict containing additional filters to use
            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :returns: a list of ``munch.Munch`` containing the projects

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the OpenStack API call.
        """
        kwargs = dict(
            filters=filters,
            domain_id=domain_id)
        if self._is_client_version('identity', 3):
            kwargs['obj_name'] = 'project'

        pushdown, filters = _normalize._split_filters(**kwargs)

        try:
            if self._is_client_version('identity', 3):
                key = 'projects'
            else:
                key = 'tenants'
            data = self._identity_client.get(
                '/{endpoint}'.format(endpoint=key), params=pushdown)
            projects = self._normalize_projects(
                self._get_and_munchify(key, data))
        except Exception as e:
            self.log.debug("Failed to list projects", exc_info=True)
            raise OpenStackCloudException(str(e))
        return _utils._filter_list(projects, name_or_id, filters)

    def search_projects(self, name_or_id=None, filters=None, domain_id=None):
        '''Backwards compatibility method for search_projects.

        search_projects originally took name_or_id and filters first, while
        list_projects takes domain_id first. This method exists in this form
        to allow code written with positional parameters to still work. But
        really, use keyword arguments.
        '''
        return self.list_projects(
            domain_id=domain_id, name_or_id=name_or_id, filters=filters)

    def get_project(self, name_or_id, filters=None, domain_id=None):
        """Get exactly one project.

        :param name_or_id: project name or ID.
        :param filters: a dict containing additional filters to use.
        :param domain_id: domain ID (identity v3 only).

        :returns: a ``munch.Munch`` containing the project description.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the OpenStack API call.
        """
        return _utils._get_entity(self, 'project', name_or_id, filters,
                                  domain_id=domain_id)

    @_utils.valid_kwargs('description')
    def update_project(self, name_or_id, enabled=None, domain_id=None,
                       **kwargs):
        with _utils.shade_exceptions(
                "Error in updating project {project}".format(
                    project=name_or_id)):
            proj = self.get_project(name_or_id, domain_id=domain_id)
            if not proj:
                raise OpenStackCloudException(
                    "Project %s not found." % name_or_id)
            if enabled is not None:
                kwargs.update({'enabled': enabled})
            # NOTE(samueldmq): Current code only allows updates of the
            # description or enabled fields.
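            # Hedged usage sketch (illustrative comment only; 'demo-proj' is
            # a hypothetical project name). Only the description and enabled
            # fields can change here - anything else is rejected by the
            # @_utils.valid_kwargs decorator above:
            #
            #     cloud.update_project('demo-proj', description='new text')
            #     cloud.update_project('demo-proj', enabled=False)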
if self._is_client_version('identity', 3): data = self._identity_client.patch( '/projects/' + proj['id'], json={'project': kwargs}) project = self._get_and_munchify('project', data) else: data = self._identity_client.post( '/tenants/' + proj['id'], json={'tenant': kwargs}) project = self._get_and_munchify('tenant', data) project = self._normalize_project(project) self.list_projects.invalidate(self) return project def create_project( self, name, description=None, domain_id=None, enabled=True): """Create a project.""" with _utils.shade_exceptions( "Error in creating project {project}".format(project=name)): project_ref = self._get_domain_id_param_dict(domain_id) project_ref.update({'name': name, 'description': description, 'enabled': enabled}) endpoint, key = ('tenants', 'tenant') if self._is_client_version('identity', 3): endpoint, key = ('projects', 'project') data = self._identity_client.post( '/{endpoint}'.format(endpoint=endpoint), json={key: project_ref}) project = self._normalize_project( self._get_and_munchify(key, data)) self.list_projects.invalidate(self) return project def delete_project(self, name_or_id, domain_id=None): """Delete a project. :param string name_or_id: Project name or ID. :param string domain_id: Domain ID containing the project(identity v3 only). :returns: True if delete succeeded, False if the project was not found. 
:raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ with _utils.shade_exceptions( "Error in deleting project {project}".format( project=name_or_id)): project = self.get_project(name_or_id, domain_id=domain_id) if project is None: self.log.debug( "Project %s not found for deleting", name_or_id) return False if self._is_client_version('identity', 3): self._identity_client.delete('/projects/' + project['id']) else: self._identity_client.delete('/tenants/' + project['id']) return True @_utils.valid_kwargs('domain_id') @_utils.cache_on_arguments() def list_users(self, **kwargs): """List users. :param domain_id: Domain ID. (v3) :returns: a list of ``munch.Munch`` containing the user description. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ data = self._identity_client.get('/users', params=kwargs) return _utils.normalize_users( self._get_and_munchify('users', data)) @_utils.valid_kwargs('domain_id') def search_users(self, name_or_id=None, filters=None, **kwargs): """Search users. :param string name_or_id: user name or ID. :param domain_id: Domain ID. (v3) :param filters: a dict containing additional filters to use. OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: a list of ``munch.Munch`` containing the users :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ users = self.list_users(**kwargs) return _utils._filter_list(users, name_or_id, filters) @_utils.valid_kwargs('domain_id') def get_user(self, name_or_id, filters=None, **kwargs): """Get exactly one user. :param string name_or_id: user name or ID. :param domain_id: Domain ID. (v3) :param filters: a dict containing additional filters to use. OR A string containing a jmespath expression for further filtering. 
Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: a single ``munch.Munch`` containing the user description. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ return _utils._get_entity(self, 'user', name_or_id, filters, **kwargs) def get_user_by_id(self, user_id, normalize=True): """Get a user by ID. :param string user_id: user ID :param bool normalize: Flag to control dict normalization :returns: a single ``munch.Munch`` containing the user description """ data = self._identity_client.get( '/users/{user}'.format(user=user_id), error_message="Error getting user with ID {user_id}".format( user_id=user_id)) user = self._get_and_munchify('user', data) if user and normalize: user = _utils.normalize_users(user) return user # NOTE(Shrews): Keystone v2 supports updating only name, email and enabled. @_utils.valid_kwargs('name', 'email', 'enabled', 'domain_id', 'password', 'description', 'default_project') def update_user(self, name_or_id, **kwargs): self.list_users.invalidate(self) user_kwargs = {} if 'domain_id' in kwargs and kwargs['domain_id']: user_kwargs['domain_id'] = kwargs['domain_id'] user = self.get_user(name_or_id, **user_kwargs) # TODO(mordred) When this changes to REST, force interface=admin # in the adapter call if it's an admin force call (and figure out how # to make that disctinction) if self._is_client_version('identity', 2): # Do not pass v3 args to a v2 keystone. kwargs.pop('domain_id', None) kwargs.pop('description', None) kwargs.pop('default_project', None) password = kwargs.pop('password', None) if password is not None: with _utils.shade_exceptions( "Error updating password for {user}".format( user=name_or_id)): error_msg = "Error updating password for user {}".format( name_or_id) data = self._identity_client.put( '/users/{u}/OS-KSADM/password'.format(u=user['id']), json={'user': {'password': password}}, error_message=error_msg) # Identity v2.0 implements PUT. v3 PATCH. 
Both work as PATCH. data = self._identity_client.put( '/users/{user}'.format(user=user['id']), json={'user': kwargs}, error_message="Error in updating user {}".format(name_or_id)) else: # NOTE(samueldmq): now this is a REST call and domain_id is dropped # if None. keystoneclient drops keys with None values. if 'domain_id' in kwargs and kwargs['domain_id'] is None: del kwargs['domain_id'] data = self._identity_client.patch( '/users/{user}'.format(user=user['id']), json={'user': kwargs}, error_message="Error in updating user {}".format(name_or_id)) user = self._get_and_munchify('user', data) self.list_users.invalidate(self) return _utils.normalize_users([user])[0] def create_user( self, name, password=None, email=None, default_project=None, enabled=True, domain_id=None, description=None): """Create a user.""" params = self._get_identity_params(domain_id, default_project) params.update({'name': name, 'password': password, 'email': email, 'enabled': enabled}) if self._is_client_version('identity', 3): params['description'] = description elif description is not None: self.log.info( "description parameter is not supported on Keystone v2") error_msg = "Error in creating user {user}".format(user=name) data = self._identity_client.post('/users', json={'user': params}, error_message=error_msg) user = self._get_and_munchify('user', data) self.list_users.invalidate(self) return _utils.normalize_users([user])[0] @_utils.valid_kwargs('domain_id') def delete_user(self, name_or_id, **kwargs): # TODO(mordred) Why are we invalidating at the TOP? self.list_users.invalidate(self) user = self.get_user(name_or_id, **kwargs) if not user: self.log.debug( "User {0} not found for deleting".format(name_or_id)) return False # TODO(mordred) Extra GET only needed to support keystoneclient. # Can be removed as a follow-on. 
user = self.get_user_by_id(user['id'], normalize=False) self._identity_client.delete( '/users/{user}'.format(user=user['id']), error_message="Error in deleting user {user}".format( user=name_or_id)) self.list_users.invalidate(self) return True def _get_user_and_group(self, user_name_or_id, group_name_or_id): user = self.get_user(user_name_or_id) if not user: raise OpenStackCloudException( 'User {user} not found'.format(user=user_name_or_id)) group = self.get_group(group_name_or_id) if not group: raise OpenStackCloudException( 'Group {user} not found'.format(user=group_name_or_id)) return (user, group) def add_user_to_group(self, name_or_id, group_name_or_id): """Add a user to a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) error_msg = "Error adding user {user} to group {group}".format( user=name_or_id, group=group_name_or_id) self._identity_client.put( '/groups/{g}/users/{u}'.format(g=group['id'], u=user['id']), error_message=error_msg) def is_user_in_group(self, name_or_id, group_name_or_id): """Check to see if a user is in a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :returns: True if user is in the group, False otherwise :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) try: self._identity_client.head( '/groups/{g}/users/{u}'.format(g=group['id'], u=user['id'])) return True except OpenStackCloudURINotFound: # NOTE(samueldmq): knowing this URI exists, let's interpret this as # user not found in group rather than URI not found. return False def remove_user_from_group(self, name_or_id, group_name_or_id): """Remove a user from a group. 
:param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) error_msg = "Error removing user {user} from group {group}".format( user=name_or_id, group=group_name_or_id) self._identity_client.delete( '/groups/{g}/users/{u}'.format(g=group['id'], u=user['id']), error_message=error_msg) def get_template_contents( self, template_file=None, template_url=None, template_object=None, files=None): try: return template_utils.get_template_contents( template_file=template_file, template_url=template_url, template_object=template_object, files=files) except Exception as e: raise OpenStackCloudException( "Error in processing template files: %s" % str(e)) def create_stack( self, name, tags=None, template_file=None, template_url=None, template_object=None, files=None, rollback=True, wait=False, timeout=3600, environment_files=None, **parameters): """Create a stack. :param string name: Name of the stack. :param tags: List of tag(s) of the stack. (optional) :param string template_file: Path to the template. :param string template_url: URL of template. :param string template_object: URL to retrieve template object. :param dict files: dict of additional file content to include. :param boolean rollback: Enable rollback on create failure. :param boolean wait: Whether to wait for the delete to finish. :param int timeout: Stack create timeout in seconds. :param environment_files: Paths to environment files to apply. Other arguments will be passed as stack parameters which will take precedence over any parameters specified in the environments. Only one of template_file, template_url, template_object should be specified. 
        :returns: a dict containing the stack description

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call
        """
        envfiles, env = template_utils.process_multiple_environments_and_files(
            env_paths=environment_files)
        tpl_files, template = template_utils.get_template_contents(
            template_file=template_file,
            template_url=template_url,
            template_object=template_object,
            files=files)
        params = dict(
            stack_name=name,
            tags=tags,
            disable_rollback=not rollback,
            parameters=parameters,
            template=template,
            files=dict(list(tpl_files.items()) + list(envfiles.items())),
            environment=env,
            timeout_mins=timeout // 60,
        )
        self._orchestration_client.post('/stacks', json=params)
        if wait:
            event_utils.poll_for_events(self, stack_name=name,
                                        action='CREATE')
        return self.get_stack(name)

    def update_stack(
            self, name_or_id,
            template_file=None, template_url=None,
            template_object=None, files=None,
            rollback=True,
            wait=False, timeout=3600,
            environment_files=None,
            **parameters):
        """Update a stack.

        :param string name_or_id: Name or ID of the stack to update.
        :param string template_file: Path to the template.
        :param string template_url: URL of template.
        :param string template_object: URL to retrieve template object.
        :param dict files: dict of additional file content to include.
        :param boolean rollback: Enable rollback on update failure.
        :param boolean wait: Whether to wait for the update to finish.
        :param int timeout: Stack update timeout in seconds.
        :param environment_files: Paths to environment files to apply.

        Other arguments will be passed as stack parameters which will take
        precedence over any parameters specified in the environments.

        Only one of template_file, template_url, template_object should be
        specified.
        :returns: a dict containing the stack description

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API calls
        """
        envfiles, env = template_utils.process_multiple_environments_and_files(
            env_paths=environment_files)
        tpl_files, template = template_utils.get_template_contents(
            template_file=template_file,
            template_url=template_url,
            template_object=template_object,
            files=files)

        params = dict(
            disable_rollback=not rollback,
            parameters=parameters,
            template=template,
            files=dict(list(tpl_files.items()) + list(envfiles.items())),
            environment=env,
            timeout_mins=timeout // 60,
        )

        if wait:
            # find the last event to use as the marker
            events = event_utils.get_events(
                self, name_or_id,
                event_args={'sort_dir': 'desc', 'limit': 1})
            marker = events[0].id if events else None

        self._orchestration_client.put(
            '/stacks/{name_or_id}'.format(name_or_id=name_or_id),
            json=params)
        if wait:
            event_utils.poll_for_events(self,
                                        name_or_id,
                                        action='UPDATE',
                                        marker=marker)
        return self.get_stack(name_or_id)

    def delete_stack(self, name_or_id, wait=False):
        """Delete a stack

        :param string name_or_id: Stack name or ID.
        :param boolean wait: Whether to wait for the delete to finish

        :returns: True if delete succeeded, False if the stack was not found.
        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call
        """
        stack = self.get_stack(name_or_id)
        if stack is None:
            self.log.debug("Stack %s not found for deleting", name_or_id)
            return False

        if wait:
            # find the last event to use as the marker
            events = event_utils.get_events(
                self, name_or_id,
                event_args={'sort_dir': 'desc', 'limit': 1})
            marker = events[0].id if events else None

        self._orchestration_client.delete(
            '/stacks/{id}'.format(id=stack['id']))

        if wait:
            try:
                event_utils.poll_for_events(self,
                                            stack_name=name_or_id,
                                            action='DELETE',
                                            marker=marker)
            except OpenStackCloudHTTPError:
                pass
            stack = self.get_stack(name_or_id)
            if stack and stack['stack_status'] == 'DELETE_FAILED':
                raise OpenStackCloudException(
                    "Failed to delete stack {id}: {reason}".format(
                        id=name_or_id, reason=stack['stack_status_reason']))
        return True

    def get_name(self):
        return self.name

    def get_region(self):
        return self.region_name

    def get_flavor_name(self, flavor_id):
        flavor = self.get_flavor(flavor_id, get_extra=False)
        if flavor:
            return flavor['name']
        return None

    def get_flavor_by_ram(self, ram, include=None, get_extra=True):
        """Get a flavor based on amount of RAM available.

        Finds the flavor with the least amount of RAM that is at least
        as much as the specified amount. If `include` is given, further
        filter based on matching flavor name.

        :param int ram: Minimum amount of RAM.
        :param string include: If given, will return a flavor whose name
            contains this string as a substring.
""" flavors = self.list_flavors(get_extra=get_extra) for flavor in sorted(flavors, key=operator.itemgetter('ram')): if (flavor['ram'] >= ram and (not include or include in flavor['name'])): return flavor raise OpenStackCloudException( "Could not find a flavor with {ram} and '{include}'".format( ram=ram, include=include)) def get_session_endpoint(self, service_key): try: return self.cloud_config.get_session_endpoint(service_key) except keystoneauth1.exceptions.catalog.EndpointNotFound as e: self.log.debug( "Endpoint not found in %s cloud: %s", self.name, str(e)) endpoint = None except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Error getting {service} endpoint on {cloud}:{region}:" " {error}".format( service=service_key, cloud=self.name, region=self.region_name, error=str(e))) return endpoint def has_service(self, service_key): if not self.cloud_config.config.get('has_%s' % service_key, True): # TODO(mordred) add a stamp here so that we only report this once if not (service_key in self._disable_warnings and self._disable_warnings[service_key]): self.log.debug( "Disabling %(service_key)s entry in catalog" " per config", {'service_key': service_key}) self._disable_warnings[service_key] = True return False try: endpoint = self.get_session_endpoint(service_key) except OpenStackCloudException: return False if endpoint: return True else: return False @_utils.cache_on_arguments() def _nova_extensions(self): extensions = set() data = _adapter._json_response( self._conn.compute.get('/extensions'), error_message="Error fetching extension list for nova") for extension in self._get_and_munchify('extensions', data): extensions.add(extension['alias']) return extensions def _has_nova_extension(self, extension_name): return extension_name in self._nova_extensions() def search_keypairs(self, name_or_id=None, filters=None): keypairs = self.list_keypairs() return _utils._filter_list(keypairs, name_or_id, filters) @_utils.cache_on_arguments() 
    def _neutron_extensions(self):
        extensions = set()
        data = self._network_client.get(
            '/extensions.json',
            error_message="Error fetching extension list for neutron")
        for extension in self._get_and_munchify('extensions', data):
            extensions.add(extension['alias'])
        return extensions

    def _has_neutron_extension(self, extension_alias):
        return extension_alias in self._neutron_extensions()

    def search_networks(self, name_or_id=None, filters=None):
        """Search networks

        :param name_or_id: Name or ID of the desired network.
        :param filters: a dict containing additional filters to use. e.g.
                        {'router:external': True}

        :returns: a list of ``munch.Munch`` containing the network
            description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        networks = self.list_networks(filters)
        return _utils._filter_list(networks, name_or_id, filters)

    def search_routers(self, name_or_id=None, filters=None):
        """Search routers

        :param name_or_id: Name or ID of the desired router.
        :param filters: a dict containing additional filters to use. e.g.
                        {'admin_state_up': True}

        :returns: a list of ``munch.Munch`` containing the router
            description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        routers = self.list_routers(filters)
        return _utils._filter_list(routers, name_or_id, filters)

    def search_subnets(self, name_or_id=None, filters=None):
        """Search subnets

        :param name_or_id: Name or ID of the desired subnet.
        :param filters: a dict containing additional filters to use. e.g.
                        {'enable_dhcp': True}

        :returns: a list of ``munch.Munch`` containing the subnet
            description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        subnets = self.list_subnets(filters)
        return _utils._filter_list(subnets, name_or_id, filters)

    def search_ports(self, name_or_id=None, filters=None):
        """Search ports

        :param name_or_id: Name or ID of the desired port.
        :param filters: a dict containing additional filters to use. e.g.
                        {'device_id': '2711c67a-b4a7-43dd-ace7-6187b791c3f0'}

        :returns: a list of ``munch.Munch`` containing the port description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        # If port caching is enabled, do not push the filter down to
        # neutron; get all the ports (potentially from the cache) and
        # filter locally.
        if self._PORT_AGE:
            pushdown_filters = None
        else:
            pushdown_filters = filters
        ports = self.list_ports(pushdown_filters)
        return _utils._filter_list(ports, name_or_id, filters)

    def search_qos_policies(self, name_or_id=None, filters=None):
        """Search QoS policies

        :param name_or_id: Name or ID of the desired policy.
        :param filters: a dict containing additional filters to use. e.g.
                        {'shared': True}

        :returns: a list of ``munch.Munch`` containing the QoS policy
            description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        policies = self.list_qos_policies(filters)
        return _utils._filter_list(policies, name_or_id, filters)

    def search_volumes(self, name_or_id=None, filters=None):
        volumes = self.list_volumes()
        return _utils._filter_list(
            volumes, name_or_id, filters)

    def search_volume_snapshots(self, name_or_id=None, filters=None):
        volumesnapshots = self.list_volume_snapshots()
        return _utils._filter_list(
            volumesnapshots, name_or_id, filters)

    def search_volume_backups(self, name_or_id=None, filters=None):
        volume_backups = self.list_volume_backups()
        return _utils._filter_list(
            volume_backups, name_or_id, filters)

    def search_volume_types(
            self, name_or_id=None, filters=None, get_extra=True):
        volume_types = self.list_volume_types(get_extra=get_extra)
        return _utils._filter_list(volume_types, name_or_id, filters)

    def search_flavors(self, name_or_id=None, filters=None, get_extra=True):
        flavors = self.list_flavors(get_extra=get_extra)
        return _utils._filter_list(flavors, name_or_id, filters)

    def search_security_groups(self,
                               name_or_id=None, filters=None):
        # `filters` could be a dict or a jmespath (str)
        groups = self.list_security_groups(
            filters=filters if isinstance(filters, dict) else None
        )
        return _utils._filter_list(groups, name_or_id, filters)

    def search_servers(
            self, name_or_id=None, filters=None, detailed=False,
            all_projects=False, bare=False):
        servers = self.list_servers(
            detailed=detailed, all_projects=all_projects, bare=bare)
        return _utils._filter_list(servers, name_or_id, filters)

    def search_server_groups(self, name_or_id=None, filters=None):
        """Search server groups.

        :param name_or_id: server group name or ID.
        :param filters: a dict containing additional filters to use.

        :returns: a list of dicts containing the server groups

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        server_groups = self.list_server_groups()
        return _utils._filter_list(server_groups, name_or_id, filters)

    def search_images(self, name_or_id=None, filters=None):
        images = self.list_images()
        return _utils._filter_list(images, name_or_id, filters)

    def search_floating_ip_pools(self, name=None, filters=None):
        pools = self.list_floating_ip_pools()
        return _utils._filter_list(pools, name, filters)

    # With Neutron, there are some cases in which full server-side filtering
    # is not possible (e.g. nested attributes or lists of objects), so we
    # also need to use client-side filtering.
    # The same goes for all neutron-related search/get methods!
    def search_floating_ips(self, id=None, filters=None):
        # `filters` could be a jmespath expression which the Neutron server
        # obviously doesn't understand.
        if self._use_neutron_floating() and isinstance(filters, dict):
            kwargs = {'filters': filters}
        else:
            kwargs = {}
        floating_ips = self.list_floating_ips(**kwargs)
        return _utils._filter_list(floating_ips, id, filters)

    def search_stacks(self, name_or_id=None, filters=None):
        """Search stacks.

        :param name_or_id: Name or ID of the desired stack.
        :param filters: a dict containing additional filters to use. e.g.
                        {'stack_status': 'CREATE_COMPLETE'}

        :returns: a list of ``munch.Munch`` containing the stack description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call.
        """
        stacks = self.list_stacks()
        return _utils._filter_list(stacks, name_or_id, filters)

    def list_keypairs(self):
        """List all available keypairs.

        :returns: A list of ``munch.Munch`` containing keypair info.
        """
        data = _adapter._json_response(
            self._conn.compute.get('/os-keypairs'),
            error_message="Error fetching keypair list")
        return self._normalize_keypairs([
            k['keypair'] for k in self._get_and_munchify('keypairs', data)])

    def list_networks(self, filters=None):
        """List all available networks.

        :param filters: (optional) dict of filter conditions to push down
        :returns: A list of ``munch.Munch`` containing network info.
        """
        # Translate None from search interface to empty {} for kwargs below
        if not filters:
            filters = {}
        data = self._network_client.get("/networks.json", params=filters)
        return self._get_and_munchify('networks', data)

    def list_routers(self, filters=None):
        """List all available routers.

        :param filters: (optional) dict of filter conditions to push down
        :returns: A list of router ``munch.Munch``.
        """
        # Translate None from search interface to empty {} for kwargs below
        if not filters:
            filters = {}
        data = self._network_client.get(
            "/routers.json", params=filters,
            error_message="Error fetching router list")
        return self._get_and_munchify('routers', data)

    def list_subnets(self, filters=None):
        """List all available subnets.

        :param filters: (optional) dict of filter conditions to push down
        :returns: A list of subnet ``munch.Munch``.
        """
        # Translate None from search interface to empty {} for kwargs below
        if not filters:
            filters = {}
        data = self._network_client.get("/subnets.json", params=filters)
        return self._get_and_munchify('subnets', data)

    def list_ports(self, filters=None):
        """List all available ports.
        :param filters: (optional) dict of filter conditions to push down
        :returns: A list of port ``munch.Munch``.
        """
        # If pushdown filters are specified and we do not have batched
        # caching enabled, bypass local caching and push down the filters.
        if filters and self._PORT_AGE == 0:
            return self._list_ports(filters)

        # Translate None from search interface to empty {} for kwargs below
        filters = {}
        if (time.time() - self._ports_time) >= self._PORT_AGE:
            # Since we're using cached data anyway, we don't need to
            # have more than one thread actually submit the list
            # ports task. Let the first one submit it while holding
            # a lock, and the non-blocking acquire method will cause
            # subsequent threads to just skip this and use the old
            # data until it succeeds.
            # Initially when we never got data, block to retrieve some data.
            first_run = self._ports is None
            if self._ports_lock.acquire(first_run):
                try:
                    if not (first_run and self._ports is not None):
                        self._ports = self._list_ports(filters)
                        self._ports_time = time.time()
                finally:
                    self._ports_lock.release()
        return self._ports

    def _list_ports(self, filters):
        data = self._network_client.get(
            "/ports.json", params=filters,
            error_message="Error fetching port list")
        return self._get_and_munchify('ports', data)

    def list_qos_rule_types(self, filters=None):
        """List all available QoS rule types.

        :param filters: (optional) dict of filter conditions to push down
        :returns: A list of rule types ``munch.Munch``.
        """
        if not self._has_neutron_extension('qos'):
            raise OpenStackCloudUnavailableExtension(
                'QoS extension is not available on target cloud')

        # Translate None from search interface to empty {} for kwargs below
        if not filters:
            filters = {}
        data = self._network_client.get(
            "/qos/rule-types.json", params=filters,
            error_message="Error fetching QoS rule types list")
        return self._get_and_munchify('rule_types', data)

    def get_qos_rule_type_details(self, rule_type, filters=None):
        """Get a QoS rule type details by rule type name.
        :param string rule_type: Name of the QoS rule type.

        :returns: A rule type details ``munch.Munch`` or None if
            no matching rule type is found.
        """
        if not self._has_neutron_extension('qos'):
            raise OpenStackCloudUnavailableExtension(
                'QoS extension is not available on target cloud')

        if not self._has_neutron_extension('qos-rule-type-details'):
            raise OpenStackCloudUnavailableExtension(
                'qos-rule-type-details extension is not available '
                'on target cloud')

        data = self._network_client.get(
            "/qos/rule-types/{rule_type}.json".format(rule_type=rule_type),
            error_message="Error fetching QoS details of {rule_type} "
                          "rule type".format(rule_type=rule_type))
        return self._get_and_munchify('rule_type', data)

    def list_qos_policies(self, filters=None):
        """List all available QoS policies.

        :param filters: (optional) dict of filter conditions to push down
        :returns: A list of policies ``munch.Munch``.
        """
        if not self._has_neutron_extension('qos'):
            raise OpenStackCloudUnavailableExtension(
                'QoS extension is not available on target cloud')

        # Translate None from search interface to empty {} for kwargs below
        if not filters:
            filters = {}
        data = self._network_client.get(
            "/qos/policies.json", params=filters,
            error_message="Error fetching QoS policies list")
        return self._get_and_munchify('policies', data)

    @_utils.cache_on_arguments(should_cache_fn=_no_pending_volumes)
    def list_volumes(self, cache=True):
        """List all available volumes.

        :returns: A list of volume ``munch.Munch``.
""" def _list(data): volumes.extend(data.get('volumes', [])) endpoint = None for l in data.get('volumes_links', []): if 'rel' in l and 'next' == l['rel']: endpoint = l['href'] break if endpoint: try: _list(self._volume_client.get(endpoint)) except OpenStackCloudURINotFound: # Catch and re-raise here because we are making recursive # calls and we just have context for the log here self.log.debug( "While listing volumes, could not find next link" " {link}.".format(link=data)) raise if not cache: warnings.warn('cache argument to list_volumes is deprecated. Use ' 'invalidate instead.') # Fetching paginated volumes can fails for several reasons, if # something goes wrong we'll have to start fetching volumes from # scratch attempts = 5 for _ in range(attempts): volumes = [] data = self._volume_client.get('/volumes/detail') if 'volumes_links' not in data: # no pagination needed volumes.extend(data.get('volumes', [])) break try: _list(data) break except OpenStackCloudURINotFound: pass else: self.log.debug( "List volumes failed to retrieve all volumes after" " {attempts} attempts. Returning what we found.".format( attempts=attempts)) # list volumes didn't complete succesfully so just return what # we found return self._normalize_volumes( self._get_and_munchify(key=None, data=volumes)) @_utils.cache_on_arguments() def list_volume_types(self, get_extra=True): """List all available volume types. :returns: A list of volume ``munch.Munch``. """ data = self._volume_client.get( '/types', params=dict(is_public='None'), error_message='Error fetching volume_type list') return self._normalize_volume_types( self._get_and_munchify('volume_types', data)) @_utils.cache_on_arguments() def list_availability_zone_names(self, unavailable=False): """List names of availability zones. :param bool unavailable: Whether or not to include unavailable zones in the output. Defaults to False. :returns: A list of availability zone names, or an empty list if the list could not be fetched. 
""" try: data = _adapter._json_response( self._conn.compute.get('/os-availability-zone')) except OpenStackCloudHTTPError: self.log.debug( "Availability zone list could not be fetched", exc_info=True) return [] zones = self._get_and_munchify('availabilityZoneInfo', data) ret = [] for zone in zones: if zone['zoneState']['available'] or unavailable: ret.append(zone['zoneName']) return ret @_utils.cache_on_arguments() def list_flavors(self, get_extra=None): """List all available flavors. :param get_extra: Whether or not to fetch extra specs for each flavor. Defaults to True. Default behavior value can be overridden in clouds.yaml by setting openstack.cloud.get_extra_specs to False. :returns: A list of flavor ``munch.Munch``. """ if get_extra is None: get_extra = self._extra_config['get_flavor_extra_specs'] data = _adapter._json_response( self._conn.compute.get( '/flavors/detail', params=dict(is_public='None')), error_message="Error fetching flavor list") flavors = self._normalize_flavors( self._get_and_munchify('flavors', data)) for flavor in flavors: if not flavor.extra_specs and get_extra: endpoint = "/flavors/{id}/os-extra_specs".format( id=flavor.id) try: data = _adapter._json_response( self._conn.compute.get(endpoint), error_message="Error fetching flavor extra specs") flavor.extra_specs = self._get_and_munchify( 'extra_specs', data) except OpenStackCloudHTTPError as e: flavor.extra_specs = {} self.log.debug( 'Fetching extra specs for flavor failed:' ' %(msg)s', {'msg': str(e)}) return flavors @_utils.cache_on_arguments(should_cache_fn=_no_pending_stacks) def list_stacks(self): """List all stacks. :returns: a list of ``munch.Munch`` containing the stack description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. 
""" data = self._orchestration_client.get( '/stacks', error_message="Error fetching stack list") return self._normalize_stacks( self._get_and_munchify('stacks', data)) def list_server_security_groups(self, server): """List all security groups associated with the given server. :returns: A list of security group ``munch.Munch``. """ # Don't even try if we're a cloud that doesn't have them if not self._has_secgroups(): return [] data = _adapter._json_response( self._conn.compute.get( '/servers/{server_id}/os-security-groups'.format( server_id=server['id']))) return self._normalize_secgroups( self._get_and_munchify('security_groups', data)) def _get_server_security_groups(self, server, security_groups): if not self._has_secgroups(): raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) if not isinstance(server, dict): server = self.get_server(server, bare=True) if server is None: self.log.debug('Server %s not found', server) return None, None if not isinstance(security_groups, (list, tuple)): security_groups = [security_groups] sec_group_objs = [] for sg in security_groups: if not isinstance(sg, dict): sg = self.get_security_group(sg) if sg is None: self.log.debug('Security group %s not found for adding', sg) return None, None sec_group_objs.append(sg) return server, sec_group_objs def add_server_security_groups(self, server, security_groups): """Add security groups to a server. Add existing security groups to an existing server. If the security groups are already present on the server this will continue unaffected. :returns: False if server or security groups are undefined, True otherwise. :raises: ``OpenStackCloudException``, on operation error. 
""" server, security_groups = self._get_server_security_groups( server, security_groups) if not (server and security_groups): return False for sg in security_groups: _adapter._json_response(self._conn.compute.post( '/servers/%s/action' % server['id'], json={'addSecurityGroup': {'name': sg.name}})) return True def remove_server_security_groups(self, server, security_groups): """Remove security groups from a server Remove existing security groups from an existing server. If the security groups are not present on the server this will continue unaffected. :returns: False if server or security groups are undefined, True otherwise. :raises: ``OpenStackCloudException``, on operation error. """ server, security_groups = self._get_server_security_groups( server, security_groups) if not (server and security_groups): return False ret = True for sg in security_groups: try: _adapter._json_response(self._conn.compute.post( '/servers/%s/action' % server['id'], json={'removeSecurityGroup': {'name': sg.name}})) except OpenStackCloudURINotFound: # NOTE(jamielennox): Is this ok? If we remove something that # isn't present should we just conclude job done or is that an # error? Nova returns ok if you try to add a group twice. self.log.debug( "The security group %s was not present on server %s so " "no action was performed", sg.name, server.name) ret = False return ret def list_security_groups(self, filters=None): """List all available security groups. :param filters: (optional) dict of filter conditions to push down :returns: A list of security group ``munch.Munch``. """ # Security groups not supported if not self._has_secgroups(): raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) if not filters: filters = {} data = [] # Handle neutron security groups if self._use_neutron_secgroups(): # Neutron returns dicts, so no need to convert objects here. 
            data = self._network_client.get(
                '/security-groups.json', params=filters,
                error_message="Error fetching security group list")
            return self._get_and_munchify('security_groups', data)

        # Handle nova security groups
        else:
            data = _adapter._json_response(self._conn.compute.get(
                '/os-security-groups', params=filters))
        return self._normalize_secgroups(
            self._get_and_munchify('security_groups', data))

    def list_servers(self, detailed=False, all_projects=False, bare=False,
                     filters=None):
        """List all available servers.

        :param detailed: Whether or not to add detailed additional
                         information. Defaults to False.
        :param all_projects: Whether to list servers from all projects or
                             just the current auth scoped project.
        :param bare: Whether to skip adding any additional information to the
                     server record. Defaults to False, meaning the addresses
                     dict will be populated as needed from neutron. Setting
                     to True implies detailed = False.
        :param filters: Additional query parameters passed to the API server.

        :returns: A list of server ``munch.Munch``.
        """
        if (time.time() - self._servers_time) >= self._SERVER_AGE:
            # Since we're using cached data anyway, we don't need to
            # have more than one thread actually submit the list
            # servers task. Let the first one submit it while holding
            # a lock, and the non-blocking acquire method will cause
            # subsequent threads to just skip this and use the old
            # data until it succeeds.
            # Initially when we never got data, block to retrieve some data.
            first_run = self._servers is None
            if self._servers_lock.acquire(first_run):
                try:
                    if not (first_run and self._servers is not None):
                        self._servers = self._list_servers(
                            detailed=detailed,
                            all_projects=all_projects,
                            bare=bare,
                            filters=filters)
                        self._servers_time = time.time()
                finally:
                    self._servers_lock.release()
        return self._servers

    def _list_servers(self, detailed=False, all_projects=False, bare=False,
                      filters=None):
        error_msg = "Error fetching server list on {cloud}:{region}:".format(
            cloud=self.name,
            region=self.region_name)
        params = filters or {}
        if all_projects:
            params['all_tenants'] = True
        data = _adapter._json_response(
            self._conn.compute.get(
                '/servers/detail', params=params),
            error_message=error_msg)
        servers = self._normalize_servers(
            self._get_and_munchify('servers', data))
        return [
            self._expand_server(server, detailed, bare)
            for server in servers
        ]

    def list_server_groups(self):
        """List all available server groups.

        :returns: A list of server group dicts.
        """
        data = _adapter._json_response(
            self._conn.compute.get('/os-server-groups'),
            error_message="Error fetching server group list")
        return self._get_and_munchify('server_groups', data)

    def get_compute_limits(self, name_or_id=None):
        """Get compute limits for a project

        :param name_or_id: (optional) project name or ID to get limits for
                           if different from the current project

        :raises: OpenStackCloudException if it's not a valid project

        :returns: Munch object with the limits
        """
        params = {}
        project_id = None
        error_msg = "Failed to get limits"
        if name_or_id:
            proj = self.get_project(name_or_id)
            if not proj:
                raise OpenStackCloudException("project does not exist")
            project_id = proj.id
            params['tenant_id'] = project_id
            error_msg = "{msg} for the project: {project} ".format(
                msg=error_msg, project=name_or_id)

        data = _adapter._json_response(
            self._conn.compute.get('/limits', params=params))
        limits = self._get_and_munchify('limits', data)
        return self._normalize_compute_limits(limits, project_id=project_id)
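The comments in ``list_servers`` (and the analogous ``list_ports``/``list_floating_ips`` caching code) describe a pattern worth seeing in isolation: a time-stamped cache refreshed by whichever thread wins a non-blocking lock acquire, with a blocking acquire only on the very first call so the method never returns ``None``. The sketch below is a standalone illustration of that pattern, not part of this module; the ``CachedLister`` name and ``fetch`` callable are hypothetical stand-ins for the real ``_list_servers`` plumbing.

```python
import threading
import time


class CachedLister(object):
    """Illustrative sketch of the lock-guarded list cache used above."""

    def __init__(self, fetch, max_age=5.0):
        self._fetch = fetch        # callable doing the real (slow) listing
        self._max_age = max_age    # seconds before cached data goes stale
        self._data = None
        self._time = 0.0
        self._lock = threading.Lock()

    def list(self):
        if (time.time() - self._time) >= self._max_age:
            # Block on the lock only if we have never fetched anything;
            # afterwards a failed non-blocking acquire just means another
            # thread is already refreshing, so serve the stale data.
            first_run = self._data is None
            if self._lock.acquire(first_run):
                try:
                    if not (first_run and self._data is not None):
                        self._data = self._fetch()
                        self._time = time.time()
                finally:
                    self._lock.release()
        return self._data
```

Within the stale window every caller gets the cached list without touching the API, which is why the search methods above can afford to filter client-side.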
    @_utils.cache_on_arguments(should_cache_fn=_no_pending_images)
    def list_images(self, filter_deleted=True, show_all=False):
        """Get available images.

        :param filter_deleted: Control whether deleted images are returned.
        :param show_all: Show all images, including images that are shared
            but not accepted. (By default in glance v2 shared images that
            have not been accepted are not shown.) show_all will override the
            value of filter_deleted to False.
        :returns: A list of glance images.
        """
        if show_all:
            filter_deleted = False
        # First, try to actually get images from glance, it's more efficient
        images = []
        params = {}
        image_list = []
        try:
            if self._is_client_version('image', 2):
                endpoint = '/images'
                if show_all:
                    params['member_status'] = 'all'
            else:
                endpoint = '/images/detail'

            response = self._image_client.get(endpoint, params=params)

        except keystoneauth1.exceptions.catalog.EndpointNotFound:
            # We didn't have glance, let's try nova
            # If this doesn't work - we just let the exception propagate
            response = _adapter._json_response(
                self._conn.compute.get('/images/detail'))
        while 'next' in response:
            image_list.extend(meta.obj_list_to_munch(response['images']))
            endpoint = response['next']
            # next links from glance have the version prefix. If the catalog
            # has a versioned endpoint, then we can't append the next link to
            # it. Strip the absolute prefix (/v1/ or /v2/) to turn it into
            # a proper relative link.
            if endpoint.startswith('/v'):
                endpoint = endpoint[4:]
            response = self._image_client.get(endpoint)
        if 'images' in response:
            image_list.extend(meta.obj_list_to_munch(response['images']))
        else:
            image_list.extend(response)

        for image in image_list:
            # The cloud might return DELETED for invalid images.
            # While that's cute and all, that's an implementation detail.
            if not filter_deleted:
                images.append(image)
            elif image.status.lower() != 'deleted':
                images.append(image)
        return self._normalize_images(images)

    def list_floating_ip_pools(self):
        """List all available floating IP pools.
        NOTE: This function supports the nova-net view of the world. nova-net
        has been deprecated, so it's highly recommended to switch to using
        neutron. `get_external_ipv4_floating_networks` is what you should
        almost certainly be using.

        :returns: A list of floating IP pool ``munch.Munch``.
        """
        if not self._has_nova_extension('os-floating-ip-pools'):
            raise OpenStackCloudUnavailableExtension(
                'Floating IP pools extension is not available on target cloud')

        data = _adapter._json_response(
            self._conn.compute.get('os-floating-ip-pools'),
            error_message="Error fetching floating IP pool list")
        pools = self._get_and_munchify('floating_ip_pools', data)
        return [{'name': p['name']} for p in pools]

    def _list_floating_ips(self, filters=None):
        if self._use_neutron_floating():
            try:
                return self._normalize_floating_ips(
                    self._neutron_list_floating_ips(filters))
            except OpenStackCloudURINotFound as e:
                # Nova-network doesn't support server-side floating ip
                # filtering, so it's safer to return an empty list than
                # to fall back to Nova, which may return more results than
                # expected.
                if filters:
                    self.log.error(
                        "Neutron returned NotFound for floating IPs, which"
                        " means this cloud doesn't have neutron floating ips."
                        " shade can't fallback to trying Nova since nova"
                        " doesn't support server-side filtering when listing"
                        " floating ips and filters were given. If you do not"
                        " think shade should be attempting to list floating"
                        " ips on neutron, it is possible to control the"
                        " behavior by setting floating_ip_source to 'nova' or"
                        " None for cloud: %(cloud)s. If you are not already"
                        " using clouds.yaml to configure settings for your"
                        " cloud(s), and you want to configure this setting,"
                        " you will need a clouds.yaml file. For more"
                        " information, please see %(doc_url)s", {
                            'cloud': self.name,
                            'doc_url': _OCC_DOC_URL,
                        }
                    )
                    # We can't fall back to nova because we push down filters.
                    # We got a 404 which means neutron doesn't exist.
If the # user return [] self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova else: if filters: raise ValueError( "Nova-network don't support server-side floating ips " "filtering. Use the search_floatting_ips method instead" ) floating_ips = self._nova_list_floating_ips() return self._normalize_floating_ips(floating_ips) def list_floating_ips(self, filters=None): """List all available floating IPs. :param filters: (optional) dict of filter conditions to push down :returns: A list of floating IP ``munch.Munch``. """ # If pushdown filters are specified and we do not have batched caching # enabled, bypass local caching and push down the filters. if filters and self._FLOAT_AGE == 0: return self._list_floating_ips(filters) if (time.time() - self._floating_ips_time) >= self._FLOAT_AGE: # Since we're using cached data anyway, we don't need to # have more than one thread actually submit the list # floating ips task. Let the first one submit it while holding # a lock, and the non-blocking acquire method will cause # subsequent threads to just skip this and use the old # data until it succeeds. # Initially when we never got data, block to retrieve some data. 
            first_run = self._floating_ips is None
            if self._floating_ips_lock.acquire(first_run):
                try:
                    if not (first_run and self._floating_ips is not None):
                        self._floating_ips = self._list_floating_ips()
                        self._floating_ips_time = time.time()
                finally:
                    self._floating_ips_lock.release()
        return self._floating_ips

    def _neutron_list_floating_ips(self, filters=None):
        if not filters:
            filters = {}
        data = self._network_client.get('/floatingips.json', params=filters)
        return self._get_and_munchify('floatingips', data)

    def _nova_list_floating_ips(self):
        try:
            data = _adapter._json_response(
                self._conn.compute.get('/os-floating-ips'))
        except OpenStackCloudURINotFound:
            return []
        return self._get_and_munchify('floating_ips', data)

    def use_external_network(self):
        return self._use_external_network

    def use_internal_network(self):
        return self._use_internal_network

    def _reset_network_caches(self):
        # Variables to prevent us from going through the network finding
        # logic again if we've done it once. This is different from just
        # the cached value, since "None" is a valid value to find.
        with self._networks_lock:
            self._external_ipv4_networks = []
            self._external_ipv4_floating_networks = []
            self._internal_ipv4_networks = []
            self._external_ipv6_networks = []
            self._internal_ipv6_networks = []
            self._nat_destination_network = None
            self._default_network_network = None
            self._network_list_stamp = False

    def _set_interesting_networks(self):
        external_ipv4_networks = []
        external_ipv4_floating_networks = []
        internal_ipv4_networks = []
        external_ipv6_networks = []
        internal_ipv6_networks = []
        nat_destination = None
        default_network = None
        all_subnets = None

        # Filter locally because we have an or condition
        try:
            # TODO(mordred): Rackspace exposes neutron but it does not
            # work. I think that overriding what the service catalog
            # reports should be a thing os-client-config should handle
            # in a vendor profile - but for now it does not. That means
            # this search_networks can just totally fail.
            # If it does though, that's fine, clearly the neutron
            # introspection is not going to work.
            all_networks = self.list_networks()
        except OpenStackCloudException:
            self._network_list_stamp = True
            return

        for network in all_networks:

            # External IPv4 networks
            if (network['name'] in self._external_ipv4_names
                    or network['id'] in self._external_ipv4_names):
                external_ipv4_networks.append(network)
            elif ((('router:external' in network
                    and network['router:external'])
                   or network.get('provider:physical_network'))
                  and network['name'] not in self._internal_ipv4_names
                  and network['id'] not in self._internal_ipv4_names):
                external_ipv4_networks.append(network)

            # External Floating IPv4 networks
            if ('router:external' in network
                    and network['router:external']):
                external_ipv4_floating_networks.append(network)

            # Internal IPv4 networks
            if (network['name'] in self._internal_ipv4_names
                    or network['id'] in self._internal_ipv4_names):
                internal_ipv4_networks.append(network)
            elif (not network.get('router:external', False)
                  and not network.get('provider:physical_network')
                  and network['name'] not in self._external_ipv4_names
                  and network['id'] not in self._external_ipv4_names):
                internal_ipv4_networks.append(network)

            # External IPv6 networks
            if (network['name'] in self._external_ipv6_names
                    or network['id'] in self._external_ipv6_names):
                external_ipv6_networks.append(network)
            elif (network.get('router:external')
                  and network['name'] not in self._internal_ipv6_names
                  and network['id'] not in self._internal_ipv6_names):
                external_ipv6_networks.append(network)

            # Internal IPv6 networks
            if (network['name'] in self._internal_ipv6_names
                    or network['id'] in self._internal_ipv6_names):
                internal_ipv6_networks.append(network)
            elif (not network.get('router:external', False)
                  and network['name'] not in self._external_ipv6_names
                  and network['id'] not in self._external_ipv6_names):
                internal_ipv6_networks.append(network)

            # NAT Destination
            if self._nat_destination in (
                    network['name'], network['id']):
                if nat_destination:
                    raise OpenStackCloudException(
                        'Multiple networks were found matching'
                        ' {nat_net} which is the network configured'
                        ' to be the NAT destination. Please check your'
                        ' cloud resources. It is probably a good idea'
                        ' to configure this network by ID rather than'
                        ' by name.'.format(
                            nat_net=self._nat_destination))
                nat_destination = network
            elif self._nat_destination is None:
                # TODO(mordred) need a config value for floating
                # ips for this cloud so that we can skip this
                # No configured nat destination, we have to figure
                # it out.
                if all_subnets is None:
                    try:
                        all_subnets = self.list_subnets()
                    except OpenStackCloudException:
                        # Thanks Rackspace broken neutron
                        all_subnets = []

                for subnet in all_subnets:
                    # TODO(mordred) trap for detecting more than
                    # one network with a gateway_ip without a config
                    if ('gateway_ip' in subnet and subnet['gateway_ip']
                            and network['id'] == subnet['network_id']):
                        nat_destination = network
                        break

            # Default network
            if self._default_network in (
                    network['name'], network['id']):
                if default_network:
                    raise OpenStackCloudException(
                        'Multiple networks were found matching'
                        ' {default_net} which is the network'
                        ' configured to be the default interface'
                        ' network. Please check your cloud resources.'
                        ' It is probably a good idea'
                        ' to configure this network by ID rather than'
                        ' by name.'.format(
                            default_net=self._default_network))
                default_network = network

        # Validate config vs. reality
        for net_name in self._external_ipv4_names:
            if net_name not in [net['name'] for net in external_ipv4_networks]:
                raise OpenStackCloudException(
                    "Networks: {network} was provided for external IPv4"
                    " access and those networks could not be found".format(
                        network=net_name))

        for net_name in self._internal_ipv4_names:
            if net_name not in [net['name'] for net in internal_ipv4_networks]:
                raise OpenStackCloudException(
                    "Networks: {network} was provided for internal IPv4"
                    " access and those networks could not be found".format(
                        network=net_name))

        for net_name in self._external_ipv6_names:
            if net_name not in [net['name'] for net in external_ipv6_networks]:
                raise OpenStackCloudException(
                    "Networks: {network} was provided for external IPv6"
                    " access and those networks could not be found".format(
                        network=net_name))

        for net_name in self._internal_ipv6_names:
            if net_name not in [net['name'] for net in internal_ipv6_networks]:
                raise OpenStackCloudException(
                    "Networks: {network} was provided for internal IPv6"
                    " access and those networks could not be found".format(
                        network=net_name))

        if self._nat_destination and not nat_destination:
            raise OpenStackCloudException(
                'Network {network} was configured to be the'
                ' destination for inbound NAT but it could not be'
                ' found'.format(
                    network=self._nat_destination))

        if self._default_network and not default_network:
            raise OpenStackCloudException(
                'Network {network} was configured to be the'
                ' default network interface but it could not be'
                ' found'.format(
                    network=self._default_network))

        self._external_ipv4_networks = external_ipv4_networks
        self._external_ipv4_floating_networks = external_ipv4_floating_networks
        self._internal_ipv4_networks = internal_ipv4_networks
        self._external_ipv6_networks = external_ipv6_networks
        self._internal_ipv6_networks = internal_ipv6_networks
        self._nat_destination_network = nat_destination
        self._default_network_network = default_network

    def _find_interesting_networks(self):
        if self._networks_lock.acquire():
            try:
                if self._network_list_stamp:
                    return

                if (not self._use_external_network
                        and not self._use_internal_network):
                    # Both have been flagged as skip - don't do a list
                    return

                if not self.has_service('network'):
                    return
                self._set_interesting_networks()
                self._network_list_stamp = True
            finally:
                self._networks_lock.release()

    def get_nat_destination(self):
        """Return the network that is configured to be the NAT destination.

        :returns: A network dict if one is found
        """
        self._find_interesting_networks()
        return self._nat_destination_network

    def get_default_network(self):
        """Return the network that is configured to be the default interface.

        :returns: A network dict if one is found
        """
        self._find_interesting_networks()
        return self._default_network_network

    def get_external_networks(self):
        """Return the networks that are configured to route northbound.

        This should be avoided in favor of the specific ipv4/ipv6 method,
        but is here for backwards compatibility.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return list(
            set(self._external_ipv4_networks)
            | set(self._external_ipv6_networks))

    def get_internal_networks(self):
        """Return the networks that are configured to not route northbound.

        This should be avoided in favor of the specific ipv4/ipv6 method,
        but is here for backwards compatibility.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return list(
            set(self._internal_ipv4_networks)
            | set(self._internal_ipv6_networks))

    def get_external_ipv4_networks(self):
        """Return the networks that are configured to route northbound.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return self._external_ipv4_networks

    def get_external_ipv4_floating_networks(self):
        """Return the networks that are configured to route northbound.
        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return self._external_ipv4_floating_networks

    def get_internal_ipv4_networks(self):
        """Return the networks that are configured to not route northbound.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return self._internal_ipv4_networks

    def get_external_ipv6_networks(self):
        """Return the networks that are configured to route northbound.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return self._external_ipv6_networks

    def get_internal_ipv6_networks(self):
        """Return the networks that are configured to not route northbound.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return self._internal_ipv6_networks

    def _has_floating_ips(self):
        if not self._floating_ip_source:
            return False
        else:
            return self._floating_ip_source in ('nova', 'neutron')

    def _use_neutron_floating(self):
        return (self.has_service('network')
                and self._floating_ip_source == 'neutron')

    def _has_secgroups(self):
        if not self.secgroup_source:
            return False
        else:
            return self.secgroup_source.lower() in ('nova', 'neutron')

    def _use_neutron_secgroups(self):
        return (self.has_service('network')
                and self.secgroup_source == 'neutron')

    def get_keypair(self, name_or_id, filters=None):
        """Get a keypair by name or ID.

        :param name_or_id: Name or ID of the keypair.
        :param filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :returns: A keypair ``munch.Munch`` or None if no matching keypair
                  is found.
""" return _utils._get_entity(self, 'keypair', name_or_id, filters) def get_network(self, name_or_id, filters=None): """Get a network by name or ID. :param name_or_id: Name or ID of the network. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A network ``munch.Munch`` or None if no matching network is found. """ return _utils._get_entity(self, 'network', name_or_id, filters) def get_network_by_id(self, id): """ Get a network by ID :param id: ID of the network. :returns: A network ``munch.Munch``. """ data = self._network_client.get( '/networks/{id}'.format(id=id), error_message="Error getting network with ID {id}".format(id=id) ) network = self._get_and_munchify('network', data) return network def get_router(self, name_or_id, filters=None): """Get a router by name or ID. :param name_or_id: Name or ID of the router. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A router ``munch.Munch`` or None if no matching router is found. """ return _utils._get_entity(self, 'router', name_or_id, filters) def get_subnet(self, name_or_id, filters=None): """Get a subnet by name or ID. :param name_or_id: Name or ID of the subnet. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A subnet ``munch.Munch`` or None if no matching subnet is found. 
""" return _utils._get_entity(self, 'subnet', name_or_id, filters) def get_subnet_by_id(self, id): """ Get a subnet by ID :param id: ID of the subnet. :returns: A subnet ``munch.Munch``. """ data = self._network_client.get( '/subnets/{id}'.format(id=id), error_message="Error getting subnet with ID {id}".format(id=id) ) subnet = self._get_and_munchify('subnet', data) return subnet def get_port(self, name_or_id, filters=None): """Get a port by name or ID. :param name_or_id: Name or ID of the port. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A port ``munch.Munch`` or None if no matching port is found. """ return _utils._get_entity(self, 'port', name_or_id, filters) def get_port_by_id(self, id): """ Get a port by ID :param id: ID of the port. :returns: A port ``munch.Munch``. """ data = self._network_client.get( '/ports/{id}'.format(id=id), error_message="Error getting port with ID {id}".format(id=id) ) port = self._get_and_munchify('port', data) return port def get_qos_policy(self, name_or_id, filters=None): """Get a QoS policy by name or ID. :param name_or_id: Name or ID of the policy. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A policy ``munch.Munch`` or None if no matching network is found. """ return _utils._get_entity( self, 'qos_policie', name_or_id, filters) def get_volume(self, name_or_id, filters=None): """Get a volume by name or ID. 
        :param name_or_id: Name or ID of the volume.
        :param filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :returns: A volume ``munch.Munch`` or None if no matching volume
                  is found.
        """
        return _utils._get_entity(self, 'volume', name_or_id, filters)

    def get_volume_by_id(self, id):
        """ Get a volume by ID

        :param id: ID of the volume.
        :returns: A volume ``munch.Munch``.
        """
        data = self._volume_client.get(
            '/volumes/{id}'.format(id=id),
            error_message="Error getting volume with ID {id}".format(id=id)
        )
        volume = self._normalize_volume(
            self._get_and_munchify('volume', data))
        return volume

    def get_volume_type(self, name_or_id, filters=None):
        """Get a volume type by name or ID.

        :param name_or_id: Name or ID of the volume.
        :param filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :returns: A volume ``munch.Munch`` or None if no matching volume
                  is found.
        """
        return _utils._get_entity(
            self, 'volume_type', name_or_id, filters)

    def get_flavor(self, name_or_id, filters=None, get_extra=True):
        """Get a flavor by name or ID.

        :param name_or_id: Name or ID of the flavor.
        :param filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :param get_extra: Whether or not the list_flavors call should get the
                          extra flavor specs.

        :returns: A flavor ``munch.Munch`` or None if no matching flavor
                  is found.
        """
        search_func = functools.partial(
            self.search_flavors, get_extra=get_extra)
        return _utils._get_entity(self, search_func, name_or_id, filters)

    def get_flavor_by_id(self, id, get_extra=True):
        """ Get a flavor by ID

        :param id: ID of the flavor.
        :param get_extra: Whether or not the list_flavors call should get the
                          extra flavor specs.
        :returns: A flavor ``munch.Munch``.
        """
        data = _adapter._json_response(
            self._conn.compute.get('/flavors/{id}'.format(id=id)),
            error_message="Error getting flavor with ID {id}".format(id=id)
        )
        flavor = self._normalize_flavor(
            self._get_and_munchify('flavor', data))

        if get_extra is None:
            get_extra = self._extra_config['get_flavor_extra_specs']

        if not flavor.extra_specs and get_extra:
            endpoint = "/flavors/{id}/os-extra_specs".format(
                id=flavor.id)
            try:
                data = _adapter._json_response(
                    self._conn.compute.get(endpoint),
                    error_message="Error fetching flavor extra specs")
                flavor.extra_specs = self._get_and_munchify(
                    'extra_specs', data)
            except OpenStackCloudHTTPError as e:
                flavor.extra_specs = {}
                self.log.debug(
                    'Fetching extra specs for flavor failed:'
                    ' %(msg)s', {'msg': str(e)})

        return flavor

    def get_security_group(self, name_or_id, filters=None):
        """Get a security group by name or ID.

        :param name_or_id: Name or ID of the security group.
        :param filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :returns: A security group ``munch.Munch`` or None if no matching
                  security group is found.
""" return _utils._get_entity( self, 'security_group', name_or_id, filters) def get_security_group_by_id(self, id): """ Get a security group by ID :param id: ID of the security group. :returns: A security group ``munch.Munch``. """ if not self._has_secgroups(): raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) error_message = ("Error getting security group with" " ID {id}".format(id=id)) if self._use_neutron_secgroups(): data = self._network_client.get( '/security-groups/{id}'.format(id=id), error_message=error_message) else: data = _adapter._json_response( self._conn.compute.get( '/os-security-groups/{id}'.format(id=id)), error_message=error_message) return self._normalize_secgroup( self._get_and_munchify('security_group', data)) def get_server_console(self, server, length=None): """Get the console log for a server. :param server: The server to fetch the console log for. Can be either a server dict or the Name or ID of the server. :param int length: The number of lines you would like to retrieve from the end of the log. (optional, defaults to all) :returns: A string containing the text of the console log or an empty string if the cloud does not support console logs. 
        :raises: OpenStackCloudException if an invalid server argument is
                 given or if something else unforeseen happens
        """

        if not isinstance(server, dict):
            server = self.get_server(server, bare=True)

        if not server:
            raise OpenStackCloudException(
                "Console log requested for invalid server")

        try:
            return self._get_server_console_output(server['id'], length)
        except OpenStackCloudBadRequest:
            return ""

    def _get_server_console_output(self, server_id, length=None):
        data = _adapter._json_response(self._conn.compute.post(
            '/servers/{server_id}/action'.format(server_id=server_id),
            json={'os-getConsoleOutput': {'length': length}}))
        return self._get_and_munchify('output', data)

    def get_server(
            self, name_or_id=None, filters=None, detailed=False, bare=False,
            all_projects=False):
        """Get a server by name or ID.

        :param name_or_id: Name or ID of the server.
        :param filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"

        :param detailed: Whether or not to add detailed additional
                         information. Defaults to False.
        :param bare: Whether to skip adding any additional information to the
                     server record. Defaults to False, meaning the addresses
                     dict will be populated as needed from neutron. Setting
                     to True implies detailed = False.
        :param all_projects: Whether to get server from all projects or just
                             the current auth scoped project.

        :returns: A server ``munch.Munch`` or None if no matching server
                  is found.
""" searchfunc = functools.partial(self.search_servers, detailed=detailed, bare=True, all_projects=all_projects) server = _utils._get_entity(self, searchfunc, name_or_id, filters) return self._expand_server(server, detailed, bare) def _expand_server(self, server, detailed, bare): if bare or not server: return server elif detailed: return meta.get_hostvars_from_server(self, server) else: return meta.add_server_interfaces(self, server) def get_server_by_id(self, id): data = _adapter._json_response( self._conn.compute.get('/servers/{id}'.format(id=id))) server = self._get_and_munchify('server', data) return meta.add_server_interfaces(self, self._normalize_server(server)) def get_server_group(self, name_or_id=None, filters=None): """Get a server group by name or ID. :param name_or_id: Name or ID of the server group. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'policy': 'affinity', } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A server groups dict or None if no matching server group is found. """ return _utils._get_entity(self, 'server_group', name_or_id, filters) def get_image(self, name_or_id, filters=None): """Get an image by name or ID. :param name_or_id: Name or ID of the image. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: An image ``munch.Munch`` or None if no matching image is found """ return _utils._get_entity(self, 'image', name_or_id, filters) def get_image_by_id(self, id): """ Get a image by ID :param id: ID of the image. :returns: An image ``munch.Munch``. 
""" data = self._image_client.get( '/images/{id}'.format(id=id), error_message="Error getting image with ID {id}".format(id=id) ) key = 'image' if 'image' in data else None image = self._normalize_image( self._get_and_munchify(key, data)) return image def download_image( self, name_or_id, output_path=None, output_file=None, chunk_size=1024): """Download an image by name or ID :param str name_or_id: Name or ID of the image. :param output_path: the output path to write the image to. Either this or output_file must be specified :param output_file: a file object (or file-like object) to write the image data to. Only write() will be called on this object. Either this or output_path must be specified :param int chunk_size: size in bytes to read from the wire and buffer at one time. Defaults to 1024 :raises: OpenStackCloudException in the event download_image is called without exactly one of either output_path or output_file :raises: OpenStackCloudResourceNotFound if no images are found matching the name or ID provided """ if output_path is None and output_file is None: raise OpenStackCloudException('No output specified, an output path' ' or file object is necessary to ' 'write the image data to') elif output_path is not None and output_file is not None: raise OpenStackCloudException('Both an output path and file object' ' were provided, however only one ' 'can be used at once') image = self.search_images(name_or_id) if len(image) == 0: raise OpenStackCloudResourceNotFound( "No images with name or ID %s were found" % name_or_id, None) if self._is_client_version('image', 2): endpoint = '/images/{id}/file'.format(id=image[0]['id']) else: endpoint = '/images/{id}'.format(id=image[0]['id']) response = self._image_client.get(endpoint, stream=True) with _utils.shade_exceptions("Unable to download image"): if output_path: with open(output_path, 'wb') as fd: for chunk in response.iter_content(chunk_size=chunk_size): fd.write(chunk) return elif output_file: for chunk in 
response.iter_content(chunk_size=chunk_size): output_file.write(chunk) return def get_floating_ip(self, id, filters=None): """Get a floating IP by ID :param id: ID of the floating IP. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A floating IP ``munch.Munch`` or None if no matching floating IP is found. """ return _utils._get_entity(self, 'floating_ip', id, filters) def get_floating_ip_by_id(self, id): """ Get a floating ip by ID :param id: ID of the floating ip. :returns: A floating ip ``munch.Munch``. """ error_message = "Error getting floating ip with ID {id}".format(id=id) if self._use_neutron_floating(): data = self._network_client.get( '/floatingips/{id}'.format(id=id), error_message=error_message) return self._normalize_floating_ip( self._get_and_munchify('floatingip', data)) else: data = _adapter._json_response( self._conn.compute.get('/os-floating-ips/{id}'.format(id=id)), error_message=error_message) return self._normalize_floating_ip( self._get_and_munchify('floating_ip', data)) def get_stack(self, name_or_id, filters=None): """Get exactly one stack. :param name_or_id: Name or ID of the desired stack. :param filters: a dict containing additional filters to use. e.g. {'stack_status': 'CREATE_COMPLETE'} :returns: a ``munch.Munch`` containing the stack description :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call or if multiple matches are found. """ def _search_one_stack(name_or_id=None, filters=None): # stack names are mandatory and enforced unique in the project # so a StackGet can always be used for name or ID. 
            try:
                data = self._orchestration_client.get(
                    '/stacks/{name_or_id}'.format(name_or_id=name_or_id),
                    error_message="Error fetching stack")
                stack = self._get_and_munchify('stack', data)
                # Treat DELETE_COMPLETE stacks as a NotFound
                if stack['stack_status'] == 'DELETE_COMPLETE':
                    return []
            except OpenStackCloudURINotFound:
                return []
            stack = self._normalize_stack(stack)
            return _utils._filter_list([stack], name_or_id, filters)

        return _utils._get_entity(
            self, _search_one_stack, name_or_id, filters)

    def create_keypair(self, name, public_key=None):
        """Create a new keypair.

        :param name: Name of the keypair being created.
        :param public_key: Public key for the new keypair.

        :raises: OpenStackCloudException on operation error.
        """
        keypair = {
            'name': name,
        }
        if public_key:
            keypair['public_key'] = public_key
        data = _adapter._json_response(
            self._conn.compute.post(
                '/os-keypairs',
                json={'keypair': keypair}),
            error_message="Unable to create keypair {name}".format(name=name))
        return self._normalize_keypair(
            self._get_and_munchify('keypair', data))

    def delete_keypair(self, name):
        """Delete a keypair.

        :param name: Name of the keypair to delete.

        :returns: True if delete succeeded, False otherwise.

        :raises: OpenStackCloudException on operation error.
        """
        try:
            _adapter._json_response(self._conn.compute.delete(
                '/os-keypairs/{name}'.format(name=name)))
        except OpenStackCloudURINotFound:
            self.log.debug("Keypair %s not found for deleting", name)
            return False
        return True

    def create_network(self, name, shared=False, admin_state_up=True,
                       external=False, provider=None, project_id=None,
                       availability_zone_hints=None):
        """Create a network.

        :param string name: Name of the network being created.
        :param bool shared: Set the network as shared.
        :param bool admin_state_up: Set the network administrative state
            to up.
        :param bool external: Whether this network is externally accessible.
        :param dict provider: A dict of network provider options.
            Example::

                {
                    'network_type': 'vlan',
                    'segmentation_id': 'vlan1'
                }

        :param string project_id: Specify the project ID this network
            will be created on (admin-only).
        :param types.ListType availability_zone_hints: A list of availability
            zone hints.

        :returns: The network object.
        :raises: OpenStackCloudException on operation error.
        """
        network = {
            'name': name,
            'admin_state_up': admin_state_up,
        }

        if shared:
            network['shared'] = shared

        if project_id is not None:
            network['tenant_id'] = project_id

        if availability_zone_hints is not None:
            if not isinstance(availability_zone_hints, list):
                raise OpenStackCloudException(
                    "Parameter 'availability_zone_hints' must be a list")
            if not self._has_neutron_extension('network_availability_zone'):
                raise OpenStackCloudUnavailableExtension(
                    'network_availability_zone extension is not available on '
                    'target cloud')
            network['availability_zone_hints'] = availability_zone_hints

        if provider:
            if not isinstance(provider, dict):
                raise OpenStackCloudException(
                    "Parameter 'provider' must be a dict")
            # Only pass what we know
            for attr in ('physical_network', 'network_type',
                         'segmentation_id'):
                if attr in provider:
                    arg = "provider:" + attr
                    network[arg] = provider[attr]

        # Do not send 'router:external' unless it is explicitly
        # set since sending it *might* cause "Forbidden" errors in
        # some situations. It defaults to False in the client, anyway.
        if external:
            network['router:external'] = True

        data = self._network_client.post("/networks.json",
                                         json={'network': network})

        # Reset cache so the new network is picked up
        self._reset_network_caches()
        return self._get_and_munchify('network', data)

    def delete_network(self, name_or_id):
        """Delete a network.

        :param name_or_id: Name or ID of the network being deleted.

        :returns: True if delete succeeded, False otherwise.

        :raises: OpenStackCloudException on operation error.
""" network = self.get_network(name_or_id) if not network: self.log.debug("Network %s not found for deleting", name_or_id) return False self._network_client.delete( "/networks/{network_id}.json".format(network_id=network['id'])) # Reset cache so the deleted network is removed self._reset_network_caches() return True @_utils.valid_kwargs("name", "description", "shared", "default", "project_id") def create_qos_policy(self, **kwargs): """Create a QoS policy. :param string name: Name of the QoS policy being created. :param string description: Description of created QoS policy. :param bool shared: Set the QoS policy as shared. :param bool default: Set the QoS policy as default for project. :param string project_id: Specify the project ID this QoS policy will be created on (admin-only). :returns: The QoS policy object. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') default = kwargs.pop("default", None) if default is not None: if self._has_neutron_extension('qos-default'): kwargs['is_default'] = default else: self.log.debug("'qos-default' extension is not available on " "target cloud") data = self._network_client.post("/qos/policies.json", json={'policy': kwargs}) return self._get_and_munchify('policy', data) @_utils.valid_kwargs("name", "description", "shared", "default", "project_id") def update_qos_policy(self, name_or_id, **kwargs): """Update an existing QoS policy. :param string name_or_id: Name or ID of the QoS policy to update. :param string policy_name: The new name of the QoS policy. :param string description: The new description of the QoS policy. :param bool shared: If True, the QoS policy will be set as shared. :param bool default: If True, the QoS policy will be set as default for project. :returns: The updated QoS policy object. :raises: OpenStackCloudException on operation error. 
""" if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') default = kwargs.pop("default", None) if default is not None: if self._has_neutron_extension('qos-default'): kwargs['is_default'] = default else: self.log.debug("'qos-default' extension is not available on " "target cloud") if not kwargs: self.log.debug("No QoS policy data to update") return curr_policy = self.get_qos_policy(name_or_id) if not curr_policy: raise OpenStackCloudException( "QoS policy %s not found." % name_or_id) data = self._network_client.put( "/qos/policies/{policy_id}.json".format( policy_id=curr_policy['id']), json={'policy': kwargs}) return self._get_and_munchify('policy', data) def delete_qos_policy(self, name_or_id): """Delete a QoS policy. :param name_or_id: Name or ID of the policy being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(name_or_id) if not policy: self.log.debug("QoS policy %s not found for deleting", name_or_id) return False self._network_client.delete( "/qos/policies/{policy_id}.json".format(policy_id=policy['id'])) return True def search_qos_bandwidth_limit_rules(self, policy_name_or_id, rule_id=None, filters=None): """Search QoS bandwidth limit rules :param string policy_name_or_id: Name or ID of the QoS policy to which rules should be associated. :param string rule_id: ID of searched rule. :param filters: a dict containing additional filters to use. e.g. {'max_kbps': 1000} :returns: a list of ``munch.Munch`` containing the bandwidth limit rule descriptions. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. 
""" rules = self.list_qos_bandwidth_limit_rules(policy_name_or_id, filters) return _utils._filter_list(rules, rule_id, filters) def list_qos_bandwidth_limit_rules(self, policy_name_or_id, filters=None): """List all available QoS bandwith limit rules. :param string policy_name_or_id: Name or ID of the QoS policy from from rules should be listed. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing rule info. :raises: ``OpenStackCloudResourceNotFound`` if QoS policy will not be found. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies/{policy_id}/bandwidth_limit_rules.json".format( policy_id=policy['id']), params=filters, error_message="Error fetching QoS bandwith limit rules from " "{policy}".format(policy=policy['id'])) return self._get_and_munchify('bandwidth_limit_rules', data) def get_qos_bandwidth_limit_rule(self, policy_name_or_id, rule_id): """Get a QoS bandwidth limit rule by name or ID. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param rule_id: ID of the rule. :returns: A bandwidth limit rule ``munch.Munch`` or None if no matching rule is found. 
""" if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) data = self._network_client.get( "/qos/policies/{policy_id}/bandwidth_limit_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), error_message="Error fetching QoS bandwith limit rule {rule_id} " "from {policy}".format(rule_id=rule_id, policy=policy['id'])) return self._get_and_munchify('bandwidth_limit_rule', data) @_utils.valid_kwargs("max_burst_kbps", "direction") def create_qos_bandwidth_limit_rule(self, policy_name_or_id, max_kbps, **kwargs): """Create a QoS bandwidth limit rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param int max_kbps: Maximum bandwidth limit value (in kilobits per second). :param int max_burst_kbps: Maximum burst value (in kilobits). :param string direction: Ingress or egress. The direction in which the traffic will be limited. :returns: The QoS bandwidth limit rule. :raises: OpenStackCloudException on operation error. 
""" if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if kwargs.get("direction") is not None: if not self._has_neutron_extension('qos-bw-limit-direction'): kwargs.pop("direction") self.log.debug( "'qos-bw-limit-direction' extension is not available on " "target cloud") kwargs['max_kbps'] = max_kbps data = self._network_client.post( "/qos/policies/{policy_id}/bandwidth_limit_rules".format( policy_id=policy['id']), json={'bandwidth_limit_rule': kwargs}) return self._get_and_munchify('bandwidth_limit_rule', data) @_utils.valid_kwargs("max_kbps", "max_burst_kbps", "direction") def update_qos_bandwidth_limit_rule(self, policy_name_or_id, rule_id, **kwargs): """Update a QoS bandwidth limit rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to update. :param int max_kbps: Maximum bandwidth limit value (in kilobits per second). :param int max_burst_kbps: Maximum burst value (in kilobits). :param string direction: Ingress or egress. The direction in which the traffic will be limited. :returns: The updated QoS bandwidth limit rule. :raises: OpenStackCloudException on operation error. 
""" if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if kwargs.get("direction") is not None: if not self._has_neutron_extension('qos-bw-limit-direction'): kwargs.pop("direction") self.log.debug( "'qos-bw-limit-direction' extension is not available on " "target cloud") if not kwargs: self.log.debug("No QoS bandwidth limit rule data to update") return curr_rule = self.get_qos_bandwidth_limit_rule( policy_name_or_id, rule_id) if not curr_rule: raise OpenStackCloudException( "QoS bandwidth_limit_rule {rule_id} not found in policy " "{policy_id}".format(rule_id=rule_id, policy_id=policy['id'])) data = self._network_client.put( "/qos/policies/{policy_id}/bandwidth_limit_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), json={'bandwidth_limit_rule': kwargs}) return self._get_and_munchify('bandwidth_limit_rule', data) def delete_qos_bandwidth_limit_rule(self, policy_name_or_id, rule_id): """Delete a QoS bandwidth limit rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to update. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) try: self._network_client.delete( "/qos/policies/{policy}/bandwidth_limit_rules/{rule}.json". format(policy=policy['id'], rule=rule_id)) except OpenStackCloudURINotFound: self.log.debug( "QoS bandwidth limit rule {rule_id} not found in policy " "{policy_id}. 
Ignoring.".format(rule_id=rule_id, policy_id=policy['id'])) return False return True def search_qos_dscp_marking_rules(self, policy_name_or_id, rule_id=None, filters=None): """Search QoS DSCP marking rules :param string policy_name_or_id: Name or ID of the QoS policy to which rules should be associated. :param string rule_id: ID of searched rule. :param filters: a dict containing additional filters to use. e.g. {'dscp_mark': 32} :returns: a list of ``munch.Munch`` containing the dscp marking rule descriptions. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ rules = self.list_qos_dscp_marking_rules(policy_name_or_id, filters) return _utils._filter_list(rules, rule_id, filters) def list_qos_dscp_marking_rules(self, policy_name_or_id, filters=None): """List all available QoS DSCP marking rules. :param string policy_name_or_id: Name or ID of the QoS policy from from rules should be listed. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing rule info. :raises: ``OpenStackCloudResourceNotFound`` if QoS policy will not be found. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies/{policy_id}/dscp_marking_rules.json".format( policy_id=policy['id']), params=filters, error_message="Error fetching QoS DSCP marking rules from " "{policy}".format(policy=policy['id'])) return meta.get_and_munchify('dscp_marking_rules', data) def get_qos_dscp_marking_rule(self, policy_name_or_id, rule_id): """Get a QoS DSCP marking rule by name or ID. 
:param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param rule_id: ID of the rule. :returns: A DSCP marking rule ``munch.Munch`` or None if no matching rule is found. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) data = self._network_client.get( "/qos/policies/{policy_id}/dscp_marking_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), error_message="Error fetching QoS DSCP marking rule {rule_id} " "from {policy}".format(rule_id=rule_id, policy=policy['id'])) return self._get_and_munchify('dscp_marking_rule', data) def create_qos_dscp_marking_rule(self, policy_name_or_id, dscp_mark): """Create a QoS DSCP marking rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param int dscp_mark: DSCP mark value :returns: The QoS DSCP marking rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) body = { 'dscp_mark': dscp_mark } data = self._network_client.post( "/qos/policies/{policy_id}/dscp_marking_rules".format( policy_id=policy['id']), json={'dscp_marking_rule': body}) return self._get_and_munchify('dscp_marking_rule', data) @_utils.valid_kwargs("dscp_mark") def update_qos_dscp_marking_rule(self, policy_name_or_id, rule_id, **kwargs): """Update a QoS DSCP marking rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. 
:param string rule_id: ID of rule to update. :param int dscp_mark: DSCP mark value :returns: The updated QoS DSCP marking rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if not kwargs: self.log.debug("No QoS DSCP marking rule data to update") return curr_rule = self.get_qos_dscp_marking_rule( policy_name_or_id, rule_id) if not curr_rule: raise OpenStackCloudException( "QoS dscp_marking_rule {rule_id} not found in policy " "{policy_id}".format(rule_id=rule_id, policy_id=policy['id'])) data = self._network_client.put( "/qos/policies/{policy_id}/dscp_marking_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), json={'dscp_marking_rule': kwargs}) return self._get_and_munchify('dscp_marking_rule', data) def delete_qos_dscp_marking_rule(self, policy_name_or_id, rule_id): """Delete a QoS DSCP marking rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to delete. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) try: self._network_client.delete( "/qos/policies/{policy}/dscp_marking_rules/{rule}.json". format(policy=policy['id'], rule=rule_id)) except OpenStackCloudURINotFound: self.log.debug( "QoS DSCP marking rule {rule_id} not found in policy " "{policy_id}. 
Ignoring.".format(rule_id=rule_id, policy_id=policy['id'])) return False return True def search_qos_minimum_bandwidth_rules(self, policy_name_or_id, rule_id=None, filters=None): """Search QoS minimum bandwidth rules :param string policy_name_or_id: Name or ID of the QoS policy to which rules should be associated. :param string rule_id: ID of searched rule. :param filters: a dict containing additional filters to use. e.g. {'min_kbps': 1000} :returns: a list of ``munch.Munch`` containing the bandwidth limit rule descriptions. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ rules = self.list_qos_minimum_bandwidth_rules( policy_name_or_id, filters) return _utils._filter_list(rules, rule_id, filters) def list_qos_minimum_bandwidth_rules(self, policy_name_or_id, filters=None): """List all available QoS minimum bandwith rules. :param string policy_name_or_id: Name or ID of the QoS policy from from rules should be listed. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing rule info. :raises: ``OpenStackCloudResourceNotFound`` if QoS policy will not be found. 
""" if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies/{policy_id}/minimum_bandwidth_rules.json".format( policy_id=policy['id']), params=filters, error_message="Error fetching QoS minimum bandwith rules from " "{policy}".format(policy=policy['id'])) return self._get_and_munchify('minimum_bandwidth_rules', data) def get_qos_minimum_bandwidth_rule(self, policy_name_or_id, rule_id): """Get a QoS minimum bandwidth rule by name or ID. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param rule_id: ID of the rule. :returns: A bandwidth limit rule ``munch.Munch`` or None if no matching rule is found. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) data = self._network_client.get( "/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), error_message="Error fetching QoS minimum_bandwith rule {rule_id} " "from {policy}".format(rule_id=rule_id, policy=policy['id'])) return self._get_and_munchify('minimum_bandwidth_rule', data) @_utils.valid_kwargs("direction") def create_qos_minimum_bandwidth_rule(self, policy_name_or_id, min_kbps, **kwargs): """Create a QoS minimum bandwidth limit rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. 
:param int min_kbps: Minimum bandwidth value (in kilobits per second). :param string direction: Ingress or egress. The direction in which the traffic will be available. :returns: The QoS minimum bandwidth rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) kwargs['min_kbps'] = min_kbps data = self._network_client.post( "/qos/policies/{policy_id}/minimum_bandwidth_rules".format( policy_id=policy['id']), json={'minimum_bandwidth_rule': kwargs}) return self._get_and_munchify('minimum_bandwidth_rule', data) @_utils.valid_kwargs("min_kbps", "direction") def update_qos_minimum_bandwidth_rule(self, policy_name_or_id, rule_id, **kwargs): """Update a QoS minimum bandwidth rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to update. :param int min_kbps: Minimum bandwidth value (in kilobits per second). :param string direction: Ingress or egress. The direction in which the traffic will be available. :returns: The updated QoS minimum bandwidth rule. :raises: OpenStackCloudException on operation error. 
""" if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if not kwargs: self.log.debug("No QoS minimum bandwidth rule data to update") return curr_rule = self.get_qos_minimum_bandwidth_rule( policy_name_or_id, rule_id) if not curr_rule: raise OpenStackCloudException( "QoS minimum_bandwidth_rule {rule_id} not found in policy " "{policy_id}".format(rule_id=rule_id, policy_id=policy['id'])) data = self._network_client.put( "/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), json={'minimum_bandwidth_rule': kwargs}) return self._get_and_munchify('minimum_bandwidth_rule', data) def delete_qos_minimum_bandwidth_rule(self, policy_name_or_id, rule_id): """Delete a QoS minimum bandwidth rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to delete. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) try: self._network_client.delete( "/qos/policies/{policy}/minimum_bandwidth_rules/{rule}.json". format(policy=policy['id'], rule=rule_id)) except OpenStackCloudURINotFound: self.log.debug( "QoS minimum bandwidth rule {rule_id} not found in policy " "{policy_id}. 
Ignoring.".format(rule_id=rule_id, policy_id=policy['id'])) return False return True def _build_external_gateway_info(self, ext_gateway_net_id, enable_snat, ext_fixed_ips): info = {} if ext_gateway_net_id: info['network_id'] = ext_gateway_net_id # Only send enable_snat if it is different from the Neutron # default of True. Sending it can cause a policy violation error # on some clouds. if enable_snat is not None and not enable_snat: info['enable_snat'] = False if ext_fixed_ips: info['external_fixed_ips'] = ext_fixed_ips if info: return info return None def add_router_interface(self, router, subnet_id=None, port_id=None): """Attach a subnet to an internal router interface. Either a subnet ID or port ID must be specified for the internal interface. Supplying both will result in an error. :param dict router: The dict object of the router being changed :param string subnet_id: The ID of the subnet to use for the interface :param string port_id: The ID of the port to use for the interface :returns: A ``munch.Munch`` with the router ID (ID), subnet ID (subnet_id), port ID (port_id) and tenant ID (tenant_id). :raises: OpenStackCloudException on operation error. """ json_body = {} if subnet_id: json_body['subnet_id'] = subnet_id if port_id: json_body['port_id'] = port_id return self._network_client.put( "/routers/{router_id}/add_router_interface.json".format( router_id=router['id']), json=json_body, error_message="Error attaching interface to router {0}".format( router['id'])) def remove_router_interface(self, router, subnet_id=None, port_id=None): """Detach a subnet from an internal router interface. At least one of subnet_id or port_id must be supplied. If you specify both subnet and port ID, the subnet ID must correspond to the subnet ID of the first IP address on the port specified by the port ID. Otherwise an error occurs. 
:param dict router: The dict object of the router being changed :param string subnet_id: The ID of the subnet to use for the interface :param string port_id: The ID of the port to use for the interface :returns: None on success :raises: OpenStackCloudException on operation error. """ json_body = {} if subnet_id: json_body['subnet_id'] = subnet_id if port_id: json_body['port_id'] = port_id if not json_body: raise ValueError( "At least one of subnet_id or port_id must be supplied.") self._network_client.put( "/routers/{router_id}/remove_router_interface.json".format( router_id=router['id']), json=json_body, error_message="Error detaching interface from router {0}".format( router['id'])) def list_router_interfaces(self, router, interface_type=None): """List all interfaces for a router. :param dict router: A router dict object. :param string interface_type: One of None, "internal", or "external". Controls whether all, internal interfaces or external interfaces are returned. :returns: A list of port ``munch.Munch`` objects. """ # Find only router interface and gateway ports, ignore L3 HA ports etc. router_interfaces = self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:router_interface'} ) + self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:router_interface_distributed'} ) + self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:ha_router_replicated_interface'}) router_gateways = self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:router_gateway'}) ports = router_interfaces + router_gateways if interface_type: if interface_type == 'internal': return router_interfaces if interface_type == 'external': return router_gateways return ports def create_router(self, name=None, admin_state_up=True, ext_gateway_net_id=None, enable_snat=None, ext_fixed_ips=None, project_id=None, availability_zone_hints=None): """Create a logical router. 
:param string name: The router name. :param bool admin_state_up: The administrative state of the router. :param string ext_gateway_net_id: Network ID for the external gateway. :param bool enable_snat: Enable Source NAT (SNAT) attribute. :param ext_fixed_ips: List of dictionaries of desired IP and/or subnet on the external network. Example:: [ { "subnet_id": "8ca37218-28ff-41cb-9b10-039601ea7e6b", "ip_address": "192.168.10.2" } ] :param string project_id: Project ID for the router. :param types.ListType availability_zone_hints: A list of availability zone hints. :returns: The router object. :raises: OpenStackCloudException on operation error. """ router = { 'admin_state_up': admin_state_up } if project_id is not None: router['tenant_id'] = project_id if name: router['name'] = name ext_gw_info = self._build_external_gateway_info( ext_gateway_net_id, enable_snat, ext_fixed_ips ) if ext_gw_info: router['external_gateway_info'] = ext_gw_info if availability_zone_hints is not None: if not isinstance(availability_zone_hints, list): raise OpenStackCloudException( "Parameter 'availability_zone_hints' must be a list") if not self._has_neutron_extension('router_availability_zone'): raise OpenStackCloudUnavailableExtension( 'router_availability_zone extension is not available on ' 'target cloud') router['availability_zone_hints'] = availability_zone_hints data = self._network_client.post( "/routers.json", json={"router": router}, error_message="Error creating router {0}".format(name)) return self._get_and_munchify('router', data) def update_router(self, name_or_id, name=None, admin_state_up=None, ext_gateway_net_id=None, enable_snat=None, ext_fixed_ips=None): """Update an existing logical router. :param string name_or_id: The name or UUID of the router to update. :param string name: The new router name. :param bool admin_state_up: The administrative state of the router. :param string ext_gateway_net_id: The network ID for the external gateway. 
:param bool enable_snat: Enable Source NAT (SNAT) attribute. :param ext_fixed_ips: List of dictionaries of desired IP and/or subnet on the external network. Example:: [ { "subnet_id": "8ca37218-28ff-41cb-9b10-039601ea7e6b", "ip_address": "192.168.10.2" } ] :returns: The router object. :raises: OpenStackCloudException on operation error. """ router = {} if name: router['name'] = name if admin_state_up is not None: router['admin_state_up'] = admin_state_up ext_gw_info = self._build_external_gateway_info( ext_gateway_net_id, enable_snat, ext_fixed_ips ) if ext_gw_info: router['external_gateway_info'] = ext_gw_info if not router: self.log.debug("No router data to update") return curr_router = self.get_router(name_or_id) if not curr_router: raise OpenStackCloudException( "Router %s not found." % name_or_id) data = self._network_client.put( "/routers/{router_id}.json".format(router_id=curr_router['id']), json={"router": router}, error_message="Error updating router {0}".format(name_or_id)) return self._get_and_munchify('router', data) def delete_router(self, name_or_id): """Delete a logical router. If a name, instead of a unique UUID, is supplied, it is possible that we could find more than one matching router since names are not required to be unique. An error will be raised in this case. :param name_or_id: Name or ID of the router being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. 
""" router = self.get_router(name_or_id) if not router: self.log.debug("Router %s not found for deleting", name_or_id) return False self._network_client.delete( "/routers/{router_id}.json".format(router_id=router['id']), error_message="Error deleting router {0}".format(name_or_id)) return True def get_image_exclude(self, name_or_id, exclude): for image in self.search_images(name_or_id): if exclude: if exclude not in image.name: return image else: return image return None def get_image_name(self, image_id, exclude=None): image = self.get_image_exclude(image_id, exclude) if image: return image.name return None def get_image_id(self, image_name, exclude=None): image = self.get_image_exclude(image_name, exclude) if image: return image.id return None def create_image_snapshot( self, name, server, wait=False, timeout=3600, **metadata): """Create an image by snapshotting an existing server. ..note:: On most clouds this is a cold snapshot - meaning that the server in question will be shutdown before taking the snapshot. It is possible that it's a live snapshot - but there is no way to know as a user, so caveat emptor. :param name: Name of the image to be created :param server: Server name or ID or dict representing the server to be snapshotted :param wait: If true, waits for image to be created. :param timeout: Seconds to wait for image creation. None is forever. 
:param metadata: Metadata to give newly-created image entity :returns: A ``munch.Munch`` of the Image object :raises: OpenStackCloudException if there are problems uploading """ if not isinstance(server, dict): server_obj = self.get_server(server, bare=True) if not server_obj: raise OpenStackCloudException( "Server {server} could not be found and therefore" " could not be snapshotted.".format(server=server)) server = server_obj response = _adapter._json_response( self._conn.compute.post( '/servers/{server_id}/action'.format(server_id=server['id']), json={ "createImage": { "name": name, "metadata": metadata, } })) # You won't believe it - wait, who am I kidding - of course you will! # Nova returns the URL of the image created in the Location # header of the response. (what?) But, even better, the URL it responds # with has a very good chance of being wrong (it is built from # nova.conf values that point to internal API servers in any cloud # large enough to have both public and internal endpoints). # However, nobody has ever noticed this because novaclient doesn't # actually use that URL - it extracts the id from the end of # the url, then returns the id. This leads us to question: # a) why Nova is going to return a value in a header # b) why it's going to return data that is probably broken # c) indeed the very nature of the fabric of reality # Although it fills us with existential dread, we have no choice but # to follow suit like a lemming being forced over a cliff by evil # producers from Disney. # TODO(mordred) Update this to consume json microversion when it is # available. 
# blueprint:remove-create-image-location-header-response image_id = response.headers['Location'].rsplit('/', 1)[1] self.list_images.invalidate(self) image = self.get_image(image_id) if not wait: return image return self.wait_for_image(image, timeout=timeout) def wait_for_image(self, image, timeout=3600): image_id = image['id'] for count in utils.iterate_timeout( timeout, "Timeout waiting for image to snapshot"): self.list_images.invalidate(self) image = self.get_image(image_id) if not image: continue if image['status'] == 'active': return image elif image['status'] == 'error': raise OpenStackCloudException( 'Image {image} hit error state'.format(image=image_id)) def delete_image( self, name_or_id, wait=False, timeout=3600, delete_objects=True): """Delete an existing image. :param name_or_id: Name of the image to be deleted. :param wait: If True, waits for image to be deleted. :param timeout: Seconds to wait for image deletion. None is forever. :param delete_objects: If True, also deletes uploaded swift objects. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException if there are problems deleting. """ image = self.get_image(name_or_id) if not image: return False self._image_client.delete( '/images/{id}'.format(id=image.id), error_message="Error in deleting image") self.list_images.invalidate(self) # Task API means an image was uploaded to swift if self.image_api_use_tasks and IMAGE_OBJECT_KEY in image: (container, objname) = image[IMAGE_OBJECT_KEY].split('/', 1) self.delete_object(container=container, name=objname) if wait: for count in utils.iterate_timeout( timeout, "Timeout waiting for the image to be deleted."): self._get_cache(None).invalidate() if self.get_image(image.id) is None: break return True def _get_name_and_filename(self, name): # See if name points to an existing file if os.path.exists(name): # Neat. 
Easy enough return (os.path.splitext(os.path.basename(name))[0], name) # Try appending the disk format name_with_ext = '.'.join(( name, self.cloud_config.config['image_format'])) if os.path.exists(name_with_ext): return (os.path.basename(name), name_with_ext) raise OpenStackCloudException( 'No filename parameter was given to create_image,' ' and {name} was not the path to an existing file.' ' Please provide either a path to an existing file' ' or a name and a filename'.format(name=name)) def _hashes_up_to_date(self, md5, sha256, md5_key, sha256_key): '''Compare md5 and sha256 hashes for being up to date md5 and sha256 are the current values. md5_key and sha256_key are the previous values. ''' up_to_date = False if md5 and md5_key == md5: up_to_date = True if sha256 and sha256_key == sha256: up_to_date = True if md5 and md5_key != md5: up_to_date = False if sha256 and sha256_key != sha256: up_to_date = False return up_to_date def create_image( self, name, filename=None, container=OBJECT_AUTOCREATE_CONTAINER, md5=None, sha256=None, disk_format=None, container_format=None, disable_vendor_agent=True, wait=False, timeout=3600, allow_duplicates=False, meta=None, volume=None, **kwargs): """Upload an image. :param str name: Name of the image to create. If it is a pathname of an image, the name will be constructed from the extensionless basename of the path. :param str filename: The path to the file to upload, if needed. (optional, defaults to None) :param str container: Name of the container in swift where images should be uploaded for import if the cloud requires such a thing. (optional, defaults to 'images') :param str md5: md5 sum of the image file. If not given, an md5 will be calculated. :param str sha256: sha256 sum of the image file. If not given, a sha256 will be calculated. :param str disk_format: The disk format the image is in. (optional, defaults to the os-client-config config value for this cloud) :param str container_format: The container format the image is in.
(optional, defaults to the os-client-config config value for this cloud) :param bool disable_vendor_agent: Whether or not to append metadata flags to the image to inform the cloud in question to not expect a vendor agent to be running. (optional, defaults to True) :param bool wait: If true, waits for image to be created. Defaults to false - however, be aware that one of the upload methods is always synchronous. :param timeout: Seconds to wait for image creation. None is forever. :param allow_duplicates: If true, skips checks that enforce unique image name. (optional, defaults to False) :param meta: A dict of key/value pairs to use for metadata that bypasses automatic type conversion. :param volume: Name or ID or volume object of a volume to create an image from. Mutually exclusive with filename. (optional, defaults to None) Additional kwargs will be passed to the image creation as additional metadata for the image and will have all values converted to string except for min_disk, min_ram, size and virtual_size which will be converted to int. If you are sure you have all of your data types correct or have an advanced need to be explicit, use meta. If you are just a normal consumer, using kwargs is likely the right choice. If a value is in meta and kwargs, meta wins.
:returns: A ``munch.Munch`` of the Image object :raises: OpenStackCloudException if there are problems uploading """ if not meta: meta = {} if not disk_format: disk_format = self.cloud_config.config['image_format'] if not container_format: # https://docs.openstack.org/image-guide/image-formats.html container_format = 'bare' if volume: if 'id' in volume: volume_id = volume['id'] else: volume_obj = self.get_volume(volume) if not volume_obj: raise OpenStackCloudException( "Volume {volume} given to create_image could" " not be found".format(volume=volume)) volume_id = volume_obj['id'] return self._upload_image_from_volume( name=name, volume_id=volume_id, allow_duplicates=allow_duplicates, container_format=container_format, disk_format=disk_format, wait=wait, timeout=timeout) # If there is no filename, see if name is actually the filename if not filename: name, filename = self._get_name_and_filename(name) if not (md5 or sha256): (md5, sha256) = self._get_file_hashes(filename) if allow_duplicates: current_image = None else: current_image = self.get_image(name) if current_image: md5_key = current_image.get(IMAGE_MD5_KEY, '') sha256_key = current_image.get(IMAGE_SHA256_KEY, '') up_to_date = self._hashes_up_to_date( md5=md5, sha256=sha256, md5_key=md5_key, sha256_key=sha256_key) if up_to_date: self.log.debug( "image %(name)s exists and is up to date", {'name': name}) return current_image kwargs[IMAGE_MD5_KEY] = md5 or '' kwargs[IMAGE_SHA256_KEY] = sha256 or '' kwargs[IMAGE_OBJECT_KEY] = '/'.join([container, name]) if disable_vendor_agent: kwargs.update(self.cloud_config.config['disable_vendor_agent']) # We can never have nice things. Glance v1 took "is_public" as a # boolean. Glance v2 takes "visibility". If the user gives us # is_public, we know what they mean. If they give us visibility, they # know what they mean.
if self._is_client_version('image', 2): if 'is_public' in kwargs: is_public = kwargs.pop('is_public') if is_public: kwargs['visibility'] = 'public' else: kwargs['visibility'] = 'private' try: # This makes me want to die inside if self.image_api_use_tasks: return self._upload_image_task( name, filename, container, current_image=current_image, wait=wait, timeout=timeout, md5=md5, sha256=sha256, meta=meta, **kwargs) else: # If a user used the v1 calling format, they will have # passed a dict called properties along properties = kwargs.pop('properties', {}) kwargs.update(properties) image_kwargs = dict(properties=kwargs) if disk_format: image_kwargs['disk_format'] = disk_format if container_format: image_kwargs['container_format'] = container_format return self._upload_image_put( name, filename, meta=meta, wait=wait, timeout=timeout, **image_kwargs) except OpenStackCloudException: self.log.debug("Image creation failed", exc_info=True) raise except Exception as e: raise OpenStackCloudException( "Image creation failed: {message}".format(message=str(e))) def _make_v2_image_params(self, meta, properties): ret = {} for k, v in iter(properties.items()): if k in ('min_disk', 'min_ram', 'size', 'virtual_size'): ret[k] = int(v) elif k == 'protected': ret[k] = v else: if v is None: ret[k] = None else: ret[k] = str(v) ret.update(meta) return ret def _upload_image_from_volume( self, name, volume_id, allow_duplicates, container_format, disk_format, wait, timeout): data = self._volume_client.post( '/volumes/{id}/action'.format(id=volume_id), json={ 'os-volume_upload_image': { 'force': allow_duplicates, 'image_name': name, 'container_format': container_format, 'disk_format': disk_format}}) response = self._get_and_munchify('os-volume_upload_image', data) if not wait: return self.get_image(response['image_id']) try: for count in utils.iterate_timeout( timeout, "Timeout waiting for the image to finish."): image_obj = self.get_image(response['image_id']) if image_obj and 
image_obj.status not in ('queued', 'saving'): return image_obj except OpenStackCloudTimeout: self.log.debug( "Timeout waiting for image to become ready. Deleting.") self.delete_image(response['image_id'], wait=True) raise def _upload_image_put_v2(self, name, image_data, meta, **image_kwargs): properties = image_kwargs.pop('properties', {}) image_kwargs.update(self._make_v2_image_params(meta, properties)) image_kwargs['name'] = name data = self._image_client.post('/images', json=image_kwargs) image = self._get_and_munchify(key=None, data=data) try: self._image_client.put( '/images/{id}/file'.format(id=image.id), headers={'Content-Type': 'application/octet-stream'}, data=image_data) except Exception: self.log.debug("Deleting failed upload of image %s", name) try: self._image_client.delete( '/images/{id}'.format(id=image.id)) except OpenStackCloudHTTPError: # We're just trying to clean up - if it doesn't work - shrug self.log.debug( "Failed deleting image after we failed uploading it.", exc_info=True) raise return image def _upload_image_put_v1( self, name, image_data, meta, **image_kwargs): image_kwargs['properties'].update(meta) image_kwargs['name'] = name image = self._image_client.post('/images', json=image_kwargs) checksum = image_kwargs['properties'].get(IMAGE_MD5_KEY, '') try: # Let us all take a brief moment to be grateful that this # is not actually how OpenStack APIs work anymore headers = { 'x-glance-registry-purge-props': 'false', } if checksum: headers['x-image-meta-checksum'] = checksum image = self._image_client.put( '/images/{id}'.format(id=image.id), headers=headers, data=image_data) except OpenStackCloudHTTPError: self.log.debug("Deleting failed upload of image %s", name) try: self._image_client.delete('/images/{id}'.format(id=image.id)) except OpenStackCloudHTTPError: # We're just trying to clean up - if it doesn't work - shrug self.log.debug( "Failed deleting image after we failed uploading it.", exc_info=True) raise return image def 
_upload_image_put( self, name, filename, meta, wait, timeout, **image_kwargs): image_data = open(filename, 'rb') # Because reasons and crying bunnies if self._is_client_version('image', 2): image = self._upload_image_put_v2( name, image_data, meta, **image_kwargs) else: image = self._upload_image_put_v1( name, image_data, meta, **image_kwargs) self._get_cache(None).invalidate() if not wait: return image try: for count in utils.iterate_timeout( timeout, "Timeout waiting for the image to finish."): image_obj = self.get_image(image.id) if image_obj and image_obj.status not in ('queued', 'saving'): return image_obj except OpenStackCloudTimeout: self.log.debug( "Timeout waiting for image to become ready. Deleting.") self.delete_image(image.id, wait=True) raise def _upload_image_task( self, name, filename, container, current_image, wait, timeout, meta, md5=None, sha256=None, **image_kwargs): parameters = image_kwargs.pop('parameters', {}) image_kwargs.update(parameters) self.create_object( container, name, filename, md5=md5, sha256=sha256, metadata={OBJECT_AUTOCREATE_KEY: 'true'}, **{'content-type': 'application/octet-stream'}) if not current_image: current_image = self.get_image(name) # TODO(mordred): Can we do something similar to what nodepool does # using glance properties to not delete then upload but instead make a # new "good" image and then mark the old one as "bad" task_args = dict( type='import', input=dict( import_from='{container}/{name}'.format( container=container, name=name), image_properties=dict(name=name))) data = self._image_client.post('/tasks', json=task_args) glance_task = self._get_and_munchify(key=None, data=data) self.list_images.invalidate(self) if wait: start = time.time() image_id = None for count in utils.iterate_timeout( timeout, "Timeout waiting for the image to import."): try: if image_id is None: status = self._image_client.get( '/tasks/{id}'.format(id=glance_task.id)) except OpenStackCloudHTTPError as e: if e.response.status_code == 503: 
# Clear the exception so that it doesn't linger # and get reported as an Inner Exception later _utils._exc_clear() # Intermittent failure - catch and try again continue raise if status['status'] == 'success': image_id = status['result']['image_id'] try: image = self.get_image(image_id) except OpenStackCloudHTTPError as e: if e.response.status_code == 503: # Clear the exception so that it doesn't linger # and get reported as an Inner Exception later _utils._exc_clear() # Intermittent failure - catch and try again continue raise if image is None: continue self.update_image_properties( image=image, meta=meta, **image_kwargs) self.log.debug( "Image Task %s imported %s in %s", glance_task.id, image_id, (time.time() - start)) # Clean up after ourselves. The object we created is not # needed after the import is done. self.delete_object(container, name) return self.get_image(image_id) elif status['status'] == 'failure': if status['message'] == IMAGE_ERROR_396: glance_task = self._image_client.post( '/tasks', json=task_args) self.list_images.invalidate(self) else: # Clean up after ourselves. The image did not import # and this isn't a 'just retry' error - glance didn't # like the content. So we don't want to keep it for # next time.
self.delete_object(container, name) raise OpenStackCloudException( "Image creation failed: {message}".format( message=status['message']), extra_data=status) else: return glance_task def update_image_properties( self, image=None, name_or_id=None, meta=None, **properties): if image is None: image = self.get_image(name_or_id) if not meta: meta = {} img_props = {} for k, v in iter(properties.items()): if v and k in ['ramdisk', 'kernel']: v = self.get_image_id(v) k = '{0}_id'.format(k) img_props[k] = v # This makes me want to die inside if self._is_client_version('image', 2): return self._update_image_properties_v2(image, meta, img_props) else: return self._update_image_properties_v1(image, meta, img_props) def _update_image_properties_v2(self, image, meta, properties): img_props = image.properties.copy() for k, v in iter(self._make_v2_image_params(meta, properties).items()): if image.get(k, None) != v: img_props[k] = v if not img_props: return False headers = { 'Content-Type': 'application/openstack-images-v2.1-json-patch'} patch = sorted(list(jsonpatch.JsonPatch.from_diff( image.properties, img_props)), key=operator.itemgetter('value')) # No need to fire an API call if there is an empty patch if patch: self._image_client.patch( '/images/{id}'.format(id=image.id), headers=headers, data=json.dumps(patch)) self.list_images.invalidate(self) return True def _update_image_properties_v1(self, image, meta, properties): properties.update(meta) img_props = {} for k, v in iter(properties.items()): if image.properties.get(k, None) != v: img_props['x-image-meta-{key}'.format(key=k)] = v if not img_props: return False self._image_client.put( '/images/{id}'.format(id=image.id), headers=img_props) self.list_images.invalidate(self) return True def create_volume( self, size, wait=True, timeout=None, image=None, bootable=None, **kwargs): """Create a volume. :param size: Size, in GB of the volume to create. :param name: (optional) Name for the volume.
:param description: (optional) Description of the volume. :param wait: If true, waits for volume to be created. :param timeout: Seconds to wait for volume creation. None is forever. :param image: (optional) Image name, ID or object from which to create the volume :param bootable: (optional) Make this volume bootable. If set, wait will also be set to true. :param kwargs: Keyword arguments as expected for cinder client. :returns: The created volume object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ if bootable is not None: wait = True if image: image_obj = self.get_image(image) if not image_obj: raise OpenStackCloudException( "Image {image} was requested as the basis for a new" " volume, but was not found on the cloud".format( image=image)) kwargs['imageRef'] = image_obj['id'] kwargs = self._get_volume_kwargs(kwargs) kwargs['size'] = size payload = dict(volume=kwargs) if 'scheduler_hints' in kwargs: payload['OS-SCH-HNT:scheduler_hints'] = kwargs.pop( 'scheduler_hints', None) data = self._volume_client.post( '/volumes', json=dict(payload), error_message='Error in creating volume') volume = self._get_and_munchify('volume', data) self.list_volumes.invalidate(self) if volume['status'] == 'error': raise OpenStackCloudException("Error in creating volume") if wait: vol_id = volume['id'] for count in utils.iterate_timeout( timeout, "Timeout waiting for the volume to be available."): volume = self.get_volume(vol_id) if not volume: continue if volume['status'] == 'available': if bootable is not None: self.set_volume_bootable(volume, bootable=bootable) # no need to re-fetch to update the flag, just set it. volume['bootable'] = bootable return volume if volume['status'] == 'error': raise OpenStackCloudException("Error in creating volume") return self._normalize_volume(volume) def set_volume_bootable(self, name_or_id, bootable=True): """Set a volume's bootable flag.
:param name_or_id: Name, unique ID of the volume or a volume dict. :param bool bootable: Whether the volume should be bootable. (Defaults to True) :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ volume = self.get_volume(name_or_id) if not volume: raise OpenStackCloudException( "Volume {name_or_id} does not exist".format( name_or_id=name_or_id)) self._volume_client.post( 'volumes/{id}/action'.format(id=volume['id']), json={'os-set_bootable': {'bootable': bootable}}, error_message="Error setting bootable on volume {volume}".format( volume=volume['id']) ) def delete_volume(self, name_or_id=None, wait=True, timeout=None, force=False): """Delete a volume. :param name_or_id: Name or unique ID of the volume. :param wait: If true, waits for volume to be deleted. :param timeout: Seconds to wait for volume deletion. None is forever. :param force: Force delete volume even if the volume is in deleting or error_deleting state. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ self.list_volumes.invalidate(self) volume = self.get_volume(name_or_id) if not volume: self.log.debug( "Volume %(name_or_id)s does not exist", {'name_or_id': name_or_id}, exc_info=True) return False with _utils.shade_exceptions("Error in deleting volume"): try: if force: self._volume_client.post( 'volumes/{id}/action'.format(id=volume['id']), json={'os-force_delete': None}) else: self._volume_client.delete( 'volumes/{id}'.format(id=volume['id'])) except OpenStackCloudURINotFound: self.log.debug( "Volume {id} not found when deleting. 
Ignoring.".format( id=volume['id'])) return False self.list_volumes.invalidate(self) if wait: for count in utils.iterate_timeout( timeout, "Timeout waiting for the volume to be deleted."): if not self.get_volume(volume['id']): break return True def get_volumes(self, server, cache=True): volumes = [] for volume in self.list_volumes(cache=cache): for attach in volume['attachments']: if attach['server_id'] == server['id']: volumes.append(volume) return volumes def get_volume_id(self, name_or_id): volume = self.get_volume(name_or_id) if volume: return volume['id'] return None def volume_exists(self, name_or_id): return self.get_volume(name_or_id) is not None def get_volume_attach_device(self, volume, server_id): """Return the device name a volume is attached to for a server. This can also be used to verify if a volume is attached to a particular server. :param volume: Volume dict :param server_id: ID of server to check :returns: Device name if attached, None if volume is not attached. """ for attach in volume['attachments']: if server_id == attach['server_id']: return attach['device'] return None def detach_volume(self, server, volume, wait=True, timeout=None): """Detach a volume from a server. :param server: The server dict to detach from. :param volume: The volume dict to detach. :param wait: If true, waits for volume to be detached. :param timeout: Seconds to wait for volume detachment. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ _adapter._json_response(self._conn.compute.delete( '/servers/{server_id}/os-volume_attachments/{volume_id}'.format( server_id=server['id'], volume_id=volume['id'])), error_message=( "Error detaching volume {volume} from server {server}".format( volume=volume['id'], server=server['id']))) if wait: for count in utils.iterate_timeout( timeout, "Timeout waiting for volume %s to detach." 
% volume['id']): try: vol = self.get_volume(volume['id']) except Exception: self.log.debug( "Error getting volume info %s", volume['id'], exc_info=True) continue if vol['status'] == 'available': return if vol['status'] == 'error': raise OpenStackCloudException( "Error in detaching volume %s" % volume['id'] ) def attach_volume(self, server, volume, device=None, wait=True, timeout=None): """Attach a volume to a server. This will attach a volume, described by the passed in volume dict (as returned by get_volume()), to the server described by the passed in server dict (as returned by get_server()) on the named device on the server. If the volume is already attached to the server, or generally not available, then an exception is raised. To re-attach to a server, but under a different device, the user must detach it first. :param server: The server dict to attach to. :param volume: The volume dict to attach. :param device: The device name where the volume will attach. :param wait: If true, waits for volume to be attached. :param timeout: Seconds to wait for volume attachment. None is forever. :returns: a volume attachment object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ dev = self.get_volume_attach_device(volume, server['id']) if dev: raise OpenStackCloudException( "Volume %s already attached to server %s on device %s" % (volume['id'], server['id'], dev) ) if volume['status'] != 'available': raise OpenStackCloudException( "Volume %s is not available. 
Status is '%s'" % (volume['id'], volume['status']) ) payload = {'volumeId': volume['id']} if device: payload['device'] = device data = _adapter._json_response( self._conn.compute.post( '/servers/{server_id}/os-volume_attachments'.format( server_id=server['id']), json=dict(volumeAttachment=payload)), error_message="Error attaching volume {volume_id} to server " "{server_id}".format(volume_id=volume['id'], server_id=server['id'])) if wait: for count in utils.iterate_timeout( timeout, "Timeout waiting for volume %s to attach." % volume['id']): try: self.list_volumes.invalidate(self) vol = self.get_volume(volume['id']) except Exception: self.log.debug( "Error getting volume info %s", volume['id'], exc_info=True) continue if self.get_volume_attach_device(vol, server['id']): break # TODO(Shrews) check to see if a volume can be in error status # and also attached. If so, we should move this # above the get_volume_attach_device call if vol['status'] == 'error': raise OpenStackCloudException( "Error in attaching volume %s" % volume['id'] ) return self._normalize_volume_attachment( self._get_and_munchify('volumeAttachment', data)) def _get_volume_kwargs(self, kwargs): name = kwargs.pop('name', kwargs.pop('display_name', None)) description = kwargs.pop('description', kwargs.pop('display_description', None)) if name: if self._is_client_version('volume', 2): kwargs['name'] = name else: kwargs['display_name'] = name if description: if self._is_client_version('volume', 2): kwargs['description'] = description else: kwargs['display_description'] = description return kwargs @_utils.valid_kwargs('name', 'display_name', 'description', 'display_description') def create_volume_snapshot(self, volume_id, force=False, wait=True, timeout=None, **kwargs): """Create a volume snapshot. :param volume_id: the ID of the volume to snapshot.
:param force: If set to True the snapshot will be created even if the volume is attached to an instance, if False it will not :param name: name of the snapshot, one will be generated if one is not provided :param description: description of the snapshot, one will be generated if one is not provided :param wait: If true, waits for volume snapshot to be created. :param timeout: Seconds to wait for volume snapshot creation. None is forever. :returns: The created volume snapshot object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ kwargs = self._get_volume_kwargs(kwargs) payload = {'volume_id': volume_id, 'force': force} payload.update(kwargs) data = self._volume_client.post( '/snapshots', json=dict(snapshot=payload), error_message="Error creating snapshot of volume " "{volume_id}".format(volume_id=volume_id)) snapshot = self._get_and_munchify('snapshot', data) if wait: snapshot_id = snapshot['id'] for count in utils.iterate_timeout( timeout, "Timeout waiting for the volume snapshot to be available." ): snapshot = self.get_volume_snapshot_by_id(snapshot_id) if snapshot['status'] == 'available': break if snapshot['status'] == 'error': raise OpenStackCloudException( "Error in creating volume snapshot") # TODO(mordred) need to normalize snapshots. We were normalizing them # as volumes, which is an error. They need to be normalized as # volume snapshots, which are completely different objects return snapshot def get_volume_snapshot_by_id(self, snapshot_id): """Takes a snapshot_id and gets a dict of the snapshot that matches that ID. Note: This is more efficient than get_volume_snapshot. :param snapshot_id: ID of the volume snapshot.
""" data = self._volume_client.get( '/snapshots/{snapshot_id}'.format(snapshot_id=snapshot_id), error_message="Error getting snapshot " "{snapshot_id}".format(snapshot_id=snapshot_id)) return self._normalize_volume( self._get_and_munchify('snapshot', data)) def get_volume_snapshot(self, name_or_id, filters=None): """Get a volume by name or ID. :param name_or_id: Name or ID of the volume snapshot. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A volume ``munch.Munch`` or None if no matching volume is found. """ return _utils._get_entity(self, 'volume_snapshot', name_or_id, filters) def create_volume_backup(self, volume_id, name=None, description=None, force=False, wait=True, timeout=None): """Create a volume backup. :param volume_id: the ID of the volume to backup. :param name: name of the backup, one will be generated if one is not provided :param description: description of the backup, one will be generated if one is not provided :param force: If set to True the backup will be created even if the volume is attached to an instance, if False it will not :param wait: If true, waits for volume backup to be created. :param timeout: Seconds to wait for volume backup creation. None is forever. :returns: The created volume backup object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
""" payload = { 'name': name, 'volume_id': volume_id, 'description': description, 'force': force, } data = self._volume_client.post( '/backups', json=dict(backup=payload), error_message="Error creating backup of volume " "{volume_id}".format(volume_id=volume_id)) backup = self._get_and_munchify('backup', data) if wait: backup_id = backup['id'] msg = ("Timeout waiting for the volume backup {} to be " "available".format(backup_id)) for _ in utils.iterate_timeout(timeout, msg): backup = self.get_volume_backup(backup_id) if backup['status'] == 'available': break if backup['status'] == 'error': raise OpenStackCloudException( "Error in creating volume backup {id}".format( id=backup_id)) return backup def get_volume_backup(self, name_or_id, filters=None): """Get a volume backup by name or ID. :returns: A backup ``munch.Munch`` or None if no matching backup is found. """ return _utils._get_entity(self, 'volume_backup', name_or_id, filters) def list_volume_snapshots(self, detailed=True, search_opts=None): """List all volume snapshots. :returns: A list of volume snapshots ``munch.Munch``. """ endpoint = '/snapshots/detail' if detailed else '/snapshots' data = self._volume_client.get( endpoint, params=search_opts, error_message="Error getting a list of snapshots") return self._get_and_munchify('snapshots', data) def list_volume_backups(self, detailed=True, search_opts=None): """ List all volume backups. :param bool detailed: Also list details for each entry :param dict search_opts: Search options A dictionary of meta data to use for further filtering. Example:: { 'name': 'my-volume-backup', 'status': 'available', 'volume_id': 'e126044c-7b4c-43be-a32a-c9cbbc9ddb56', 'all_tenants': 1 } :returns: A list of volume backups ``munch.Munch``. 
""" endpoint = '/backups/detail' if detailed else '/backups' data = self._volume_client.get( endpoint, params=search_opts, error_message="Error getting a list of backups") return self._get_and_munchify('backups', data) def delete_volume_backup(self, name_or_id=None, force=False, wait=False, timeout=None): """Delete a volume backup. :param name_or_id: Name or unique ID of the volume backup. :param force: Allow delete in state other than error or available. :param wait: If true, waits for volume backup to be deleted. :param timeout: Seconds to wait for volume backup deletion. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ volume_backup = self.get_volume_backup(name_or_id) if not volume_backup: return False msg = "Error in deleting volume backup" if force: self._volume_client.post( '/backups/{backup_id}/action'.format( backup_id=volume_backup['id']), json={'os-force_delete': None}, error_message=msg) else: self._volume_client.delete( '/backups/{backup_id}'.format( backup_id=volume_backup['id']), error_message=msg) if wait: msg = "Timeout waiting for the volume backup to be deleted." for count in utils.iterate_timeout(timeout, msg): if not self.get_volume_backup(volume_backup['id']): break return True def delete_volume_snapshot(self, name_or_id=None, wait=False, timeout=None): """Delete a volume snapshot. :param name_or_id: Name or unique ID of the volume snapshot. :param wait: If true, waits for volume snapshot to be deleted. :param timeout: Seconds to wait for volume snapshot deletion. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
""" volumesnapshot = self.get_volume_snapshot(name_or_id) if not volumesnapshot: return False self._volume_client.delete( '/snapshots/{snapshot_id}'.format( snapshot_id=volumesnapshot['id']), error_message="Error in deleting volume snapshot") if wait: for count in utils.iterate_timeout( timeout, "Timeout waiting for the volume snapshot to be deleted."): if not self.get_volume_snapshot(volumesnapshot['id']): break return True def get_server_id(self, name_or_id): server = self.get_server(name_or_id, bare=True) if server: return server['id'] return None def get_server_private_ip(self, server): return meta.get_server_private_ip(server, self) def get_server_public_ip(self, server): return meta.get_server_external_ipv4(self, server) def get_server_meta(self, server): # TODO(mordred) remove once ansible has moved to Inventory interface server_vars = meta.get_hostvars_from_server(self, server) groups = meta.get_groups_from_server(self, server, server_vars) return dict(server_vars=server_vars, groups=groups) def get_openstack_vars(self, server): return meta.get_hostvars_from_server(self, server) def _expand_server_vars(self, server): # Used by nodepool # TODO(mordred) remove after these make it into what we # actually want the API to be. 
return meta.expand_server_vars(self, server) def _find_floating_network_by_router(self): """Find the network providing floating ips by looking at routers.""" if self._floating_network_by_router_lock.acquire( not self._floating_network_by_router_run): if self._floating_network_by_router_run: self._floating_network_by_router_lock.release() return self._floating_network_by_router try: for router in self.list_routers(): if router['admin_state_up']: network_id = router.get( 'external_gateway_info', {}).get('network_id') if network_id: self._floating_network_by_router = network_id finally: self._floating_network_by_router_run = True self._floating_network_by_router_lock.release() return self._floating_network_by_router def available_floating_ip(self, network=None, server=None): """Get a floating IP from a network or a pool. Return the first available floating IP or allocate a new one. :param network: Name or ID of the network. :param server: Server the IP is for if known :returns: a (normalized) structure with a floating IP address description. """ if self._use_neutron_floating(): try: f_ips = self._normalize_floating_ips( self._neutron_available_floating_ips( network=network, server=server)) return f_ips[0] except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova f_ips = self._normalize_floating_ips( self._nova_available_floating_ips(pool=network) ) return f_ips[0] def _get_floating_network_id(self): # Get first existing external IPv4 network networks = self.get_external_ipv4_floating_networks() if networks: floating_network_id = networks[0]['id'] else: floating_network = self._find_floating_network_by_router() if floating_network: floating_network_id = floating_network else: raise OpenStackCloudResourceNotFound( "unable to find an external network") return floating_network_id def _neutron_available_floating_ips( self, network=None, project_id=None, server=None): """Get a floating IP from a network. Return a list of available floating IPs or allocate a new one and return it in a list of 1 element. :param network: A single network name or ID, or a list of them. :param server: (server) Server the Floating IP is for :returns: a list of floating IP addresses. :raises: ``OpenStackCloudResourceNotFound``, if an external network that meets the specified criteria cannot be found. """ if project_id is None: # Make sure we are only listing floatingIPs allocated the current # tenant. 
This is the default behaviour of Nova project_id = self.current_project_id if network: if isinstance(network, six.string_types): network = [network] # Use given list to get first matching external network floating_network_id = None for net in network: for ext_net in self.get_external_ipv4_floating_networks(): if net in (ext_net['name'], ext_net['id']): floating_network_id = ext_net['id'] break if floating_network_id: break if floating_network_id is None: raise OpenStackCloudResourceNotFound( "unable to find external network {net}".format( net=network) ) else: floating_network_id = self._get_floating_network_id() filters = { 'port': None, 'network': floating_network_id, 'location': {'project': {'id': project_id}}, } floating_ips = self._list_floating_ips() available_ips = _utils._filter_list( floating_ips, name_or_id=None, filters=filters) if available_ips: return available_ips # No available IP found or we didn't try # allocate a new Floating IP f_ip = self._neutron_create_floating_ip( network_id=floating_network_id, server=server) return [f_ip] def _nova_available_floating_ips(self, pool=None): """Get available floating IPs from a floating IP pool. Return a list of available floating IPs or allocate a new one and return it in a list of 1 element. :param pool: Nova floating IP pool name. :returns: a list of floating IP addresses. :raises: ``OpenStackCloudResourceNotFound``, if a floating IP pool is not specified and cannot be found. """ with _utils.shade_exceptions( "Unable to create floating IP in pool {pool}".format( pool=pool)): if pool is None: pools = self.list_floating_ip_pools() if not pools: raise OpenStackCloudResourceNotFound( "unable to find a floating ip pool") pool = pools[0]['name'] filters = { 'instance_id': None, 'pool': pool } floating_ips = self._nova_list_floating_ips() available_ips = _utils._filter_list( floating_ips, name_or_id=None, filters=filters) if available_ips: return available_ips # No available IP found or we did not try. 
# Allocate a new Floating IP f_ip = self._nova_create_floating_ip(pool=pool) return [f_ip] def create_floating_ip(self, network=None, server=None, fixed_address=None, nat_destination=None, port=None, wait=False, timeout=60): """Allocate a new floating IP from a network or a pool. :param network: Name or ID of the network that the floating IP should come from. :param server: (optional) Server dict for the server to create the IP for and to which it should be attached. :param fixed_address: (optional) Fixed IP to attach the floating ip to. :param nat_destination: (optional) Name or ID of the network that the fixed IP to attach the floating IP to should be on. :param port: (optional) The port ID that the floating IP should be attached to. Specifying a port conflicts with specifying a server, fixed_address or nat_destination. :param wait: (optional) Whether to wait for the IP to be active. Defaults to False. Only applies if a server is provided. :param timeout: (optional) How long to wait for the IP to be active. Defaults to 60. Only applies if a server is provided. :returns: a floating IP address :raises: ``OpenStackCloudException``, on operation error. """ if self._use_neutron_floating(): try: return self._neutron_create_floating_ip( network_name_or_id=network, server=server, fixed_address=fixed_address, nat_destination=nat_destination, port=port, wait=wait, timeout=timeout) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova if port: raise OpenStackCloudException( "This cloud uses nova-network which does not support" " arbitrary floating-ip/port mappings. 
Please nudge" " your cloud provider to upgrade the networking stack" " to neutron, or alternately provide the server," " fixed_address and nat_destination arguments as appropriate") # Else, we are using Nova network f_ips = self._normalize_floating_ips( [self._nova_create_floating_ip(pool=network)]) return f_ips[0] def _submit_create_fip(self, kwargs): # Split into a method to aid in test mocking data = self._network_client.post( "/floatingips.json", json={"floatingip": kwargs}) return self._normalize_floating_ip( self._get_and_munchify('floatingip', data)) def _neutron_create_floating_ip( self, network_name_or_id=None, server=None, fixed_address=None, nat_destination=None, port=None, wait=False, timeout=60, network_id=None): if not network_id: if network_name_or_id: network = self.get_network(network_name_or_id) if not network: raise OpenStackCloudResourceNotFound( "unable to find network for floating ips with ID " "{0}".format(network_name_or_id)) network_id = network['id'] else: network_id = self._get_floating_network_id() kwargs = { 'floating_network_id': network_id, } if not port: if server: (port_obj, fixed_ip_address) = self._nat_destination_port( server, fixed_address=fixed_address, nat_destination=nat_destination) if port_obj: port = port_obj['id'] if fixed_ip_address: kwargs['fixed_ip_address'] = fixed_ip_address if port: kwargs['port_id'] = port fip = self._submit_create_fip(kwargs) fip_id = fip['id'] if port: # The FIP is only going to become active in this context # when we've attached it to something, which only occurs # if we've provided a port as a parameter if wait: try: for count in utils.iterate_timeout( timeout, "Timeout waiting for the floating IP" " to be ACTIVE", wait=self._FLOAT_AGE): fip = self.get_floating_ip(fip_id) if fip and fip['status'] == 'ACTIVE': break except OpenStackCloudTimeout: self.log.error( "Timed out on floating ip %(fip)s becoming active." 
" Deleting", {'fip': fip_id}) try: self.delete_floating_ip(fip_id) except Exception as e: self.log.error( "FIP LEAK: Attempted to delete floating ip " "%(fip)s but received %(exc)s exception: " "%(err)s", {'fip': fip_id, 'exc': e.__class__, 'err': str(e)}) raise if fip['port_id'] != port: if server: raise OpenStackCloudException( "Attempted to create FIP on port {port} for server" " {server} but FIP has port {port_id}".format( port=port, port_id=fip['port_id'], server=server['id'])) else: raise OpenStackCloudException( "Attempted to create FIP on port {port}" " but something went wrong".format(port=port)) return fip def _nova_create_floating_ip(self, pool=None): with _utils.shade_exceptions( "Unable to create floating IP in pool {pool}".format( pool=pool)): if pool is None: pools = self.list_floating_ip_pools() if not pools: raise OpenStackCloudResourceNotFound( "unable to find a floating ip pool") pool = pools[0]['name'] data = _adapter._json_response(self._conn.compute.post( '/os-floating-ips', json=dict(pool=pool))) pool_ip = self._get_and_munchify('floating_ip', data) # TODO(mordred) Remove this - it's just for compat data = _adapter._json_response( self._conn.compute.get('/os-floating-ips/{id}'.format( id=pool_ip['id']))) return self._get_and_munchify('floating_ip', data) def delete_floating_ip(self, floating_ip_id, retry=1): """Deallocate a floating IP from a project. :param floating_ip_id: a floating IP address ID. :param retry: number of times to retry. Optional, defaults to 1, which is in addition to the initial delete call. A value of 0 will also cause no checking of results to occur. :returns: True if the IP address has been deleted, False if the IP address was not found. :raises: ``OpenStackCloudException``, on operation error. 
""" for count in range(0, max(0, retry) + 1): result = self._delete_floating_ip(floating_ip_id) if (retry == 0) or not result: return result # Wait for the cached floating ip list to be regenerated if self._FLOAT_AGE: time.sleep(self._FLOAT_AGE) # neutron sometimes returns success when deleting a floating # ip. That's awesome. SO - verify that the delete actually # worked. Some clouds will set the status to DOWN rather than # deleting the IP immediately. This is, of course, a bit absurd. f_ip = self.get_floating_ip(id=floating_ip_id) if not f_ip or f_ip['status'] == 'DOWN': return True raise OpenStackCloudException( "Attempted to delete Floating IP {ip} with ID {id} a total of" " {retry} times. Although the cloud did not indicate any errors" " the floating ip is still in existence. Aborting further" " operations.".format( id=floating_ip_id, ip=f_ip['floating_ip_address'], retry=retry + 1)) def _delete_floating_ip(self, floating_ip_id): if self._use_neutron_floating(): try: return self._neutron_delete_floating_ip(floating_ip_id) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) return self._nova_delete_floating_ip(floating_ip_id) def _neutron_delete_floating_ip(self, floating_ip_id): try: self._network_client.delete( "/floatingips/{fip_id}.json".format(fip_id=floating_ip_id), error_message="unable to delete floating IP") except OpenStackCloudResourceNotFound: return False except Exception as e: raise OpenStackCloudException( "Unable to delete floating IP ID {fip_id}: {msg}".format( fip_id=floating_ip_id, msg=str(e))) return True def _nova_delete_floating_ip(self, floating_ip_id): try: _adapter._json_response( self._conn.compute.delete( '/os-floating-ips/{id}'.format(id=floating_ip_id)), error_message='Unable to delete floating IP {fip_id}'.format( fip_id=floating_ip_id)) except OpenStackCloudURINotFound: return False return True def delete_unattached_floating_ips(self, retry=1): """Safely delete unattached floating ips. If the cloud can safely purge any unattached floating ips without race conditions, do so. Safely here means a specific thing. It means that you are not running this while another process that might do a two step create/attach is running. You can safely run this method while another process is creating servers and attaching floating IPs to them if either that process is using add_auto_ip from shade, or is creating the floating IPs by passing in a server to the create_floating_ip call. :param retry: number of times to retry. Optional, defaults to 1, which is in addition to the initial delete call. A value of 0 will also cause no checking of results to occur. :returns: True if Floating IPs have been deleted, False if not :raises: ``OpenStackCloudException``, on operation error. 
""" processed = [] if self._use_neutron_floating(): for ip in self.list_floating_ips(): if not ip['attached']: processed.append(self.delete_floating_ip( floating_ip_id=ip['id'], retry=retry)) return all(processed) if processed else False def _attach_ip_to_server( self, server, floating_ip, fixed_address=None, wait=False, timeout=60, skip_attach=False, nat_destination=None): """Attach a floating IP to a server. :param server: Server dict :param floating_ip: Floating IP dict to attach :param fixed_address: (optional) fixed address to which attach the floating IP to. :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param skip_attach: (optional) Skip the actual attach and just do the wait. Defaults to False. :param nat_destination: The fixed network the server's port for the FIP to attach to will come from. :returns: The server ``munch.Munch`` :raises: OpenStackCloudException, on operation error. """ # Short circuit if we're asking to attach an IP that's already # attached ext_ip = meta.get_server_ip(server, ext_tag='floating', public=True) if ext_ip == floating_ip['floating_ip_address']: return server if self._use_neutron_floating(): if not skip_attach: try: self._neutron_attach_ip_to_server( server=server, floating_ip=floating_ip, fixed_address=fixed_address, nat_destination=nat_destination) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova else: # Nova network self._nova_attach_ip_to_server( server_id=server['id'], floating_ip_id=floating_ip['id'], fixed_address=fixed_address) if wait: # Wait for the address to be assigned to the server server_id = server['id'] for _ in utils.iterate_timeout( timeout, "Timeout waiting for the floating IP to be attached.", wait=self._SERVER_AGE): server = self.get_server(server_id) ext_ip = meta.get_server_ip( server, ext_tag='floating', public=True) if ext_ip == floating_ip['floating_ip_address']: return server return server def _nat_destination_port( self, server, fixed_address=None, nat_destination=None): """Returns server port that is on a nat_destination network Find a port attached to the server which is on a network which has a subnet which can be the destination of NAT. Such a network is referred to in shade as a "nat_destination" network. So this then is a function which returns a port on such a network that is associated with the given server. :param server: Server dict. :param fixed_address: Fixed ip address of the port :param nat_destination: Name or ID of the network of the port. """ # If we are caching port lists, we may not find the port for # our server if the list is old. Try for at least 2 cache # periods if that is the case. if self._PORT_AGE: timeout = self._PORT_AGE * 2 else: timeout = None for count in utils.iterate_timeout( timeout, "Timeout waiting for port to show up in list", wait=self._PORT_AGE): try: port_filter = {'device_id': server['id']} ports = self.search_ports(filters=port_filter) break except OpenStackCloudTimeout: ports = None if not ports: return (None, None) port = None if not fixed_address: if len(ports) > 1: if nat_destination: nat_network = self.get_network(nat_destination) if not nat_network: raise OpenStackCloudException( 'NAT Destination {nat_destination} was configured' ' but not found on the cloud. 
Please check your' ' config and your cloud and try again.'.format( nat_destination=nat_destination)) else: nat_network = self.get_nat_destination() if not nat_network: raise OpenStackCloudException( 'Multiple ports were found for server {server}' ' but none of the networks are a valid NAT' ' destination, so it is impossible to add a' ' floating IP. If you have a network that is a valid' ' destination for NAT and we could not find it,' ' please file a bug. But also configure the' ' nat_destination property of the networks list in' ' your clouds.yaml file. If you do not have a' ' clouds.yaml file, please make one - your setup' ' is complicated.'.format(server=server['id'])) maybe_ports = [] for maybe_port in ports: if maybe_port['network_id'] == nat_network['id']: maybe_ports.append(maybe_port) if not maybe_ports: raise OpenStackCloudException( 'No port on server {server} was found matching' ' your NAT destination network {dest}. Please ' ' check your config'.format( server=server['id'], dest=nat_network['name'])) ports = maybe_ports # Select the most recent available IPv4 address # To do this, sort the ports in reverse order by the created_at # field which is a string containing an ISO DateTime (which # thankfully sort properly) This way the most recent port created, # if there are more than one, will be the arbitrary port we # select. for port in sorted( ports, key=lambda p: p.get('created_at', 0), reverse=True): for address in port.get('fixed_ips', list()): try: ip = ipaddress.ip_address(address['ip_address']) except Exception: continue if ip.version == 4: fixed_address = address['ip_address'] return port, fixed_address raise OpenStackCloudException( "unable to find a free fixed IPv4 address for server " "{0}".format(server['id'])) # unfortunately a port can have more than one fixed IP: # we can't use the search_ports filtering for fixed_address as # they are contained in a list. e.g. 
# # "fixed_ips": [ # { # "subnet_id": "008ba151-0b8c-4a67-98b5-0d2b87666062", # "ip_address": "172.24.4.2" # } # ] # # Search fixed_address for p in ports: for fixed_ip in p['fixed_ips']: if fixed_address == fixed_ip['ip_address']: return (p, fixed_address) return (None, None) def _neutron_attach_ip_to_server( self, server, floating_ip, fixed_address=None, nat_destination=None): # Find an available port (port, fixed_address) = self._nat_destination_port( server, fixed_address=fixed_address, nat_destination=nat_destination) if not port: raise OpenStackCloudException( "unable to find a port for server {0}".format( server['id'])) floating_ip_args = {'port_id': port['id']} if fixed_address is not None: floating_ip_args['fixed_ip_address'] = fixed_address return self._network_client.put( "/floatingips/{fip_id}.json".format(fip_id=floating_ip['id']), json={'floatingip': floating_ip_args}, error_message=("Error attaching IP {ip} to " "server {server_id}".format( ip=floating_ip['id'], server_id=server['id']))) def _nova_attach_ip_to_server(self, server_id, floating_ip_id, fixed_address=None): f_ip = self.get_floating_ip( id=floating_ip_id) if f_ip is None: raise OpenStackCloudException( "unable to find floating IP {0}".format(floating_ip_id)) error_message = "Error attaching IP {ip} to instance {id}".format( ip=floating_ip_id, id=server_id) body = { 'address': f_ip['floating_ip_address'] } if fixed_address: body['fixed_address'] = fixed_address return _adapter._json_response( self._conn.compute.post( '/servers/{server_id}/action'.format(server_id=server_id), json=dict(addFloatingIp=body)), error_message=error_message) def detach_ip_from_server(self, server_id, floating_ip_id): """Detach a floating IP from a server. :param server_id: ID of a server. :param floating_ip_id: Id of the floating IP to detach. :returns: True if the IP has been detached, or False if the IP wasn't attached to any server. :raises: ``OpenStackCloudException``, on operation error. 
""" if self._use_neutron_floating(): try: return self._neutron_detach_ip_from_server( server_id=server_id, floating_ip_id=floating_ip_id) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova # Nova network self._nova_detach_ip_from_server( server_id=server_id, floating_ip_id=floating_ip_id) def _neutron_detach_ip_from_server(self, server_id, floating_ip_id): f_ip = self.get_floating_ip(id=floating_ip_id) if f_ip is None or not f_ip['attached']: return False self._network_client.put( "/floatingips/{fip_id}.json".format(fip_id=floating_ip_id), json={"floatingip": {"port_id": None}}, error_message=("Error detaching IP {ip} from " "server {server_id}".format( ip=floating_ip_id, server_id=server_id))) return True def _nova_detach_ip_from_server(self, server_id, floating_ip_id): f_ip = self.get_floating_ip(id=floating_ip_id) if f_ip is None: raise OpenStackCloudException( "unable to find floating IP {0}".format(floating_ip_id)) error_message = "Error detaching IP {ip} from instance {id}".format( ip=floating_ip_id, id=server_id) return _adapter._json_response( self._conn.compute.post( '/servers/{server_id}/action'.format(server_id=server_id), json=dict(removeFloatingIp=dict( address=f_ip['floating_ip_address']))), error_message=error_message) return True def _add_ip_from_pool( self, server, network, fixed_address=None, reuse=True, wait=False, timeout=60, nat_destination=None): """Add a floating IP to a server from a given pool This method reuses available IPs, when possible, or allocate new IPs to the current tenant. The floating IP is attached to the given fixed address or to the first server port/fixed address :param server: Server dict :param network: Name or ID of the network. :param fixed_address: a fixed address :param reuse: Try to reuse existing ips. Defaults to True. 
:param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param nat_destination: (optional) the name of the network of the port to associate with the floating ip. :returns: the updated server ``munch.Munch`` """ if reuse: f_ip = self.available_floating_ip(network=network) else: start_time = time.time() f_ip = self.create_floating_ip( server=server, network=network, nat_destination=nat_destination, wait=wait, timeout=timeout) timeout = timeout - (time.time() - start_time) # Wait for cache invalidation time so that we don't try # to attach the FIP a second time below time.sleep(self._SERVER_AGE) server = self.get_server(server.id) # We run attach as a second call rather than in the create call # because there are code flows where we will not have an attached # FIP yet. However, even if it was attached in the create, we run # the attach function below to get back the server dict refreshed # with the FIP information. return self._attach_ip_to_server( server=server, floating_ip=f_ip, fixed_address=fixed_address, wait=wait, timeout=timeout, nat_destination=nat_destination) def add_ip_list( self, server, ips, wait=False, timeout=60, fixed_address=None): """Attach a list of IPs to a server. :param server: a server object :param ips: list of floating IP addresses or a single address :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param fixed_address: (optional) Fixed address of the server to attach the IP to :returns: The updated server ``munch.Munch`` :raises: ``OpenStackCloudException``, on operation error. 
""" if type(ips) == list: ip = ips[0] else: ip = ips f_ip = self.get_floating_ip( id=None, filters={'floating_ip_address': ip}) return self._attach_ip_to_server( server=server, floating_ip=f_ip, wait=wait, timeout=timeout, fixed_address=fixed_address) def add_auto_ip(self, server, wait=False, timeout=60, reuse=True): """Add a floating IP to a server. This method is intended for basic usage. For advanced network architecture (e.g. multiple external networks or servers with multiple interfaces), use other floating IP methods. This method can reuse available IPs, or allocate new IPs to the current project. :param server: a server dictionary. :param reuse: Whether or not to attempt to reuse IPs, defaults to True. :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param reuse: Try to reuse existing ips. Defaults to True. :returns: Floating IP address attached to server. """ server = self._add_auto_ip( server, wait=wait, timeout=timeout, reuse=reuse) return server['interface_ip'] or None def _add_auto_ip(self, server, wait=False, timeout=60, reuse=True): skip_attach = False created = False if reuse: f_ip = self.available_floating_ip() else: start_time = time.time() f_ip = self.create_floating_ip( server=server, wait=wait, timeout=timeout) timeout = timeout - (time.time() - start_time) if server: # This gets passed in for both nova and neutron # but is only meaningful for the neutron logic branch skip_attach = True created = True try: # We run attach as a second call rather than in the create call # because there are code flows where we will not have an attached # FIP yet. However, even if it was attached in the create, we run # the attach function below to get back the server dict refreshed # with the FIP information. 
return self._attach_ip_to_server( server=server, floating_ip=f_ip, wait=wait, timeout=timeout, skip_attach=skip_attach) except OpenStackCloudTimeout: if self._use_neutron_floating() and created: # We are here because we created an IP on the port # It failed. Delete so as not to leak an unmanaged # resource self.log.error( "Timeout waiting for floating IP to become" " active. Floating IP %(ip)s:%(id)s was created for" " server %(server)s but is being deleted due to" " activation failure.", { 'ip': f_ip['floating_ip_address'], 'id': f_ip['id'], 'server': server['id']}) try: self.delete_floating_ip(f_ip['id']) except Exception as e: self.log.error( "FIP LEAK: Attempted to delete floating ip " "%(fip)s but received %(exc)s exception: %(err)s", {'fip': f_ip['id'], 'exc': e.__class__, 'err': str(e)}) raise e raise def add_ips_to_server( self, server, auto_ip=True, ips=None, ip_pool=None, wait=False, timeout=60, reuse=True, fixed_address=None, nat_destination=None): if ip_pool: server = self._add_ip_from_pool( server, ip_pool, reuse=reuse, wait=wait, timeout=timeout, fixed_address=fixed_address, nat_destination=nat_destination) elif ips: server = self.add_ip_list( server, ips, wait=wait, timeout=timeout, fixed_address=fixed_address) elif auto_ip: if self._needs_floating_ip(server, nat_destination): server = self._add_auto_ip( server, wait=wait, timeout=timeout, reuse=reuse) return server def _needs_floating_ip(self, server, nat_destination): """Figure out if auto_ip should add a floating ip to this server. If the server has a public_v4 it does not need a floating ip. If the server does not have a private_v4 it does not need a floating ip. If self.private then the server does not need a floating ip. If the cloud runs nova, and the server has a private_v4 and not a public_v4, then the server needs a floating ip. 
If the server has a private_v4 and no public_v4 and the cloud has a network from which floating IPs come that is connected via a router to the network from which the private_v4 address came, then the server needs a floating ip. If the server has a private_v4 and no public_v4 and the cloud does not have a network from which floating ips come, or it has one but that network is not connected to the network from which the server's private_v4 address came via a router, then the server does not need a floating ip. """ if not self._has_floating_ips(): return False if server['public_v4']: return False if not server['private_v4']: return False if self.private: return False if not self.has_service('network'): return True # No floating ip network - no FIPs try: self._get_floating_network_id() except OpenStackCloudException: return False (port_obj, fixed_ip_address) = self._nat_destination_port( server, nat_destination=nat_destination) if not port_obj or not fixed_ip_address: return False return True def _get_boot_from_volume_kwargs( self, image, boot_from_volume, boot_volume, volume_size, terminate_volume, volumes, kwargs): """Return block device mappings :param image: Image dict, name or id to boot with. """ # TODO(mordred) We're only testing this in functional tests. We need # to add unit tests for this too. 
if boot_volume or boot_from_volume or volumes: kwargs.setdefault('block_device_mapping_v2', []) else: return kwargs # If we have boot_from_volume but no root volume, then we're # booting an image from volume if boot_volume: volume = self.get_volume(boot_volume) if not volume: raise OpenStackCloudException( 'Volume {boot_volume} is not a valid volume' ' in {cloud}:{region}'.format( boot_volume=boot_volume, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '0', 'delete_on_termination': terminate_volume, 'destination_type': 'volume', 'uuid': volume['id'], 'source_type': 'volume', } kwargs['block_device_mapping_v2'].append(block_mapping) kwargs['imageRef'] = '' elif boot_from_volume: if isinstance(image, dict): image_obj = image else: image_obj = self.get_image(image) if not image_obj: raise OpenStackCloudException( 'Image {image} is not a valid image in' ' {cloud}:{region}'.format( image=image, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '0', 'delete_on_termination': terminate_volume, 'destination_type': 'volume', 'uuid': image_obj['id'], 'source_type': 'image', 'volume_size': volume_size, } kwargs['imageRef'] = '' kwargs['block_device_mapping_v2'].append(block_mapping) if volumes and kwargs['imageRef']: # If we're attaching volumes on boot but booting from an image, # we need to specify that in the BDM. 
            block_mapping = {
                u'boot_index': 0,
                u'delete_on_termination': True,
                u'destination_type': u'local',
                u'source_type': u'image',
                u'uuid': kwargs['imageRef'],
            }
            kwargs['block_device_mapping_v2'].append(block_mapping)
        for volume in volumes:
            volume_obj = self.get_volume(volume)
            if not volume_obj:
                raise OpenStackCloudException(
                    'Volume {volume} is not a valid volume'
                    ' in {cloud}:{region}'.format(
                        volume=volume, cloud=self.name,
                        region=self.region_name))
            block_mapping = {
                'boot_index': '-1',
                'delete_on_termination': False,
                'destination_type': 'volume',
                'uuid': volume_obj['id'],
                'source_type': 'volume',
            }
            kwargs['block_device_mapping_v2'].append(block_mapping)
        if boot_volume or boot_from_volume or volumes:
            self.list_volumes.invalidate(self)
        return kwargs

    def _encode_server_userdata(self, userdata):
        if hasattr(userdata, 'read'):
            userdata = userdata.read()

        # If the userdata passed in is already bytes, just send it unmodified
        if not isinstance(userdata, six.binary_type):
            if not isinstance(userdata, six.string_types):
                raise TypeError("%s can't be encoded" % type(userdata))
            # If it's not bytes, make it bytes
            userdata = userdata.encode('utf-8', 'strict')

        # Once we have base64 bytes, make them into a utf-8 string for REST
        return base64.b64encode(userdata).decode('utf-8')

    @_utils.valid_kwargs(
        'meta', 'files', 'userdata',
        'reservation_id', 'return_raw', 'min_count',
        'max_count', 'security_groups', 'key_name',
        'availability_zone', 'block_device_mapping',
        'block_device_mapping_v2', 'nics', 'scheduler_hints',
        'config_drive', 'admin_pass', 'disk_config')
    def create_server(
            self, name, image=None, flavor=None,
            auto_ip=True, ips=None, ip_pool=None,
            root_volume=None, terminate_volume=False,
            wait=False, timeout=180, reuse_ips=True,
            network=None, boot_from_volume=False, volume_size='50',
            boot_volume=None, volumes=None, nat_destination=None,
            group=None,
            **kwargs):
        """Create a virtual server instance.

        :param name: Something to name the server.
        :param image: Image dict, name or ID to boot with.
image is required unless boot_volume is given. :param flavor: Flavor dict, name or ID to boot onto. :param auto_ip: Whether to take actions to find a routable IP for the server. (defaults to True) :param ips: List of IPs to attach to the server (defaults to None) :param ip_pool: Name of the network or floating IP pool to get an address from. (defaults to None) :param root_volume: Name or ID of a volume to boot from (defaults to None - deprecated, use boot_volume) :param boot_volume: Name or ID of a volume to boot from (defaults to None) :param terminate_volume: If booting from a volume, whether it should be deleted when the server is destroyed. (defaults to False) :param volumes: (optional) A list of volumes to attach to the server :param meta: (optional) A dict of arbitrary key/value metadata to store for this server. Both keys and values must be <=255 characters. :param files: (optional, deprecated) A dict of files to overwrite on the server upon boot. Keys are file names (i.e. ``/etc/passwd``) and values are the file contents (either as a string or as a file-like object). A maximum of five entries is allowed, and each file must be 10k or less. :param reservation_id: a UUID for the set of servers being requested. :param min_count: (optional extension) The minimum number of servers to launch. :param max_count: (optional extension) The maximum number of servers to launch. :param security_groups: A list of security group names :param userdata: user data to pass to be exposed by the metadata server this can be a file type object as well or a string. :param key_name: (optional extension) name of previously created keypair to inject into the instance. :param availability_zone: Name of the availability zone for instance placement. :param block_device_mapping: (optional) A dict of block device mappings for this server. :param block_device_mapping_v2: (optional) A dict of block device mappings for this server. 
:param nics: (optional extension) an ordered list of nics to be added to this server, with information about connected networks, fixed IPs, port etc. :param scheduler_hints: (optional extension) arbitrary key-value pairs specified by the client to help boot an instance :param config_drive: (optional extension) value for config drive either boolean, or volume-id :param disk_config: (optional extension) control how the disk is partitioned when the server is created. possible values are 'AUTO' or 'MANUAL'. :param admin_pass: (optional extension) add a user supplied admin password. :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param reuse_ips: (optional) Whether to attempt to reuse pre-existing floating ips should a floating IP be needed (defaults to True) :param network: (optional) Network dict or name or ID to attach the server to. Mutually exclusive with the nics parameter. Can also be be a list of network names or IDs or network dicts. :param boot_from_volume: Whether to boot from volume. 'boot_volume' implies True, but boot_from_volume=True with no boot_volume is valid and will create a volume from the image and use that. :param volume_size: When booting an image from volume, how big should the created volume be? Defaults to 50. :param nat_destination: Which network should a created floating IP be attached to, if it's not possible to infer from the cloud's configuration. (Optional, defaults to None) :param group: ServerGroup dict, name or id to boot the server in. If a group is provided in both scheduler_hints and in the group param, the group param will win. (Optional, defaults to None) :returns: A ``munch.Munch`` representing the created server. :raises: OpenStackCloudException on operation error. """ # TODO(shade) Image is optional but flavor is not - yet flavor comes # after image in the argument list. Doh. 
        if not flavor:
            raise TypeError(
                "create_server() missing 1 required argument: 'flavor'")
        if not image and not boot_volume:
            raise TypeError(
                "create_server() requires either 'image' or 'boot_volume'")

        # TODO(mordred) Add support for description starting in 2.19
        security_groups = kwargs.get('security_groups', [])
        if security_groups and not isinstance(kwargs['security_groups'], list):
            security_groups = [security_groups]
        if security_groups:
            kwargs['security_groups'] = []
            for sec_group in security_groups:
                kwargs['security_groups'].append(dict(name=sec_group))
        if 'userdata' in kwargs:
            user_data = kwargs.pop('userdata')
            if user_data:
                kwargs['user_data'] = self._encode_server_userdata(user_data)
        for (desired, given) in (
                ('OS-DCF:diskConfig', 'disk_config'),
                ('config_drive', 'config_drive'),
                ('key_name', 'key_name'),
                ('metadata', 'meta'),
                ('adminPass', 'admin_pass')):
            value = kwargs.pop(given, None)
            if value:
                kwargs[desired] = value

        hints = kwargs.pop('scheduler_hints', {})
        if group:
            group_obj = self.get_server_group(group)
            if not group_obj:
                raise OpenStackCloudException(
                    "Server Group {group} was requested but was not found"
                    " on the cloud".format(group=group))
            hints['group'] = group_obj['id']
        if hints:
            kwargs['os:scheduler_hints'] = hints
        kwargs.setdefault('max_count', kwargs.get('max_count', 1))
        kwargs.setdefault('min_count', kwargs.get('min_count', 1))

        if 'nics' in kwargs and not isinstance(kwargs['nics'], list):
            if isinstance(kwargs['nics'], dict):
                # Be nice and help the user out
                kwargs['nics'] = [kwargs['nics']]
            else:
                raise OpenStackCloudException(
                    'nics parameter to create_server takes a list of dicts.'
                    ' Got: {nics}'.format(nics=kwargs['nics']))

        if network and ('nics' not in kwargs or not kwargs['nics']):
            nics = []
            if not isinstance(network, list):
                network = [network]
            for net_name in network:
                if isinstance(net_name, dict) and 'id' in net_name:
                    network_obj = net_name
                else:
                    network_obj = self.get_network(name_or_id=net_name)
                if not network_obj:
                    raise OpenStackCloudException(
                        'Network {network} is not a valid network in'
                        ' {cloud}:{region}'.format(
                            network=network,
                            cloud=self.name, region=self.region_name))
                nics.append({'net-id': network_obj['id']})

            kwargs['nics'] = nics
        if not network and ('nics' not in kwargs or not kwargs['nics']):
            default_network = self.get_default_network()
            if default_network:
                kwargs['nics'] = [{'net-id': default_network['id']}]

        networks = []
        for nic in kwargs.pop('nics', []):
            net = {}
            if 'net-id' in nic:
                # TODO(mordred) Make sure this is in uuid format
                net['uuid'] = nic.pop('net-id')
                # If there's a net-id, ignore net-name
                nic.pop('net-name', None)
            elif 'net-name' in nic:
                nic_net = self.get_network(nic['net-name'])
                if not nic_net:
                    raise OpenStackCloudException(
                        "Requested network {net} could not be found.".format(
                            net=nic['net-name']))
                net['uuid'] = nic_net['id']
            # TODO(mordred) Add support for tag if server supports microversion
            # 2.32-2.36 or >= 2.42
            for key in ('port', 'fixed_ip'):
                if key in nic:
                    net[key] = nic.pop(key)
            if 'port-id' in nic:
                net['port'] = nic.pop('port-id')
            if nic:
                raise OpenStackCloudException(
                    "Additional unsupported keys given for server network"
                    " creation: {keys}".format(keys=nic.keys()))
            networks.append(net)
        if networks:
            kwargs['networks'] = networks

        if image:
            if isinstance(image, dict):
                kwargs['imageRef'] = image['id']
            else:
                kwargs['imageRef'] = self.get_image(image).id
        if isinstance(flavor, dict):
            kwargs['flavorRef'] = flavor['id']
        else:
            kwargs['flavorRef'] = self.get_flavor(flavor, get_extra=False).id

        if volumes is None:
            volumes = []

        # nova cli calls this boot_volume. Let's be the same
        if root_volume and not boot_volume:
            boot_volume = root_volume

        kwargs = self._get_boot_from_volume_kwargs(
            image=image, boot_from_volume=boot_from_volume,
            boot_volume=boot_volume, volume_size=str(volume_size),
            terminate_volume=terminate_volume,
            volumes=volumes, kwargs=kwargs)

        kwargs['name'] = name
        endpoint = '/servers'
        # TODO(mordred) We're only testing this in functional tests. We need
        # to add unit tests for this too.
        if 'block_device_mapping_v2' in kwargs:
            endpoint = '/os-volumes_boot'
        with _utils.shade_exceptions("Error in creating instance"):
            data = _adapter._json_response(
                self._conn.compute.post(endpoint, json={'server': kwargs}))
            server = self._get_and_munchify('server', data)
            admin_pass = server.get('adminPass') or kwargs.get('admin_pass')
            if not wait:
                # This is a direct get call to skip the list_servers
                # cache which has absolutely no chance of containing the
                # new server.
                # Only do this if we're not going to wait for the server
                # to complete booting, because the only reason we do it
                # is to get a server record that is the return value from
                # get/list rather than the return value of create. If we're
                # going to do the wait loop below, this is a waste of a call
                server = self.get_server_by_id(server.id)
                if server.status == 'ERROR':
                    raise OpenStackCloudCreateException(
                        resource='server', resource_id=server.id)

        if wait:
            server = self.wait_for_server(
                server, auto_ip=auto_ip, ips=ips, ip_pool=ip_pool,
                reuse=reuse_ips, timeout=timeout,
                nat_destination=nat_destination,
            )

        server.adminPass = admin_pass
        return server

    def wait_for_server(
            self, server, auto_ip=True, ips=None, ip_pool=None,
            reuse=True, timeout=180, nat_destination=None):
        """Wait for a server to reach ACTIVE status."""
        server_id = server['id']
        timeout_message = "Timeout waiting for the server to come up."
        start_time = time.time()

        # There is no point in iterating faster than the list_servers cache
        for count in utils.iterate_timeout(
                timeout,
                timeout_message,
                # if _SERVER_AGE is 0 we still want to wait a bit
                # to be friendly with the server.
                wait=self._SERVER_AGE or 2):
            try:
                # Use the get_server call so that the list_servers
                # cache can be leveraged
                server = self.get_server(server_id)
            except Exception:
                continue
            if not server:
                continue

            # We have more work to do, but the details of that are
            # hidden from the user. So, calculate remaining timeout
            # and pass it down into the IP stack.
            remaining_timeout = timeout - int(time.time() - start_time)
            if remaining_timeout <= 0:
                raise OpenStackCloudTimeout(timeout_message)

            server = self.get_active_server(
                server=server, reuse=reuse,
                auto_ip=auto_ip, ips=ips, ip_pool=ip_pool,
                wait=True, timeout=remaining_timeout,
                nat_destination=nat_destination)

            if server is not None and server['status'] == 'ACTIVE':
                return server

    def get_active_server(
            self, server, auto_ip=True, ips=None, ip_pool=None,
            reuse=True, wait=False, timeout=180, nat_destination=None):

        if server['status'] == 'ERROR':
            if 'fault' in server and 'message' in server['fault']:
                raise OpenStackCloudException(
                    "Error in creating the server: {reason}".format(
                        reason=server['fault']['message']),
                    extra_data=dict(server=server))

            raise OpenStackCloudException(
                "Error in creating the server",
                extra_data=dict(server=server))

        if server['status'] == 'ACTIVE':
            if 'addresses' in server and server['addresses']:
                return self.add_ips_to_server(
                    server, auto_ip, ips, ip_pool, reuse=reuse,
                    nat_destination=nat_destination,
                    wait=wait, timeout=timeout)
            self.log.debug(
                'Server %(server)s reached ACTIVE state without'
                ' being allocated an IP address.'
                ' Deleting server.', {'server': server['id']})
            try:
                self._delete_server(
                    server=server, wait=wait, timeout=timeout)
            except Exception as e:
                raise OpenStackCloudException(
                    'Server reached ACTIVE state without being'
                    ' allocated an IP address AND then could not'
                    ' be deleted: {0}'.format(e),
                    extra_data=dict(server=server))
            raise OpenStackCloudException(
                'Server reached ACTIVE state without being'
                ' allocated an IP address.',
                extra_data=dict(server=server))
        return None

    def rebuild_server(self, server_id, image_id, admin_pass=None,
                       detailed=False, bare=False,
                       wait=False, timeout=180):
        kwargs = {}
        if image_id:
            kwargs['imageRef'] = image_id
        if admin_pass:
            kwargs['adminPass'] = admin_pass

        data = _adapter._json_response(
            self._conn.compute.post(
                '/servers/{server_id}/action'.format(server_id=server_id),
                json={'rebuild': kwargs}),
            error_message="Error in rebuilding instance")
        server = self._get_and_munchify('server', data)
        if not wait:
            return self._expand_server(
                self._normalize_server(server), bare=bare, detailed=detailed)

        admin_pass = server.get('adminPass') or admin_pass
        for count in utils.iterate_timeout(
                timeout,
                "Timeout waiting for server {0} to "
                "rebuild.".format(server_id),
                wait=self._SERVER_AGE):
            try:
                server = self.get_server(server_id, bare=True)
            except Exception:
                continue
            if not server:
                continue

            if server['status'] == 'ERROR':
                raise OpenStackCloudException(
                    "Error in rebuilding the server",
                    extra_data=dict(server=server))

            if server['status'] == 'ACTIVE':
                server.adminPass = admin_pass
                break

        return self._expand_server(server, detailed=detailed, bare=bare)

    def set_server_metadata(self, name_or_id, metadata):
        """Set metadata in a server instance.

        :param str name_or_id: The name or ID of the server instance
            to update.
        :param dict metadata: A dictionary with the key=value pairs
            to set in the server instance. It only updates the key=value
            pairs provided. Existing ones will remain untouched.

        :raises: OpenStackCloudException on operation error.
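        The update is additive: the request merges the supplied keys into
        the server's existing metadata, like ``dict.update``. A minimal
        sketch of those semantics (plain dicts standing in for the server's
        metadata; no API call is made):

        ```python
        # Illustrative only: plain dicts standing in for server metadata.
        # Keys not listed in the update are left untouched.
        existing = {'role': 'web', 'env': 'prod'}
        update = {'env': 'staging', 'owner': 'alice'}
        merged = dict(existing)
        merged.update(update)
        # merged == {'role': 'web', 'env': 'staging', 'owner': 'alice'}
        ```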
        """
        server = self.get_server(name_or_id, bare=True)
        if not server:
            raise OpenStackCloudException(
                'Invalid Server {server}'.format(server=name_or_id))

        _adapter._json_response(
            self._conn.compute.post(
                '/servers/{server_id}/metadata'.format(
                    server_id=server['id']),
                json={'metadata': metadata}),
            error_message='Error updating server metadata')

    def delete_server_metadata(self, name_or_id, metadata_keys):
        """Delete metadata from a server instance.

        :param str name_or_id: The name or ID of the server instance
            to update.
        :param metadata_keys: A list with the keys to be deleted
            from the server instance.

        :raises: OpenStackCloudException on operation error.
        """
        server = self.get_server(name_or_id, bare=True)
        if not server:
            raise OpenStackCloudException(
                'Invalid Server {server}'.format(server=name_or_id))

        for key in metadata_keys:
            error_message = 'Error deleting metadata {key} on {server}'.format(
                key=key, server=name_or_id)
            _adapter._json_response(
                self._conn.compute.delete(
                    '/servers/{server_id}/metadata/{key}'.format(
                        server_id=server['id'],
                        key=key)),
                error_message=error_message)

    def delete_server(
            self, name_or_id, wait=False, timeout=180, delete_ips=False,
            delete_ip_retry=1):
        """Delete a server instance.

        :param name_or_id: name or ID of the server to delete
        :param bool wait: If true, waits for server to be deleted.
        :param int timeout: Seconds to wait for server deletion.
        :param bool delete_ips: If true, deletes any floating IPs
            associated with the instance.
        :param int delete_ip_retry: Number of times to retry deleting
            any floating ips, should the first try be unsuccessful.

        :returns: True if delete succeeded, False otherwise if the
            server does not exist.

        :raises: OpenStackCloudException on operation error.
        """
        # If delete_ips is True, we need the server to not be bare.
        server = self.get_server(name_or_id, bare=True)
        if not server:
            return False

        # This portion of the code is intentionally left as a separate
        # private method in order to avoid an unnecessary API call to get
        # a server we already have.
        return self._delete_server(
            server, wait=wait, timeout=timeout, delete_ips=delete_ips,
            delete_ip_retry=delete_ip_retry)

    def _delete_server_floating_ips(self, server, delete_ip_retry):
        # Does the server have floating ips in its
        # addresses dict? If not, skip this.
        server_floats = meta.find_nova_interfaces(
            server['addresses'], ext_tag='floating')
        for fip in server_floats:
            try:
                ip = self.get_floating_ip(id=None, filters={
                    'floating_ip_address': fip['addr']})
            except OpenStackCloudURINotFound:
                # We're deleting. If it doesn't exist - awesome
                # NOTE(mordred) If the cloud is a nova FIP cloud but
                #               floating_ip_source is set to neutron, this
                #               can lead to a FIP leak.
                continue
            if not ip:
                continue
            deleted = self.delete_floating_ip(
                ip['id'], retry=delete_ip_retry)
            if not deleted:
                raise OpenStackCloudException(
                    "Tried to delete floating ip {floating_ip}"
                    " associated with server {id} but there was"
                    " an error deleting it. Not deleting server.".format(
                        floating_ip=ip['floating_ip_address'],
                        id=server['id']))

    def _delete_server(
            self, server, wait=False, timeout=180, delete_ips=False,
            delete_ip_retry=1):
        if not server:
            return False

        if delete_ips and self._has_floating_ips():
            self._delete_server_floating_ips(server, delete_ip_retry)

        try:
            _adapter._json_response(
                self._conn.compute.delete(
                    '/servers/{id}'.format(id=server['id'])),
                error_message="Error in deleting server")
        except OpenStackCloudURINotFound:
            return False
        except Exception:
            raise

        if not wait:
            return True

        # If the server has volume attachments, or if it has booted
        # from volume, deleting it will change volume state so we will
        # need to invalidate the cache. Avoid the extra API call if
        # caching is not enabled.
        reset_volume_cache = False
        if (self.cache_enabled
                and self.has_service('volume')
                and self.get_volumes(server)):
            reset_volume_cache = True

        for count in utils.iterate_timeout(
                timeout,
                "Timed out waiting for server to get deleted.",
                # if _SERVER_AGE is 0 we still want to wait a bit
                # to be friendly with the server.
                wait=self._SERVER_AGE or 2):
            with _utils.shade_exceptions("Error in deleting server"):
                server = self.get_server(server['id'], bare=True)
                if not server:
                    break

        if reset_volume_cache:
            self.list_volumes.invalidate(self)

        # Reset the list servers cache time so that the next list server
        # call gets a new list
        self._servers_time = self._servers_time - self._SERVER_AGE
        return True

    @_utils.valid_kwargs(
        'name', 'description')
    def update_server(self, name_or_id, detailed=False, bare=False, **kwargs):
        """Update a server.

        :param name_or_id: Name of the server to be updated.
        :param detailed: Whether or not to add detailed additional
                         information. Defaults to False.
        :param bare: Whether to skip adding any additional information to the
                     server record. Defaults to False, meaning the addresses
                     dict will be populated as needed from neutron. Setting
                     to True implies detailed = False.
        :name: New name for the server
        :description: New description for the server

        :returns: a dictionary representing the updated server.

        :raises: OpenStackCloudException on operation error.
        """
        server = self.get_server(name_or_id=name_or_id, bare=True)
        if server is None:
            raise OpenStackCloudException(
                "failed to find server '{server}'".format(server=name_or_id))

        data = _adapter._json_response(
            self._conn.compute.put(
                '/servers/{server_id}'.format(server_id=server['id']),
                json={'server': kwargs}),
            error_message="Error updating server {0}".format(name_or_id))
        server = self._normalize_server(
            self._get_and_munchify('server', data))
        return self._expand_server(server, bare=bare, detailed=detailed)

    def create_server_group(self, name, policies):
        """Create a new server group.

        :param name: Name of the server group being created
        :param policies: List of policies for the server group.

        :returns: a dict representing the new server group.

        :raises: OpenStackCloudException on operation error.
        """
        data = _adapter._json_response(
            self._conn.compute.post(
                '/os-server-groups',
                json={
                    'server_group': {
                        'name': name,
                        'policies': policies}}),
            error_message="Unable to create server group {name}".format(
                name=name))
        return self._get_and_munchify('server_group', data)

    def delete_server_group(self, name_or_id):
        """Delete a server group.

        :param name_or_id: Name or ID of the server group to delete

        :returns: True if delete succeeded, False otherwise

        :raises: OpenStackCloudException on operation error.
        """
        server_group = self.get_server_group(name_or_id)
        if not server_group:
            self.log.debug("Server group %s not found for deleting",
                           name_or_id)
            return False

        _adapter._json_response(
            self._conn.compute.delete(
                '/os-server-groups/{id}'.format(id=server_group['id'])),
            error_message="Error deleting server group {name}".format(
                name=name_or_id))

        return True

    def list_containers(self, full_listing=True):
        """List containers.

        :param full_listing: Ignored. Present for backwards compat

        :returns: list of Munch of the container objects

        :raises: OpenStackCloudException on operation error.
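        The listing is a plain JSON request against the account root. A
        sketch of the payload shape a caller can expect, assuming the usual
        Swift account-listing format (container name and contents invented
        for illustration):

        ```python
        import json

        # Hypothetical sample of the JSON body Swift returns for
        # GET /?format=json: one dict per container with name/count/bytes.
        sample = '[{"name": "backups", "count": 2, "bytes": 2048}]'
        containers = json.loads(sample)
        names = [c['name'] for c in containers]
        # names == ['backups']
        ```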
        """
        return self._object_store_client.get('/', params=dict(format='json'))

    def get_container(self, name, skip_cache=False):
        if skip_cache or name not in self._container_cache:
            try:
                container = self._object_store_client.head(name)
                self._container_cache[name] = container.headers
            except OpenStackCloudHTTPError as e:
                if e.response.status_code == 404:
                    return None
                raise
        return self._container_cache[name]

    def create_container(self, name, public=False):
        container = self.get_container(name)
        if container:
            return container
        self._object_store_client.put(name)
        if public:
            self.set_container_access(name, 'public')
        return self.get_container(name, skip_cache=True)

    def delete_container(self, name):
        try:
            self._object_store_client.delete(name)
            return True
        except OpenStackCloudHTTPError as e:
            if e.response.status_code == 404:
                return False
            if e.response.status_code == 409:
                raise OpenStackCloudException(
                    'Attempt to delete container {container} failed. The'
                    ' container is not empty. Please delete the objects'
                    ' inside it before deleting the container'.format(
                        container=name))
            raise

    def update_container(self, name, headers):
        self._object_store_client.post(name, headers=headers)

    def set_container_access(self, name, access):
        if access not in OBJECT_CONTAINER_ACLS:
            raise OpenStackCloudException(
                "Invalid container access specified: %s. Must be one of %s"
                % (access, list(OBJECT_CONTAINER_ACLS.keys())))
        header = {'x-container-read': OBJECT_CONTAINER_ACLS[access]}
        self.update_container(name, header)

    def get_container_access(self, name):
        container = self.get_container(name, skip_cache=True)
        if not container:
            raise OpenStackCloudException("Container not found: %s" % name)
        acl = container.get('x-container-read', '')
        for key, value in OBJECT_CONTAINER_ACLS.items():
            # Convert to string for the comparison because swiftclient
            # returns byte values as bytes sometimes and apparently ==
            # on bytes doesn't work like you'd think
            if str(acl) == str(value):
                return key
        raise OpenStackCloudException(
            "Could not determine container access for ACL: %s." % acl)

    def _get_file_hashes(self, filename):
        file_key = "{filename}:{mtime}".format(
            filename=filename,
            mtime=os.stat(filename).st_mtime)
        if file_key not in self._file_hash_cache:
            self.log.debug(
                'Calculating hashes for %(filename)s', {'filename': filename})
            md5 = hashlib.md5()
            sha256 = hashlib.sha256()
            with open(filename, 'rb') as file_obj:
                for chunk in iter(lambda: file_obj.read(8192), b''):
                    md5.update(chunk)
                    sha256.update(chunk)
            self._file_hash_cache[file_key] = dict(
                md5=md5.hexdigest(), sha256=sha256.hexdigest())
            self.log.debug(
                "Image file %(filename)s md5:%(md5)s sha256:%(sha256)s",
                {'filename': filename,
                 'md5': self._file_hash_cache[file_key]['md5'],
                 'sha256': self._file_hash_cache[file_key]['sha256']})
        return (self._file_hash_cache[file_key]['md5'],
                self._file_hash_cache[file_key]['sha256'])

    @_utils.cache_on_arguments()
    def get_object_capabilities(self):
        # The endpoint in the catalog has version and project-id in it
        # To get capabilities, we have to disassemble and reassemble the URL
        # This logic is taken from swiftclient
        endpoint = urllib.parse.urlparse(
            self._object_store_client.get_endpoint())
        url = "{scheme}://{netloc}/info".format(
            scheme=endpoint.scheme, netloc=endpoint.netloc)
        return self._object_store_client.get(url)

    def get_object_segment_size(self, segment_size):
        """Get a segment size that will work given capabilities"""
        if segment_size is None:
            segment_size = DEFAULT_OBJECT_SEGMENT_SIZE
        min_segment_size = 0
        try:
            caps = self.get_object_capabilities()
        except OpenStackCloudHTTPError as e:
            if e.response.status_code in (404, 412):
                # Clear the exception so that it doesn't linger
                # and get reported as an Inner Exception later
                _utils._exc_clear()
                server_max_file_size = DEFAULT_MAX_FILE_SIZE
                self.log.info(
                    "Swift capabilities not supported. "
                    "Using default max file size.")
            else:
                raise
        else:
            server_max_file_size = caps.get('swift', {}).get(
                'max_file_size', 0)
            min_segment_size = caps.get('slo', {}).get('min_segment_size', 0)

        if segment_size > server_max_file_size:
            return server_max_file_size
        if segment_size < min_segment_size:
            return min_segment_size
        return segment_size

    def is_object_stale(
            self, container, name, filename, file_md5=None, file_sha256=None):

        metadata = self.get_object_metadata(container, name)
        if not metadata:
            self.log.debug(
                "swift stale check, no object: {container}/{name}".format(
                    container=container, name=name))
            return True

        if not (file_md5 or file_sha256):
            (file_md5, file_sha256) = self._get_file_hashes(filename)
        md5_key = metadata.get(OBJECT_MD5_KEY, '')
        sha256_key = metadata.get(OBJECT_SHA256_KEY, '')
        up_to_date = self._hashes_up_to_date(
            md5=file_md5, sha256=file_sha256,
            md5_key=md5_key, sha256_key=sha256_key)

        if not up_to_date:
            self.log.debug(
                "swift checksum mismatch: "
                " %(filename)s!=%(container)s/%(name)s",
                {'filename': filename, 'container': container, 'name': name})
            return True

        self.log.debug(
            "swift object up to date: %(container)s/%(name)s",
            {'container': container, 'name': name})
        return False

    def create_object(
            self, container, name, filename=None,
            md5=None, sha256=None,
            segment_size=None, use_slo=True, metadata=None,
            **headers):
        """Create a file object

        :param container: The name of the container to store the file in.
            This container will be created if it does not exist already.
        :param name: Name for the object within the container.
        :param filename: The path to the local file whose contents will be
            uploaded.
        :param md5: A hexadecimal md5 of the file. (Optional), if it is known
            and can be passed here, it will save repeating the expensive md5
            process. It is assumed to be accurate.
        :param sha256: A hexadecimal sha256 of the file. (Optional) See md5.
        :param segment_size: Break the uploaded object into segments of this
            many bytes. (Optional) Shade will attempt to discover the maximum
            value for this from the server if it is not specified, or will use
            a reasonable default.
        :param headers: These will be passed through to the object creation
            API as HTTP Headers.
        :param use_slo: If the object is large enough to need to be a Large
            Object, use a static rather than dynamic object. Static Objects
            will delete segment objects when the manifest object is deleted.
            (optional, defaults to True)
        :param metadata: This dict will get changed into headers that set
            metadata of the object

        :raises: ``OpenStackCloudException`` on operation error.
        """
        if not metadata:
            metadata = {}

        if not filename:
            filename = name

        # segment_size gets used as a step value in a range call, so needs
        # to be an int
        if segment_size:
            segment_size = int(segment_size)
        segment_size = self.get_object_segment_size(segment_size)
        file_size = os.path.getsize(filename)

        if not (md5 or sha256):
            (md5, sha256) = self._get_file_hashes(filename)
        headers[OBJECT_MD5_KEY] = md5 or ''
        headers[OBJECT_SHA256_KEY] = sha256 or ''
        for (k, v) in metadata.items():
            headers['x-object-meta-' + k] = v

        # On some clouds this is not necessary. On others it is. I'm confused.
        self.create_container(container)

        if self.is_object_stale(container, name, filename, md5, sha256):
            endpoint = '{container}/{name}'.format(
                container=container, name=name)
            self.log.debug(
                "swift uploading %(filename)s to %(endpoint)s",
                {'filename': filename, 'endpoint': endpoint})

            if file_size <= segment_size:
                self._upload_object(endpoint, filename, headers)
            else:
                self._upload_large_object(
                    endpoint, filename, headers,
                    file_size, segment_size, use_slo)

    def _upload_object(self, endpoint, filename, headers):
        return self._object_store_client.put(
            endpoint, headers=headers, data=open(filename, 'rb'))

    def _get_file_segments(self, endpoint, filename, file_size, segment_size):
        # Use an ordered dict here so that testing can replicate things
        segments = collections.OrderedDict()
        for (index, offset) in enumerate(range(0, file_size, segment_size)):
            remaining = file_size - (index * segment_size)
            segment = _utils.FileSegment(
                filename, offset,
                segment_size if segment_size < remaining else remaining)
            name = '{endpoint}/{index:0>6}'.format(
                endpoint=endpoint, index=index)
            segments[name] = segment
        return segments

    def _object_name_from_url(self, url):
        '''Get container_name/object_name from the full URL called.

        Remove the Swift endpoint from the front of the URL, and remove
        the leading / that will be left behind.'''
        endpoint = self._object_store_client.get_endpoint()
        object_name = url.replace(endpoint, '')
        if object_name.startswith('/'):
            object_name = object_name[1:]
        return object_name

    def _add_etag_to_manifest(self, segment_results, manifest):
        for result in segment_results:
            if 'Etag' not in result.headers:
                continue
            name = self._object_name_from_url(result.url)
            for entry in manifest:
                if entry['path'] == '/{name}'.format(name=name):
                    entry['etag'] = result.headers['Etag']

    def _upload_large_object(
            self, endpoint, filename,
            headers, file_size, segment_size, use_slo):
        # If the object is big, we need to break it up into segments that
        # are no larger than segment_size, upload each of them individually
        # and then upload a manifest object. The segments can be uploaded in
        # parallel, so we'll use the async feature of the TaskManager.
        segment_futures = []
        segment_results = []
        retry_results = []
        retry_futures = []
        manifest = []

        # Get an OrderedDict with keys being the swift location for the
        # segment, the value a FileSegment file-like object that is a
        # slice of the data for the segment.
        segments = self._get_file_segments(
            endpoint, filename, file_size, segment_size)

        # Schedule the segments for upload
        for name, segment in segments.items():
            # Async call to put - schedules execution and returns a future
            segment_future = self._object_store_client.put(
                name, headers=headers, data=segment, run_async=True)
            segment_futures.append(segment_future)
            # TODO(mordred) Collect etags from results to add to this manifest
            # dict. Then sort the list of dicts by path.
            manifest.append(dict(
                path='/{name}'.format(name=name),
                size_bytes=segment.length))

        # Try once and collect failed results to retry
        segment_results, retry_results = task_manager.wait_for_futures(
            segment_futures, raise_on_error=False)
        self._add_etag_to_manifest(segment_results, manifest)

        for result in retry_results:
            # Grab the FileSegment for the failed upload so we can retry
            name = self._object_name_from_url(result.url)
            segment = segments[name]
            segment.seek(0)
            # Async call to put - schedules execution and returns a future
            segment_future = self._object_store_client.put(
                name, headers=headers, data=segment, run_async=True)
            # TODO(mordred) Collect etags from results to add to this manifest
            # dict. Then sort the list of dicts by path.
            retry_futures.append(segment_future)

        # If any segments fail the second time, just throw the error
        segment_results, retry_results = task_manager.wait_for_futures(
            retry_futures, raise_on_error=True)
        self._add_etag_to_manifest(segment_results, manifest)

        if use_slo:
            return self._finish_large_object_slo(endpoint, headers, manifest)
        else:
            return self._finish_large_object_dlo(endpoint, headers)

    def _finish_large_object_slo(self, endpoint, headers, manifest):
        # TODO(mordred) send an etag of the manifest, which is the md5sum
        # of the concatenation of the etags of the results
        headers = headers.copy()
        return self._object_store_client.put(
            endpoint,
            params={'multipart-manifest': 'put'},
            headers=headers, data=json.dumps(manifest))

    def _finish_large_object_dlo(self, endpoint, headers):
        headers = headers.copy()
        headers['X-Object-Manifest'] = endpoint
        return self._object_store_client.put(endpoint, headers=headers)

    def update_object(self, container, name, metadata=None, **headers):
        """Update the metadata of an object

        :param container: The name of the container the object is in
        :param name: Name for the object within the container.
        :param metadata: This dict will get changed into headers that set
            metadata of the object
        :param headers: These will be passed through to the object update
            API as HTTP Headers.

        :raises: ``OpenStackCloudException`` on operation error.
        """
        if not metadata:
            metadata = {}

        metadata_headers = {}

        for (k, v) in metadata.items():
            metadata_headers['x-object-meta-' + k] = v

        headers = dict(headers, **metadata_headers)

        return self._object_store_client.post(
            '{container}/{object}'.format(
                container=container, object=name),
            headers=headers)

    def list_objects(self, container, full_listing=True):
        """List objects.

        :param container: Name of the container to list objects in.
        :param full_listing: Ignored. Present for backwards compat

        :returns: list of Munch of the objects

        :raises: OpenStackCloudException on operation error.
        """
        return self._object_store_client.get(
            container, params=dict(format='json'))

    def delete_object(self, container, name, meta=None):
        """Delete an object from a container.

        :param string container: Name of the container holding the object.
        :param string name: Name of the object to delete.
        :param dict meta: Metadata for the object in question. (optional, will
            be fetched if not provided)

        :returns: True if delete succeeded, False if the object was not found.

        :raises: OpenStackCloudException on operation error.
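        Static large objects need ``multipart-manifest=delete`` on the
        DELETE request so their segments are removed along with the
        manifest. A minimal sketch of that parameter selection
        (``_delete_params`` is a hypothetical helper, not part of the API):

        ```python
        def _delete_params(meta):
            # Mirror the selection described above: only a static large
            # object gets the multipart-manifest=delete query parameter.
            params = {}
            if meta.get('X-Static-Large-Object', None) == 'True':
                params['multipart-manifest'] = 'delete'
            return params

        slo_params = _delete_params({'X-Static-Large-Object': 'True'})
        plain_params = _delete_params({})
        # slo_params == {'multipart-manifest': 'delete'}; plain_params == {}
        ```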
""" # TODO(mordred) DELETE for swift returns status in text/plain format # like so: # Number Deleted: 15 # Number Not Found: 0 # Response Body: # Response Status: 200 OK # Errors: # We should ultimately do something with that try: if not meta: meta = self.get_object_metadata(container, name) if not meta: return False params = {} if meta.get('X-Static-Large-Object', None) == 'True': params['multipart-manifest'] = 'delete' self._object_store_client.delete( '{container}/{object}'.format( container=container, object=name), params=params) return True except OpenStackCloudHTTPError: return False def delete_autocreated_image_objects( self, container=OBJECT_AUTOCREATE_CONTAINER): """Delete all objects autocreated for image uploads. This method should generally not be needed, as shade should clean up the objects it uses for object-based image creation. If something goes wrong and it is found that there are leaked objects, this method can be used to delete any objects that shade has created on the user's behalf in service of image uploads. """ # This method only makes sense on clouds that use tasks if not self.image_api_use_tasks: return False deleted = False for obj in self.list_objects(container): meta = self.get_object_metadata(container, obj['name']) if meta.get(OBJECT_AUTOCREATE_KEY) == 'true': if self.delete_object(container, obj['name'], meta): deleted = True return deleted def get_object_metadata(self, container, name): try: return self._object_store_client.head( '{container}/{object}'.format( container=container, object=name)).headers except OpenStackCloudException as e: if e.response.status_code == 404: return None raise def get_object(self, container, obj, query_string=None, resp_chunk_size=1024, outfile=None): """Get the headers and body of an object :param string container: name of the container. :param string obj: name of the object. :param string query_string: query args for uri. (delimiter, prefix, etc.) :param int resp_chunk_size: chunk size of data to read. 
            Only used if the results are being written to a file.
            (optional, defaults to 1k)
        :param outfile: Write the object to a file instead of returning
            the contents. If this option is given, body in the return tuple
            will be None. outfile can either be a file path given as a
            string, or a File like object.

        :returns: Tuple (headers, body) of the object, or None if the object
            is not found (404)
        :raises: OpenStackCloudException on operation error.
        """
        # TODO(mordred) implement resp_chunk_size
        try:
            endpoint = '{container}/{object}'.format(
                container=container, object=obj)
            if query_string:
                endpoint = '{endpoint}?{query_string}'.format(
                    endpoint=endpoint, query_string=query_string)
            response = self._object_store_client.get(
                endpoint, stream=True)
            response_headers = {
                k.lower(): v for k, v in response.headers.items()}
            if outfile:
                if isinstance(outfile, six.string_types):
                    outfile_handle = open(outfile, 'wb')
                else:
                    outfile_handle = outfile
                for chunk in response.iter_content(
                        resp_chunk_size, decode_unicode=False):
                    outfile_handle.write(chunk)
                if isinstance(outfile, six.string_types):
                    outfile_handle.close()
                else:
                    outfile_handle.flush()
                return (response_headers, None)
            else:
                return (response_headers, response.text)
        except OpenStackCloudHTTPError as e:
            if e.response.status_code == 404:
                return None
            raise

    def create_subnet(self, network_name_or_id, cidr=None, ip_version=4,
                      enable_dhcp=False, subnet_name=None, tenant_id=None,
                      allocation_pools=None,
                      gateway_ip=None, disable_gateway_ip=False,
                      dns_nameservers=None, host_routes=None,
                      ipv6_ra_mode=None, ipv6_address_mode=None,
                      use_default_subnetpool=False):
        """Create a subnet on a specified network.

        :param string network_name_or_id:
            The unique name or ID of the attached network. If a non-unique
            name is supplied, an exception is raised.
        :param string cidr: The CIDR.
        :param int ip_version: The IP version, which is 4 or 6.
        :param bool enable_dhcp:
            Set to ``True`` if DHCP is enabled and ``False`` if disabled.
            Default is ``False``.
        :param string subnet_name: The name of the subnet.
        :param string tenant_id:
            The ID of the tenant who owns the network. Only administrative
            users can specify a tenant ID other than their own.
        :param allocation_pools:
            A list of dictionaries of the start and end addresses for the
            allocation pools. For example::

              [
                {
                  "start": "192.168.199.2",
                  "end": "192.168.199.254"
                }
              ]

        :param string gateway_ip:
            The gateway IP address. When you specify both allocation_pools
            and gateway_ip, you must ensure that the gateway IP does not
            overlap with the specified allocation pools.
        :param bool disable_gateway_ip:
            Set to ``True`` if gateway IP address is disabled and ``False``
            if enabled. It is not allowed with gateway_ip.
            Default is ``False``.
        :param dns_nameservers:
            A list of DNS name servers for the subnet. For example::

              [ "8.8.8.7", "8.8.8.8" ]

        :param host_routes:
            A list of host route dictionaries for the subnet. For example::

              [
                {
                  "destination": "0.0.0.0/0",
                  "nexthop": "123.456.78.9"
                },
                {
                  "destination": "192.168.0.0/24",
                  "nexthop": "192.168.0.1"
                }
              ]

        :param string ipv6_ra_mode:
            IPv6 Router Advertisement mode. Valid values are:
            'dhcpv6-stateful', 'dhcpv6-stateless', or 'slaac'.
        :param string ipv6_address_mode:
            IPv6 address mode. Valid values are: 'dhcpv6-stateful',
            'dhcpv6-stateless', or 'slaac'.
        :param bool use_default_subnetpool:
            Use the default subnetpool for ``ip_version`` to obtain a CIDR.
            It is required to pass ``None`` to the ``cidr`` argument when
            enabling this option.

        :returns: The new subnet object.
        :raises: OpenStackCloudException on operation error.
        """
        network = self.get_network(network_name_or_id)
        if not network:
            raise OpenStackCloudException(
                "Network %s not found."
                % network_name_or_id)

        if disable_gateway_ip and gateway_ip:
            raise OpenStackCloudException(
                'arg:disable_gateway_ip is not allowed with arg:gateway_ip')

        if not cidr and not use_default_subnetpool:
            raise OpenStackCloudException(
                'arg:cidr is required when a subnetpool is not used')

        if cidr and use_default_subnetpool:
            raise OpenStackCloudException(
                'arg:cidr must be set to None when use_default_subnetpool == '
                'True')

        # Be friendly on ip_version and allow strings
        if isinstance(ip_version, six.string_types):
            try:
                ip_version = int(ip_version)
            except ValueError:
                raise OpenStackCloudException('ip_version must be an integer')

        # The body of the neutron message for the subnet we wish to create.
        # This includes attributes that are required or have defaults.
        subnet = {
            'network_id': network['id'],
            'ip_version': ip_version,
            'enable_dhcp': enable_dhcp
        }

        # Add optional attributes to the message.
        if cidr:
            subnet['cidr'] = cidr
        if subnet_name:
            subnet['name'] = subnet_name
        if tenant_id:
            subnet['tenant_id'] = tenant_id
        if allocation_pools:
            subnet['allocation_pools'] = allocation_pools
        if gateway_ip:
            subnet['gateway_ip'] = gateway_ip
        if disable_gateway_ip:
            subnet['gateway_ip'] = None
        if dns_nameservers:
            subnet['dns_nameservers'] = dns_nameservers
        if host_routes:
            subnet['host_routes'] = host_routes
        if ipv6_ra_mode:
            subnet['ipv6_ra_mode'] = ipv6_ra_mode
        if ipv6_address_mode:
            subnet['ipv6_address_mode'] = ipv6_address_mode
        if use_default_subnetpool:
            subnet['use_default_subnetpool'] = True

        data = self._network_client.post("/subnets.json",
                                         json={"subnet": subnet})
        return self._get_and_munchify('subnet', data)

    def delete_subnet(self, name_or_id):
        """Delete a subnet.

        If a name, instead of a unique UUID, is supplied, it is possible
        that we could find more than one matching subnet since names are
        not required to be unique. An error will be raised in this case.

        :param name_or_id: Name or ID of the subnet being deleted.

        :returns: True if delete succeeded, False otherwise.
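A standalone sketch (not part of this module, all names hypothetical) of the optional-attribute handling `create_subnet` above uses to assemble the Neutron request body: only keys the caller actually set end up in the message, and `disable_gateway_ip` is expressed as an explicit `None` gateway.

```python
# Hypothetical helper mirroring create_subnet's body-building step.
def build_subnet_body(network_id, ip_version=4, enable_dhcp=False,
                      cidr=None, subnet_name=None, gateway_ip=None,
                      disable_gateway_ip=False):
    # Required attributes and ones with defaults always go in the body.
    subnet = {
        'network_id': network_id,
        'ip_version': ip_version,
        'enable_dhcp': enable_dhcp,
    }
    # Optional attributes are only added when the caller set them.
    if cidr:
        subnet['cidr'] = cidr
    if subnet_name:
        subnet['name'] = subnet_name
    if gateway_ip:
        subnet['gateway_ip'] = gateway_ip
    if disable_gateway_ip:
        # An explicit None asks Neutron for a subnet without a gateway.
        subnet['gateway_ip'] = None
    return {'subnet': subnet}
```

This keeps the PUT/POST payload minimal, so Neutron's own defaults apply to anything the caller left out.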
        :raises: OpenStackCloudException on operation error.
        """
        subnet = self.get_subnet(name_or_id)
        if not subnet:
            self.log.debug("Subnet %s not found for deleting", name_or_id)
            return False

        self._network_client.delete(
            "/subnets/{subnet_id}.json".format(subnet_id=subnet['id']))
        return True

    def update_subnet(self, name_or_id, subnet_name=None, enable_dhcp=None,
                      gateway_ip=None, disable_gateway_ip=None,
                      allocation_pools=None, dns_nameservers=None,
                      host_routes=None):
        """Update an existing subnet.

        :param string name_or_id: Name or ID of the subnet to update.
        :param string subnet_name: The new name of the subnet.
        :param bool enable_dhcp:
            Set to ``True`` if DHCP is enabled and ``False`` if disabled.
        :param string gateway_ip:
            The gateway IP address. When you specify both allocation_pools
            and gateway_ip, you must ensure that the gateway IP does not
            overlap with the specified allocation pools.
        :param bool disable_gateway_ip:
            Set to ``True`` if gateway IP address is disabled and ``False``
            if enabled. It is not allowed with gateway_ip.
            Default is ``False``.
        :param allocation_pools:
            A list of dictionaries of the start and end addresses for the
            allocation pools. For example::

              [
                {
                  "start": "192.168.199.2",
                  "end": "192.168.199.254"
                }
              ]

        :param dns_nameservers:
            A list of DNS name servers for the subnet. For example::

              [ "8.8.8.7", "8.8.8.8" ]

        :param host_routes:
            A list of host route dictionaries for the subnet. For example::

              [
                {
                  "destination": "0.0.0.0/0",
                  "nexthop": "123.456.78.9"
                },
                {
                  "destination": "192.168.0.0/24",
                  "nexthop": "192.168.0.1"
                }
              ]

        :returns: The updated subnet object.
        :raises: OpenStackCloudException on operation error.
""" subnet = {} if subnet_name: subnet['name'] = subnet_name if enable_dhcp is not None: subnet['enable_dhcp'] = enable_dhcp if gateway_ip: subnet['gateway_ip'] = gateway_ip if disable_gateway_ip: subnet['gateway_ip'] = None if allocation_pools: subnet['allocation_pools'] = allocation_pools if dns_nameservers: subnet['dns_nameservers'] = dns_nameservers if host_routes: subnet['host_routes'] = host_routes if not subnet: self.log.debug("No subnet data to update") return if disable_gateway_ip and gateway_ip: raise OpenStackCloudException( 'arg:disable_gateway_ip is not allowed with arg:gateway_ip') curr_subnet = self.get_subnet(name_or_id) if not curr_subnet: raise OpenStackCloudException( "Subnet %s not found." % name_or_id) data = self._network_client.put( "/subnets/{subnet_id}.json".format(subnet_id=curr_subnet['id']), json={"subnet": subnet}) return self._get_and_munchify('subnet', data) @_utils.valid_kwargs('name', 'admin_state_up', 'mac_address', 'fixed_ips', 'subnet_id', 'ip_address', 'security_groups', 'allowed_address_pairs', 'extra_dhcp_opts', 'device_owner', 'device_id') def create_port(self, network_id, **kwargs): """Create a port :param network_id: The ID of the network. (Required) :param name: A symbolic name for the port. (Optional) :param admin_state_up: The administrative status of the port, which is up (true, default) or down (false). (Optional) :param mac_address: The MAC address. (Optional) :param fixed_ips: List of ip_addresses and subnet_ids. See subnet_id and ip_address. (Optional) For example:: [ { "ip_address": "10.29.29.13", "subnet_id": "a78484c4-c380-4b47-85aa-21c51a2d8cbd" }, ... ] :param subnet_id: If you specify only a subnet ID, OpenStack Networking allocates an available IP from that subnet to the port. (Optional) If you specify both a subnet ID and an IP address, OpenStack Networking tries to allocate the specified address to the port. 
        :param ip_address: If you specify both a subnet ID and an IP address,
            OpenStack Networking tries to allocate the specified address to
            the port.
        :param security_groups: List of security group UUIDs. (Optional)
        :param allowed_address_pairs: Allowed address pairs list (Optional)
            For example::

              [
                {
                  "ip_address": "23.23.23.1",
                  "mac_address": "fa:16:3e:c4:cd:3f"
                }, ...
              ]
        :param extra_dhcp_opts: Extra DHCP options. (Optional).
            For example::

              [
                {
                  "opt_name": "opt name1",
                  "opt_value": "value1"
                }, ...
              ]
        :param device_owner: The ID of the entity that uses this port.
            For example, a DHCP agent. (Optional)
        :param device_id: The ID of the device that uses this port.
            For example, a virtual server. (Optional)

        :returns: a ``munch.Munch`` describing the created port.
        :raises: ``OpenStackCloudException`` on operation error.
        """
        kwargs['network_id'] = network_id

        data = self._network_client.post(
            "/ports.json", json={'port': kwargs},
            error_message="Error creating port for network {0}".format(
                network_id))
        return self._get_and_munchify('port', data)

    @_utils.valid_kwargs('name', 'admin_state_up', 'fixed_ips',
                         'security_groups', 'allowed_address_pairs',
                         'extra_dhcp_opts', 'device_owner', 'device_id')
    def update_port(self, name_or_id, **kwargs):
        """Update a port

        Note: to unset an attribute use None value. To leave an attribute
        untouched just omit it.

        :param name_or_id: name or ID of the port to update. (Required)
        :param name: A symbolic name for the port. (Optional)
        :param admin_state_up: The administrative status of the port,
            which is up (true) or down (false). (Optional)
        :param fixed_ips: List of ip_addresses and subnet_ids. (Optional)
            If you specify only a subnet ID, OpenStack Networking allocates
            an available IP from that subnet to the port.
            If you specify both a subnet ID and an IP address, OpenStack
            Networking tries to allocate the specified address to the port.
            For example::

              [
                {
                  "ip_address": "10.29.29.13",
                  "subnet_id": "a78484c4-c380-4b47-85aa-21c51a2d8cbd"
                }, ...
              ]
        :param security_groups: List of security group UUIDs. (Optional)
        :param allowed_address_pairs: Allowed address pairs list (Optional)
            For example::

              [
                {
                  "ip_address": "23.23.23.1",
                  "mac_address": "fa:16:3e:c4:cd:3f"
                }, ...
              ]
        :param extra_dhcp_opts: Extra DHCP options. (Optional).
            For example::

              [
                {
                  "opt_name": "opt name1",
                  "opt_value": "value1"
                }, ...
              ]
        :param device_owner: The ID of the entity that uses this port.
            For example, a DHCP agent. (Optional)
        :param device_id: The ID of the resource this port is attached to.

        :returns: a ``munch.Munch`` describing the updated port.
        :raises: OpenStackCloudException on operation error.
        """
        port = self.get_port(name_or_id=name_or_id)
        if port is None:
            raise OpenStackCloudException(
                "failed to find port '{port}'".format(port=name_or_id))

        data = self._network_client.put(
            "/ports/{port_id}.json".format(port_id=port['id']),
            json={"port": kwargs},
            error_message="Error updating port {0}".format(name_or_id))
        return self._get_and_munchify('port', data)

    def delete_port(self, name_or_id):
        """Delete a port

        :param name_or_id: ID or name of the port to delete.

        :returns: True if delete succeeded, False otherwise.
        :raises: OpenStackCloudException on operation error.
        """
        port = self.get_port(name_or_id=name_or_id)
        if port is None:
            self.log.debug("Port %s not found for deleting", name_or_id)
            return False

        self._network_client.delete(
            "/ports/{port_id}.json".format(port_id=port['id']),
            error_message="Error deleting port {0}".format(name_or_id))
        return True

    def create_security_group(self, name, description, project_id=None):
        """Create a new security group

        :param string name: A name for the security group.
        :param string description: Describes the security group.
        :param string project_id:
            Specify the project ID this security group will be created
            on (admin-only).

        :returns: A ``munch.Munch`` representing the new security group.
        :raises: OpenStackCloudException on operation error.
        :raises: OpenStackCloudUnavailableFeature if security groups are
                 not supported on this cloud.
        """
        # Security groups not supported
        if not self._has_secgroups():
            raise OpenStackCloudUnavailableFeature(
                "Unavailable feature: security groups"
            )

        data = []
        security_group_json = {
            'security_group': {
                'name': name, 'description': description
            }}
        if project_id is not None:
            security_group_json['security_group']['tenant_id'] = project_id
        if self._use_neutron_secgroups():
            data = self._network_client.post(
                '/security-groups.json',
                json=security_group_json,
                error_message="Error creating security group {0}".format(name))
        else:
            data = _adapter._json_response(self._conn.compute.post(
                '/os-security-groups', json=security_group_json))
        return self._normalize_secgroup(
            self._get_and_munchify('security_group', data))

    def delete_security_group(self, name_or_id):
        """Delete a security group

        :param string name_or_id: The name or unique ID of the security
            group.

        :returns: True if delete succeeded, False otherwise.
        :raises: OpenStackCloudException on operation error.
        :raises: OpenStackCloudUnavailableFeature if security groups are
                 not supported on this cloud.
        """
        # Security groups not supported
        if not self._has_secgroups():
            raise OpenStackCloudUnavailableFeature(
                "Unavailable feature: security groups"
            )

        # TODO(mordred): Let's come back and stop doing a GET before we do
        #                the delete.
        secgroup = self.get_security_group(name_or_id)
        if secgroup is None:
            self.log.debug('Security group %s not found for deleting',
                           name_or_id)
            return False

        if self._use_neutron_secgroups():
            self._network_client.delete(
                '/security-groups/{sg_id}.json'.format(sg_id=secgroup['id']),
                error_message="Error deleting security group {0}".format(
                    name_or_id)
            )
            return True
        else:
            _adapter._json_response(self._conn.compute.delete(
                '/os-security-groups/{id}'.format(id=secgroup['id'])))
            return True

    @_utils.valid_kwargs('name', 'description')
    def update_security_group(self, name_or_id, **kwargs):
        """Update a security group

        :param string name_or_id: Name or ID of the security group to update.
        :param string name: New name for the security group.
        :param string description: New description for the security group.

        :returns: A ``munch.Munch`` describing the updated security group.
        :raises: OpenStackCloudException on operation error.
        """
        # Security groups not supported
        if not self._has_secgroups():
            raise OpenStackCloudUnavailableFeature(
                "Unavailable feature: security groups"
            )

        group = self.get_security_group(name_or_id)
        if group is None:
            raise OpenStackCloudException(
                "Security group %s not found."
                % name_or_id)

        if self._use_neutron_secgroups():
            data = self._network_client.put(
                '/security-groups/{sg_id}.json'.format(sg_id=group['id']),
                json={'security_group': kwargs},
                error_message="Error updating security group {0}".format(
                    name_or_id))
        else:
            for key in ('name', 'description'):
                kwargs.setdefault(key, group[key])
            data = _adapter._json_response(
                self._conn.compute.put(
                    '/os-security-groups/{id}'.format(id=group['id']),
                    json={'security-group': kwargs}))
        return self._normalize_secgroup(
            self._get_and_munchify('security_group', data))

    def create_security_group_rule(self,
                                   secgroup_name_or_id,
                                   port_range_min=None,
                                   port_range_max=None,
                                   protocol=None,
                                   remote_ip_prefix=None,
                                   remote_group_id=None,
                                   direction='ingress',
                                   ethertype='IPv4',
                                   project_id=None):
        """Create a new security group rule

        :param string secgroup_name_or_id:
            The security group name or ID to associate with this security
            group rule. If a non-unique group name is given, an exception
            is raised.
        :param int port_range_min:
            The minimum port number in the range that is matched by the
            security group rule. If the protocol is TCP or UDP, this value
            must be less than or equal to the port_range_max attribute value.
            If nova is used by the cloud provider for security groups, then
            a value of None will be transformed to -1.
        :param int port_range_max:
            The maximum port number in the range that is matched by the
            security group rule. The port_range_min attribute constrains the
            port_range_max attribute. If nova is used by the cloud provider
            for security groups, then a value of None will be transformed
            to -1.
        :param string protocol:
            The protocol that is matched by the security group rule. Valid
            values are None, tcp, udp, and icmp.
        :param string remote_ip_prefix:
            The remote IP prefix to be associated with this security group
            rule. This attribute matches the specified IP prefix as the
            source IP address of the IP packet.
        :param string remote_group_id:
            The remote group ID to be associated with this security group
            rule.
        :param string direction:
            Ingress or egress: The direction in which the security group
            rule is applied. For a compute instance, an ingress security
            group rule is applied to incoming (ingress) traffic for that
            instance. An egress rule is applied to traffic leaving the
            instance.
        :param string ethertype:
            Must be IPv4 or IPv6, and addresses represented in CIDR must
            match the ingress or egress rules.
        :param string project_id:
            Specify the project ID this security group will be created
            on (admin-only).

        :returns: A ``munch.Munch`` representing the new security group rule.
        :raises: OpenStackCloudException on operation error.
        """
        # Security groups not supported
        if not self._has_secgroups():
            raise OpenStackCloudUnavailableFeature(
                "Unavailable feature: security groups"
            )

        secgroup = self.get_security_group(secgroup_name_or_id)
        if not secgroup:
            raise OpenStackCloudException(
                "Security group %s not found." % secgroup_name_or_id)

        if self._use_neutron_secgroups():
            # NOTE: Nova accepts -1 port numbers, but Neutron accepts None
            # as the equivalent value.
            rule_def = {
                'security_group_id': secgroup['id'],
                'port_range_min':
                    None if port_range_min == -1 else port_range_min,
                'port_range_max':
                    None if port_range_max == -1 else port_range_max,
                'protocol': protocol,
                'remote_ip_prefix': remote_ip_prefix,
                'remote_group_id': remote_group_id,
                'direction': direction,
                'ethertype': ethertype
            }
            if project_id is not None:
                rule_def['tenant_id'] = project_id

            data = self._network_client.post(
                '/security-group-rules.json',
                json={'security_group_rule': rule_def},
                error_message="Error creating security group rule")
        else:
            # NOTE: Neutron accepts None for protocol. Nova does not.
            if protocol is None:
                raise OpenStackCloudException('Protocol must be specified')

            if direction == 'egress':
                self.log.debug(
                    'Rule creation failed: Nova does not support egress rules'
                )
                raise OpenStackCloudException('No support for egress rules')

            # NOTE: Neutron accepts None for ports, but Nova requires -1
            # as the equivalent value for ICMP.
            #
            # For TCP/UDP, if both are None, Neutron allows this and Nova
            # represents this as all ports (1-65535). Nova does not accept
            # None values, so to hide this difference, we will automatically
            # convert to the full port range. If only a single port value is
            # specified, it will error as normal.
            if protocol == 'icmp':
                if port_range_min is None:
                    port_range_min = -1
                if port_range_max is None:
                    port_range_max = -1
            elif protocol in ['tcp', 'udp']:
                if port_range_min is None and port_range_max is None:
                    port_range_min = 1
                    port_range_max = 65535

            security_group_rule_dict = dict(security_group_rule=dict(
                parent_group_id=secgroup['id'],
                ip_protocol=protocol,
                from_port=port_range_min,
                to_port=port_range_max,
                cidr=remote_ip_prefix,
                group_id=remote_group_id
            ))
            if project_id is not None:
                security_group_rule_dict[
                    'security_group_rule']['tenant_id'] = project_id

            data = _adapter._json_response(
                self._conn.compute.post(
                    '/os-security-group-rules',
                    json=security_group_rule_dict
                ))
        return self._normalize_secgroup_rule(
            self._get_and_munchify('security_group_rule', data))

    def delete_security_group_rule(self, rule_id):
        """Delete a security group rule

        :param string rule_id: The unique ID of the security group rule.

        :returns: True if delete succeeded, False otherwise.
        :raises: OpenStackCloudException on operation error.
        :raises: OpenStackCloudUnavailableFeature if security groups are
                 not supported on this cloud.
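A standalone sketch (not part of this module, function name hypothetical) of the port-range normalization that `create_security_group_rule` above applies on nova-network clouds: Neutron accepts `None` for ports, but Nova needs `-1` for ICMP and an explicit 1-65535 range when TCP/UDP callers leave both ports unset.

```python
# Hypothetical helper mirroring the Nova branch's port normalization.
def normalize_nova_ports(protocol, port_range_min, port_range_max):
    if protocol == 'icmp':
        # Nova uses -1 where Neutron would use None for ICMP
        if port_range_min is None:
            port_range_min = -1
        if port_range_max is None:
            port_range_max = -1
    elif protocol in ('tcp', 'udp'):
        # Both unset means "all ports"; Nova wants that spelled out
        if port_range_min is None and port_range_max is None:
            port_range_min = 1
            port_range_max = 65535
    return port_range_min, port_range_max
```

Note that a single unset port (e.g. only `port_range_min` given for TCP) is deliberately left alone so the API can reject it as it normally would.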
""" # Security groups not supported if not self._has_secgroups(): raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) if self._use_neutron_secgroups(): try: self._network_client.delete( '/security-group-rules/{sg_id}.json'.format(sg_id=rule_id), error_message="Error deleting security group rule " "{0}".format(rule_id)) except OpenStackCloudResourceNotFound: return False return True else: _adapter._json_response(self._conn.compute.delete( '/os-security-group-rules/{id}'.format(id=rule_id))) return True def list_zones(self): """List all available zones. :returns: A list of zones dicts. """ data = self._dns_client.get( "/zones", error_message="Error fetching zones list") return self._get_and_munchify('zones', data) def get_zone(self, name_or_id, filters=None): """Get a zone by name or ID. :param name_or_id: Name or ID of the zone :param filters: A dictionary of meta data to use for further filtering OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A zone dict or None if no matching zone is found. """ return _utils._get_entity(self, 'zone', name_or_id, filters) def search_zones(self, name_or_id=None, filters=None): zones = self.list_zones() return _utils._filter_list(zones, name_or_id, filters) def create_zone(self, name, zone_type=None, email=None, description=None, ttl=None, masters=None): """Create a new zone. :param name: Name of the zone being created. :param zone_type: Type of the zone (primary/secondary) :param email: Email of the zone owner (only applies if zone_type is primary) :param description: Description of the zone :param ttl: TTL (Time to live) value in seconds :param masters: Master nameservers (only applies if zone_type is secondary) :returns: a dict representing the created zone. :raises: OpenStackCloudException on operation error. 
""" # We capitalize in case the user passes time in lowercase, as # designate call expects PRIMARY/SECONDARY if zone_type is not None: zone_type = zone_type.upper() if zone_type not in ('PRIMARY', 'SECONDARY'): raise OpenStackCloudException( "Invalid type %s, valid choices are PRIMARY or SECONDARY" % zone_type) zone = { "name": name, "email": email, "description": description, } if ttl is not None: zone["ttl"] = ttl if zone_type is not None: zone["type"] = zone_type if masters is not None: zone["masters"] = masters data = self._dns_client.post( "/zones", json=zone, error_message="Unable to create zone {name}".format(name=name)) return self._get_and_munchify(key=None, data=data) @_utils.valid_kwargs('email', 'description', 'ttl', 'masters') def update_zone(self, name_or_id, **kwargs): """Update a zone. :param name_or_id: Name or ID of the zone being updated. :param email: Email of the zone owner (only applies if zone_type is primary) :param description: Description of the zone :param ttl: TTL (Time to live) value in seconds :param masters: Master nameservers (only applies if zone_type is secondary) :returns: a dict representing the updated zone. :raises: OpenStackCloudException on operation error. """ zone = self.get_zone(name_or_id) if not zone: raise OpenStackCloudException( "Zone %s not found." % name_or_id) data = self._dns_client.patch( "/zones/{zone_id}".format(zone_id=zone['id']), json=kwargs, error_message="Error updating zone {0}".format(name_or_id)) return self._get_and_munchify(key=None, data=data) def delete_zone(self, name_or_id): """Delete a zone. :param name_or_id: Name or ID of the zone being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. 
""" zone = self.get_zone(name_or_id) if zone is None: self.log.debug("Zone %s not found for deleting", name_or_id) return False return self._dns_client.delete( "/zones/{zone_id}".format(zone_id=zone['id']), error_message="Error deleting zone {0}".format(name_or_id)) return True def list_recordsets(self, zone): """List all available recordsets. :param zone: Name or ID of the zone managing the recordset :returns: A list of recordsets. """ return self._dns_client.get( "/zones/{zone_id}/recordsets".format(zone_id=zone), error_message="Error fetching recordsets list") def get_recordset(self, zone, name_or_id): """Get a recordset by name or ID. :param zone: Name or ID of the zone managing the recordset :param name_or_id: Name or ID of the recordset :returns: A recordset dict or None if no matching recordset is found. """ try: return self._dns_client.get( "/zones/{zone_id}/recordsets/{recordset_id}".format( zone_id=zone, recordset_id=name_or_id), error_message="Error fetching recordset") except Exception: return None def search_recordsets(self, zone, name_or_id=None, filters=None): recordsets = self.list_recordsets(zone=zone) return _utils._filter_list(recordsets, name_or_id, filters) def create_recordset(self, zone, name, recordset_type, records, description=None, ttl=None): """Create a recordset. :param zone: Name or ID of the zone managing the recordset :param name: Name of the recordset :param recordset_type: Type of the recordset :param records: List of the recordset definitions :param description: Description of the recordset :param ttl: TTL value of the recordset :returns: a dict representing the created recordset. :raises: OpenStackCloudException on operation error. """ if self.get_zone(zone) is None: raise OpenStackCloudException( "Zone %s not found." 
                % zone)

        # We capitalize the type in case the user sends in lowercase
        recordset_type = recordset_type.upper()

        body = {
            'name': name,
            'type': recordset_type,
            'records': records
        }
        if description:
            body['description'] = description
        if ttl:
            body['ttl'] = ttl
        return self._dns_client.post(
            "/zones/{zone_id}/recordsets".format(zone_id=zone),
            json=body,
            error_message="Error creating recordset {name}".format(name=name))

    @_utils.valid_kwargs('description', 'ttl', 'records')
    def update_recordset(self, zone, name_or_id, **kwargs):
        """Update a recordset.

        :param zone: Name or ID of the zone managing the recordset
        :param name_or_id: Name or ID of the recordset being updated.
        :param records: List of the recordset definitions
        :param description: Description of the recordset
        :param ttl: TTL (Time to live) value in seconds of the recordset

        :returns: a dict representing the updated recordset.
        :raises: OpenStackCloudException on operation error.
        """
        zone_obj = self.get_zone(zone)
        if zone_obj is None:
            raise OpenStackCloudException(
                "Zone %s not found." % zone)

        recordset_obj = self.get_recordset(zone, name_or_id)
        if recordset_obj is None:
            raise OpenStackCloudException(
                "Recordset %s not found." % name_or_id)

        new_recordset = self._dns_client.put(
            "/zones/{zone_id}/recordsets/{recordset_id}".format(
                zone_id=zone_obj['id'], recordset_id=name_or_id),
            json=kwargs,
            error_message="Error updating recordset {0}".format(name_or_id))

        return new_recordset

    def delete_recordset(self, zone, name_or_id):
        """Delete a recordset.

        :param zone: Name or ID of the zone managing the recordset.
        :param name_or_id: Name or ID of the recordset being deleted.

        :returns: True if delete succeeded, False otherwise.
        :raises: OpenStackCloudException on operation error.
""" zone = self.get_zone(zone) if zone is None: self.log.debug("Zone %s not found for deleting", zone) return False recordset = self.get_recordset(zone['id'], name_or_id) if recordset is None: self.log.debug("Recordset %s not found for deleting", name_or_id) return False self._dns_client.delete( "/zones/{zone_id}/recordsets/{recordset_id}".format( zone_id=zone['id'], recordset_id=name_or_id), error_message="Error deleting recordset {0}".format(name_or_id)) return True @_utils.cache_on_arguments() def list_cluster_templates(self, detail=False): """List cluster templates. :param bool detail. Ignored. Included for backwards compat. ClusterTemplates are always returned with full details. :returns: a list of dicts containing the cluster template details. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ with _utils.shade_exceptions("Error fetching cluster template list"): data = self._container_infra_client.get( '/baymodels/detail') return self._normalize_cluster_templates( self._get_and_munchify('baymodels', data)) list_baymodels = list_cluster_templates def search_cluster_templates( self, name_or_id=None, filters=None, detail=False): """Search cluster templates. :param name_or_id: cluster template name or ID. :param filters: a dict containing additional filters to use. :param detail: a boolean to control if we need summarized or detailed output. :returns: a list of dict containing the cluster templates :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ cluster_templates = self.list_cluster_templates(detail=detail) return _utils._filter_list( cluster_templates, name_or_id, filters) search_baymodels = search_cluster_templates def get_cluster_template(self, name_or_id, filters=None, detail=False): """Get a cluster template by name or ID. :param name_or_id: Name or ID of the cluster template. :param filters: A dictionary of meta data to use for further filtering. 
            Elements of this dictionary may, themselves, be dictionaries.
            Example::

                {
                  'last_name': 'Smith',
                  'other': {
                      'gender': 'Female'
                  }
                }

            OR
            A string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender==`Female`]"

        :returns: A cluster template dict or None if no matching
            cluster template is found.
        """
        return _utils._get_entity(self, 'cluster_template', name_or_id,
                                  filters=filters, detail=detail)
    get_baymodel = get_cluster_template

    def create_cluster_template(
            self, name, image_id=None, keypair_id=None, coe=None, **kwargs):
        """Create a cluster template.

        :param string name: Name of the cluster template.
        :param string image_id: Name or ID of the image to use.
        :param string keypair_id: Name or ID of the keypair to use.
        :param string coe: Name of the coe for the cluster template.

        Other arguments will be passed in kwargs.

        :returns: a dict containing the cluster template description

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the OpenStack API call
        """
        error_message = ("Error creating cluster template of name"
                         " {cluster_template_name}".format(
                             cluster_template_name=name))
        with _utils.shade_exceptions(error_message):
            body = kwargs.copy()
            body['name'] = name
            body['image_id'] = image_id
            body['keypair_id'] = keypair_id
            body['coe'] = coe
            cluster_template = self._container_infra_client.post(
                '/baymodels', json=body)

        self.list_cluster_templates.invalidate(self)
        return cluster_template
    create_baymodel = create_cluster_template

    def delete_cluster_template(self, name_or_id):
        """Delete a cluster template.

        :param name_or_id: Name or unique ID of the cluster template.
        :returns: True if the delete succeeded, False if the cluster
            template was not found.
        :raises: OpenStackCloudException on operation error.
""" cluster_template = self.get_cluster_template(name_or_id) if not cluster_template: self.log.debug( "Cluster template %(name_or_id)s does not exist", {'name_or_id': name_or_id}, exc_info=True) return False with _utils.shade_exceptions("Error in deleting cluster template"): self._container_infra_client.delete( '/baymodels/{id}'.format(id=cluster_template['id'])) self.list_cluster_templates.invalidate(self) return True delete_baymodel = delete_cluster_template @_utils.valid_kwargs('name', 'image_id', 'flavor_id', 'master_flavor_id', 'keypair_id', 'external_network_id', 'fixed_network', 'dns_nameserver', 'docker_volume_size', 'labels', 'coe', 'http_proxy', 'https_proxy', 'no_proxy', 'network_driver', 'tls_disabled', 'public', 'registry_enabled', 'volume_driver') def update_cluster_template(self, name_or_id, operation, **kwargs): """Update a cluster template. :param name_or_id: Name or ID of the cluster template being updated. :param operation: Operation to perform - add, remove, replace. Other arguments will be passed with kwargs. :returns: a dict representing the updated cluster template. :raises: OpenStackCloudException on operation error. """ self.list_cluster_templates.invalidate(self) cluster_template = self.get_cluster_template(name_or_id) if not cluster_template: raise OpenStackCloudException( "Cluster template %s not found." 
% name_or_id) if operation not in ['add', 'replace', 'remove']: raise TypeError( "%s operation not in 'add', 'replace', 'remove'" % operation) patches = _utils.generate_patches_from_kwargs(operation, **kwargs) # No need to fire an API call if there is an empty patch if not patches: return cluster_template with _utils.shade_exceptions( "Error updating cluster template {0}".format(name_or_id)): self._container_infra_client.patch( '/baymodels/{id}'.format(id=cluster_template['id']), json=patches) new_cluster_template = self.get_cluster_template(name_or_id) return new_cluster_template update_baymodel = update_cluster_template def list_nics(self): msg = "Error fetching machine port list" data = self._baremetal_client.get("/ports", microversion="1.6", error_message=msg) return data['ports'] def list_nics_for_machine(self, uuid): """Returns a list of ports present on the machine node. :param uuid: String representing machine UUID value in order to identify the machine. :returns: A list of ports. """ msg = "Error fetching port list for node {node_id}".format( node_id=uuid) url = "/nodes/{node_id}/ports".format(node_id=uuid) data = self._baremetal_client.get(url, microversion="1.6", error_message=msg) return data['ports'] def get_nic_by_mac(self, mac): try: url = '/ports/detail?address=%s' % mac data = self._baremetal_client.get(url) if len(data['ports']) == 1: return data['ports'][0] except Exception: pass return None def list_machines(self): msg = "Error fetching machine node list" data = self._baremetal_client.get("/nodes", microversion="1.6", error_message=msg) return self._get_and_munchify('nodes', data) def get_machine(self, name_or_id): """Get Machine by name or uuid Search the baremetal host out by utilizing the supplied id value which can consist of a name or UUID. :param name_or_id: A node name or UUID that will be looked up. :returns: ``munch.Munch`` representing the node found or None if no nodes are found. 
""" # NOTE(TheJulia): This is the initial microversion shade support for # ironic was created around. Ironic's default behavior for newer # versions is to expose the field, but with a value of None for # calls by a supported, yet older microversion. # Consensus for moving forward with microversion handling in shade # seems to be to take the same approach, although ironic's API # does it for the user. version = "1.6" try: url = '/nodes/{node_id}'.format(node_id=name_or_id) return self._normalize_machine( self._baremetal_client.get(url, microversion=version)) except Exception: return None def get_machine_by_mac(self, mac): """Get machine by port MAC address :param mac: Port MAC address to query in order to return a node. :returns: ``munch.Munch`` representing the node found or None if the node is not found. """ try: port_url = '/ports/detail?address={mac}'.format(mac=mac) port = self._baremetal_client.get(port_url, microversion=1.6) machine_url = '/nodes/{machine}'.format( machine=port['ports'][0]['node_uuid']) return self._baremetal_client.get(machine_url, microversion=1.6) except Exception: return None def inspect_machine(self, name_or_id, wait=False, timeout=3600): """Inspect a Barmetal machine Engages the Ironic node inspection behavior in order to collect metadata about the baremetal machine. :param name_or_id: String representing machine name or UUID value in order to identify the machine. :param wait: Boolean value controlling if the method is to wait for the desired state to be reached or a failure to occur. :param timeout: Integer value, defautling to 3600 seconds, for the$ wait state to reach completion. :returns: ``munch.Munch`` representing the current state of the machine upon exit of the method. """ return_to_available = False machine = self.get_machine(name_or_id) if not machine: raise OpenStackCloudException( "Machine inspection failed to find: %s." 
                % name_or_id)

        # NOTE(TheJulia): If the node is in available state, we can inspect
        # it, however we must first move it to manageable state, and move
        # it back once inspection has completed.
        if "available" in machine['provision_state']:
            return_to_available = True
            # NOTE(TheJulia): Changing an available machine to manageable
            # state involves a state transition, so we need to wait until
            # that transition has completed.
            self.node_set_provision_state(machine['uuid'], 'manage',
                                          wait=True, timeout=timeout)
        elif ("manage" not in machine['provision_state']
                and "inspect failed" not in machine['provision_state']):
            raise OpenStackCloudException(
                "Machine must be in 'manage' or 'available' state to "
                "engage inspection: Machine: %s State: %s"
                % (machine['uuid'], machine['provision_state']))
        with _utils.shade_exceptions("Error inspecting machine"):
            machine = self.node_set_provision_state(machine['uuid'],
                                                    'inspect')
            if wait:
                for count in utils.iterate_timeout(
                        timeout,
                        "Timeout waiting for node transition to "
                        "target state of 'inspect'"):
                    machine = self.get_machine(name_or_id)

                    if "inspect failed" in machine['provision_state']:
                        raise OpenStackCloudException(
                            "Inspection of node %s failed, last error: %s"
                            % (machine['uuid'], machine['last_error']))

                    if "manageable" in machine['provision_state']:
                        break

            if return_to_available:
                machine = self.node_set_provision_state(
                    machine['uuid'], 'provide', wait=wait, timeout=timeout)

            return machine

    def register_machine(self, nics, wait=False, timeout=3600,
                         lock_timeout=600, **kwargs):
        """Register Baremetal with Ironic

        Allows for the registration of Baremetal nodes with Ironic
        and population of pertinent node information or configuration to
        be passed to the Ironic API for the node.

        This method also creates ports for a list of MAC addresses passed
        in to be utilized for boot and potentially network configuration.

        If a failure is detected creating the network ports, any ports
        created are deleted, and the node is removed from Ironic.
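        The NIC list accepted here is translated into Ironic port-creation
        payloads, one per MAC address. A standalone sketch of that
        translation (``build_port_payloads`` is a hypothetical helper
        name, not part of this module)::

        ```python
        def build_port_payloads(nics, node_uuid):
            # Each NIC dict carries a 'mac' key; Ironic's port API wants
            # 'address' plus the UUID of the owning node.
            return [{'address': nic['mac'], 'node_uuid': node_uuid}
                    for nic in nics]

        payloads = build_port_payloads(
            [{'mac': 'aa:bb:cc:dd:ee:01'}, {'mac': 'aa:bb:cc:dd:ee:02'}],
            '00000000-0000-0000-0000-000000000001')
        ```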
        :param nics:
            An array of MAC addresses that represent the
            network interfaces for the node to be created.

            Example::

                [
                    {'mac': 'aa:bb:cc:dd:ee:01'},
                    {'mac': 'aa:bb:cc:dd:ee:02'}
                ]

        :param wait: Boolean value, defaulting to false, to wait for the
            node to reach the available state where the node can be
            provisioned. It must be noted, when set to false, the method
            will still wait for locks to clear before sending the next
            required command.

        :param timeout: Integer value, defaulting to 3600 seconds, for the
            wait state to reach completion.

        :param lock_timeout: Integer value, defaulting to 600 seconds, for
            locks to clear.

        :param kwargs: Key value pairs to be passed to the Ironic API,
            including uuid, name, chassis_uuid, driver_info, parameters.

        :raises: OpenStackCloudException on operation error.

        :returns: Returns a ``munch.Munch`` representing the new
            baremetal node.
        """
        msg = ("Baremetal machine node failed to be created.")
        port_msg = ("Baremetal machine port failed to be created.")

        url = '/nodes'
        # TODO(TheJulia): At some point we need to figure out how to
        # handle data across when the requestor is defining newer items
        # with the older api.
        machine = self._baremetal_client.post(url,
                                              json=kwargs,
                                              error_message=msg,
                                              microversion="1.6")

        created_nics = []
        try:
            for row in nics:
                payload = {'address': row['mac'],
                           'node_uuid': machine['uuid']}
                nic = self._baremetal_client.post('/ports',
                                                  json=payload,
                                                  error_message=port_msg)
                created_nics.append(nic['uuid'])

        except Exception as e:
            self.log.debug("ironic NIC registration failed", exc_info=True)
            # TODO(mordred) Handle failures here
            try:
                for uuid in created_nics:
                    try:
                        port_url = '/ports/{uuid}'.format(uuid=uuid)
                        # NOTE(TheJulia): Added in hope that it is logged.
                        port_msg = ('Failed to delete port {port} for node '
                                    '{node}').format(port=uuid,
                                                     node=machine['uuid'])
                        self._baremetal_client.delete(
                            port_url, error_message=port_msg)
                    except Exception:
                        pass
            finally:
                version = "1.6"
                msg = "Baremetal machine failed to be deleted."
                url = '/nodes/{node_id}'.format(
                    node_id=machine['uuid'])
                self._baremetal_client.delete(url,
                                              error_message=msg,
                                              microversion=version)
            raise OpenStackCloudException(
                "Error registering NICs with the baremetal service: %s"
                % str(e))

        with _utils.shade_exceptions(
                "Error transitioning node to available state"):
            if wait:
                for count in utils.iterate_timeout(
                        timeout,
                        "Timeout waiting for node transition to "
                        "available state"):
                    machine = self.get_machine(machine['uuid'])

                    # Note(TheJulia): Per the Ironic state code, a node
                    # that fails returns to enroll state, which means a
                    # failed node cannot be determined at this point in
                    # time.
                    if machine['provision_state'] in ['enroll']:
                        self.node_set_provision_state(
                            machine['uuid'], 'manage')
                    elif machine['provision_state'] in ['manageable']:
                        self.node_set_provision_state(
                            machine['uuid'], 'provide')
                    elif machine['last_error'] is not None:
                        raise OpenStackCloudException(
                            "Machine encountered a failure: %s"
                            % machine['last_error'])
                    # Note(TheJulia): Earlier versions of Ironic default to
                    # None and later versions default to available up until
                    # the introduction of enroll state.
                    # Note(TheJulia): The node will transition through
                    # cleaning if it is enabled, and we will wait for
                    # completion.
                    elif machine['provision_state'] in ['available', None]:
                        break
            else:
                if machine['provision_state'] in ['enroll']:
                    self.node_set_provision_state(machine['uuid'], 'manage')
                    # Note(TheJulia): We need to wait for the lock to clear
                    # before we attempt to set the machine into provide
                    # state which allows for the transition to available.
                    for count in utils.iterate_timeout(
                            lock_timeout,
                            "Timeout waiting for reservation to clear "
                            "before setting provide state"):
                        machine = self.get_machine(machine['uuid'])
                        if (machine['reservation'] is None
                                and machine['provision_state'] != 'enroll'):
                            # NOTE(TheJulia): In this case, the node has
                            # moved on from the previous state and is
                            # likely not being verified, as no lock is
                            # present on the node.
                            self.node_set_provision_state(
                                machine['uuid'], 'provide')
                            machine = self.get_machine(machine['uuid'])
                            break
                        elif machine['provision_state'] in [
                                'cleaning',
                                'available']:
                            break
                        elif machine['last_error'] is not None:
                            raise OpenStackCloudException(
                                "Machine encountered a failure: %s"
                                % machine['last_error'])
        if not isinstance(machine, str):
            return self._normalize_machine(machine)
        else:
            return machine

    def unregister_machine(self, nics, uuid, wait=False, timeout=600):
        """Unregister Baremetal from Ironic

        Removes entries for Network Interfaces and baremetal nodes
        from the Ironic API.

        :param nics: An array of dicts in the form
            ``{'mac': 'aa:bb:cc:dd:ee:ff'}`` identifying the MAC
            addresses to be removed.
        :param string uuid: The UUID of the node to be deleted.

        :param wait: Boolean value, defaulting to false, controlling
            whether to block until the final step of unregistering the
            machine has completed.

        :param timeout: Integer value, representing seconds with a default
            value of 600, which controls the maximum amount of time to
            block the method's completion on.

        :raises: OpenStackCloudException on operation failure.
        """
        machine = self.get_machine(uuid)
        invalid_states = ['active', 'cleaning', 'clean wait', 'clean failed']
        if machine['provision_state'] in invalid_states:
            raise OpenStackCloudException(
                "Error unregistering node '%s' due to current provision "
                "state '%s'" % (uuid, machine['provision_state']))

        # NOTE(TheJulia) There is a high possibility of a lock being present
        # if the machine was just moved through the state machine. This was
        # previously concealed by exception retry logic that detected the
        # failure, and resubmitted the request in python-ironicclient.
        try:
            self.wait_for_baremetal_node_lock(machine, timeout=timeout)
        except OpenStackCloudException as e:
            raise OpenStackCloudException("Error unregistering node '%s': "
                                          "Exception occurred while waiting "
                                          "to be able to proceed: %s"
                                          % (machine['uuid'], e))

        for nic in nics:
            port_msg = ("Error removing NIC {nic} from baremetal API for "
                        "node {uuid}").format(nic=nic, uuid=uuid)
            port_url = '/ports/detail?address={mac}'.format(mac=nic['mac'])
            port = self._baremetal_client.get(port_url, microversion="1.6",
                                              error_message=port_msg)
            port_url = '/ports/{uuid}'.format(uuid=port['ports'][0]['uuid'])
            _utils._call_client_and_retry(self._baremetal_client.delete,
                                          port_url,
                                          retry_on=[409, 503],
                                          error_message=port_msg)
        with _utils.shade_exceptions(
                "Error unregistering machine {node_id} from the baremetal "
                "API".format(node_id=uuid)):
            # NOTE(TheJulia): While this should not matter microversion
            # wise, ironic assumes all calls without an explicit
            # microversion to be version 1.0. Ironic expects to deprecate
            # support for older microversions in future releases, as such,
            # we explicitly set the version to what we have been using
            # with the client library.
            version = "1.6"
            msg = "Baremetal machine failed to be deleted"
            url = '/nodes/{node_id}'.format(node_id=uuid)
            _utils._call_client_and_retry(self._baremetal_client.delete,
                                          url,
                                          retry_on=[409, 503],
                                          error_message=msg,
                                          microversion=version)

            if wait:
                for count in utils.iterate_timeout(
                        timeout,
                        "Timeout waiting for machine to be deleted"):
                    if not self.get_machine(uuid):
                        break

    def patch_machine(self, name_or_id, patch):
        """Patch Machine Information

        This method allows for an interface to manipulate node entries
        within Ironic.

        :param name_or_id: The name or UUID of the node to be updated.
        :param patch: The JSON Patch document is a list of dictionary
            objects that comply with RFC 6902 which can be found at
            https://tools.ietf.org/html/rfc6902.
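            To illustrate the RFC 6902 semantics, a minimal,
            dependency-free applier for the three operations used in the
            example below (a local sketch for illustration only; the real
            patch is applied server-side by Ironic):

            ```python
            import copy

            def apply_json_patch(doc, patch):
                # Supports only the 'add', 'replace' and 'remove'
                # operations shown here; paths are one or two levels deep.
                doc = copy.deepcopy(doc)
                for op in patch:
                    parts = op['path'].lstrip('/').split('/')
                    target = doc
                    for key in parts[:-1]:
                        target = target.setdefault(key, {})
                    if op['op'] in ('add', 'replace'):
                        target[parts[-1]] = op['value']
                    elif op['op'] == 'remove':
                        target.pop(parts[-1], None)
                return doc

            node = {'name': 'oldname', 'instance_info': {'image': 'x'}}
            patched = apply_json_patch(node, [
                {'op': 'remove', 'path': '/instance_info'},
                {'op': 'replace', 'path': '/name', 'value': 'newname'},
                {'op': 'add', 'path': '/driver_info/username',
                 'value': 'administrator'},
            ])
            ```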
            Example patch construction::

                patch=[]
                patch.append({
                    'op': 'remove',
                    'path': '/instance_info'
                })
                patch.append({
                    'op': 'replace',
                    'path': '/name',
                    'value': 'newname'
                })
                patch.append({
                    'op': 'add',
                    'path': '/driver_info/username',
                    'value': 'administrator'
                })

        :raises: OpenStackCloudException on operation error.

        :returns: ``munch.Munch`` representing the newly updated node.
        """
        msg = ("Error updating machine via patch operation on node "
               "{node}".format(node=name_or_id))
        url = '/nodes/{node_id}'.format(node_id=name_or_id)
        return self._normalize_machine(
            self._baremetal_client.patch(url,
                                         json=patch,
                                         error_message=msg))

    def update_machine(self, name_or_id, chassis_uuid=None, driver=None,
                       driver_info=None, name=None, instance_info=None,
                       instance_uuid=None, properties=None):
        """Update a machine with new configuration information

        A user-friendly method to perform updates of a machine, in whole
        or part.

        :param string name_or_id: A machine name or UUID to be updated.
        :param string chassis_uuid: Assign a chassis UUID to the machine.
            NOTE: As of the Kilo release, this value cannot be changed
            once set. If a user attempts to change this value, then the
            Ironic API, as of Kilo, will reject the request.
        :param string driver: The driver name for controlling the machine.
        :param dict driver_info: The dictionary defining the configuration
            that the driver will utilize to control the machine.
            Permutations of this are dependent upon the specific driver
            utilized.
        :param string name: A human-readable name to represent the machine.
        :param dict instance_info: A dictionary of configuration
            information that conveys to the driver how the host is to be
            configured when deployed.
        :param string instance_uuid: A UUID value representing the instance
            that the deployed machine represents.
        :param dict properties: A dictionary defining the properties of a
            machine.

        :raises: OpenStackCloudException on operation error.
        :returns: ``munch.Munch`` containing a machine sub-dictionary
            consisting of the updated data returned from the API update
            operation, and a list named changes which contains all of the
            API paths that received updates.
        """
        machine = self.get_machine(name_or_id)
        if not machine:
            raise OpenStackCloudException(
                "Machine update failed to find Machine: %s. " % name_or_id)

        machine_config = {}
        new_config = {}

        try:
            if chassis_uuid:
                machine_config['chassis_uuid'] = machine['chassis_uuid']
                new_config['chassis_uuid'] = chassis_uuid

            if driver:
                machine_config['driver'] = machine['driver']
                new_config['driver'] = driver

            if driver_info:
                machine_config['driver_info'] = machine['driver_info']
                new_config['driver_info'] = driver_info

            if name:
                machine_config['name'] = machine['name']
                new_config['name'] = name

            if instance_info:
                machine_config['instance_info'] = machine['instance_info']
                new_config['instance_info'] = instance_info

            if instance_uuid:
                machine_config['instance_uuid'] = machine['instance_uuid']
                new_config['instance_uuid'] = instance_uuid

            if properties:
                machine_config['properties'] = machine['properties']
                new_config['properties'] = properties
        except KeyError as e:
            self.log.debug(
                "Unexpected machine response missing key %s [%s]",
                e.args[0], name_or_id)
            raise OpenStackCloudException(
                "Machine update failed - machine [%s] missing key %s. "
                "Potential API issue." % (name_or_id, e.args[0]))

        try:
            patch = jsonpatch.JsonPatch.from_diff(machine_config,
                                                  new_config)
        except Exception as e:
            raise OpenStackCloudException(
                "Machine update failed - Error generating JSON patch object "
                "for submission to the API."
                " Machine: %s Error: %s" % (name_or_id, str(e)))

        with _utils.shade_exceptions(
            "Machine update failed - patch operation failed on Machine "
            "{node}".format(node=name_or_id)
        ):
            if not patch:
                return dict(
                    node=machine,
                    changes=None
                )
            else:
                machine = self.patch_machine(machine['uuid'], list(patch))
                change_list = []
                for change in list(patch):
                    change_list.append(change['path'])
                return dict(
                    node=machine,
                    changes=change_list
                )

    def validate_node(self, uuid):
        # TODO(TheJulia): There are soooooo many other interfaces
        # that we can support validating, while these are essential,
        # we should support more.
        # TODO(TheJulia): Add a doc string :(
        msg = ("Failed to query the API for validation status of "
               "node {node_id}").format(node_id=uuid)
        url = '/nodes/{node_id}/validate'.format(node_id=uuid)
        ifaces = self._baremetal_client.get(url, error_message=msg)

        if not ifaces['deploy'] or not ifaces['power']:
            raise OpenStackCloudException(
                "ironic node %s failed to validate. "
                "(deploy: %s, power: %s)"
                % (uuid, ifaces['deploy'], ifaces['power']))

    def node_set_provision_state(self, name_or_id, state, configdrive=None,
                                 wait=False, timeout=3600):
        """Set Node Provision State

        Enables a user to provision a Machine and optionally define a
        config drive to be utilized.

        :param string name_or_id: The Name or UUID value representing the
            baremetal node.
        :param string state: The desired provision state for the
            baremetal node.
        :param string configdrive: An optional URL or file or path
            representing the configdrive. In the case of a directory, the
            client API will create a properly formatted configuration
            drive file and post the file contents to the API for
            deployment.
        :param boolean wait: A boolean value, defaulted to false, to
            control if the method will wait for the desired end state to
            be reached before returning.
        :param integer timeout: Integer value, defaulting to 3600 seconds,
            representing the amount of time to wait for the desired end
            state to be reached.

        :raises: OpenStackCloudException on operation error.
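        The wait loop in this method decides when a transition is complete
        by substring-matching the node's provision state against the
        requested target. That check, sketched in isolation
        (``transition_complete`` is a hypothetical helper name):

        ```python
        def transition_complete(provision_state, target):
            # A failed state always aborts the wait.
            if 'failed' in provision_state:
                raise RuntimeError('Machine encountered a failure.')
            # The requested target may appear verbatim in the reached
            # state...
            if target in provision_state:
                return True
            # ...or the action ('provide'/'deleted') may terminate in the
            # 'available' state instead.
            return ('available' in provision_state
                    and target in ('provide', 'deleted'))
        ```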
        :returns: ``munch.Munch`` representing the current state of the
            machine upon exit of the method.
        """
        # NOTE(TheJulia): Default microversion for this call is 1.6.
        # Setting locally until we have determined our master plan
        # regarding microversion handling.
        version = "1.6"
        msg = ("Baremetal machine node failed to change provision state "
               "to {state}".format(state=state))

        url = '/nodes/{node_id}/states/provision'.format(
            node_id=name_or_id)
        payload = {'target': state}
        if configdrive:
            payload['configdrive'] = configdrive
        machine = _utils._call_client_and_retry(self._baremetal_client.put,
                                                url,
                                                retry_on=[409, 503],
                                                json=payload,
                                                error_message=msg,
                                                microversion=version)
        if wait:
            for count in utils.iterate_timeout(
                    timeout,
                    "Timeout waiting for node transition to "
                    "target state of '%s'" % state):
                machine = self.get_machine(name_or_id)
                if 'failed' in machine['provision_state']:
                    raise OpenStackCloudException(
                        "Machine encountered a failure.")
                # NOTE(TheJulia): This performs matching if the requested
                # end state matches the state the node has reached.
                if state in machine['provision_state']:
                    break
                # NOTE(TheJulia): This performs matching for cases where
                # the requested state action ends in available state.
                if ("available" in machine['provision_state'] and
                        state in ["provide", "deleted"]):
                    break
        else:
            machine = self.get_machine(name_or_id)
        return machine

    def set_machine_maintenance_state(
            self, name_or_id, state=True, reason=None):
        """Set Baremetal Machine Maintenance State

        Sets Baremetal maintenance state and maintenance reason.

        :param string name_or_id: The Name or UUID value representing the
            baremetal node.
        :param boolean state: The desired state of the node. True being in
            maintenance whereas False means the machine is not in
            maintenance mode. This value defaults to True if not
            explicitly set.
        :param string reason: An optional freeform string that is supplied
            to the baremetal API to allow for notation as to why the node
            is in maintenance state.
        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        msg = ("Error setting machine maintenance state to {state} on "
               "node {node}").format(state=state, node=name_or_id)
        url = '/nodes/{name_or_id}/maintenance'.format(
            name_or_id=name_or_id)
        if state:
            payload = {'reason': reason}
            self._baremetal_client.put(url, json=payload,
                                       error_message=msg)
        else:
            self._baremetal_client.delete(url, error_message=msg)
        return None

    def remove_machine_from_maintenance(self, name_or_id):
        """Remove Baremetal Machine from Maintenance State

        Similar to set_machine_maintenance_state, this method removes a
        machine from maintenance state. It must be noted that this method
        simply calls set_machine_maintenance_state for the name_or_id
        requested and sets the state to False.

        :param string name_or_id: The Name or UUID value representing the
            baremetal node.

        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        self.set_machine_maintenance_state(name_or_id, False)

    def _set_machine_power_state(self, name_or_id, state):
        """Set machine power state to on or off

        This private method allows a user to turn power on or off to a
        node via the Baremetal API.

        :param string name_or_id: A string representing the baremetal node
            whose power state is to be changed.
        :param string state: A value of "on", "off", or "reboot" that is
            passed to the baremetal API to be asserted to the machine. In
            the case of the "reboot" state, Ironic will return the host to
            the "on" state.

        :raises: OpenStackCloudException on operation error.
        :returns: None
        """
        msg = ("Error setting machine power state to {state} on node "
               "{node}").format(state=state, node=name_or_id)
        url = '/nodes/{name_or_id}/states/power'.format(
            name_or_id=name_or_id)
        if 'reboot' in state:
            desired_state = 'rebooting'
        else:
            desired_state = 'power {state}'.format(state=state)
        payload = {'target': desired_state}
        _utils._call_client_and_retry(self._baremetal_client.put,
                                      url,
                                      retry_on=[409, 503],
                                      json=payload,
                                      error_message=msg,
                                      microversion="1.6")
        return None

    def set_machine_power_on(self, name_or_id):
        """Activate baremetal machine power

        This is a method that sets the node power state to "on".

        :param string name_or_id: A string representing the baremetal node
            to have power turned to an "on" state.

        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        self._set_machine_power_state(name_or_id, 'on')

    def set_machine_power_off(self, name_or_id):
        """De-activate baremetal machine power

        This is a method that sets the node power state to "off".

        :param string name_or_id: A string representing the baremetal node
            to have power turned to an "off" state.

        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        self._set_machine_power_state(name_or_id, 'off')

    def set_machine_power_reboot(self, name_or_id):
        """Reboot baremetal machine power

        This is a method that sets the node power state to "reboot", which
        in essence changes the machine power state to "off", and then back
        to "on".

        :param string name_or_id: A string representing the baremetal node
            to be rebooted.

        :raises: OpenStackCloudException on operation error.
        :returns: None
        """
        self._set_machine_power_state(name_or_id, 'reboot')

    def activate_node(self, uuid, configdrive=None,
                      wait=False, timeout=1200):
        self.node_set_provision_state(
            uuid, 'active', configdrive, wait=wait, timeout=timeout)

    def deactivate_node(self, uuid, wait=False, timeout=1200):
        self.node_set_provision_state(
            uuid, 'deleted', wait=wait, timeout=timeout)

    def set_node_instance_info(self, uuid, patch):
        msg = ("Error updating machine via patch operation on node "
               "{node}".format(node=uuid))
        url = '/nodes/{node_id}'.format(node_id=uuid)
        return self._baremetal_client.patch(url,
                                            json=patch,
                                            error_message=msg)

    def purge_node_instance_info(self, uuid):
        patch = []
        patch.append({'op': 'remove', 'path': '/instance_info'})
        msg = ("Error updating machine via patch operation on node "
               "{node}".format(node=uuid))
        url = '/nodes/{node_id}'.format(node_id=uuid)
        return self._baremetal_client.patch(url,
                                            json=patch,
                                            error_message=msg)

    def wait_for_baremetal_node_lock(self, node, timeout=30):
        """Wait for a baremetal node to have no lock.

        Baremetal nodes in ironic have a reservation lock that is used to
        represent that a conductor has locked the node while performing
        some sort of action, such as changing configuration as a result of
        a machine state change. This lock can occur during power
        synchronization, and prevents updates to objects attached to the
        node, such as ports.

        In the vast majority of cases, locks should clear in a few
        seconds, and as such this method will only wait for 30 seconds.
        The default wait is two seconds between checking if the lock has
        cleared.

        This method is intended for use by methods that need to gracefully
        block without generating errors, however this method does not
        prevent another client or a timer from triggering a lock
        immediately after we see the lock as having cleared.

        :param node: The json representation of the node, specifically
            looking for the node 'uuid' and 'reservation' fields.
        :param timeout: Integer in seconds to wait for the lock to clear.
            Default: 30

        :raises: OpenStackCloudException upon client failure.
        :returns: None
        """
        # TODO(TheJulia): This _can_ still fail with a race condition:
        # between our check of the status and our next request, a
        # conductor could still obtain a lock before we are able to.
        # This means callers should be prepared to retry in such
        # conditions.
        if node['reservation'] is None:
            return
        else:
            msg = 'Waiting for lock to be released for node {node}'.format(
                node=node['uuid'])
            for count in utils.iterate_timeout(timeout, msg, 2):
                current_node = self.get_machine(node['uuid'])
                if current_node['reservation'] is None:
                    return

    @_utils.valid_kwargs('type', 'service_type', 'description')
    def create_service(self, name, enabled=True, **kwargs):
        """Create a service.

        :param name: Service name.
        :param type: Service type. (type or service_type required.)
        :param service_type: Service type. (type or service_type required.)
        :param description: Service description (optional).
        :param enabled: Whether the service is enabled (v3 only)

        :returns: a ``munch.Munch`` containing the service description,
            i.e. the following attributes::

            - id:
            - name:
            - type:
            - service_type:
            - description:

        :raises: ``OpenStackCloudException`` if something goes wrong
            during the openstack API call.
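        This method selects the endpoint URL and payload key based on the
        identity API version in use. That selection, sketched standalone
        (``service_request`` is a hypothetical helper name):

        ```python
        def service_request(identity_version, name, type_, enabled=True):
            # Keystone v2 exposes services via the OS-KSADM extension;
            # v3 uses the plain /services endpoint and 'service' key.
            if identity_version == 2:
                url, key = '/OS-KSADM/services', 'OS-KSADM:service'
            else:
                url, key = '/services', 'service'
            body = {'name': name, 'type': type_, 'enabled': enabled}
            return url, {key: body}

        url, payload = service_request(3, 'neutron', 'network')
        ```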
""" type_ = kwargs.pop('type', None) service_type = kwargs.pop('service_type', None) # TODO(mordred) When this changes to REST, force interface=admin # in the adapter call if self._is_client_version('identity', 2): url, key = '/OS-KSADM/services', 'OS-KSADM:service' kwargs['type'] = type_ or service_type else: url, key = '/services', 'service' kwargs['type'] = type_ or service_type kwargs['enabled'] = enabled kwargs['name'] = name msg = 'Failed to create service {name}'.format(name=name) data = self._identity_client.post( url, json={key: kwargs}, error_message=msg) service = self._get_and_munchify(key, data) return _utils.normalize_keystone_services([service])[0] @_utils.valid_kwargs('name', 'enabled', 'type', 'service_type', 'description') def update_service(self, name_or_id, **kwargs): # NOTE(SamYaple): Service updates are only available on v3 api if self._is_client_version('identity', 2): raise OpenStackCloudUnavailableFeature( 'Unavailable Feature: Service update requires Identity v3' ) # NOTE(SamYaple): Keystone v3 only accepts 'type' but shade accepts # both 'type' and 'service_type' with a preference # towards 'type' type_ = kwargs.pop('type', None) service_type = kwargs.pop('service_type', None) if type_ or service_type: kwargs['type'] = type_ or service_type if self._is_client_version('identity', 2): url, key = '/OS-KSADM/services', 'OS-KSADM:service' else: url, key = '/services', 'service' service = self.get_service(name_or_id) msg = 'Error in updating service {service}'.format(service=name_or_id) data = self._identity_client.patch( '{url}/{id}'.format(url=url, id=service['id']), json={key: kwargs}, endpoint_filter={'interface': 'admin'}, error_message=msg) service = self._get_and_munchify(key, data) return _utils.normalize_keystone_services([service])[0] def list_services(self): """List all Keystone services. 
:returns: a list of ``munch.Munch`` containing the services description :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ if self._is_client_version('identity', 2): url, key = '/OS-KSADM/services', 'OS-KSADM:services' else: url, key = '/services', 'services' data = self._identity_client.get( url, endpoint_filter={'interface': 'admin'}, error_message="Failed to list services") services = self._get_and_munchify(key, data) return _utils.normalize_keystone_services(services) def search_services(self, name_or_id=None, filters=None): """Search Keystone services. :param name_or_id: Name or id of the desired service. :param filters: a dict containing additional filters to use. e.g. {'type': 'network'}. :returns: a list of ``munch.Munch`` containing the services description :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ services = self.list_services() return _utils._filter_list(services, name_or_id, filters) def get_service(self, name_or_id, filters=None): """Get exactly one Keystone service. :param name_or_id: Name or id of the desired service. :param filters: a dict containing additional filters to use. e.g. {'type': 'network'} :returns: a ``munch.Munch`` containing the services description, i.e. the following attributes:: - id: - name: - type: - description: :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call or if multiple matches are found. """ return _utils._get_entity(self, 'service', name_or_id, filters) def delete_service(self, name_or_id): """Delete a Keystone service. :param name_or_id: Service name or id. :returns: True if delete succeeded, False otherwise. 
:raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ service = self.get_service(name_or_id=name_or_id) if service is None: self.log.debug("Service %s not found for deleting", name_or_id) return False if self._is_client_version('identity', 2): url = '/OS-KSADM/services' else: url = '/services' error_msg = 'Failed to delete service {id}'.format(id=service['id']) self._identity_client.delete( '{url}/{id}'.format(url=url, id=service['id']), endpoint_filter={'interface': 'admin'}, error_message=error_msg) return True @_utils.valid_kwargs('public_url', 'internal_url', 'admin_url') def create_endpoint(self, service_name_or_id, url=None, interface=None, region=None, enabled=True, **kwargs): """Create a Keystone endpoint. :param service_name_or_id: Service name or id for this endpoint. :param url: URL of the endpoint :param interface: Interface type of the endpoint :param public_url: Endpoint public URL. :param internal_url: Endpoint internal URL. :param admin_url: Endpoint admin URL. :param region: Endpoint region. :param enabled: Whether the endpoint is enabled NOTE: Both v2 (public_url, internal_url, admin_url) and v3 (url, interface) calling semantics are supported. But you can only use one of them at a time. :returns: a list of ``munch.Munch`` containing the endpoint description :raises: OpenStackCloudException if the service cannot be found or if something goes wrong during the openstack API call. 
""" public_url = kwargs.pop('public_url', None) internal_url = kwargs.pop('internal_url', None) admin_url = kwargs.pop('admin_url', None) if (url or interface) and (public_url or internal_url or admin_url): raise OpenStackCloudException( "create_endpoint takes either url and interface OR" " public_url, internal_url, admin_url") service = self.get_service(name_or_id=service_name_or_id) if service is None: raise OpenStackCloudException("service {service} not found".format( service=service_name_or_id)) if self._is_client_version('identity', 2): if url: # v2.0 in use, v3-like arguments, one endpoint created if interface != 'public': raise OpenStackCloudException( "Error adding endpoint for service {service}." " On a v2 cloud the url/interface API may only be" " used for public url. Try using the public_url," " internal_url, admin_url parameters instead of" " url and interface".format( service=service_name_or_id)) endpoint_args = {'publicurl': url} else: # v2.0 in use, v2.0-like arguments, one endpoint created endpoint_args = {} if public_url: endpoint_args.update({'publicurl': public_url}) if internal_url: endpoint_args.update({'internalurl': internal_url}) if admin_url: endpoint_args.update({'adminurl': admin_url}) # keystone v2.0 requires 'region' arg even if it is None endpoint_args.update( {'service_id': service['id'], 'region': region}) data = self._identity_client.post( '/endpoints', json={'endpoint': endpoint_args}, endpoint_filter={'interface': 'admin'}, error_message=("Failed to create endpoint for service" " {service}".format(service=service['name']))) return [self._get_and_munchify('endpoint', data)] else: endpoints_args = [] if url: # v3 in use, v3-like arguments, one endpoint created endpoints_args.append( {'url': url, 'interface': interface, 'service_id': service['id'], 'enabled': enabled, 'region': region}) else: # v3 in use, v2.0-like arguments, one endpoint created for each # interface url provided endpoint_args = {'region': region, 'enabled': enabled, 
'service_id': service['id']} if public_url: endpoint_args.update({'url': public_url, 'interface': 'public'}) endpoints_args.append(endpoint_args.copy()) if internal_url: endpoint_args.update({'url': internal_url, 'interface': 'internal'}) endpoints_args.append(endpoint_args.copy()) if admin_url: endpoint_args.update({'url': admin_url, 'interface': 'admin'}) endpoints_args.append(endpoint_args.copy()) endpoints = [] error_msg = ("Failed to create endpoint for service" " {service}".format(service=service['name'])) for args in endpoints_args: data = self._identity_client.post( '/endpoints', json={'endpoint': args}, error_message=error_msg) endpoints.append(self._get_and_munchify('endpoint', data)) return endpoints @_utils.valid_kwargs('enabled', 'service_name_or_id', 'url', 'interface', 'region') def update_endpoint(self, endpoint_id, **kwargs): """Update a Keystone endpoint (Identity v3 only). :param endpoint_id: ID of the endpoint to update. :param enabled: Enable or disable the endpoint. :param service_name_or_id: Service to associate the endpoint with; passed to the API as ``service_id``. :param url: New endpoint URL. :param interface: New endpoint interface type. :param region: New endpoint region. :returns: a ``munch.Munch`` containing the updated endpoint description :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ # NOTE(SamYaple): Endpoint updates are only available on v3 api if self._is_client_version('identity', 2): raise OpenStackCloudUnavailableFeature( 'Unavailable Feature: Endpoint update' ) service_name_or_id = kwargs.pop('service_name_or_id', None) if service_name_or_id is not None: kwargs['service_id'] = service_name_or_id data = self._identity_client.patch( '/endpoints/{}'.format(endpoint_id), json={'endpoint': kwargs}, error_message="Failed to update endpoint {}".format(endpoint_id)) return self._get_and_munchify('endpoint', data) def list_endpoints(self): """List Keystone endpoints. :returns: a list of ``munch.Munch`` containing the endpoint description :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
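When ``create_endpoint`` receives v2-style arguments against a v3 cloud, it expands each provided URL into its own endpoint body sharing the same region, enabled flag, and service id. A standalone sketch of that expansion (``build_v3_endpoint_bodies`` is an illustrative name, not part of the SDK):

```python
def build_v3_endpoint_bodies(service_id, region=None, enabled=True,
                             public_url=None, internal_url=None,
                             admin_url=None):
    # Expand v2-style URL arguments into one v3 endpoint body per
    # interface, mirroring the loop in create_endpoint above.
    # Illustrative sketch, not the SDK implementation.
    base = {'region': region, 'enabled': enabled, 'service_id': service_id}
    bodies = []
    for interface, url in (('public', public_url),
                           ('internal', internal_url),
                           ('admin', admin_url)):
        if url:
            body = dict(base)          # copy so bodies stay independent
            body.update({'url': url, 'interface': interface})
            bodies.append(body)
    return bodies
```

Copying the base dict per interface is the same reason the source calls ``endpoint_args.copy()`` before each append: otherwise every body would alias the last URL written.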
""" # Force admin interface if v2.0 is in use v2 = self._is_client_version('identity', 2) kwargs = {'endpoint_filter': {'interface': 'admin'}} if v2 else {} data = self._identity_client.get( '/endpoints', error_message="Failed to list endpoints", **kwargs) endpoints = self._get_and_munchify('endpoints', data) return endpoints def search_endpoints(self, id=None, filters=None): """List Keystone endpoints. :param id: endpoint id. :param filters: a dict containing additional filters to use. e.g. {'region': 'region-a.geo-1'} :returns: a list of ``munch.Munch`` containing the endpoint description. Each dict contains the following attributes:: - id: - region: - public_url: - internal_url: (optional) - admin_url: (optional) :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ # NOTE(SamYaple): With keystone v3 we can filter directly via the # the keystone api, but since the return of all the endpoints even in # large environments is small, we can continue to filter in shade just # like the v2 api. endpoints = self.list_endpoints() return _utils._filter_list(endpoints, id, filters) def get_endpoint(self, id, filters=None): """Get exactly one Keystone endpoint. :param id: endpoint id. :param filters: a dict containing additional filters to use. e.g. {'region': 'region-a.geo-1'} :returns: a ``munch.Munch`` containing the endpoint description. i.e. a ``munch.Munch`` containing the following attributes:: - id: - region: - public_url: - internal_url: (optional) - admin_url: (optional) """ return _utils._get_entity(self, 'endpoint', id, filters) def delete_endpoint(self, id): """Delete a Keystone endpoint. :param id: Id of the endpoint to delete. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. 
""" endpoint = self.get_endpoint(id=id) if endpoint is None: self.log.debug("Endpoint %s not found for deleting", id) return False # Force admin interface if v2.0 is in use v2 = self._is_client_version('identity', 2) kwargs = {'endpoint_filter': {'interface': 'admin'}} if v2 else {} error_msg = "Failed to delete endpoint {id}".format(id=id) self._identity_client.delete('/endpoints/{id}'.format(id=id), error_message=error_msg, **kwargs) return True def create_domain(self, name, description=None, enabled=True): """Create a domain. :param name: The name of the domain. :param description: A description of the domain. :param enabled: Is the domain enabled or not (default True). :returns: a ``munch.Munch`` containing the domain representation. :raise OpenStackCloudException: if the domain cannot be created. """ domain_ref = {'name': name, 'enabled': enabled} if description is not None: domain_ref['description'] = description msg = 'Failed to create domain {name}'.format(name=name) data = self._identity_client.post( '/domains', json={'domain': domain_ref}, error_message=msg) domain = self._get_and_munchify('domain', data) return _utils.normalize_domains([domain])[0] def update_domain( self, domain_id=None, name=None, description=None, enabled=None, name_or_id=None): if domain_id is None: if name_or_id is None: raise OpenStackCloudException( "You must pass either domain_id or name_or_id value" ) dom = self.get_domain(None, name_or_id) if dom is None: raise OpenStackCloudException( "Domain {0} not found for updating".format(name_or_id) ) domain_id = dom['id'] domain_ref = {} domain_ref.update({'name': name} if name else {}) domain_ref.update({'description': description} if description else {}) domain_ref.update({'enabled': enabled} if enabled is not None else {}) error_msg = "Error in updating domain {id}".format(id=domain_id) data = self._identity_client.patch( '/domains/{id}'.format(id=domain_id), json={'domain': domain_ref}, error_message=error_msg) domain = 
self._get_and_munchify('domain', data) return _utils.normalize_domains([domain])[0] def delete_domain(self, domain_id=None, name_or_id=None): """Delete a domain. :param domain_id: ID of the domain to delete. :param name_or_id: Name or ID of the domain to delete. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ if domain_id is None: if name_or_id is None: raise OpenStackCloudException( "You must pass either domain_id or name_or_id value" ) dom = self.get_domain(name_or_id=name_or_id) if dom is None: self.log.debug( "Domain %s not found for deleting", name_or_id) return False domain_id = dom['id'] # A domain must be disabled before deleting self.update_domain(domain_id, enabled=False) error_msg = "Failed to delete domain {id}".format(id=domain_id) self._identity_client.delete('/domains/{id}'.format(id=domain_id), error_message=error_msg) return True def list_domains(self, **filters): """List Keystone domains. :returns: a list of ``munch.Munch`` containing the domain description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ data = self._identity_client.get( '/domains', params=filters, error_message="Failed to list domains") domains = self._get_and_munchify('domains', data) return _utils.normalize_domains(domains) def search_domains(self, filters=None, name_or_id=None): """Search Keystone domains. :param name_or_id: domain name or id :param dict filters: A dict containing additional filters to use. Keys to search on are id, name, enabled and description. :returns: a list of ``munch.Munch`` containing the domain description. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" if filters is None: filters = {} if name_or_id is not None: domains = self.list_domains() return _utils._filter_list(domains, name_or_id, filters) else: return self.list_domains(**filters) def get_domain(self, domain_id=None, name_or_id=None, filters=None): """Get exactly one Keystone domain. :param domain_id: domain id. :param name_or_id: domain name or id. :param dict filters: A dict containing additional filters to use. Keys to search on are id, name, enabled and description. :returns: a ``munch.Munch`` containing the domain description, or None if not found. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ if domain_id is None: # NOTE(SamYaple): search_domains() has filters and name_or_id # in the wrong positional order which prevents _get_entity from # being able to return quickly if passing a domain object so we # duplicate that logic here if hasattr(name_or_id, 'id'): return name_or_id return _utils._get_entity(self, 'domain', filters, name_or_id) else: error_msg = 'Failed to get domain {id}'.format(id=domain_id) data = self._identity_client.get( '/domains/{id}'.format(id=domain_id), error_message=error_msg) domain = self._get_and_munchify('domain', data) return _utils.normalize_domains([domain])[0] @_utils.valid_kwargs('domain_id') @_utils.cache_on_arguments() def list_groups(self, **kwargs): """List Keystone Groups. :param domain_id: domain id. :returns: A list of ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ data = self._identity_client.get( '/groups', params=kwargs, error_message="Failed to list groups") return _utils.normalize_groups(self._get_and_munchify('groups', data)) @_utils.valid_kwargs('domain_id') def search_groups(self, name_or_id=None, filters=None, **kwargs): """Search Keystone groups. 
:param name_or_id: Group name or id. :param filters: A dict containing additional filters to use. :param domain_id: domain id. :returns: A list of ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ groups = self.list_groups(**kwargs) return _utils._filter_list(groups, name_or_id, filters) @_utils.valid_kwargs('domain_id') def get_group(self, name_or_id, filters=None, **kwargs): """Get exactly one Keystone group. :param name_or_id: Group name or id. :param filters: A dict containing additional filters to use. :param domain_id: domain id. :returns: A ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ return _utils._get_entity(self, 'group', name_or_id, filters, **kwargs) def create_group(self, name, description, domain=None): """Create a group. :param string name: Group name. :param string description: Group description. :param string domain: Domain name or ID for the group. :returns: A ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ group_ref = {'name': name} if description: group_ref['description'] = description if domain: dom = self.get_domain(domain) if not dom: raise OpenStackCloudException( "Creating group {group} failed: Invalid domain " "{domain}".format(group=name, domain=domain) ) group_ref['domain_id'] = dom['id'] error_msg = "Error creating group {group}".format(group=name) data = self._identity_client.post( '/groups', json={'group': group_ref}, error_message=error_msg) group = self._get_and_munchify('group', data) self.list_groups.invalidate(self) return _utils.normalize_groups([group])[0] @_utils.valid_kwargs('domain_id') def update_group(self, name_or_id, name=None, description=None, **kwargs): """Update an existing group :param string name: New group name. 
:param string description: New group description. :param domain_id: domain id. :returns: A ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ self.list_groups.invalidate(self) group = self.get_group(name_or_id, **kwargs) if group is None: raise OpenStackCloudException( "Group {0} not found for updating".format(name_or_id) ) group_ref = {} if name: group_ref['name'] = name if description: group_ref['description'] = description error_msg = "Unable to update group {name}".format(name=name_or_id) data = self._identity_client.patch( '/groups/{id}'.format(id=group['id']), json={'group': group_ref}, error_message=error_msg) group = self._get_and_munchify('group', data) self.list_groups.invalidate(self) return _utils.normalize_groups([group])[0] @_utils.valid_kwargs('domain_id') def delete_group(self, name_or_id, **kwargs): """Delete a group :param name_or_id: ID or name of the group to delete. :param domain_id: domain id. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ group = self.get_group(name_or_id, **kwargs) if group is None: self.log.debug( "Group %s not found for deleting", name_or_id) return False error_msg = "Unable to delete group {name}".format(name=name_or_id) self._identity_client.delete('/groups/{id}'.format(id=group['id']), error_message=error_msg) self.list_groups.invalidate(self) return True @_utils.valid_kwargs('domain_id') def list_roles(self, **kwargs): """List Keystone roles. :param domain_id: domain id for listing roles (v3) :returns: a list of ``munch.Munch`` containing the role description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" v2 = self._is_client_version('identity', 2) url = '/OS-KSADM/roles' if v2 else '/roles' data = self._identity_client.get( url, params=kwargs, error_message="Failed to list roles") return self._normalize_roles(self._get_and_munchify('roles', data)) @_utils.valid_kwargs('domain_id') def search_roles(self, name_or_id=None, filters=None, **kwargs): """Seach Keystone roles. :param string name: role name or id. :param dict filters: a dict containing additional filters to use. :param domain_id: domain id (v3) :returns: a list of ``munch.Munch`` containing the role description. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ roles = self.list_roles(**kwargs) return _utils._filter_list(roles, name_or_id, filters) @_utils.valid_kwargs('domain_id') def get_role(self, name_or_id, filters=None, **kwargs): """Get exactly one Keystone role. :param id: role name or id. :param filters: a dict containing additional filters to use. :param domain_id: domain id (v3) :returns: a single ``munch.Munch`` containing the role description. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" return _utils._get_entity(self, 'role', name_or_id, filters, **kwargs) def _keystone_v2_role_assignments(self, user, project=None, role=None, **kwargs): data = self._identity_client.get( "/tenants/{tenant}/users/{user}/roles".format( tenant=project, user=user), error_message="Failed to list role assignments") roles = self._get_and_munchify('roles', data) ret = [] for tmprole in roles: if role is not None and role != tmprole.id: continue ret.append({ 'role': { 'id': tmprole.id }, 'scope': { 'project': { 'id': project, } }, 'user': { 'id': user, } }) return ret def _keystone_v3_role_assignments(self, **filters): # NOTE(samueldmq): different parameters have different representation # patterns as query parameters in the call to the list role assignments # API. The code below handles each set of patterns separately and # renames the parameters names accordingly, ignoring 'effective', # 'include_names' and 'include_subtree' whose do not need any renaming. for k in ('group', 'role', 'user'): if k in filters: filters[k + '.id'] = filters[k] del filters[k] for k in ('project', 'domain'): if k in filters: filters['scope.' + k + '.id'] = filters[k] del filters[k] if 'os_inherit_extension_inherited_to' in filters: filters['scope.OS-INHERIT:inherited_to'] = ( filters['os_inherit_extension_inherited_to']) del filters['os_inherit_extension_inherited_to'] data = self._identity_client.get( '/role_assignments', params=filters, error_message="Failed to list role assignments") return self._get_and_munchify('role_assignments', data) def list_role_assignments(self, filters=None): """List Keystone role assignments :param dict filters: Dict of filter conditions. Acceptable keys are: * 'user' (string) - User ID to be used as query filter. * 'group' (string) - Group ID to be used as query filter. * 'project' (string) - Project ID to be used as query filter. * 'domain' (string) - Domain ID to be used as query filter. * 'role' (string) - Role ID to be used as query filter. 
* 'os_inherit_extension_inherited_to' (string) - Return inherited role assignments for either 'projects' or 'domains' * 'effective' (boolean) - Return effective role assignments. * 'include_subtree' (boolean) - Include subtree 'user' and 'group' are mutually exclusive, as are 'domain' and 'project'. NOTE: For keystone v2, only user, project, and role are used. Project and user are both required in filters. :returns: a list of ``munch.Munch`` containing the role assignment description. Contains the following attributes:: - id: - user|group: - project|domain: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ # NOTE(samueldmq): although 'include_names' is a valid query parameter # in the keystone v3 list role assignments API, it would have NO effect # on shade due to normalization. It is not documented as an acceptable # filter in the docs above per design! if not filters: filters = {} # NOTE(samueldmq): the docs above say filters are *IDs*, though if # munch.Munch objects are passed, this still works for backwards # compatibility as keystoneclient allows either IDs or objects to be # passed in. # TODO(samueldmq): fix the docs above to advertise munch.Munch objects # can be provided as parameters too for k, v in filters.items(): if isinstance(v, munch.Munch): filters[k] = v['id'] if self._is_client_version('identity', 2): if filters.get('project') is None or filters.get('user') is None: raise OpenStackCloudException( "Must provide project and user for keystone v2" ) assignments = self._keystone_v2_role_assignments(**filters) else: assignments = self._keystone_v3_role_assignments(**filters) return _utils.normalize_role_assignments(assignments) def create_flavor(self, name, ram, vcpus, disk, flavorid="auto", ephemeral=0, swap=0, rxtx_factor=1.0, is_public=True): """Create a new flavor. 
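The filter renaming performed by ``_keystone_v3_role_assignments`` is pure dictionary rewriting: friendly keys become the dotted query-parameter names the v3 API expects. A standalone sketch of the same mapping, operating on a copy instead of mutating in place (``rename_assignment_filters`` is an illustrative name):

```python
def rename_assignment_filters(filters):
    # Rewrite friendly filter names into the query-parameter names the
    # v3 list-role-assignments API expects, as the source method does.
    # Keys like 'effective' pass through unchanged.
    out = dict(filters)
    for k in ('group', 'role', 'user'):
        if k in out:
            out[k + '.id'] = out.pop(k)
    for k in ('project', 'domain'):
        if k in out:
            out['scope.' + k + '.id'] = out.pop(k)
    if 'os_inherit_extension_inherited_to' in out:
        out['scope.OS-INHERIT:inherited_to'] = out.pop(
            'os_inherit_extension_inherited_to')
    return out
```

The resulting dict is what ends up in ``params`` on the ``/role_assignments`` GET.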
:param name: Descriptive name of the flavor :param ram: Memory in MB for the flavor :param vcpus: Number of VCPUs for the flavor :param disk: Size of local disk in GB :param flavorid: ID for the flavor (optional) :param ephemeral: Ephemeral space size in GB :param swap: Swap space in MB :param rxtx_factor: RX/TX factor :param is_public: Make flavor accessible to the public :returns: A ``munch.Munch`` describing the new flavor. :raises: OpenStackCloudException on operation error. """ with _utils.shade_exceptions("Failed to create flavor {name}".format( name=name)): payload = { 'disk': disk, 'OS-FLV-EXT-DATA:ephemeral': ephemeral, 'id': flavorid, 'os-flavor-access:is_public': is_public, 'name': name, 'ram': ram, 'rxtx_factor': rxtx_factor, 'swap': swap, 'vcpus': vcpus, } if flavorid == 'auto': payload['id'] = None data = _adapter._json_response(self._conn.compute.post( '/flavors', json=dict(flavor=payload))) return self._normalize_flavor( self._get_and_munchify('flavor', data)) def delete_flavor(self, name_or_id): """Delete a flavor :param name_or_id: ID or name of the flavor to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ flavor = self.get_flavor(name_or_id, get_extra=False) if flavor is None: self.log.debug( "Flavor %s not found for deleting", name_or_id) return False _adapter._json_response( self._conn.compute.delete( '/flavors/{id}'.format(id=flavor['id'])), error_message="Unable to delete flavor {name}".format( name=name_or_id)) return True def set_flavor_specs(self, flavor_id, extra_specs): """Add extra specs to a flavor :param string flavor_id: ID of the flavor to update. :param dict extra_specs: Dictionary of key-value pairs. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudResourceNotFound if flavor ID is not found. 
""" _adapter._json_response( self._conn.compute.post( "/flavors/{id}/os-extra_specs".format(id=flavor_id), json=dict(extra_specs=extra_specs)), error_message="Unable to set flavor specs") def unset_flavor_specs(self, flavor_id, keys): """Delete extra specs from a flavor :param string flavor_id: ID of the flavor to update. :param keys: List of spec keys to delete. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudResourceNotFound if flavor ID is not found. """ for key in keys: _adapter._json_response( self._conn.compute.delete( "/flavors/{id}/os-extra_specs/{key}".format( id=flavor_id, key=key)), error_message="Unable to delete flavor spec {0}".format(key)) def _mod_flavor_access(self, action, flavor_id, project_id): """Common method for adding and removing flavor access """ with _utils.shade_exceptions("Error trying to {action} access from " "flavor ID {flavor}".format( action=action, flavor=flavor_id)): endpoint = '/flavors/{id}/action'.format(id=flavor_id) access = {'tenant': project_id} access_key = '{action}TenantAccess'.format(action=action) _adapter._json_response( self._conn.compute.post(endpoint, json={access_key: access})) def add_flavor_access(self, flavor_id, project_id): """Grant access to a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. :param string project_id: ID of the project/tenant. :raises: OpenStackCloudException on operation error. """ self._mod_flavor_access('add', flavor_id, project_id) def remove_flavor_access(self, flavor_id, project_id): """Revoke access from a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. :param string project_id: ID of the project/tenant. :raises: OpenStackCloudException on operation error. """ self._mod_flavor_access('remove', flavor_id, project_id) def list_flavor_access(self, flavor_id): """List access from a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. 
:returns: a list of ``munch.Munch`` containing the access description :raises: OpenStackCloudException on operation error. """ data = _adapter._json_response( self._conn.compute.get( '/flavors/{id}/os-flavor-access'.format(id=flavor_id)), error_message=( "Error trying to list access from flavor ID {flavor}".format( flavor=flavor_id))) return _utils.normalize_flavor_accesses( self._get_and_munchify('flavor_access', data)) @_utils.valid_kwargs('domain_id') def create_role(self, name, **kwargs): """Create a Keystone role. :param string name: The name of the role. :param domain_id: domain id (v3) :returns: a ``munch.Munch`` containing the role description :raise OpenStackCloudException: if the role cannot be created """ v2 = self._is_client_version('identity', 2) url = '/OS-KSADM/roles' if v2 else '/roles' kwargs['name'] = name msg = 'Failed to create role {name}'.format(name=name) data = self._identity_client.post( url, json={'role': kwargs}, error_message=msg) role = self._get_and_munchify('role', data) return self._normalize_role(role) @_utils.valid_kwargs('domain_id') def update_role(self, name_or_id, name, **kwargs): """Update a Keystone role. 
:param name_or_id: Name or id of the role to update :param string name: The new role name :param domain_id: domain id :returns: a ``munch.Munch`` containing the role description :raise OpenStackCloudException: if the role cannot be updated """ if self._is_client_version('identity', 2): raise OpenStackCloudUnavailableFeature( 'Unavailable Feature: Role update requires Identity v3' ) kwargs['name_or_id'] = name_or_id role = self.get_role(**kwargs) if role is None: self.log.debug( "Role %s not found for updating", name_or_id) return False msg = 'Failed to update role {name}'.format(name=name_or_id) json_kwargs = {'role_id': role.id, 'role': {'name': name}} data = self._identity_client.patch('/roles', error_message=msg, json=json_kwargs) role = self._get_and_munchify('role', data) return self._normalize_role(role) @_utils.valid_kwargs('domain_id') def delete_role(self, name_or_id, **kwargs): """Delete a Keystone role. :param string name_or_id: Name or id of the role to delete. :param domain_id: domain id (v3) :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. 
""" role = self.get_role(name_or_id, **kwargs) if role is None: self.log.debug( "Role %s not found for deleting", name_or_id) return False v2 = self._is_client_version('identity', 2) url = '{preffix}/{id}'.format( preffix='/OS-KSADM/roles' if v2 else '/roles', id=role['id']) error_msg = "Unable to delete role {name}".format(name=name_or_id) self._identity_client.delete(url, error_message=error_msg) return True def _get_grant_revoke_params(self, role, user=None, group=None, project=None, domain=None): role = self.get_role(role) if role is None: return {} data = {'role': role.id} # domain and group not available in keystone v2.0 is_keystone_v2 = self._is_client_version('identity', 2) filters = {} if not is_keystone_v2 and domain: filters['domain_id'] = data['domain'] = \ self.get_domain(domain)['id'] if user: data['user'] = self.get_user(user, filters=filters) if project: # drop domain in favor of project data.pop('domain', None) data['project'] = self.get_project(project, filters=filters) if not is_keystone_v2 and group: data['group'] = self.get_group(group, filters=filters) return data def grant_role(self, name_or_id, user=None, group=None, project=None, domain=None, wait=False, timeout=60): """Grant a role to a user. :param string name_or_id: The name or id of the role. :param string user: The name or id of the user. :param string group: The name or id of the group. (v3) :param string project: The name or id of the project. :param string domain: The id of the domain. (v3) :param bool wait: Wait for role to be granted :param int timeout: Timeout to wait for role to be granted NOTE: domain is a required argument when the grant is on a project, user or group specified by name. In that situation, they are all considered to be in that domain. If different domains are in use in the same role grant, it is required to specify those by ID. NOTE: for wait and timeout, sometimes granting roles is not instantaneous. 
NOTE: project is required for keystone v2 :returns: True if the role is assigned, otherwise False :raise OpenStackCloudException: if the role cannot be granted """ data = self._get_grant_revoke_params(name_or_id, user, group, project, domain) filters = data.copy() if not data: raise OpenStackCloudException( 'Role {0} not found.'.format(name_or_id)) if data.get('user') is not None and data.get('group') is not None: raise OpenStackCloudException( 'Specify either a group or a user, not both') if data.get('user') is None and data.get('group') is None: raise OpenStackCloudException( 'Must specify either a user or a group') if self._is_client_version('identity', 2) and \ data.get('project') is None: raise OpenStackCloudException( 'Must specify project for keystone v2') if self.list_role_assignments(filters=filters): self.log.debug('Assignment already exists') return False error_msg = "Error granting access to role: {0}".format(data) if self._is_client_version('identity', 2): # For v2.0, only tenant/project assignment is supported url = "/tenants/{t}/users/{u}/roles/OS-KSADM/{r}".format( t=data['project']['id'], u=data['user']['id'], r=data['role']) self._identity_client.put(url, error_message=error_msg, endpoint_filter={'interface': 'admin'}) else: if data.get('project') is None and data.get('domain') is None: raise OpenStackCloudException( 'Must specify either a domain or project') # For v3, figure out the assignment type and build the URL if data.get('domain'): url = "/domains/{}".format(data['domain']) else: url = "/projects/{}".format(data['project']['id']) if data.get('group'): url += "/groups/{}".format(data['group']['id']) else: url += "/users/{}".format(data['user']['id']) url += "/roles/{}".format(data.get('role')) self._identity_client.put(url, error_message=error_msg) if wait: for count in utils.iterate_timeout( timeout, "Timeout waiting for role to be granted"): if self.list_role_assignments(filters=filters): break return True def revoke_role(self, 
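For v3 grants, ``grant_role`` and ``revoke_role`` assemble the assignment URL from the resolved parameters: a domain or project scope segment, then a group or user segment, then the role. A standalone sketch of that URL building over the dict shape produced by ``_get_grant_revoke_params`` (note the source stores the domain as a bare id but the project as a full record; ``build_v3_grant_url`` is an illustrative name):

```python
def build_v3_grant_url(data):
    # Assemble the v3 role-assignment URL the same way grant_role and
    # revoke_role do. `data` mirrors _get_grant_revoke_params output:
    # 'domain' is an id string, 'project'/'user'/'group' are records.
    if data.get('domain'):
        url = '/domains/{}'.format(data['domain'])
    else:
        url = '/projects/{}'.format(data['project']['id'])
    if data.get('group'):
        url += '/groups/{}'.format(data['group']['id'])
    else:
        url += '/users/{}'.format(data['user']['id'])
    return url + '/roles/{}'.format(data.get('role'))
```

Granting PUTs this URL and revoking DELETEs it; the path itself fully identifies the assignment.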
                    name_or_id, user=None, group=None,
                    project=None, domain=None, wait=False, timeout=60):
        """Revoke a role from a user.

        :param string name_or_id: The name or id of the role.
        :param string user: The name or id of the user.
        :param string group: The name or id of the group. (v3)
        :param string project: The name or id of the project.
        :param string domain: The id of the domain. (v3)
        :param bool wait: Wait for role to be revoked
        :param int timeout: Timeout to wait for role to be revoked

        NOTE: for wait and timeout, sometimes revoking roles is not
            instantaneous.

        NOTE: project is required for keystone v2

        :returns: True if the role is revoked, otherwise False

        :raise OpenStackCloudException: if the role cannot be removed
        """
        data = self._get_grant_revoke_params(name_or_id, user, group,
                                             project, domain)
        filters = data.copy()
        if not data:
            raise OpenStackCloudException(
                'Role {0} not found.'.format(name_or_id))

        if data.get('user') is not None and data.get('group') is not None:
            raise OpenStackCloudException(
                'Specify either a group or a user, not both')
        if data.get('user') is None and data.get('group') is None:
            raise OpenStackCloudException(
                'Must specify either a user or a group')
        if self._is_client_version('identity', 2) and \
                data.get('project') is None:
            raise OpenStackCloudException(
                'Must specify project for keystone v2')

        if not self.list_role_assignments(filters=filters):
            self.log.debug('Assignment does not exist')
            return False

        error_msg = "Error revoking access to role: {0}".format(data)
        if self._is_client_version('identity', 2):
            # For v2.0, only tenant/project assignment is supported
            url = "/tenants/{t}/users/{u}/roles/OS-KSADM/{r}".format(
                t=data['project']['id'], u=data['user']['id'],
                r=data['role'])
            self._identity_client.delete(
                url, error_message=error_msg,
                endpoint_filter={'interface': 'admin'})
        else:
            if data.get('project') is None and data.get('domain') is None:
                raise OpenStackCloudException(
                    'Must specify either a domain or project')

            # For v3, figure out the assignment type and build the URL
            if data.get('domain'):
                url = "/domains/{}".format(data['domain'])
            else:
                url = "/projects/{}".format(data['project']['id'])
            if data.get('group'):
                url += "/groups/{}".format(data['group']['id'])
            else:
                url += "/users/{}".format(data['user']['id'])
            url += "/roles/{}".format(data.get('role'))
            self._identity_client.delete(url, error_message=error_msg)

        if wait:
            for count in utils.iterate_timeout(
                    timeout,
                    "Timeout waiting for role to be revoked"):
                if not self.list_role_assignments(filters=filters):
                    break
        return True

    def list_hypervisors(self):
        """List all hypervisors

        :returns: A list of hypervisor ``munch.Munch``.
        """
        data = _adapter._json_response(
            self._conn.compute.get('/os-hypervisors/detail'),
            error_message="Error fetching hypervisor list")
        return self._get_and_munchify('hypervisors', data)

    def search_aggregates(self, name_or_id=None, filters=None):
        """Search host aggregates.

        :param name_or_id: aggregate name or id.
        :param filters: a dict containing additional filters to use.

        :returns: a list of dicts containing the aggregates

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        aggregates = self.list_aggregates()
        return _utils._filter_list(aggregates, name_or_id, filters)

    def list_aggregates(self):
        """List all available host aggregates.

        :returns: A list of aggregate dicts.
        """
        data = _adapter._json_response(
            self._conn.compute.get('/os-aggregates'),
            error_message="Error fetching aggregate list")
        return self._get_and_munchify('aggregates', data)

    def get_aggregate(self, name_or_id, filters=None):
        """Get an aggregate by name or ID.

        :param name_or_id: Name or ID of the aggregate.
        :param dict filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'availability_zone': 'nova',
                  'metadata': {
                      'cpu_allocation_ratio': '1.0'
                  }
                }

        :returns: An aggregate dict or None if no matching aggregate is
            found.
""" return _utils._get_entity(self, 'aggregate', name_or_id, filters) def create_aggregate(self, name, availability_zone=None): """Create a new host aggregate. :param name: Name of the host aggregate being created :param availability_zone: Availability zone to assign hosts :returns: a dict representing the new host aggregate. :raises: OpenStackCloudException on operation error. """ data = _adapter._json_response( self._conn.compute.post( '/os-aggregates', json={'aggregate': { 'name': name, 'availability_zone': availability_zone }}), error_message="Unable to create host aggregate {name}".format( name=name)) return self._get_and_munchify('aggregate', data) @_utils.valid_kwargs('name', 'availability_zone') def update_aggregate(self, name_or_id, **kwargs): """Update a host aggregate. :param name_or_id: Name or ID of the aggregate being updated. :param name: New aggregate name :param availability_zone: Availability zone to assign to hosts :returns: a dict representing the updated host aggregate. :raises: OpenStackCloudException on operation error. """ aggregate = self.get_aggregate(name_or_id) if not aggregate: raise OpenStackCloudException( "Host aggregate %s not found." % name_or_id) data = _adapter._json_response( self._conn.compute.put( '/os-aggregates/{id}'.format(id=aggregate['id']), json={'aggregate': kwargs}), error_message="Error updating aggregate {name}".format( name=name_or_id)) return self._get_and_munchify('aggregate', data) def delete_aggregate(self, name_or_id): """Delete a host aggregate. :param name_or_id: Name or ID of the host aggregate to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. 
""" aggregate = self.get_aggregate(name_or_id) if not aggregate: self.log.debug("Aggregate %s not found for deleting", name_or_id) return False return _adapter._json_response( self._conn.compute.delete( '/os-aggregates/{id}'.format(id=aggregate['id'])), error_message="Error deleting aggregate {name}".format( name=name_or_id)) return True def set_aggregate_metadata(self, name_or_id, metadata): """Set aggregate metadata, replacing the existing metadata. :param name_or_id: Name of the host aggregate to update :param metadata: Dict containing metadata to replace (Use {'key': None} to remove a key) :returns: a dict representing the new host aggregate. :raises: OpenStackCloudException on operation error. """ aggregate = self.get_aggregate(name_or_id) if not aggregate: raise OpenStackCloudException( "Host aggregate %s not found." % name_or_id) err_msg = "Unable to set metadata for host aggregate {name}".format( name=name_or_id) data = _adapter._json_response( self._conn.compute.post( '/os-aggregates/{id}/action'.format(id=aggregate['id']), json={'set_metadata': {'metadata': metadata}}), error_message=err_msg) return self._get_and_munchify('aggregate', data) def add_host_to_aggregate(self, name_or_id, host_name): """Add a host to an aggregate. :param name_or_id: Name or ID of the host aggregate. :param host_name: Host to add. :raises: OpenStackCloudException on operation error. """ aggregate = self.get_aggregate(name_or_id) if not aggregate: raise OpenStackCloudException( "Host aggregate %s not found." % name_or_id) err_msg = "Unable to add host {host} to aggregate {name}".format( host=host_name, name=name_or_id) return _adapter._json_response( self._conn.compute.post( '/os-aggregates/{id}/action'.format(id=aggregate['id']), json={'add_host': {'host': host_name}}), error_message=err_msg) def remove_host_from_aggregate(self, name_or_id, host_name): """Remove a host from an aggregate. :param name_or_id: Name or ID of the host aggregate. :param host_name: Host to remove. 
        :raises: OpenStackCloudException on operation error.
        """
        aggregate = self.get_aggregate(name_or_id)
        if not aggregate:
            raise OpenStackCloudException(
                "Host aggregate %s not found." % name_or_id)

        err_msg = "Unable to remove host {host} from aggregate {name}".format(
            host=host_name, name=name_or_id)

        return _adapter._json_response(
            self._conn.compute.post(
                '/os-aggregates/{id}/action'.format(id=aggregate['id']),
                json={'remove_host': {'host': host_name}}),
            error_message=err_msg)

    def get_volume_type_access(self, name_or_id):
        """Return a list of volume_type_access.

        :param name_or_id: Name or ID of the volume type.

        :raises: OpenStackCloudException on operation error.
        """
        volume_type = self.get_volume_type(name_or_id)
        if not volume_type:
            raise OpenStackCloudException(
                "VolumeType not found: %s" % name_or_id)

        data = self._volume_client.get(
            '/types/{id}/os-volume-type-access'.format(id=volume_type.id),
            error_message="Unable to get volume type access"
                          " {name}".format(name=name_or_id))
        return self._normalize_volume_type_accesses(
            self._get_and_munchify('volume_type_access', data))

    def add_volume_type_access(self, name_or_id, project_id):
        """Grant access on a volume_type to a project.

        :param name_or_id: ID or name of a volume_type
        :param project_id: A project id

        NOTE: the call works even if the project does not exist.

        :raises: OpenStackCloudException on operation error.
        """
        volume_type = self.get_volume_type(name_or_id)
        if not volume_type:
            raise OpenStackCloudException(
                "VolumeType not found: %s" % name_or_id)
        with _utils.shade_exceptions():
            payload = {'project': project_id}
            self._volume_client.post(
                '/types/{id}/action'.format(id=volume_type.id),
                json=dict(addProjectAccess=payload),
                error_message="Unable to authorize {project} "
                              "to use volume type {name}".format(
                    name=name_or_id, project=project_id))

    def remove_volume_type_access(self, name_or_id, project_id):
        """Revoke a project's access to a volume_type.
        :param name_or_id: ID or name of a volume_type
        :param project_id: A project id

        :raises: OpenStackCloudException on operation error.
        """
        volume_type = self.get_volume_type(name_or_id)
        if not volume_type:
            raise OpenStackCloudException(
                "VolumeType not found: %s" % name_or_id)
        with _utils.shade_exceptions():
            payload = {'project': project_id}
            self._volume_client.post(
                '/types/{id}/action'.format(id=volume_type.id),
                json=dict(removeProjectAccess=payload),
                error_message="Unable to revoke access for {project} "
                              "to volume type {name}".format(
                    name=name_or_id, project=project_id))

    def set_compute_quotas(self, name_or_id, **kwargs):
        """ Set a quota in a project

        :param name_or_id: project name or id
        :param kwargs: key/value pairs of quota name and quota value

        :raises: OpenStackCloudException if the resource to set the quota
            does not exist.
        """
        proj = self.get_project(name_or_id)
        if not proj:
            raise OpenStackCloudException("project does not exist")

        # compute_quotas = {key: val for key, val in kwargs.items()
        #                   if key in quota.COMPUTE_QUOTAS}
        # TODO(ghe): Manage volume and network quotas
        # network_quotas = {key: val for key, val in kwargs.items()
        #                   if key in quota.NETWORK_QUOTAS}
        # volume_quotas = {key: val for key, val in kwargs.items()
        #                   if key in quota.VOLUME_QUOTAS}

        kwargs['force'] = True
        _adapter._json_response(
            self._conn.compute.put(
                '/os-quota-sets/{project}'.format(project=proj.id),
                json={'quota_set': kwargs}),
            error_message="No valid quota or resource")

    def get_compute_quotas(self, name_or_id):
        """ Get quota for a project

        :param name_or_id: project name or id
        :raises: OpenStackCloudException if it's not a valid project

        :returns: Munch object with the quotas
        """
        proj = self.get_project(name_or_id)
        if not proj:
            raise OpenStackCloudException("project does not exist")
        data = _adapter._json_response(
            self._conn.compute.get(
                '/os-quota-sets/{project}'.format(project=proj.id)))
        return self._get_and_munchify('quota_set', data)

    def delete_compute_quotas(self, name_or_id):
        """ Delete quota for a project

        :param name_or_id: project name or id
        :raises: OpenStackCloudException if it's not a valid project or the
            nova client call failed

        :returns: dict with the quotas
        """
        proj = self.get_project(name_or_id)
        if not proj:
            raise OpenStackCloudException("project does not exist")
        return _adapter._json_response(
            self._conn.compute.delete(
                '/os-quota-sets/{project}'.format(project=proj.id)))

    def get_compute_usage(self, name_or_id, start=None, end=None):
        """ Get usage for a specific project

        :param name_or_id: project name or id
        :param start: :class:`datetime.datetime` or string. Start date in UTC
            Defaults to 2010-07-06T12:00:00Z (the date the OpenStack project
            was started)
        :param end: :class:`datetime.datetime` or string. End date in UTC.
            Defaults to now
        :raises: OpenStackCloudException if it's not a valid project

        :returns: Munch object with the usage
        """
        def parse_date(date):
            try:
                return iso8601.parse_date(date)
            except iso8601.iso8601.ParseError:
                # Yes. This is an exception mask. However, iso8601 is an
                # implementation detail - and the error message is actually
                # less informative.
                raise OpenStackCloudException(
                    "Date given, {date}, is invalid. Please pass in a date"
                    " string in ISO 8601 format -"
                    " YYYY-MM-DDTHH:MM:SS".format(
                        date=date))

        def parse_datetime_for_nova(date):
            # Must strip tzinfo from the date - it breaks Nova. Also,
            # Nova is expecting this in UTC. If someone passes in an
            # ISO8601 date string or a datetime with timezone data attached,
            # strip the timezone data but apply offset math first so that
            # the user's well formed perfectly valid date will be used
            # correctly.
            offset = date.utcoffset()
            if offset:
                # utcoffset() returns a timedelta, so subtract it directly
                date = date - offset
            return date.replace(tzinfo=None)

        if not start:
            start = parse_date('2010-07-06')
        elif not isinstance(start, datetime.datetime):
            start = parse_date(start)
        if not end:
            end = datetime.datetime.utcnow()
        elif not isinstance(end, datetime.datetime):
            end = parse_date(end)

        start = parse_datetime_for_nova(start)
        end = parse_datetime_for_nova(end)

        proj = self.get_project(name_or_id)
        if not proj:
            raise OpenStackCloudException(
                "project does not exist: {name}".format(name=name_or_id))

        data = _adapter._json_response(
            self._conn.compute.get(
                '/os-simple-tenant-usage/{project}'.format(project=proj.id),
                params=dict(start=start.isoformat(), end=end.isoformat())),
            error_message="Unable to get usage for project: {name}".format(
                name=proj.id))
        return self._normalize_compute_usage(
            self._get_and_munchify('tenant_usage', data))

    def set_volume_quotas(self, name_or_id, **kwargs):
        """ Set a volume quota in a project

        :param name_or_id: project name or id
        :param kwargs: key/value pairs of quota name and quota value

        :raises: OpenStackCloudException if the resource to set the quota
            does not exist.
""" proj = self.get_project(name_or_id) if not proj: raise OpenStackCloudException("project does not exist") kwargs['tenant_id'] = proj.id self._volume_client.put( '/os-quota-sets/{tenant_id}'.format(tenant_id=proj.id), json={'quota_set': kwargs}, error_message="No valid quota or resource") def get_volume_quotas(self, name_or_id): """ Get volume quotas for a project :param name_or_id: project name or id :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the quotas """ proj = self.get_project(name_or_id) if not proj: raise OpenStackCloudException("project does not exist") data = self._volume_client.get( '/os-quota-sets/{tenant_id}'.format(tenant_id=proj.id), error_message="cinder client call failed") return self._get_and_munchify('quota_set', data) def delete_volume_quotas(self, name_or_id): """ Delete volume quotas for a project :param name_or_id: project name or id :raises: OpenStackCloudException if it's not a valid project or the cinder client call failed :returns: dict with the quotas """ proj = self.get_project(name_or_id) if not proj: raise OpenStackCloudException("project does not exist") return self._volume_client.delete( '/os-quota-sets/{tenant_id}'.format(tenant_id=proj.id), error_message="cinder client call failed") def set_network_quotas(self, name_or_id, **kwargs): """ Set a network quota in a project :param name_or_id: project name or id :param kwargs: key/value pairs of quota name and quota value :raises: OpenStackCloudException if the resource to set the quota does not exist. 
""" proj = self.get_project(name_or_id) if not proj: raise OpenStackCloudException("project does not exist") self._network_client.put( '/quotas/{project_id}.json'.format(project_id=proj.id), json={'quota': kwargs}, error_message=("Error setting Neutron's quota for " "project {0}".format(proj.id))) def get_network_quotas(self, name_or_id, details=False): """ Get network quotas for a project :param name_or_id: project name or id :param details: if set to True it will return details about usage of quotas by given project :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the quotas """ proj = self.get_project(name_or_id) if not proj: raise OpenStackCloudException("project does not exist") url = '/quotas/{project_id}'.format(project_id=proj.id) if details: url = url + "/details" url = url + ".json" data = self._network_client.get( url, error_message=("Error fetching Neutron's quota for " "project {0}".format(proj.id))) return self._get_and_munchify('quota', data) def get_network_extensions(self): """Get Cloud provided network extensions :returns: set of Neutron extension aliases """ return self._neutron_extensions() def delete_network_quotas(self, name_or_id): """ Delete network quotas for a project :param name_or_id: project name or id :raises: OpenStackCloudException if it's not a valid project or the network client call failed :returns: dict with the quotas """ proj = self.get_project(name_or_id) if not proj: raise OpenStackCloudException("project does not exist") self._network_client.delete( '/quotas/{project_id}.json'.format(project_id=proj.id), error_message=("Error deleting Neutron's quota for " "project {0}".format(proj.id))) def list_magnum_services(self): """List all Magnum services. :returns: a list of dicts containing the service details. :raises: OpenStackCloudException on operation error. 
""" with _utils.shade_exceptions("Error fetching Magnum services list"): data = self._container_infra_client.get('/mservices') return self._normalize_magnum_services( self._get_and_munchify('mservices', data)) openstacksdk-0.11.3/openstack/cloud/_heat/0000775000175100017510000000000013236151501020463 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/_heat/template_format.py0000666000175100017510000000503513236151340024226 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import yaml if hasattr(yaml, 'CSafeLoader'): yaml_loader = yaml.CSafeLoader else: yaml_loader = yaml.SafeLoader if hasattr(yaml, 'CSafeDumper'): yaml_dumper = yaml.CSafeDumper else: yaml_dumper = yaml.SafeDumper def _construct_yaml_str(self, node): # Override the default string handling function # to always return unicode objects return self.construct_scalar(node) yaml_loader.add_constructor(u'tag:yaml.org,2002:str', _construct_yaml_str) # Unquoted dates like 2013-05-23 in yaml files get loaded as objects of type # datetime.data which causes problems in API layer when being processed by # openstack.common.jsonutils. Therefore, make unicode string out of timestamps # until jsonutils can handle dates. yaml_loader.add_constructor(u'tag:yaml.org,2002:timestamp', _construct_yaml_str) def parse(tmpl_str): """Takes a string and returns a dict containing the parsed structure. This includes determination of whether the string is using the JSON or YAML format. 
""" # strip any whitespace before the check tmpl_str = tmpl_str.strip() if tmpl_str.startswith('{'): tpl = json.loads(tmpl_str) else: try: tpl = yaml.load(tmpl_str, Loader=yaml_loader) except yaml.YAMLError: # NOTE(prazumovsky): we need to return more informative error for # user, so use SafeLoader, which return error message with template # snippet where error has been occurred. try: tpl = yaml.load(tmpl_str, Loader=yaml.SafeLoader) except yaml.YAMLError as yea: raise ValueError(yea) else: if tpl is None: tpl = {} # Looking for supported version keys in the loaded template if not ('HeatTemplateFormatVersion' in tpl or 'heat_template_version' in tpl or 'AWSTemplateFormatVersion' in tpl): raise ValueError("Template format version not found.") return tpl openstacksdk-0.11.3/openstack/cloud/_heat/utils.py0000666000175100017510000000347413236151340022210 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import base64
import os

from six.moves.urllib import error
from six.moves.urllib import parse
from six.moves.urllib import request

from openstack.cloud import exc


def base_url_for_url(url):
    parsed = parse.urlparse(url)
    parsed_dir = os.path.dirname(parsed.path)
    return parse.urljoin(url, parsed_dir)


def normalise_file_path_to_url(path):
    if parse.urlparse(path).scheme:
        return path
    path = os.path.abspath(path)
    return parse.urljoin('file:', request.pathname2url(path))


def read_url_content(url):
    try:
        # TODO(mordred) Use requests
        content = request.urlopen(url).read()
    except error.URLError:
        raise exc.OpenStackCloudException(
            'Could not fetch contents for %s' % url)

    if content:
        try:
            content.decode('utf-8')
        except ValueError:
            content = base64.encodestring(content)
    return content


def resource_nested_identifier(rsrc):
    nested_link = [l for l in rsrc.links or []
                   if l.get('rel') == 'nested']
    if nested_link:
        nested_href = nested_link[0].get('href')
        nested_identifier = nested_href.split("/")[-2:]
        return "/".join(nested_identifier)
openstacksdk-0.11.3/openstack/cloud/_heat/__init__.py0000666000175100017510000000000013236151340022565 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/_heat/event_utils.py0000666000175100017510000000655213236151340023411 0ustar zuulzuul00000000000000# Copyright 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
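The two URL helpers at the top of ``utils.py`` above are small enough to exercise standalone. This sketch transcribes them onto the Python 3 stdlib (``urllib`` in place of ``six.moves``, an assumption for portability — the shipped code targets both Python 2 and 3) to show what they compute:

```python
import os
from urllib import parse, request


def base_url_for_url(url):
    # Directory portion of the URL's path, rejoined onto the URL.
    parsed = parse.urlparse(url)
    parsed_dir = os.path.dirname(parsed.path)
    return parse.urljoin(url, parsed_dir)


def normalise_file_path_to_url(path):
    # Anything with a scheme is already a URL; bare paths become
    # absolute file: URLs.
    if parse.urlparse(path).scheme:
        return path
    path = os.path.abspath(path)
    return parse.urljoin('file:', request.pathname2url(path))


print(base_url_for_url('http://example.com/templates/stack.yaml'))
# http://example.com/templates
print(normalise_file_path_to_url('http://example.com/env.yaml'))
# http://example.com/env.yaml
```

``template_utils.py`` later in this package relies on these two helpers to resolve ``get_file`` references relative to the template's own location, whether the template came from disk or over HTTP.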
import collections
import time

from openstack.cloud import meta


def get_events(cloud, stack_id, event_args, marker=None, limit=None):
    # TODO(mordred) FIX THIS ONCE assert_calls CAN HANDLE QUERY STRINGS
    # Apply marker/limit before building params so they reach the request.
    if marker:
        event_args['marker'] = marker
    if limit:
        event_args['limit'] = limit
    params = collections.OrderedDict()
    for k in sorted(event_args.keys()):
        params[k] = event_args[k]

    data = cloud._orchestration_client.get(
        '/stacks/{id}/events'.format(id=stack_id),
        params=params)
    events = meta.get_and_munchify('events', data)

    # Show which stack the event comes from (for nested events)
    for e in events:
        e['stack_name'] = stack_id.split("/")[0]
    return events


def poll_for_events(
        cloud, stack_name, action=None, poll_period=5, marker=None):
    """Continuously poll events and logs for performed action on stack."""

    if action:
        stop_status = ('%s_FAILED' % action, '%s_COMPLETE' % action)
        stop_check = lambda a: a in stop_status
    else:
        stop_check = lambda a: a.endswith('_COMPLETE') or a.endswith('_FAILED')

    no_event_polls = 0
    msg_template = "\n Stack %(name)s %(status)s \n"

    def is_stack_event(event):
        if event.get('resource_name', '') != stack_name:
            return False

        phys_id = event.get('physical_resource_id', '')
        links = dict((l.get('rel'), l.get('href'))
                     for l in event.get('links', []))
        stack_id = links.get('stack', phys_id).rsplit('/', 1)[-1]
        return stack_id == phys_id

    while True:
        events = get_events(
            cloud, stack_id=stack_name,
            event_args={'sort_dir': 'asc', 'marker': marker})

        if len(events) == 0:
            no_event_polls += 1
        else:
            no_event_polls = 0
            # set marker to last event that was received.
            marker = getattr(events[-1], 'id', None)

            for event in events:
                # check if stack event was also received
                if is_stack_event(event):
                    stack_status = getattr(event, 'resource_status', '')
                    msg = msg_template % dict(
                        name=stack_name, status=stack_status)
                    if stop_check(stack_status):
                        return stack_status, msg

        if no_event_polls >= 2:
            # after 2 polls with no events, fall back to a stack get
            stack = cloud.get_stack(stack_name)
            stack_status = stack['stack_status']
            msg = msg_template % dict(
                name=stack_name, status=stack_status)
            if stop_check(stack_status):
                return stack_status, msg
            # go back to event polling again
            no_event_polls = 0

        time.sleep(poll_period)
openstacksdk-0.11.3/openstack/cloud/_heat/template_utils.py0000666000175100017510000002604413236151340024101 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import json

import six
from six.moves.urllib import parse
from six.moves.urllib import request

from openstack.cloud._heat import environment_format
from openstack.cloud._heat import template_format
from openstack.cloud._heat import utils
from openstack.cloud import exc


def get_template_contents(template_file=None, template_url=None,
                          template_object=None, object_request=None,
                          files=None, existing=False):

    is_object = False
    tpl = None

    # Transform a bare file path to a file:// URL.
    if template_file:
        template_url = utils.normalise_file_path_to_url(template_file)

    if template_url:
        tpl = request.urlopen(template_url).read()

    elif template_object:
        is_object = True
        template_url = template_object
        tpl = object_request and object_request('GET', template_object)
    elif existing:
        return {}, None
    else:
        raise exc.OpenStackCloudException(
            'Must provide one of template_file,'
            ' template_url or template_object')

    if not tpl:
        raise exc.OpenStackCloudException(
            'Could not fetch template from %s' % template_url)

    try:
        if isinstance(tpl, six.binary_type):
            tpl = tpl.decode('utf-8')
        template = template_format.parse(tpl)
    except ValueError as e:
        raise exc.OpenStackCloudException(
            'Error parsing template %(url)s %(error)s' %
            {'url': template_url, 'error': e})

    tmpl_base_url = utils.base_url_for_url(template_url)
    if files is None:
        files = {}
    resolve_template_get_files(template, files, tmpl_base_url,
                               is_object, object_request)
    return files, template


def resolve_template_get_files(template, files, template_base_url,
                               is_object=False, object_request=None):

    def ignore_if(key, value):
        if key != 'get_file' and key != 'type':
            return True
        if not isinstance(value, six.string_types):
            return True
        if (key == 'type' and
                not value.endswith(('.yaml', '.template'))):
            return True
        return False

    def recurse_if(value):
        return isinstance(value, (dict, list))

    get_file_contents(template, files, template_base_url,
                      ignore_if, recurse_if, is_object, object_request)


def is_template(file_content):
    try:
        if isinstance(file_content, six.binary_type):
            file_content = file_content.decode('utf-8')
        template_format.parse(file_content)
    except (ValueError, TypeError):
        return False
    return True


def get_file_contents(from_data, files, base_url=None,
                      ignore_if=None, recurse_if=None,
                      is_object=False, object_request=None):

    if recurse_if and recurse_if(from_data):
        if isinstance(from_data, dict):
            recurse_data = from_data.values()
        else:
            recurse_data = from_data
        for value in recurse_data:
            get_file_contents(value, files, base_url, ignore_if,
                              recurse_if, is_object, object_request)

    if isinstance(from_data, dict):
        for key, value in from_data.items():
            if ignore_if and ignore_if(key, value):
                continue

            if base_url and not base_url.endswith('/'):
                base_url = base_url + '/'

            str_url = parse.urljoin(base_url, value)
            if str_url not in files:
                if is_object and object_request:
                    file_content = object_request('GET', str_url)
                else:
                    file_content = utils.read_url_content(str_url)
                if is_template(file_content):
                    if is_object:
                        template = get_template_contents(
                            template_object=str_url, files=files,
                            object_request=object_request)[1]
                    else:
                        template = get_template_contents(
                            template_url=str_url, files=files)[1]
                    file_content = json.dumps(template)
                files[str_url] = file_content
            # replace the data value with the normalised absolute URL
            from_data[key] = str_url


def deep_update(old, new):
    '''Merge nested dictionaries.'''

    # Prevents an error if in a previous iteration
    # old[k] = None but v[k] = {...},
    if old is None:
        old = {}

    for k, v in new.items():
        if isinstance(v, collections.Mapping):
            r = deep_update(old.get(k, {}), v)
            old[k] = r
        else:
            old[k] = new[k]
    return old


def process_multiple_environments_and_files(env_paths=None, template=None,
                                            template_url=None,
                                            env_path_is_object=None,
                                            object_request=None,
                                            env_list_tracker=None):
    """Reads one or more environment files.

    Reads in each specified environment file and returns a dictionary
    of the filenames->contents (suitable for the files dict)
    and the consolidated environment (after having applied the correct
    overrides based on order).

    If a list is provided in the env_list_tracker parameter, the behavior
    is altered to take advantage of server-side environment resolution.
    Specifically, this means:

    * Populating env_list_tracker with an ordered list of environment file
      URLs to be passed to the server
    * Including the contents of each environment file in the returned
      files dict, keyed by one of the URLs in env_list_tracker

    :param env_paths: list of paths to the environment files to load; if
           None, empty results will be returned
    :type  env_paths: list or None
    :param template: unused; only included for API compatibility
    :param template_url: unused; only included for API compatibility
    :param env_list_tracker: if specified, environment filenames will be
           stored within
    :type  env_list_tracker: list or None
    :return: tuple of files dict and a dict of the consolidated environment
    :rtype:  tuple
    """
    merged_files = {}
    merged_env = {}

    # If we're keeping a list of environment files separately, include the
    # contents of the files in the files dict
    include_env_in_files = env_list_tracker is not None

    if env_paths:
        for env_path in env_paths:
            files, env = process_environment_and_files(
                env_path=env_path,
                template=template,
                template_url=template_url,
                env_path_is_object=env_path_is_object,
                object_request=object_request,
                include_env_in_files=include_env_in_files)

            # 'files' looks like {"filename1": contents, "filename2": contents}
            # so a simple update is enough for merging
            merged_files.update(files)

            # 'env' can be a deeply nested dictionary, so a simple update is
            # not enough
            merged_env = deep_update(merged_env, env)

            if env_list_tracker is not None:
                env_url = utils.normalise_file_path_to_url(env_path)
                env_list_tracker.append(env_url)

    return merged_files, merged_env


def process_environment_and_files(env_path=None, template=None,
                                  template_url=None,
                                  env_path_is_object=None,
                                  object_request=None,
                                  include_env_in_files=False):
    """Loads a single environment file.

    Returns an entry suitable for the files dict which maps the environment
    filename to its contents.

    :param env_path: full path to the file to load
    :type  env_path: str or None
    :param include_env_in_files: if specified, the raw environment file itself
           will be included in the returned files dict
    :type  include_env_in_files: bool
    :return: tuple of files dict and the loaded environment as a dict
    :rtype:  (dict, dict)
    """
    files = {}
    env = {}

    is_object = env_path_is_object and env_path_is_object(env_path)

    if is_object:
        raw_env = object_request and object_request('GET', env_path)
        env = environment_format.parse(raw_env)
        env_base_url = utils.base_url_for_url(env_path)
        resolve_environment_urls(
            env.get('resource_registry'),
            files,
            env_base_url, is_object=True,
            object_request=object_request)

    elif env_path:
        env_url = utils.normalise_file_path_to_url(env_path)
        env_base_url = utils.base_url_for_url(env_url)
        raw_env = request.urlopen(env_url).read()

        env = environment_format.parse(raw_env)

        resolve_environment_urls(
            env.get('resource_registry'),
            files,
            env_base_url)

        if include_env_in_files:
            files[env_url] = json.dumps(env)

    return files, env


def resolve_environment_urls(resource_registry, files, env_base_url,
                             is_object=False, object_request=None):
    """Handles any resource URLs specified in an environment.

    :param resource_registry: mapping of type name to template filename
    :type  resource_registry: dict
    :param files: dict to store loaded file contents into
    :type  files: dict
    :param env_base_url: base URL to look in when loading files
    :type  env_base_url: str or None
    """
    if resource_registry is None:
        return

    rr = resource_registry
    base_url = rr.get('base_url', env_base_url)

    def ignore_if(key, value):
        if key == 'base_url':
            return True
        if isinstance(value, dict):
            return True
        if '::' in value:
            # Built in providers like: "X::Compute::Server"
            # don't need downloading.
            return True
        if key in ['hooks', 'restricted_actions']:
            return True

    get_file_contents(rr, files, base_url, ignore_if,
                      is_object=is_object, object_request=object_request)

    for res_name, res_dict in rr.get('resources', {}).items():
        res_base_url = res_dict.get('base_url', base_url)
        get_file_contents(
            res_dict, files, res_base_url, ignore_if,
            is_object=is_object, object_request=object_request)
openstacksdk-0.11.3/openstack/cloud/_heat/environment_format.py0000666000175100017510000000353213236151340024757 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import yaml

from openstack.cloud._heat import template_format


SECTIONS = (
    PARAMETER_DEFAULTS, PARAMETERS, RESOURCE_REGISTRY,
    ENCRYPTED_PARAM_NAMES, EVENT_SINKS,
    PARAMETER_MERGE_STRATEGIES
) = (
    'parameter_defaults', 'parameters', 'resource_registry',
    'encrypted_param_names', 'event_sinks',
    'parameter_merge_strategies'
)


def parse(env_str):
    """Takes a string and returns a dict containing the parsed structure.

    This includes determination of whether the string is using the
    YAML format.
    """
    try:
        env = yaml.load(env_str, Loader=template_format.yaml_loader)
    except yaml.YAMLError:
        # NOTE(prazumovsky): we need to return more informative error for
        # user, so use SafeLoader, which returns an error message with the
        # template snippet where the error occurred.
        try:
            env = yaml.load(env_str, Loader=yaml.SafeLoader)
        except yaml.YAMLError as yea:
            raise ValueError(yea)
    else:
        if env is None:
            env = {}
        elif not isinstance(env, dict):
            raise ValueError(
                'The environment is not a valid YAML mapping data type.')

    for param in env:
        if param not in SECTIONS:
            raise ValueError('environment has wrong section "%s"' % param)

    return env
openstacksdk-0.11.3/openstack/cloud/exc.py0000666000175100017510000000303013236151340020533 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
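As an aside, the validation that ``parse()`` performs after YAML loading can be illustrated in isolation. The sketch below is not part of the distribution; it re-implements only the post-load checks on an already-parsed mapping (so it runs without PyYAML), with ``SECTIONS`` mirroring the module's tuple of legal top-level environment keys.

```python
# Sketch of the validation step environment_format.parse() applies after
# YAML loading, reworked to accept an already-parsed object. validate_env
# is a hypothetical helper name, not part of the SDK.
SECTIONS = (
    'parameter_defaults', 'parameters', 'resource_registry',
    'encrypted_param_names', 'event_sinks', 'parameter_merge_strategies',
)


def validate_env(env):
    """Mimic parse()'s post-load checks on a parsed environment."""
    if env is None:
        env = {}
    elif not isinstance(env, dict):
        raise ValueError(
            'The environment is not a valid YAML mapping data type.')
    for param in env:
        if param not in SECTIONS:
            raise ValueError('environment has wrong section "%s"' % param)
    return env
```

A valid mapping such as ``{'parameters': {...}}`` passes through unchanged, ``None`` becomes ``{}``, and an unknown top-level key raises ``ValueError``.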
from openstack import exceptions


OpenStackCloudException = exceptions.SDKException
OpenStackCloudTimeout = exceptions.ResourceTimeout


class OpenStackCloudCreateException(OpenStackCloudException):

    def __init__(self, resource, resource_id, extra_data=None, **kwargs):
        super(OpenStackCloudCreateException, self).__init__(
            message="Error creating {resource}: {resource_id}".format(
                resource=resource, resource_id=resource_id),
            extra_data=extra_data, **kwargs)
        self.resource_id = resource_id


class OpenStackCloudUnavailableExtension(OpenStackCloudException):
    pass


class OpenStackCloudUnavailableFeature(OpenStackCloudException):
    pass


# Backwards compat
OpenStackCloudHTTPError = exceptions.HttpException
OpenStackCloudBadRequest = exceptions.BadRequestException
OpenStackCloudURINotFound = exceptions.NotFoundException
OpenStackCloudResourceNotFound = OpenStackCloudURINotFound
openstacksdk-0.11.3/openstack/cloud/__init__.py0000666000175100017510000000471713236151364021526 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
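The pattern ``exc.py`` relies on — binding the old shade-era names directly to the new SDK exception classes so that ``except`` clauses written against either name catch the same errors — can be sketched standalone. ``SDKException`` below is a stand-in for ``openstack.exceptions.SDKException``, so the snippet runs without openstacksdk installed.

```python
# Illustration of the backwards-compatibility aliasing used in exc.py.
# SDKException is a stand-in for the SDK's real base exception class.
class SDKException(Exception):
    """Stand-in for openstack.exceptions.SDKException."""


# Legacy shade name bound to the same class, not a subclass: an
# isinstance check against either name succeeds for the same object.
OpenStackCloudException = SDKException


class OpenStackCloudCreateException(OpenStackCloudException):
    def __init__(self, resource, resource_id):
        super(OpenStackCloudCreateException, self).__init__(
            "Error creating {0}: {1}".format(resource, resource_id))
        self.resource_id = resource_id
```

Because the alias is an assignment rather than a subclass, old code catching ``OpenStackCloudException`` keeps working against exceptions raised as ``SDKException``.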
import keystoneauth1.exceptions

from openstack._log import enable_logging  # noqa
from openstack.cloud.exc import *  # noqa
from openstack.cloud.openstackcloud import OpenStackCloud


def _get_openstack_config(app_name=None, app_version=None):
    import openstack.config
    return openstack.config.OpenStackConfig(
        app_name=app_name, app_version=app_version)


# TODO(shade) This wants to be removed before we make a release.
def openstack_clouds(
        config=None, debug=False, cloud=None, strict=False,
        app_name=None, app_version=None):
    if not config:
        config = _get_openstack_config(app_name, app_version)
    try:
        if cloud is None:
            return [
                OpenStackCloud(
                    cloud=cloud_region.name, debug=debug,
                    cloud_config=cloud_region,
                    strict=strict)
                for cloud_region in config.get_all()
            ]
        else:
            return [
                OpenStackCloud(
                    cloud=cloud_region.name, debug=debug,
                    cloud_config=cloud_region,
                    strict=strict)
                for cloud_region in config.get_all()
                if cloud_region.name == cloud
            ]
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))


def openstack_cloud(
        config=None, strict=False, app_name=None, app_version=None,
        **kwargs):
    if not config:
        config = _get_openstack_config(app_name, app_version)
    try:
        cloud_region = config.get_one(**kwargs)
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))
    return OpenStackCloud(cloud_config=cloud_region, strict=strict)
openstacksdk-0.11.3/openstack/cloud/tests/0000775000175100017510000000000013236151501020545 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/tests/__init__.py0000666000175100017510000000000013236151340022647 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/cloud/inventory.py0000666000175100017510000000601513236151364022025 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import functools

import openstack.config
import openstack.cloud
from openstack.cloud import _utils


class OpenStackInventory(object):

    # Put this here so the capability can be detected with hasattr on the
    # class
    extra_config = None

    def __init__(
            self, config_files=None, refresh=False, private=False,
            config_key=None, config_defaults=None, cloud=None,
            use_direct_get=False):
        if config_files is None:
            config_files = []
        config = openstack.config.loader.OpenStackConfig(
            config_files=openstack.config.loader.CONFIG_FILES + config_files)
        self.extra_config = config.get_extra_config(
            config_key, config_defaults)

        if cloud is None:
            self.clouds = [
                openstack.cloud.OpenStackCloud(cloud_config=cloud_region)
                for cloud_region in config.get_all()
            ]
        else:
            try:
                self.clouds = [
                    openstack.cloud.OpenStackCloud(
                        cloud_config=config.get_one(cloud))
                ]
            except openstack.config.exceptions.OpenStackConfigException as e:
                raise openstack.cloud.OpenStackCloudException(e)

        if private:
            for cloud in self.clouds:
                cloud.private = True

        # Handle manual invalidation of entire persistent cache
        if refresh:
            for cloud in self.clouds:
                cloud._cache.invalidate()

    def list_hosts(self, expand=True, fail_on_cloud_config=True):
        hostvars = []

        for cloud in self.clouds:
            try:
                # Cycle on servers
                for server in cloud.list_servers(detailed=expand):
                    hostvars.append(server)
            except openstack.cloud.OpenStackCloudException:
                # Don't fail on one particular cloud as others may work
                if fail_on_cloud_config:
                    raise

        return hostvars

    def search_hosts(self, name_or_id=None, filters=None, expand=True):
        hosts = self.list_hosts(expand=expand)
        return _utils._filter_list(hosts, name_or_id, filters)

    def get_host(self, name_or_id, filters=None, expand=True):
        if expand:
            func = self.search_hosts
        else:
            func = functools.partial(self.search_hosts, expand=False)
        return _utils._get_entity(self, func, name_or_id, filters)
openstacksdk-0.11.3/openstack/cloud/_tasks.py0000666000175100017510000000605113236151340021246 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
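The aggregation pattern in ``OpenStackInventory.list_hosts()`` above — iterate over every configured cloud, collect its servers, and either re-raise or skip a failing cloud depending on ``fail_on_cloud_config`` — can be sketched without any cloud credentials. Everything here is a hypothetical stand-in: ``CloudError`` plays the role of ``OpenStackCloudException`` and ``FakeCloud`` the role of ``OpenStackCloud``.

```python
# Standalone sketch of the list_hosts() aggregation logic. CloudError and
# FakeCloud are stand-ins, not part of the SDK.
class CloudError(Exception):
    pass


class FakeCloud(object):
    """Minimal stand-in for OpenStackCloud: returns or fails on demand."""
    def __init__(self, servers, fail=False):
        self._servers = servers
        self._fail = fail

    def list_servers(self):
        if self._fail:
            raise CloudError('cloud misconfigured')
        return self._servers


def list_hosts(clouds, fail_on_cloud_config=True):
    hostvars = []
    for cloud in clouds:
        try:
            hostvars.extend(cloud.list_servers())
        except CloudError:
            # Don't fail on one particular cloud as others may work
            if fail_on_cloud_config:
                raise
    return hostvars
```

With ``fail_on_cloud_config=False`` a broken cloud is silently skipped and the servers from the healthy clouds are still returned; with the default, the first failure propagates.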
from openstack import task_manager


class IronicTask(task_manager.Task):
    def __init__(self, client, **kwargs):
        super(IronicTask, self).__init__(**kwargs)
        self.client = client


class MachineCreate(IronicTask):
    def main(self):
        return self.client.ironic_client.node.create(*self.args, **self.kwargs)


class MachineDelete(IronicTask):
    def main(self):
        return self.client.ironic_client.node.delete(*self.args, **self.kwargs)


class MachinePatch(IronicTask):
    def main(self):
        return self.client.ironic_client.node.update(*self.args, **self.kwargs)


class MachinePortGet(IronicTask):
    def main(self):
        return self.client.ironic_client.port.get(*self.args, **self.kwargs)


class MachinePortGetByAddress(IronicTask):
    def main(self):
        return self.client.ironic_client.port.get_by_address(
            *self.args, **self.kwargs)


class MachinePortCreate(IronicTask):
    def main(self):
        return self.client.ironic_client.port.create(*self.args, **self.kwargs)


class MachinePortDelete(IronicTask):
    def main(self):
        return self.client.ironic_client.port.delete(*self.args, **self.kwargs)


class MachinePortList(IronicTask):
    def main(self):
        return self.client.ironic_client.port.list()


class MachineNodeGet(IronicTask):
    def main(self):
        return self.client.ironic_client.node.get(*self.args, **self.kwargs)


class MachineNodeList(IronicTask):
    def main(self):
        return self.client.ironic_client.node.list(*self.args, **self.kwargs)


class MachineNodePortList(IronicTask):
    def main(self):
        return self.client.ironic_client.node.list_ports(
            *self.args, **self.kwargs)


class MachineNodeUpdate(IronicTask):
    def main(self):
        return self.client.ironic_client.node.update(*self.args, **self.kwargs)


class MachineNodeValidate(IronicTask):
    def main(self):
        return self.client.ironic_client.node.validate(
            *self.args, **self.kwargs)


class MachineSetMaintenance(IronicTask):
    def main(self):
        return self.client.ironic_client.node.set_maintenance(
            *self.args, **self.kwargs)


class MachineSetPower(IronicTask):
    def main(self):
        return self.client.ironic_client.node.set_power_state(
            *self.args, **self.kwargs)


class MachineSetProvision(IronicTask):
    def main(self):
        return self.client.ironic_client.node.set_provision_state(
            *self.args, **self.kwargs)
openstacksdk-0.11.3/openstack/connection.py0000666000175100017510000003326113236151364021014 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
The :class:`~openstack.connection.Connection` class is the primary interface
to the Python SDK. It maintains a context for a connection to a region of
a cloud provider. The :class:`~openstack.connection.Connection` has an
attribute to access each OpenStack service.

At a minimum, the :class:`~openstack.connection.Connection` class needs to be
created with a config or the parameters to build one.

While the overall system is very flexible, there are four main use cases
for different ways to create a :class:`~openstack.connection.Connection`.

* Using config settings and keyword arguments as described in
  :ref:`openstack-config`
* Using only keyword arguments passed to the constructor ignoring config
  files and environment variables.
* Using an existing authenticated `keystoneauth1.session.Session`, such as
  might exist inside of an OpenStack service operational context.
* Using an existing :class:`~openstack.config.cloud_region.CloudRegion`.
Using config settings --------------------- For users who want to create a :class:`~openstack.connection.Connection` making use of named clouds in ``clouds.yaml`` files, ``OS_`` environment variables and python keyword arguments, the :func:`openstack.connect` factory function is the recommended way to go: .. code-block:: python import openstack conn = openstack.connect(cloud='example', region_name='earth1') If the application in question is a command line application that should also accept command line arguments, an `argparse.Namespace` can be passed to :func:`openstack.connect` that will have relevant arguments added to it and then subsequently consumed by the construtor: .. code-block:: python import argparse import openstack options = argparse.ArgumentParser(description='Awesome OpenStack App') conn = openstack.connect(options=options) Using Only Keyword Arguments ---------------------------- If the application wants to avoid loading any settings from ``clouds.yaml`` or environment variables, use the :class:`~openstack.connection.Connection` constructor directly. As long as the ``cloud`` argument is omitted or ``None``, the :class:`~openstack.connection.Connection` constructor will not load settings from files or the environment. .. note:: This is a different default behavior than the :func:`~openstack.connect` factory function. In :func:`~openstack.connect` if ``cloud`` is omitted or ``None``, a default cloud will be loaded, defaulting to the ``envvars`` cloud if it exists. .. 
code-block:: python from openstack import connection conn = connection.Connection( region_name='example-region', auth=dict( auth_url='https://auth.example.com', username='amazing-user', password='super-secret-password', project_id='33aa1afc-03fe-43b8-8201-4e0d3b4b8ab5', user_domain_id='054abd68-9ad9-418b-96d3-3437bb376703'), compute_api_version='2', identity_interface='internal') Per-service settings as needed by `keystoneauth1.adapter.Adapter` such as ``api_version``, ``service_name``, and ``interface`` can be set, as seen above, by prefixing them with the official ``service-type`` name of the service. ``region_name`` is a setting for the entire :class:`~openstack.config.cloud_region.CloudRegion` and cannot be set per service. From existing authenticated Session ----------------------------------- For applications that already have an authenticated Session, simply passing it to the :class:`~openstack.connection.Connection` constructor is all that is needed: .. code-block:: python from openstack import connection conn = connection.Connection( session=session, region_name='example-region', compute_api_version='2', identity_interface='internal') From existing CloudRegion ------------------------- If you already have an :class:`~openstack.config.cloud_region.CloudRegion` you can pass it in instead: .. code-block:: python from openstack import connection import openstack.config config = openstack.config.get_cloud_region( cloud='example', region_name='earth') conn = connection.Connection(config=config) Using the Connection -------------------- Services are accessed through an attribute named after the service's official service-type. List ~~~~ An iterator containing a list of all the projects is retrieved in this manner: .. 
code-block:: python projects = conn.identity.projects() Find or create ~~~~~~~~~~~~~~ If you wanted to make sure you had a network named 'zuul', you would first try to find it and if that fails, you would create it:: network = conn.network.find_network("zuul") if network is None: network = conn.network.create_network(name="zuul") Additional information about the services can be found in the :ref:`service-proxies` documentation. """ __all__ = [ 'from_config', 'Connection', ] import warnings import keystoneauth1.exceptions import requestsexceptions import six from openstack import _log from openstack import _meta from openstack import config as _config from openstack.config import cloud_region from openstack import exceptions from openstack import service_description from openstack import task_manager if requestsexceptions.SubjectAltNameWarning: warnings.filterwarnings( 'ignore', category=requestsexceptions.SubjectAltNameWarning) _logger = _log.setup_logging('openstack') def from_config(cloud=None, config=None, options=None, **kwargs): """Create a Connection using openstack.config :param str cloud: Use the `cloud` configuration details when creating the Connection. :param openstack.config.cloud_region.CloudRegion config: An existing CloudRegion configuration. If no `config` is provided, `openstack.config.OpenStackConfig` will be called, and the provided `name` will be used in determining which cloud's configuration details will be used in creation of the `Connection` instance. :param argparse.Namespace options: Allows direct passing in of options to be added to the cloud config. This does not have to be an actual instance of argparse.Namespace, despite the naming of the the `openstack.config.loader.OpenStackConfig.get_one` argument to which it is passed. 
:rtype: :class:`~openstack.connection.Connection` """ # TODO(mordred) Backwards compat while we transition cloud = kwargs.pop('cloud_name', cloud) config = kwargs.pop('cloud_config', config) if config is None: config = _config.OpenStackConfig().get_one( cloud=cloud, argparse=options, **kwargs) return Connection(config=config) class Connection(six.with_metaclass(_meta.ConnectionMeta)): def __init__(self, cloud=None, config=None, session=None, app_name=None, app_version=None, # TODO(shade) Remove these once we've shifted # python-openstackclient to not use the profile interface. authenticator=None, profile=None, extra_services=None, **kwargs): """Create a connection to a cloud. A connection needs information about how to connect, how to authenticate and how to select the appropriate services to use. The recommended way to provide this information is by referencing a named cloud config from an existing `clouds.yaml` file. The cloud name ``envvars`` may be used to consume a cloud configured via ``OS_`` environment variables. A pre-existing :class:`~openstack.config.cloud_region.CloudRegion` object can be passed in lieu of a cloud name, for cases where the user already has a fully formed CloudRegion and just wants to use it. Similarly, if for some reason the user already has a :class:`~keystoneauth1.session.Session` and wants to use it, it may be passed in. :param str cloud: Name of the cloud from config to use. :param config: CloudRegion object representing the config for the region of the cloud in question. :type config: :class:`~openstack.config.cloud_region.CloudRegion` :param session: A session object compatible with :class:`~keystoneauth1.session.Session`. :type session: :class:`~keystoneauth1.session.Session` :param str app_name: Name of the application to be added to User Agent. :param str app_version: Version of the application to be added to User Agent. :param authenticator: DEPRECATED. 
Only exists for short-term backwards compatibility for python-openstackclient while we transition. See :doc:`transition_from_profile` for details. :param profile: DEPRECATED. Only exists for short-term backwards compatibility for python-openstackclient while we transition. See :doc:`transition_from_profile` for details. :param extra_services: List of :class:`~openstack.service_description.ServiceDescription` objects describing services that openstacksdk otherwise does not know about. :param kwargs: If a config is not provided, the rest of the parameters provided are assumed to be arguments to be passed to the CloudRegion contructor. """ self.config = config self._extra_services = {} if extra_services: for service in extra_services: self._extra_services[service.service_type] = service if not self.config: if profile: import openstack.profile # TODO(shade) Remove this once we've shifted # python-openstackclient to not use the profile interface. self.config = openstack.profile._get_config_from_profile( profile, authenticator, **kwargs) elif session: self.config = cloud_region.from_session( session=session, app_name=app_name, app_version=app_version, load_yaml_config=False, load_envvars=False, **kwargs) else: self.config = _config.get_cloud_region( cloud=cloud, app_name=app_name, app_version=app_version, load_yaml_config=cloud is not None, load_envvars=cloud is not None, **kwargs) if self.config.name: tm_name = ':'.join([ self.config.name, self.config.region_name or 'unknown']) else: tm_name = self.config.region_name or 'unknown' self.task_manager = task_manager.TaskManager(name=tm_name) if session: # TODO(mordred) Expose constructor option for this in OCC self.config._keystone_session = session self.session = self.config.get_session() # Hide a reference to the connection on the session to help with # backwards compatibility for folks trying to just pass conn.session # to a Resource method's session argument. 
self.session._sdk_connection = self self._proxies = {} def add_service(self, service): """Add a service to the Connection. Attaches an instance of the :class:`~openstack.proxy.BaseProxy` class contained in :class:`~openstack.service_description.ServiceDescription`. The :class:`~openstack.proxy.BaseProxy` will be attached to the `Connection` by its ``service_type`` and by any ``aliases`` that may be specified. :param openstack.service_description.ServiceDescription service: Object describing the service to be attached. As a convenience, if ``service`` is a string it will be treated as a ``service_type`` and a basic :class:`~openstack.service_description.ServiceDescription` will be created. """ # If we don't have a proxy, just instantiate BaseProxy so that # we get an adapter. if isinstance(service, six.string_types): service = service_description.ServiceDescription(service) # Register the proxy class with every known alias for attr_name in service.all_types: setattr(self, attr_name.replace('-', '_'), service) def authorize(self): """Authorize this Connection .. note:: This method is optional. When an application makes a call to any OpenStack service, this method allows you to request a token manually before attempting to do anything else. :returns: A string token. :raises: :class:`~openstack.exceptions.HttpException` if the authorization fails due to reasons like the credentials provided are unable to be authorized or the `auth_type` argument is missing, etc. """ try: return self.session.get_token() except keystoneauth1.exceptions.ClientException as e: raise exceptions.raise_from_response(e.response) openstacksdk-0.11.3/openstack/exceptions.py0000666000175100017510000001601313236151340021034 0ustar zuulzuul00000000000000# Copyright 2010 Jacob Kaplan-Moss # Copyright 2011 Nebula, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Exception definitions. """ import re from requests import exceptions as _rex import six class SDKException(Exception): """The base exception class for all exceptions this library raises.""" def __init__(self, message=None, extra_data=None): self.message = self.__class__.__name__ if message is None else message self.extra_data = extra_data super(SDKException, self).__init__(self.message) OpenStackCloudException = SDKException class EndpointNotFound(SDKException): """A mismatch occurred between what the client and server expect.""" def __init__(self, message=None): super(EndpointNotFound, self).__init__(message) class InvalidResponse(SDKException): """The response from the server is not valid for this request.""" def __init__(self, response): super(InvalidResponse, self).__init__() self.response = response class InvalidRequest(SDKException): """The request to the server is not valid.""" def __init__(self, message=None): super(InvalidRequest, self).__init__(message) class HttpException(SDKException, _rex.HTTPError): def __init__(self, message='Error', response=None, http_status=None, details=None, request_id=None): # TODO(shade) Remove http_status parameter and the ability for response # to be None once we're not mocking Session everywhere. 
if not message: if response: message = "{name}: {code}".format( name=self.__class__.__name__, code=response.status_code) else: message = "{name}: Unknown error".format( name=self.__class__.__name__) # Call directly rather than via super to control parameters SDKException.__init__(self, message=message) _rex.HTTPError.__init__(self, message, response=response) if response: self.request_id = response.headers.get('x-openstack-request-id') self.status_code = response.status_code else: self.request_id = request_id self.status_code = http_status self.details = details self.url = self.request and self.request.url or None self.method = self.request and self.request.method or None self.source = "Server" if self.status_code is not None and (400 <= self.status_code < 500): self.source = "Client" def __unicode__(self): # 'Error' is the default value for self.message. If self.message isn't # 'Error', then someone has set a more informative error message # and we should use it. If it is 'Error', then we should construct a # better message from the information we do have. if not self.url or self.message != 'Error': return super(HttpException, self).__str__() if self.url: remote_error = "{source} Error for url: {url}".format( source=self.source, url=self.url) if self.details: remote_error += ', ' if self.details: remote_error += six.text_type(self.details) return "{message}: {remote_error}".format( message=super(HttpException, self).__str__(), remote_error=remote_error) def __str__(self): return self.__unicode__() class NotFoundException(HttpException): """HTTP 404 Not Found.""" pass class BadRequestException(HttpException): """HTTP 400 Bad Request.""" pass class MethodNotSupported(SDKException): """The resource does not support this operation type.""" def __init__(self, resource, method): # This needs to work with both classes and instances. 
try: name = resource.__name__ except AttributeError: name = resource.__class__.__name__ message = ('The %s method is not supported for %s.%s' % (method, resource.__module__, name)) super(MethodNotSupported, self).__init__(message=message) class DuplicateResource(SDKException): """More than one resource exists with that name.""" pass class ResourceNotFound(NotFoundException): """No resource exists with that name or id.""" pass class ResourceTimeout(SDKException): """Timeout waiting for resource.""" pass class ResourceFailure(SDKException): """General resource failure.""" pass class InvalidResourceQuery(SDKException): """Invalid query params for resource.""" pass def raise_from_response(response, error_message=None): """Raise an instance of an HTTPException based on keystoneauth response.""" if response.status_code < 400: return if response.status_code == 404: cls = NotFoundException elif response.status_code == 400: cls = BadRequestException else: cls = HttpException details = None content_type = response.headers.get('content-type', '') if response.content and 'application/json' in content_type: # Iterate over the nested objects to retrieve "message" attribute. # TODO(shade) Add exception handling for times when the content type # is lying. try: content = response.json() messages = [obj.get('message') for obj in content.values() if isinstance(obj, dict)] # Join all of the messages together nicely and filter out any # objects that don't have a "message" attr. details = '\n'.join(msg for msg in messages if msg) except Exception: details = response.text elif response.content and 'text/html' in content_type: # Split the lines, strip whitespace and inline HTML from the response. details = [re.sub(r'<.+?>', '', i.strip()) for i in response.text.splitlines()] details = list(set([msg for msg in details if msg])) # Return joined string separated by colons. 
details = ': '.join(details) if not details and response.reason: details = response.reason else: details = response.text http_status = response.status_code request_id = response.headers.get('x-openstack-request-id') raise cls( message=error_message, response=response, details=details, http_status=http_status, request_id=request_id ) class ArgumentDeprecationWarning(Warning): """A deprecated argument has been provided.""" pass openstacksdk-0.11.3/openstack/compute/0000775000175100017510000000000013236151501017751 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/compute/version.py0000666000175100017510000000177513236151340022025 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
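The JSON branch of ``raise_from_response()`` above assembles the exception ``details`` by pulling the ``"message"`` value out of each nested object in the error body and joining the non-empty ones with newlines. A standalone sketch of just that step (``extract_error_details`` is a hypothetical helper name; ``content`` stands in for ``response.json()``):

```python
# Sketch of the "details" extraction raise_from_response() performs on a
# JSON error body. Not part of the SDK; illustrative only.
def extract_error_details(content):
    # Iterate over the nested objects to retrieve the "message" attribute,
    # skipping top-level values that are not dicts.
    messages = [obj.get('message') for obj in content.values()
                if isinstance(obj, dict)]
    # Join all of the messages together, filtering out empty ones.
    return '\n'.join(msg for msg in messages if msg)
```

For a typical Nova-style error body such as ``{"badRequest": {"message": "Invalid flavor", "code": 400}}`` this yields ``'Invalid flavor'``; bodies with no nested dicts yield an empty string, in which case the real code falls back to ``response.reason`` or ``response.text``.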
from openstack.compute import compute_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = compute_service.ComputeService( version=compute_service.ComputeService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') updated = resource.Body('updated') openstacksdk-0.11.3/openstack/compute/v2/0000775000175100017510000000000013236151501020300 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/compute/v2/server_ip.py0000666000175100017510000000361013236151340022653 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource from openstack import utils class ServerIP(resource.Resource): resources_key = 'addresses' base_path = '/servers/%(server_id)s/ips' service = compute_service.ComputeService() # capabilities allow_list = True # Properties #: The IP address. The format of the address depends on :attr:`version` address = resource.Body('addr') #: The network label, such as public or private. network_label = resource.URI('network_label') #: The ID for the server. server_id = resource.URI('server_id') # Version of the IP protocol. Currently either 4 or 6. 
version = resource.Body('version') @classmethod def list(cls, session, paginated=False, server_id=None, network_label=None, **params): url = cls.base_path % {"server_id": server_id} if network_label is not None: url = utils.urljoin(url, network_label) resp = session.get(url,) resp = resp.json() if network_label is None: resp = resp[cls.resources_key] for label, addresses in resp.items(): for address in addresses: yield cls.existing(network_label=label, address=address["addr"], version=address["version"]) openstacksdk-0.11.3/openstack/compute/v2/server_group.py0000666000175100017510000000252613236151340023404 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
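`ServerIP.list` above flattens nova's nested `addresses` payload into one record per address, tagged with its network label. The reshaping it performs can be sketched against a hypothetical payload of that shape:

```python
# A sample "addresses" body of the shape ServerIP.list consumes:
# addresses grouped by network label (the values here are made up).
addresses = {
    "private": [{"addr": "10.0.0.3", "version": 4}],
    "public": [{"addr": "203.0.113.7", "version": 4},
               {"addr": "2001:db8::7", "version": 6}],
}

# One (label, addr, version) record per address, mirroring what the
# generator yields as ServerIP instances.
flat = [(label, a["addr"], a["version"])
        for label, addrs in addresses.items()
        for a in addrs]
```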
from openstack.compute import compute_service from openstack import resource class ServerGroup(resource.Resource): resource_key = 'server_group' resources_key = 'server_groups' base_path = '/os-server-groups' service = compute_service.ComputeService() _query_mapping = resource.QueryParameters("all_projects") # capabilities allow_create = True allow_get = True allow_delete = True allow_list = True # Properties #: A name identifying the server group name = resource.Body('name') #: The list of policies supported by the server group policies = resource.Body('policies') #: The list of members in the server group member_ids = resource.Body('members') #: The metadata associated with the server group metadata = resource.Body('metadata') openstacksdk-0.11.3/openstack/compute/v2/flavor.py0000666000175100017510000000462113236151340022151 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource class Flavor(resource.Resource): resource_key = 'flavor' resources_key = 'flavors' base_path = '/flavors' service = compute_service.ComputeService() # capabilities allow_create = True allow_get = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( "sort_key", "sort_dir", min_disk="minDisk", min_ram="minRam") # Properties #: Links pertaining to this flavor. This is a list of dictionaries, #: each including keys ``href`` and ``rel``. 
links = resource.Body('links') #: The name of this flavor. name = resource.Body('name') #: Size of the disk this flavor offers. *Type: int* disk = resource.Body('disk', type=int) #: ``True`` if this is a publicly visible flavor. ``False`` if this is #: a private flavor. *Type: bool* is_public = resource.Body('os-flavor-access:is_public', type=bool) #: The amount of RAM (in MB) this flavor offers. *Type: int* ram = resource.Body('ram', type=int) #: The number of virtual CPUs this flavor offers. *Type: int* vcpus = resource.Body('vcpus', type=int) #: Size of the swap partitions. swap = resource.Body('swap') #: Size of the ephemeral data disk attached to this server. *Type: int* ephemeral = resource.Body('OS-FLV-EXT-DATA:ephemeral', type=int) #: ``True`` if this flavor is disabled, ``False`` if not. *Type: bool* is_disabled = resource.Body('OS-FLV-DISABLED:disabled', type=bool) #: The bandwidth scaling factor this flavor receives on the network. rxtx_factor = resource.Body('rxtx_factor', type=float) class FlavorDetail(Flavor): base_path = '/flavors/detail' allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True openstacksdk-0.11.3/openstack/compute/v2/hypervisor.py0000666000175100017510000000502413236151340023070 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
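`Flavor._query_mapping` above renames the SDK's snake_case filter names to the camelCase query parameters the compute API expects. A simplified sketch of that translation (the mapping is copied from the class; the `to_query` helper is illustrative, not the real `QueryParameters` implementation):

```python
# snake_case name on the left, wire-format query parameter on the right.
mapping = {
    "sort_key": "sort_key",
    "sort_dir": "sort_dir",
    "min_disk": "minDisk",
    "min_ram": "minRam",
}

def to_query(params):
    # Drop unknown keys and rename known ones, as the query mapping does
    # when list() builds the request's query string.
    return {mapping[k]: v for k, v in params.items() if k in mapping}

query = to_query({"min_disk": 40, "min_ram": 4096, "bogus": 1})
```

The caller keeps writing `min_disk=40` while the request goes out as `?minDisk=40`, and unrecognized keys are silently ignored.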
from openstack.compute import compute_service from openstack import resource class Hypervisor(resource.Resource): resource_key = 'hypervisor' resources_key = 'hypervisors' base_path = '/os-hypervisors' service = compute_service.ComputeService() # capabilities allow_get = True allow_list = True # Properties #: Status of hypervisor status = resource.Body('status') #: State of hypervisor state = resource.Body('state') #: Name of hypervisor name = resource.Body('hypervisor_hostname') #: Service details service_details = resource.Body('service') #: Count of the VCPUs in use vcpus_used = resource.Body('vcpus_used') #: Count of all VCPUs vcpus = resource.Body('vcpus') #: Count of the running virtual machines running_vms = resource.Body('running_vms') #: The type of hypervisor hypervisor_type = resource.Body('hypervisor_type') #: Version of the hypervisor hypervisor_version = resource.Body('hypervisor_version') #: The amount, in gigabytes, of local storage used local_disk_used = resource.Body('local_gb_used') #: The amount, in gigabytes, of the local storage device local_disk_size = resource.Body('local_gb') #: The amount, in gigabytes, of free space on the local storage device local_disk_free = resource.Body('free_disk_gb') #: The amount, in megabytes, of memory memory_used = resource.Body('memory_mb_used') #: The amount, in megabytes, of total memory memory_size = resource.Body('memory_mb') #: The amount, in megabytes, of available memory memory_free = resource.Body('free_ram_mb') #: Measurement of the hypervisor's current workload current_workload = resource.Body('current_workload') #: Information about the hypervisor's CPU cpu_info = resource.Body('cpu_info') #: IP address of the host host_ip = resource.Body('host_ip') #: Disk space available to the scheduler disk_available = resource.Body("disk_available_least") openstacksdk-0.11.3/openstack/compute/v2/server_interface.py0000666000175100017510000000267313236151340024213 0ustar zuulzuul00000000000000# Licensed under 
the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource class ServerInterface(resource.Resource): resource_key = 'interfaceAttachment' resources_key = 'interfaceAttachments' base_path = '/servers/%(server_id)s/os-interface' service = compute_service.ComputeService() # capabilities allow_create = True allow_get = True allow_update = False allow_delete = True allow_list = True #: Fixed IP addresses with subnet IDs. fixed_ips = resource.Body('fixed_ips') #: The MAC address. mac_addr = resource.Body('mac_addr') #: The network ID. net_id = resource.Body('net_id') #: The ID of the port for which you want to create an interface. port_id = resource.Body('port_id', alternate_id=True) #: The port state. port_state = resource.Body('port_state') #: The ID for the server. server_id = resource.URI('server_id') openstacksdk-0.11.3/openstack/compute/v2/extension.py0000666000175100017510000000301013236151340022663 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource class Extension(resource.Resource): resource_key = 'extension' resources_key = 'extensions' base_path = '/extensions' service = compute_service.ComputeService() id_attribute = "alias" # capabilities allow_get = True allow_list = True # Properties #: A short name by which this extension is also known. alias = resource.Body('alias', alternate_id=True) #: Text describing this extension's purpose. description = resource.Body('description') #: Links pertaining to this extension. This is a list of dictionaries, #: each including keys ``href`` and ``rel``. links = resource.Body('links') #: The name of the extension. name = resource.Body('name') #: A URL pointing to the namespace for this extension. namespace = resource.Body('namespace') #: Timestamp when this extension was last updated. updated_at = resource.Body('updated') openstacksdk-0.11.3/openstack/compute/v2/image.py0000666000175100017510000000445213236151340021744 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
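The `Extension` resource above has no usable server-side `id`; its `alias` field is marked `alternate_id=True` so it stands in as the identifier. A stub sketch of that lookup, using toy classes rather than the real `openstack.resource` machinery:

```python
class Body:
    """Toy stand-in for resource.Body, recording the alternate_id flag."""
    def __init__(self, name, alternate_id=False):
        self.name = name
        self.alternate_id = alternate_id

class Extension:
    alias = Body('alias', alternate_id=True)
    name = Body('name')

def id_attribute(cls):
    # Fall back to 'id' unless some Body is flagged as the alternate id.
    for value in vars(cls).values():
        if isinstance(value, Body) and value.alternate_id:
            return value.name
    return 'id'

ident = id_attribute(Extension)
```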
from openstack.compute import compute_service from openstack.compute.v2 import metadata from openstack import resource class Image(resource.Resource, metadata.MetadataMixin): resource_key = 'image' resources_key = 'images' base_path = '/images' service = compute_service.ComputeService() # capabilities allow_get = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( "server", "name", "status", "type", min_disk="minDisk", min_ram="minRam", changes_since="changes-since") # Properties #: Links pertaining to this image. This is a list of dictionaries, #: each including keys ``href`` and ``rel``, and optionally ``type``. links = resource.Body('links') #: The name of this image. name = resource.Body('name') #: Timestamp when the image was created. created_at = resource.Body('created') #: Metadata pertaining to this image. *Type: dict* metadata = resource.Body('metadata', type=dict) #: The minimum disk size. *Type: int* min_disk = resource.Body('minDisk', type=int) #: The minimum RAM size. *Type: int* min_ram = resource.Body('minRam', type=int) #: If this image is still building, its progress is represented here. #: Once an image is created, progress will be 100. *Type: int* progress = resource.Body('progress', type=int) #: The status of this image. status = resource.Body('status') #: Timestamp when the image was updated. updated_at = resource.Body('updated') #: Size of the image in bytes. *Type: int* size = resource.Body('OS-EXT-IMG-SIZE:size', type=int) class ImageDetail(Image): base_path = '/images/detail' allow_get = False allow_delete = False allow_list = True openstacksdk-0.11.3/openstack/compute/v2/service.py0000666000175100017510000000450313236151340022317 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource from openstack import utils class Service(resource.Resource): resource_key = 'service' resources_key = 'services' base_path = '/os-services' service = compute_service.ComputeService() # capabilities allow_list = True allow_update = True # Properties #: Status of service status = resource.Body('status') #: State of service state = resource.Body('state') #: Name of service binary = resource.Body('binary') #: ID of service id = resource.Body('id') #: Disabled reason of service disabled_reason = resource.Body('disabled_reason') #: Host where service runs host = resource.Body('host') #: The availability zone of service zone = resource.Body("zone") def _action(self, session, action, body): url = utils.urljoin(Service.base_path, action) return session.put(url, json=body) def force_down(self, session, host, binary): """Force a service down.""" body = { 'host': host, 'binary': binary, 'forced_down': True, } return self._action(session, 'force-down', body) def enable(self, session, host, binary): """Enable service.""" body = { 'host': host, 'binary': binary, } return self._action(session, 'enable', body) def disable(self, session, host, binary, reason=None): """Disable service.""" body = { 'host': host, 'binary': binary, } if not reason: action = 'disable' else: body['disabled_reason'] = reason action = 'disable-log-reason' return self._action(session, action, body) openstacksdk-0.11.3/openstack/compute/v2/__init__.py0000666000175100017510000000000013236151340022402 0ustar
zuulzuul00000000000000openstacksdk-0.11.3/openstack/compute/v2/availability_zone.py0000666000175100017510000000222013236151340024356 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource class AvailabilityZone(resource.Resource): resources_key = 'availabilityZoneInfo' base_path = '/os-availability-zone' service = compute_service.ComputeService() # capabilities allow_list = True # Properties #: name of availability zone name = resource.Body('zoneName') #: state of availability zone state = resource.Body('zoneState') #: hosts of availability zone hosts = resource.Body('hosts') class AvailabilityZoneDetail(AvailabilityZone): base_path = '/os-availability-zone/detail' openstacksdk-0.11.3/openstack/compute/v2/keypair.py0000666000175100017510000000471113236151340022324 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
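`AvailabilityZone` above maps the `availabilityZoneInfo` body into `name`, `state`, and `hosts` properties. The shape of that payload, and pulling per-zone availability out of it, can be sketched with a made-up response body:

```python
# Hypothetical os-availability-zone response body of the shape the
# AvailabilityZone resource consumes.
body = {
    "availabilityZoneInfo": [
        {"zoneName": "nova", "zoneState": {"available": True}, "hosts": None},
        {"zoneName": "az2", "zoneState": {"available": False}, "hosts": None},
    ]
}

# zoneName -> whether the zone is currently available.
zones = {z["zoneName"]: z["zoneState"]["available"]
         for z in body["availabilityZoneInfo"]}
```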
from openstack.compute import compute_service from openstack import resource class Keypair(resource.Resource): resource_key = 'keypair' resources_key = 'keypairs' base_path = '/os-keypairs' service = compute_service.ComputeService() # capabilities allow_create = True allow_get = True allow_delete = True allow_list = True # Properties #: The short fingerprint associated with the ``public_key`` for #: this keypair. fingerprint = resource.Body('fingerprint') # NOTE: There is in fact an 'id' field. However, it's not useful # because all operations use the 'name' as an identifier. # Additionally, the 'id' field only appears *after* creation, # so suddenly you have an 'id' field filled in after the fact, # and it just gets in the way. We need to cover this up by listing # name as alternate_id and listing id as coming from name. #: The id identifying the keypair id = resource.Body('name') #: A name identifying the keypair name = resource.Body('name', alternate_id=True) #: The private key for the keypair private_key = resource.Body('private_key') #: The SSH public key that is paired with the server. public_key = resource.Body('public_key') def _consume_attrs(self, mapping, attrs): # TODO(mordred) This should not be required. However, without doing # it **SOMETIMES** keypair picks up id and not name. This is a hammer. if 'id' in attrs: attrs.setdefault('name', attrs.pop('id')) return super(Keypair, self)._consume_attrs(mapping, attrs) @classmethod def list(cls, session, paginated=False): resp = session.get(cls.base_path, headers={"Accept": "application/json"}) resp = resp.json() resp = resp[cls.resources_key] for data in resp: value = cls.existing(**data[cls.resource_key]) yield value openstacksdk-0.11.3/openstack/compute/v2/volume_attachment.py0000666000175100017510000000271713236151340024403 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.compute import compute_service from openstack import resource class VolumeAttachment(resource.Resource): resource_key = 'volumeAttachment' resources_key = 'volumeAttachments' base_path = '/servers/%(server_id)s/os-volume_attachments' service = compute_service.ComputeService() # capabilities allow_create = True allow_get = True allow_update = False allow_delete = True allow_list = True _query_mapping = resource.QueryParameters("limit", "offset") #: Name of the device such as, /dev/vdb. device = resource.Body('device') #: The ID of the attachment. id = resource.Body('id') #: The ID for the server. server_id = resource.URI('server_id') #: The ID of the attached volume. volume_id = resource.Body('volumeId') #: The ID of the attachment you want to delete or update. attachment_id = resource.Body('attachment_id', alternate_id=True) openstacksdk-0.11.3/openstack/compute/v2/server.py0000666000175100017510000003564513236151340022200 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
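`VolumeAttachment.base_path` above contains a `%(server_id)s` placeholder; the `server_id` URI property is interpolated into it when a request URL is built. The substitution itself is plain %-formatting with a mapping key (the server id below is made up):

```python
# The template as declared on the resource class.
base_path = '/servers/%(server_id)s/os-volume_attachments'

# The URI property supplies the value at request-build time.
url = base_path % {"server_id": "8c65cb68-0681-4c30-bc88-6b83a8a26aee"}
```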
from openstack.compute import compute_service from openstack.compute.v2 import metadata from openstack import resource from openstack import utils class Server(resource.Resource, metadata.MetadataMixin): resource_key = 'server' resources_key = 'servers' base_path = '/servers' service = compute_service.ComputeService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( "image", "flavor", "name", "status", "host", "all_tenants", "sort_key", "sort_dir", "reservation_id", "tags", "project_id", tags_any="tags-any", not_tags="not-tags", not_tags_any="not-tags-any", is_deleted="deleted", ipv4_address="ip", ipv6_address="ip6", changes_since="changes-since") #: A list of dictionaries holding links relevant to this server. links = resource.Body('links') access_ipv4 = resource.Body('accessIPv4') access_ipv6 = resource.Body('accessIPv6') #: A dictionary of addresses this server can be accessed through. #: The dictionary contains keys such as ``private`` and ``public``, #: each containing a list of dictionaries for addresses of that type. #: The addresses are contained in a dictionary with keys ``addr`` #: and ``version``, which is either 4 or 6 depending on the protocol #: of the IP address. *Type: dict* addresses = resource.Body('addresses', type=dict) #: Timestamp of when the server was created. created_at = resource.Body('created') #: The flavor reference, as an ID or full URL, for the flavor to use for #: this server. flavor_id = resource.Body('flavorRef') #: The flavor property as returned from server. flavor = resource.Body('flavor', type=dict) #: An ID representing the host of this server. host_id = resource.Body('hostId') #: The image reference, as an ID or full URL, for the image to use for #: this server. image_id = resource.Body('imageRef') #: The image property as returned from server. image = resource.Body('image', type=dict) #: Metadata stored for this server.
*Type: dict* metadata = resource.Body('metadata', type=dict) #: While the server is building, this value represents the percentage #: of completion. Once it is completed, it will be 100. *Type: int* progress = resource.Body('progress', type=int) #: The ID of the project this server is associated with. project_id = resource.Body('tenant_id') #: The state this server is in. Valid values include ``ACTIVE``, #: ``BUILDING``, ``DELETED``, ``ERROR``, ``HARD_REBOOT``, ``PASSWORD``, #: ``PAUSED``, ``REBOOT``, ``REBUILD``, ``RESCUED``, ``RESIZED``, #: ``REVERT_RESIZE``, ``SHUTOFF``, ``SOFT_DELETED``, ``STOPPED``, #: ``SUSPENDED``, ``UNKNOWN``, or ``VERIFY_RESIZE``. status = resource.Body('status') #: Timestamp of when this server was last updated. updated_at = resource.Body('updated') #: The ID of the owner of this server. user_id = resource.Body('user_id') #: The name of an associated keypair key_name = resource.Body('key_name') #: The disk configuration. Either AUTO or MANUAL. disk_config = resource.Body('OS-DCF:diskConfig') #: Indicates whether a configuration drive enables metadata injection. #: Not all cloud providers enable this feature. has_config_drive = resource.Body('config_drive') #: The name of the availability zone this server is a part of. availability_zone = resource.Body('OS-EXT-AZ:availability_zone') #: The power state of this server. power_state = resource.Body('OS-EXT-STS:power_state') #: The task state of this server. task_state = resource.Body('OS-EXT-STS:task_state') #: The VM state of this server. vm_state = resource.Body('OS-EXT-STS:vm_state') #: A list of attached volumes. Each item in the list contains at least #: an "id" key to identify the specific volume. attached_volumes = resource.Body( 'os-extended-volumes:volumes_attached') #: The timestamp when the server was launched. launched_at = resource.Body('OS-SRV-USG:launched_at') #: The timestamp when the server was terminated (if it has been).
terminated_at = resource.Body('OS-SRV-USG:terminated_at') #: A list of applicable security groups. Each group contains keys for #: description, name, id, and rules. security_groups = resource.Body('security_groups') #: When a server is first created, it provides the administrator password. admin_password = resource.Body('adminPass') #: The file path and contents, text only, to inject into the server at #: launch. The maximum size of the file path data is 255 bytes. #: The maximum limit is The number of allowed bytes in the decoded, #: rather than encoded, data. personality = resource.Body('personality') #: Configuration information or scripts to use upon launch. #: Must be Base64 encoded. user_data = resource.Body('OS-EXT-SRV-ATTR:user_data') #: Enables fine grained control of the block device mapping for an #: instance. This is typically used for booting servers from volumes. block_device_mapping = resource.Body('block_device_mapping_v2') #: The dictionary of data to send to the scheduler. scheduler_hints = resource.Body('OS-SCH-HNT:scheduler_hints', type=dict) #: A networks object. Required parameter when there are multiple #: networks defined for the tenant. When you do not specify the #: networks parameter, the server attaches to the only network #: created for the current tenant. networks = resource.Body('networks') #: The hypervisor host name. Appears in the response for administrative #: users only. hypervisor_hostname = resource.Body('OS-EXT-SRV-ATTR:hypervisor_hostname') #: The instance name. The Compute API generates the instance name from the #: instance name template. Appears in the response for administrative users #: only. 
instance_name = resource.Body('OS-EXT-SRV-ATTR:instance_name') def _prepare_request(self, requires_id=True, prepend_key=True): request = super(Server, self)._prepare_request(requires_id=requires_id, prepend_key=prepend_key) server_body = request.body[self.resource_key] # Some names exist without prefix on requests but with a prefix # on responses. If we find that we've populated one of these # attributes with something and then go to make a request, swap out # the name to the bare version. # Availability Zones exist with a prefix on response, but not request az_key = "OS-EXT-AZ:availability_zone" if az_key in server_body: server_body["availability_zone"] = server_body.pop(az_key) # User Data exists with a prefix on response, but not request ud_key = "OS-EXT-SRV-ATTR:user_data" if ud_key in server_body: server_body["user_data"] = server_body.pop(ud_key) # Scheduler hints are sent in a top-level scope, not within the # resource_key scope like everything else. If we try to send # scheduler_hints, pop them out of the resource_key scope and into # their own top-level scope. hint_key = "OS-SCH-HNT:scheduler_hints" if hint_key in server_body: request.body[hint_key] = server_body.pop(hint_key) return request def _action(self, session, body): """Perform server actions given the message body.""" # NOTE: This is using Server.base_path instead of self.base_path # as both Server and ServerDetail instances can be acted on, but # the URL used is sans any additional /detail/ part.
        url = utils.urljoin(Server.base_path, self.id, 'action')
        headers = {'Accept': ''}
        return session.post(
            url, json=body, headers=headers)

    def change_password(self, session, new_password):
        """Change the administrator password to the given password."""
        body = {'changePassword': {'adminPass': new_password}}
        self._action(session, body)

    def get_password(self, session):
        """Get the encrypted administrator password."""
        url = utils.urljoin(Server.base_path, self.id, 'os-server-password')
        return session.get(url, endpoint_filter=self.service)

    def reboot(self, session, reboot_type):
        """Reboot server where reboot_type might be 'SOFT' or 'HARD'."""
        body = {'reboot': {'type': reboot_type}}
        self._action(session, body)

    def force_delete(self, session):
        """Force delete a server."""
        body = {'forceDelete': None}
        self._action(session, body)

    def rebuild(self, session, name, admin_password,
                preserve_ephemeral=False, image=None, access_ipv4=None,
                access_ipv6=None, metadata=None, personality=None):
        """Rebuild the server with the given arguments."""
        action = {
            'name': name,
            'adminPass': admin_password,
            'preserve_ephemeral': preserve_ephemeral
        }
        if image is not None:
            action['imageRef'] = resource.Resource._get_id(image)
        if access_ipv4 is not None:
            action['accessIPv4'] = access_ipv4
        if access_ipv6 is not None:
            action['accessIPv6'] = access_ipv6
        if metadata is not None:
            action['metadata'] = metadata
        if personality is not None:
            action['personality'] = personality

        body = {'rebuild': action}
        response = self._action(session, body)
        self._translate_response(response)
        return self

    def resize(self, session, flavor):
        """Resize server to flavor reference."""
        body = {'resize': {'flavorRef': flavor}}
        self._action(session, body)

    def confirm_resize(self, session):
        """Confirm the resize of the server."""
        body = {'confirmResize': None}
        self._action(session, body)

    def revert_resize(self, session):
        """Revert the resize of the server."""
        body = {'revertResize': None}
        self._action(session, body)

    def create_image(self, session, name, metadata=None):
        """Create image from server."""
        action = {'name': name}
        if metadata is not None:
            action['metadata'] = metadata
        body = {'createImage': action}
        self._action(session, body)

    def add_security_group(self, session, security_group):
        body = {"addSecurityGroup": {"name": security_group}}
        self._action(session, body)

    def remove_security_group(self, session, security_group):
        body = {"removeSecurityGroup": {"name": security_group}}
        self._action(session, body)

    def reset_state(self, session, state):
        body = {"os-resetState": {"state": state}}
        self._action(session, body)

    def add_fixed_ip(self, session, network_id):
        body = {"addFixedIp": {"networkId": network_id}}
        self._action(session, body)

    def remove_fixed_ip(self, session, address):
        body = {"removeFixedIp": {"address": address}}
        self._action(session, body)

    def add_floating_ip(self, session, address, fixed_address=None):
        body = {"addFloatingIp": {"address": address}}
        if fixed_address is not None:
            body['addFloatingIp']['fixed_address'] = fixed_address
        self._action(session, body)

    def remove_floating_ip(self, session, address):
        body = {"removeFloatingIp": {"address": address}}
        self._action(session, body)

    def backup(self, session, name, backup_type, rotation):
        body = {
            "createBackup": {
                "name": name,
                "backup_type": backup_type,
                "rotation": rotation
            }
        }
        self._action(session, body)

    def pause(self, session):
        body = {"pause": None}
        self._action(session, body)

    def unpause(self, session):
        body = {"unpause": None}
        self._action(session, body)

    def suspend(self, session):
        body = {"suspend": None}
        self._action(session, body)

    def resume(self, session):
        body = {"resume": None}
        self._action(session, body)

    def lock(self, session):
        body = {"lock": None}
        self._action(session, body)

    def unlock(self, session):
        body = {"unlock": None}
        self._action(session, body)

    def rescue(self, session, admin_pass=None, image_ref=None):
        body = {"rescue": {}}
        if admin_pass is not None:
            body["rescue"]["adminPass"] = admin_pass
        if image_ref is not None:
            body["rescue"]["rescue_image_ref"] = image_ref
        self._action(session, body)

    def unrescue(self, session):
        body = {"unrescue": None}
        self._action(session, body)

    def evacuate(self, session, host=None, admin_pass=None, force=None):
        body = {"evacuate": {}}
        if host is not None:
            body["evacuate"]["host"] = host
        if admin_pass is not None:
            body["evacuate"]["adminPass"] = admin_pass
        if force is not None:
            body["evacuate"]["force"] = force
        self._action(session, body)

    def start(self, session):
        body = {"os-start": None}
        self._action(session, body)

    def stop(self, session):
        body = {"os-stop": None}
        self._action(session, body)

    def shelve(self, session):
        body = {"shelve": None}
        self._action(session, body)

    def unshelve(self, session):
        body = {"unshelve": None}
        self._action(session, body)

    def migrate(self, session):
        body = {"migrate": None}
        self._action(session, body)

    def get_console_output(self, session, length=None):
        body = {"os-getConsoleOutput": {}}
        if length is not None:
            body["os-getConsoleOutput"]["length"] = length
        resp = self._action(session, body)
        return resp.json()

    def live_migrate(self, session, host, force):
        body = {
            "os-migrateLive": {
                "host": host,
                "block_migration": "auto",
                "force": force
            }
        }
        self._action(session, body)


class ServerDetail(Server):
    base_path = '/servers/detail'

    # capabilities
    allow_create = False
    allow_get = False
    allow_update = False
    allow_delete = False
    allow_list = True

openstacksdk-0.11.3/openstack/compute/v2/limits.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.compute import compute_service
from openstack import resource


class AbsoluteLimits(resource.Resource):
    #: The number of key-value pairs that can be set as image metadata.
    image_meta = resource.Body("maxImageMeta")
    #: The maximum number of personality contents that can be supplied.
    personality = resource.Body("maxPersonality")
    #: The maximum size, in bytes, of a personality.
    personality_size = resource.Body("maxPersonalitySize")
    #: The maximum number of security group rules allowed.
    security_group_rules = resource.Body("maxSecurityGroupRules")
    #: The maximum number of security groups allowed.
    security_groups = resource.Body("maxSecurityGroups")
    #: The number of security groups currently in use.
    security_groups_used = resource.Body("totalSecurityGroupsUsed")
    #: The number of key-value pairs that can be set as server metadata.
    server_meta = resource.Body("maxServerMeta")
    #: The maximum number of cores.
    total_cores = resource.Body("maxTotalCores")
    #: The number of cores currently in use.
    total_cores_used = resource.Body("totalCoresUsed")
    #: The maximum number of floating IPs.
    floating_ips = resource.Body("maxTotalFloatingIps")
    #: The number of floating IPs currently in use.
    floating_ips_used = resource.Body("totalFloatingIpsUsed")
    #: The maximum number of instances.
    instances = resource.Body("maxTotalInstances")
    #: The number of instances currently in use.
    instances_used = resource.Body("totalInstancesUsed")
    #: The maximum number of keypairs.
    keypairs = resource.Body("maxTotalKeypairs")
    #: The maximum RAM size in megabytes.
    total_ram = resource.Body("maxTotalRAMSize")
    #: The RAM size in megabytes currently in use.
    total_ram_used = resource.Body("totalRAMUsed")
    #: The maximum number of server groups.
    server_groups = resource.Body("maxServerGroups")
    #: The number of server groups currently in use.
    server_groups_used = resource.Body("totalServerGroupsUsed")
    #: The maximum number of members in a server group.
    server_group_members = resource.Body("maxServerGroupMembers")


class RateLimit(resource.Resource):
    # TODO(mordred) Make a resource type for the contents of limit and add
    # it to list_type here.
    #: A list of the specific limits that apply to the ``regex`` and ``uri``.
    limits = resource.Body("limit", type=list)
    #: A regex representing which routes this rate limit applies to.
    regex = resource.Body("regex")
    #: A URI representing which routes this rate limit applies to.
    uri = resource.Body("uri")


class Limits(resource.Resource):
    base_path = "/limits"
    resource_key = "limits"
    service = compute_service.ComputeService()

    allow_get = True

    absolute = resource.Body("absolute", type=AbsoluteLimits)
    rate = resource.Body("rate", type=list, list_type=RateLimit)

    def get(self, session, requires_id=False, error_message=None):
        """Get the Limits resource.

        :param session: The session to use for making this request.
        :type session: :class:`~keystoneauth1.adapter.Adapter`

        :returns: A Limits instance
        :rtype: :class:`~openstack.compute.v2.limits.Limits`
        """
        # TODO(mordred) We shouldn't have to subclass just to declare
        # requires_id = False.
        return super(Limits, self).get(
            session=session, requires_id=False, error_message=error_message)

openstacksdk-0.11.3/openstack/compute/v2/metadata.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import six

from openstack import utils


class MetadataMixin(object):

    def _metadata(self, method, key=None, clear=False, delete=False,
                  **metadata):
        for k, v in metadata.items():
            if not isinstance(v, six.string_types):
                raise ValueError("The value for %s (%s) must be "
                                 "a text string" % (k, v))

        # If we're in a ServerDetail, we need to pop the "detail" portion
        # of the URL off and then everything else will work the same.
        pos = self.base_path.find("detail")
        if pos != -1:
            base = self.base_path[:pos]
        else:
            base = self.base_path

        if key is not None:
            url = utils.urljoin(base, self.id, "metadata", key)
        else:
            url = utils.urljoin(base, self.id, "metadata")

        kwargs = {}
        if metadata or clear:
            # 'meta' is the key for singular modifications.
            # 'metadata' is the key for mass modifications.
            key = "meta" if key is not None else "metadata"
            kwargs["json"] = {key: metadata}

        headers = {"Accept": ""} if delete else {}

        response = method(url, headers=headers, **kwargs)

        # DELETE doesn't return a JSON body while everything else does.
        return response.json() if not delete else None

    def get_metadata(self, session):
        """Retrieve metadata

        :param session: The session to use for this request.

        :returns: A dictionary of the requested metadata. All keys and
                  values are Unicode text.
        :rtype: dict
        """
        result = self._metadata(session.get)
        return result["metadata"]

    def set_metadata(self, session, **metadata):
        """Update metadata

        This call will replace only the metadata with the same keys
        given here. Metadata with other keys will not be modified.

        :param session: The session to use for this request.
        :param kwargs metadata: key/value metadata pairs to be updated on
                                this server instance. All keys and values
                                are stored as Unicode.

        :returns: A dictionary of the metadata after being updated.
                  All keys and values are Unicode text.
        :rtype: dict
        """
        if not metadata:
            return dict()

        result = self._metadata(session.post, **metadata)
        return result["metadata"]

    def delete_metadata(self, session, keys):
        """Delete metadata

        Note: This method will do an HTTP DELETE request for every key in keys.

        :param session: The session to use for this request.
        :param list keys: The keys to delete.

        :rtype: ``None``
        """
        for key in keys:
            self._metadata(session.delete, key=key, delete=True)

openstacksdk-0.11.3/openstack/compute/v2/_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
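The ``meta``-versus-``metadata`` body-key distinction in ``MetadataMixin._metadata`` above is subtle: a single-key modification is wrapped under ``meta``, while a mass modification is wrapped under ``metadata``. A minimal standalone sketch of just that request-body logic (``build_metadata_body`` is a hypothetical helper for illustration, not part of the SDK):

```python
def build_metadata_body(key=None, **metadata):
    """Mirror _metadata's body construction: 'meta' wraps a single-key
    modification, 'metadata' wraps a mass modification."""
    wrapper = "meta" if key is not None else "metadata"
    return {wrapper: metadata}


# Mass update -> POST .../metadata, pairs wrapped under 'metadata'.
mass = build_metadata_body(role="web", tier="frontend")
# Single-key update -> request to .../metadata/<key>, wrapped under 'meta'.
single = build_metadata_body(key="role", role="db")
```

Here ``mass`` comes out as ``{"metadata": {...}}`` and ``single`` as ``{"meta": {...}}``, matching the Compute API's convention for the two metadata endpoints.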
from openstack.compute.v2 import availability_zone
from openstack.compute.v2 import extension
from openstack.compute.v2 import flavor as _flavor
from openstack.compute.v2 import hypervisor as _hypervisor
from openstack.compute.v2 import image as _image
from openstack.compute.v2 import keypair as _keypair
from openstack.compute.v2 import limits
from openstack.compute.v2 import server as _server
from openstack.compute.v2 import server_group as _server_group
from openstack.compute.v2 import server_interface as _server_interface
from openstack.compute.v2 import server_ip
from openstack.compute.v2 import service as _service
from openstack.compute.v2 import volume_attachment as _volume_attachment
from openstack import proxy
from openstack import resource


class Proxy(proxy.BaseProxy):

    def find_extension(self, name_or_id, ignore_missing=True):
        """Find a single extension

        :param name_or_id: The name or ID of an extension.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the resource does not exist.
            When set to ``True``, None will be returned when
            attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.compute.v2.extension.Extension`
            or None
        """
        return self._find(extension.Extension, name_or_id,
                          ignore_missing=ignore_missing)

    def extensions(self):
        """Retrieve a generator of extensions

        :returns: A generator of extension instances.
        :rtype: :class:`~openstack.compute.v2.extension.Extension`
        """
        return self._list(extension.Extension, paginated=False)

    def find_flavor(self, name_or_id, ignore_missing=True):
        """Find a single flavor

        :param name_or_id: The name or ID of a flavor.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the resource does not exist.
            When set to ``True``, None will be returned when
            attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.compute.v2.flavor.Flavor` or None
        """
        return self._find(_flavor.Flavor, name_or_id,
                          ignore_missing=ignore_missing)

    def create_flavor(self, **attrs):
        """Create a new flavor from attributes

        :param dict attrs: Keyword arguments which will be used to create
            a :class:`~openstack.compute.v2.flavor.Flavor`,
            comprised of the properties on the Flavor class.

        :returns: The results of flavor creation
        :rtype: :class:`~openstack.compute.v2.flavor.Flavor`
        """
        return self._create(_flavor.Flavor, **attrs)

    def delete_flavor(self, flavor, ignore_missing=True):
        """Delete a flavor

        :param flavor: The value can be either the ID of a flavor or a
            :class:`~openstack.compute.v2.flavor.Flavor` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the flavor does not exist.
            When set to ``True``, no exception will be set when
            attempting to delete a nonexistent flavor.

        :returns: ``None``
        """
        self._delete(_flavor.Flavor, flavor, ignore_missing=ignore_missing)

    def get_flavor(self, flavor):
        """Get a single flavor

        :param flavor: The value can be the ID of a flavor or a
            :class:`~openstack.compute.v2.flavor.Flavor` instance.

        :returns: One :class:`~openstack.compute.v2.flavor.Flavor`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_flavor.Flavor, flavor)

    def flavors(self, details=True, **query):
        """Return a generator of flavors

        :param bool details: When ``True``, returns
            :class:`~openstack.compute.v2.flavor.FlavorDetail` objects,
            otherwise :class:`~openstack.compute.v2.flavor.Flavor`.
            *Default: ``True``*
        :param kwargs \*\*query: Optional query parameters to be sent to limit
            the flavors being returned.

        :returns: A generator of flavor objects
        """
        flv = _flavor.FlavorDetail if details else _flavor.Flavor
        return self._list(flv, paginated=True, **query)

    def delete_image(self, image, ignore_missing=True):
        """Delete an image

        :param image: The value can be either the ID of an image or a
            :class:`~openstack.compute.v2.image.Image` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the image does not exist.
            When set to ``True``, no exception will be set when
            attempting to delete a nonexistent image.

        :returns: ``None``
        """
        self._delete(_image.Image, image, ignore_missing=ignore_missing)

    def find_image(self, name_or_id, ignore_missing=True):
        """Find a single image

        :param name_or_id: The name or ID of an image.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the resource does not exist.
            When set to ``True``, None will be returned when
            attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.compute.v2.image.Image` or None
        """
        return self._find(_image.Image, name_or_id,
                          ignore_missing=ignore_missing)

    def get_image(self, image):
        """Get a single image

        :param image: The value can be the ID of an image or a
            :class:`~openstack.compute.v2.image.Image` instance.

        :returns: One :class:`~openstack.compute.v2.image.Image`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_image.Image, image)

    def images(self, details=True, **query):
        """Return a generator of images

        :param bool details: When ``True``, returns
            :class:`~openstack.compute.v2.image.ImageDetail` objects,
            otherwise :class:`~openstack.compute.v2.image.Image`.
            *Default: ``True``*
        :param kwargs \*\*query: Optional query parameters to be sent to limit
            the resources being returned.
        :returns: A generator of image objects
        """
        img = _image.ImageDetail if details else _image.Image
        return self._list(img, paginated=True, **query)

    def _get_base_resource(self, res, base):
        # Metadata calls for Image and Server can work for both those
        # resources but also ImageDetail and ServerDetail. If we get
        # either class, use it, otherwise create an instance of the base.
        if isinstance(res, base):
            return res
        else:
            return base(id=res)

    def get_image_metadata(self, image):
        """Return a dictionary of metadata for an image

        :param image: Either the ID of an image or a
            :class:`~openstack.compute.v2.image.Image` or
            :class:`~openstack.compute.v2.image.ImageDetail` instance.

        :returns: A :class:`~openstack.compute.v2.image.Image` with only the
            image's metadata. All keys and values are Unicode text.
        :rtype: :class:`~openstack.compute.v2.image.Image`
        """
        res = self._get_base_resource(image, _image.Image)
        metadata = res.get_metadata(self)
        result = _image.Image.existing(id=res.id, metadata=metadata)
        return result

    def set_image_metadata(self, image, **metadata):
        """Update metadata for an image

        :param image: Either the ID of an image or a
            :class:`~openstack.compute.v2.image.Image` or
            :class:`~openstack.compute.v2.image.ImageDetail` instance.
        :param kwargs metadata: Key/value pairs to be updated in the image's
            metadata. No other metadata is modified by this call. All keys
            and values are stored as Unicode.

        :returns: A :class:`~openstack.compute.v2.image.Image` with only the
            image's metadata. All keys and values are Unicode text.
        :rtype: :class:`~openstack.compute.v2.image.Image`
        """
        res = self._get_base_resource(image, _image.Image)
        metadata = res.set_metadata(self, **metadata)
        result = _image.Image.existing(id=res.id, metadata=metadata)
        return result

    def delete_image_metadata(self, image, keys):
        """Delete metadata for an image

        Note: This method will do an HTTP DELETE request for every key in keys.

        :param image: Either the ID of an image or a
            :class:`~openstack.compute.v2.image.Image` or
            :class:`~openstack.compute.v2.image.ImageDetail` instance.
        :param keys: The keys to delete.

        :rtype: ``None``
        """
        res = self._get_base_resource(image, _image.Image)
        return res.delete_metadata(self, keys)

    def create_keypair(self, **attrs):
        """Create a new keypair from attributes

        :param dict attrs: Keyword arguments which will be used to create
            a :class:`~openstack.compute.v2.keypair.Keypair`,
            comprised of the properties on the Keypair class.

        :returns: The results of keypair creation
        :rtype: :class:`~openstack.compute.v2.keypair.Keypair`
        """
        return self._create(_keypair.Keypair, **attrs)

    def delete_keypair(self, keypair, ignore_missing=True):
        """Delete a keypair

        :param keypair: The value can be either the ID of a keypair or a
            :class:`~openstack.compute.v2.keypair.Keypair` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the keypair does not exist.
            When set to ``True``, no exception will be set when
            attempting to delete a nonexistent keypair.

        :returns: ``None``
        """
        self._delete(_keypair.Keypair, keypair, ignore_missing=ignore_missing)

    def get_keypair(self, keypair):
        """Get a single keypair

        :param keypair: The value can be the ID of a keypair or a
            :class:`~openstack.compute.v2.keypair.Keypair` instance.

        :returns: One :class:`~openstack.compute.v2.keypair.Keypair`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_keypair.Keypair, keypair)

    def find_keypair(self, name_or_id, ignore_missing=True):
        """Find a single keypair

        :param name_or_id: The name or ID of a keypair.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the resource does not exist.
            When set to ``True``, None will be returned when
            attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.compute.v2.keypair.Keypair` or None
        """
        return self._find(_keypair.Keypair, name_or_id,
                          ignore_missing=ignore_missing)

    def keypairs(self):
        """Return a generator of keypairs

        :returns: A generator of keypair objects
        :rtype: :class:`~openstack.compute.v2.keypair.Keypair`
        """
        return self._list(_keypair.Keypair, paginated=False)

    def get_limits(self):
        """Retrieve limits that are applied to the project's account

        :returns: A Limits object, including both
                  :class:`~openstack.compute.v2.limits.AbsoluteLimits` and
                  :class:`~openstack.compute.v2.limits.RateLimit`
        :rtype: :class:`~openstack.compute.v2.limits.Limits`
        """
        return self._get(limits.Limits)

    def create_server(self, **attrs):
        """Create a new server from attributes

        :param dict attrs: Keyword arguments which will be used to create
            a :class:`~openstack.compute.v2.server.Server`,
            comprised of the properties on the Server class.

        :returns: The results of server creation
        :rtype: :class:`~openstack.compute.v2.server.Server`
        """
        return self._create(_server.Server, **attrs)

    def delete_server(self, server, ignore_missing=True, force=False):
        """Delete a server

        :param server: The value can be either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the server does not exist.
            When set to ``True``, no exception will be set when
            attempting to delete a nonexistent server.
        :param bool force: When set to ``True``, the server deletion will be
            forced immediately.

        :returns: ``None``
        """
        if force:
            server = self._get_resource(_server.Server, server)
            server.force_delete(self)
        else:
            self._delete(_server.Server, server,
                         ignore_missing=ignore_missing)

    def find_server(self, name_or_id, ignore_missing=True):
        """Find a single server

        :param name_or_id: The name or ID of a server.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be
            raised when the resource does not exist.
            When set to ``True``, None will be returned when
            attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.compute.v2.server.Server` or None
        """
        return self._find(_server.Server, name_or_id,
                          ignore_missing=ignore_missing)

    def get_server(self, server):
        """Get a single server

        :param server: The value can be the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: One :class:`~openstack.compute.v2.server.Server`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_server.Server, server)

    def servers(self, details=True, **query):
        """Retrieve a generator of servers

        :param bool details: When set to ``False``
            :class:`~openstack.compute.v2.server.Server` instances
            will be returned. The default, ``True``, will cause
            :class:`~openstack.compute.v2.server.ServerDetail`
            instances to be returned.
        :param kwargs \*\*query: Optional query parameters to be sent to limit
            the servers being returned. Available parameters include:

            * changes_since: A time/date stamp for when the server last
              changed status.
            * image: An image resource or ID.
            * flavor: A flavor resource or ID.
            * name: Name of the server as a string. Can be queried with
              regular expressions. The regular expression ?name=bob
              returns both bob and bobb. If you must match on only bob,
              you can use a regular expression that matches the syntax
              of the underlying database server that is implemented for
              Compute, such as MySQL or PostgreSQL.
            * status: Value of the status of the server so that you can
              filter on "ACTIVE" for example.
            * host: Name of the host as a string.
            * limit: Requests a specified page size of returned items from
              the query. Returns a number of items up to the specified
              limit value. Use the limit parameter to make an initial
              limited request and use the ID of the last-seen item from
              the response as the marker parameter value in a subsequent
              limited request.
            * marker: Specifies the ID of the last-seen item. Use the limit
              parameter to make an initial limited request and use the ID
              of the last-seen item from the response as the marker
              parameter value in a subsequent limited request.

        :returns: A generator of server instances.
        """
        srv = _server.ServerDetail if details else _server.Server
        return self._list(srv, paginated=True, **query)

    def update_server(self, server, **attrs):
        """Update a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :attrs kwargs: The attributes to update on the server represented
            by ``server``.

        :returns: The updated server
        :rtype: :class:`~openstack.compute.v2.server.Server`
        """
        return self._update(_server.Server, server, **attrs)

    def change_server_password(self, server, new_password):
        """Change the administrator password

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param str new_password: The new password to be set.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.change_password(self, new_password)

    def get_server_password(self, server):
        """Get the administrator password

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: encrypted password.
        """
        server = self._get_resource(_server.Server, server)
        return server.get_password(self._session)

    def reset_server_state(self, server, state):
        """Reset the state of server

        :param server: The server can be either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server`.
        :param state: The state of the server to be set, `active` or
            `error` are valid.

        :returns: None
        """
        res = self._get_base_resource(server, _server.Server)
        res.reset_state(self, state)

    def reboot_server(self, server, reboot_type):
        """Reboot a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param str reboot_type: The type of reboot to perform.
            "HARD" and "SOFT" are the current options.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.reboot(self, reboot_type)

    def rebuild_server(self, server, name, admin_password, **attrs):
        """Rebuild a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param str name: The name of the server
        :param str admin_password: The administrator password
        :param bool preserve_ephemeral: Indicates whether the server
            is rebuilt with the preservation of the ephemeral partition.
            *Default: False*
        :param str image: The id of an image to rebuild with. *Default: None*
        :param str access_ipv4: The IPv4 address to rebuild with.
            *Default: None*
        :param str access_ipv6: The IPv6 address to rebuild with.
            *Default: None*
        :param dict metadata: A dictionary of metadata to rebuild with.
            *Default: None*
        :param personality: A list of dictionaries, each including a
            **path** and **contents** key, to be injected into the rebuilt
            server at launch. *Default: None*

        :returns: The rebuilt :class:`~openstack.compute.v2.server.Server`
            instance.
        """
        server = self._get_resource(_server.Server, server)
        return server.rebuild(self, name, admin_password, **attrs)

    def resize_server(self, server, flavor):
        """Resize a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param flavor: Either the ID of a flavor or a
            :class:`~openstack.compute.v2.flavor.Flavor` instance.
        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        flavor_id = resource.Resource._get_id(flavor)
        server.resize(self, flavor_id)

    def confirm_server_resize(self, server):
        """Confirm a server resize

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.confirm_resize(self)

    def revert_server_resize(self, server):
        """Revert a server resize

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.revert_resize(self)

    def create_server_image(self, server, name, metadata=None):
        """Create an image from a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param str name: The name of the image to be created.
        :param dict metadata: A dictionary of metadata to be set on the image.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.create_image(self, name, metadata)

    def add_security_group_to_server(self, server, security_group):
        """Add a security group to a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param security_group: Either the ID of a security group or a
            :class:`~openstack.network.v2.security_group.SecurityGroup`
            instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        security_group_id = resource.Resource._get_id(security_group)
        server.add_security_group(self, security_group_id)

    def remove_security_group_from_server(self, server, security_group):
        """Remove a security group from a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param security_group: Either the ID of a security group or a
            :class:`~openstack.network.v2.security_group.SecurityGroup`
            instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        security_group_id = resource.Resource._get_id(security_group)
        server.remove_security_group(self, security_group_id)

    def add_fixed_ip_to_server(self, server, network_id):
        """Adds a fixed IP address to a server instance.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param network_id: The ID of the network from which a fixed IP
            address is about to be allocated.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.add_fixed_ip(self, network_id)

    def remove_fixed_ip_from_server(self, server, address):
        """Removes a fixed IP address from a server instance.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param address: The fixed IP address to be disassociated from
            the server.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.remove_fixed_ip(self, address)

    def add_floating_ip_to_server(self, server, address, fixed_address=None):
        """Adds a floating IP address to a server instance.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param address: The floating IP address to be added to the server.
        :param fixed_address: The fixed IP address to be associated with the
            floating IP address. Used when the server is connected to
            multiple networks.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.add_floating_ip(self, address, fixed_address=fixed_address)

    def remove_floating_ip_from_server(self, server, address):
        """Removes a floating IP address from a server instance.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param address: The floating IP address to be disassociated from
            the server.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.remove_floating_ip(self, address)

    def backup_server(self, server, name, backup_type, rotation):
        """Backup a server

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
        :param name: The name of the backup image.
        :param backup_type: The type of the backup, for example, daily.
        :param rotation: The rotation of the backup image; the oldest
            image will be removed when the image count exceeds the
            rotation count.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.backup(self, name, backup_type, rotation)

    def pause_server(self, server):
        """Pauses a server and changes its status to ``PAUSED``.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.pause(self)

    def unpause_server(self, server):
        """Unpauses a paused server and changes its status to ``ACTIVE``.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.unpause(self)

    def suspend_server(self, server):
        """Suspends a server and changes its status to ``SUSPENDED``.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.suspend(self)

    def resume_server(self, server):
        """Resumes a suspended server and changes its status to ``ACTIVE``.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.resume(self)

    def lock_server(self, server):
        """Locks a server.

        :param server: Either the ID of a server or a
            :class:`~openstack.compute.v2.server.Server` instance.
:returns: None """ server = self._get_resource(_server.Server, server) server.lock(self) def unlock_server(self, server): """Unlocks a locked server. :param server: Either the ID of a server or a :class:`~openstack.compute.v2.server.Server` instance. :returns: None """ server = self._get_resource(_server.Server, server) server.unlock(self) def rescue_server(self, server, admin_pass=None, image_ref=None): """Puts a server in rescue mode and changes it status to ``RESCUE``. :param server: Either the ID of a server or a :class:`~openstack.compute.v2.server.Server` instance. :param admin_pass: The password for the rescued server. If you omit this parameter, the operation generates a new password. :param image_ref: The image reference to use to rescue your server. This can be the image ID or its full URL. If you omit this parameter, the base image reference will be used. :returns: None """ server = self._get_resource(_server.Server, server) server.rescue(self, admin_pass=admin_pass, image_ref=image_ref) def unrescue_server(self, server): """Unrescues a server and changes its status to ``ACTIVE``. :param server: Either the ID of a server or a :class:`~openstack.compute.v2.server.Server` instance. :returns: None """ server = self._get_resource(_server.Server, server) server.unrescue(self) def evacuate_server(self, server, host=None, admin_pass=None, force=None): """Evacuates a server from a failed host to a new host. :param server: Either the ID of a server or a :class:`~openstack.compute.v2.server.Server` instance. :param host: An optional parameter specifying the name or ID of the host to which the server is evacuated. :param admin_pass: An optional parameter specifying the administrative password to access the evacuated or rebuilt server. :param force: Force an evacuation by not verifying the provided destination host by the scheduler. (New in API version 2.29). 
        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.evacuate(self, host=host, admin_pass=admin_pass, force=force)

    def start_server(self, server):
        """Starts a stopped server and changes its state to ``ACTIVE``.

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.start(self)

    def stop_server(self, server):
        """Stops a running server and changes its state to ``SHUTOFF``.

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.stop(self)

    def shelve_server(self, server):
        """Shelves a server.

        All associated data and resources are kept but anything still in
        memory is not retained. Policy defaults enable only users with the
        administrative role or the owner of the server to perform this
        operation. Cloud providers could change this permission, though.

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.shelve(self)

    def unshelve_server(self, server):
        """Unshelves or restores a shelved server.

        Policy defaults enable only users with the administrative role or the
        owner of the server to perform this operation. Cloud providers could
        change this permission, though.

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.unshelve(self)

    def get_server_console_output(self, server, length=None):
        """Return the console output for a server.

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.
        :param length: Optional number of lines to fetch from the end of the
                       console log. All lines will be returned if this is not
                       specified.
        :returns: The console output as a dict. Control characters will be
                  escaped to create a valid JSON string.
        """
        server = self._get_resource(_server.Server, server)
        return server.get_console_output(self, length=length)

    def wait_for_server(self, server, status='ACTIVE', failures=['ERROR'],
                        interval=2, wait=120):
        """Wait for a server to be in a particular status.

        :param server: The :class:`~openstack.compute.v2.server.Server`
                       to wait on to reach the specified status.
        :param status: Desired status.
        :param failures: Statuses that would be interpreted as failures.
        :param interval: Number of seconds to wait between consecutive checks.
        :param wait: Maximum number of seconds to wait before the status to
                     be reached.
        """
        return resource.wait_for_status(
            self, server, status, failures, interval, wait)

    def create_server_interface(self, server, **attrs):
        """Create a new server interface from attributes

        :param server: The server can be either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance
                       that the interface belongs to.
        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.compute.v2.server_interface.ServerInterface`,
            comprised of the properties on the ServerInterface class.

        :returns: The results of server interface creation
        :rtype:
            :class:`~openstack.compute.v2.server_interface.ServerInterface`
        """
        server_id = resource.Resource._get_id(server)
        return self._create(_server_interface.ServerInterface,
                            server_id=server_id, **attrs)

    def delete_server_interface(self, server_interface, server=None,
                                ignore_missing=True):
        """Delete a server interface

        :param server_interface:
            The value can be either the ID of a server interface or a
            :class:`~openstack.compute.v2.server_interface.ServerInterface`
            instance.
        :param server: This parameter needs to be specified when a
                       ServerInterface ID is given as the value. It can be
                       either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance
                       that the interface belongs to.
        :param bool ignore_missing: When set to ``False``
                    :class:`~openstack.exceptions.ResourceNotFound` will be
                    raised when the server interface does not exist. When set
                    to ``True``, no exception will be raised when attempting
                    to delete a nonexistent server interface.
        :returns: ``None``
        """
        server_id = self._get_uri_attribute(server_interface, server,
                                            "server_id")
        server_interface = resource.Resource._get_id(server_interface)
        self._delete(_server_interface.ServerInterface,
                     port_id=server_interface,
                     server_id=server_id,
                     ignore_missing=ignore_missing)

    def get_server_interface(self, server_interface, server=None):
        """Get a single server interface

        :param server_interface:
            The value can be the ID of a server interface or a
            :class:`~openstack.compute.v2.server_interface.ServerInterface`
            instance.
        :param server: This parameter needs to be specified when a
                       ServerInterface ID is given as the value. It can be
                       either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance
                       that the interface belongs to.

        :returns: One
            :class:`~openstack.compute.v2.server_interface.ServerInterface`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
                 when no resource can be found.
        """
        server_id = self._get_uri_attribute(server_interface, server,
                                            "server_id")
        server_interface = resource.Resource._get_id(server_interface)
        return self._get(_server_interface.ServerInterface,
                         server_id=server_id, port_id=server_interface)

    def server_interfaces(self, server):
        """Return a generator of server interfaces

        :param server: The server can be either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server`.

        :returns: A generator of ServerInterface objects
        :rtype:
            :class:`~openstack.compute.v2.server_interface.ServerInterface`
        """
        server_id = resource.Resource._get_id(server)
        return self._list(_server_interface.ServerInterface, paginated=False,
                          server_id=server_id)

    def server_ips(self, server, network_label=None):
        """Return a generator of server IPs

        :param server: The server can be either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server`.
        :param network_label: The name of a particular network to list
                              IP addresses from.
        :returns: A generator of ServerIP objects
        :rtype: :class:`~openstack.compute.v2.server_ip.ServerIP`
        """
        server_id = resource.Resource._get_id(server)
        return self._list(server_ip.ServerIP, paginated=False,
                          server_id=server_id, network_label=network_label)

    def availability_zones(self, details=False):
        """Return a generator of availability zones

        :param bool details: Return extra details about the availability
                             zones. This defaults to `False` as it generally
                             requires extra permission.

        :returns: A generator of availability zones
        :rtype: :class:`~openstack.compute.v2.availability_zone.\
                AvailabilityZone`
        """
        if details:
            az = availability_zone.AvailabilityZoneDetail
        else:
            az = availability_zone.AvailabilityZone

        return self._list(az, paginated=False)

    def get_server_metadata(self, server):
        """Return a dictionary of metadata for a server

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` or
                       :class:`~openstack.compute.v2.server.ServerDetail`
                       instance.

        :returns: A :class:`~openstack.compute.v2.server.Server` with only the
                  server's metadata. All keys and values are Unicode text.
        :rtype: :class:`~openstack.compute.v2.server.Server`
        """
        res = self._get_base_resource(server, _server.Server)
        metadata = res.get_metadata(self)
        result = _server.Server.existing(id=res.id, metadata=metadata)
        return result

    def set_server_metadata(self, server, **metadata):
        """Update metadata for a server

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` or
                       :class:`~openstack.compute.v2.server.ServerDetail`
                       instance.
        :param kwargs metadata: Key/value pairs to be updated in the server's
                                metadata. No other metadata is modified by
                                this call. All keys and values are stored as
                                Unicode.

        :returns: A :class:`~openstack.compute.v2.server.Server` with only the
                  server's metadata. All keys and values are Unicode text.
        :rtype: :class:`~openstack.compute.v2.server.Server`
        """
        res = self._get_base_resource(server, _server.Server)
        metadata = res.set_metadata(self, **metadata)
        result = _server.Server.existing(id=res.id, metadata=metadata)
        return result

    def delete_server_metadata(self, server, keys):
        """Delete metadata for a server

        Note: This method will do an HTTP DELETE request for every key in
        keys.

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` or
                       :class:`~openstack.compute.v2.server.ServerDetail`
                       instance.
        :param keys: The keys to delete

        :rtype: ``None``
        """
        res = self._get_base_resource(server, _server.Server)
        return res.delete_metadata(self, keys)

    def create_server_group(self, **attrs):
        """Create a new server group from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.compute.v2.server_group.ServerGroup`,
            comprised of the properties on the ServerGroup class.

        :returns: The results of server group creation
        :rtype: :class:`~openstack.compute.v2.server_group.ServerGroup`
        """
        return self._create(_server_group.ServerGroup, **attrs)

    def delete_server_group(self, server_group, ignore_missing=True):
        """Delete a server group

        :param server_group: The value can be either the ID of a server group
            or a :class:`~openstack.compute.v2.server_group.ServerGroup`
            instance.
        :param bool ignore_missing: When set to ``False``
                    :class:`~openstack.exceptions.ResourceNotFound` will be
                    raised when the server group does not exist. When set to
                    ``True``, no exception will be raised when attempting to
                    delete a nonexistent server group.

        :returns: ``None``
        """
        self._delete(_server_group.ServerGroup, server_group,
                     ignore_missing=ignore_missing)

    def find_server_group(self, name_or_id, ignore_missing=True):
        """Find a single server group

        :param name_or_id: The name or ID of a server group.
        :param bool ignore_missing: When set to ``False``
                    :class:`~openstack.exceptions.ResourceNotFound` will be
                    raised when the resource does not exist. When set to
                    ``True``, None will be returned when attempting to find a
                    nonexistent resource.

        :returns: One :class:`~openstack.compute.v2.server_group.ServerGroup`
                  object or None
        """
        return self._find(_server_group.ServerGroup, name_or_id,
                          ignore_missing=ignore_missing)

    def get_server_group(self, server_group):
        """Get a single server group

        :param server_group: The value can be the ID of a server group or a
            :class:`~openstack.compute.v2.server_group.ServerGroup` instance.

        :returns: A :class:`~openstack.compute.v2.server_group.ServerGroup`
                  object.
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
                 when no resource can be found.
        """
        return self._get(_server_group.ServerGroup, server_group)

    def server_groups(self, **query):
        """Return a generator of server groups

        :param kwargs \*\*query: Optional query parameters to be sent to
                                 limit the resources being returned.

        :returns: A generator of ServerGroup objects
        :rtype: :class:`~openstack.compute.v2.server_group.ServerGroup`
        """
        return self._list(_server_group.ServerGroup, paginated=False, **query)

    def hypervisors(self):
        """Return a generator of hypervisors

        :returns: A generator of hypervisors
        :rtype: :class:`~openstack.compute.v2.hypervisor.Hypervisor`
        """
        return self._list(_hypervisor.Hypervisor, paginated=False)

    def find_hypervisor(self, name_or_id, ignore_missing=True):
        """Find a hypervisor by name or ID to get the corresponding info

        :param name_or_id: The name or ID of a hypervisor.
        :param bool ignore_missing: When set to ``False``
                    :class:`~openstack.exceptions.ResourceNotFound` will be
                    raised when the resource does not exist. When set to
                    ``True``, None will be returned when attempting to find a
                    nonexistent resource.

        :returns: One :class:`~openstack.compute.v2.hypervisor.Hypervisor`
                  object or None
        """
        return self._find(_hypervisor.Hypervisor, name_or_id,
                          ignore_missing=ignore_missing)

    def get_hypervisor(self, hypervisor):
        """Get a single hypervisor

        :param hypervisor: The value can be the ID of a hypervisor or a
            :class:`~openstack.compute.v2.hypervisor.Hypervisor` instance.
        :returns: A :class:`~openstack.compute.v2.hypervisor.Hypervisor`
                  object.
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
                 when no resource can be found.
        """
        return self._get(_hypervisor.Hypervisor, hypervisor)

    def force_service_down(self, service, host, binary):
        """Force a service down

        :param service: Either the ID of a service or a
                        :class:`~openstack.compute.v2.service.Service`
                        instance.
        :param str host: The host where the service runs.
        :param str binary: The name of the service.

        :returns: None
        """
        service = self._get_resource(_service.Service, service)
        service.force_down(self, host, binary)

    def disable_service(self, service, host, binary, disabled_reason=None):
        """Disable a service

        :param service: Either the ID of a service or a
                        :class:`~openstack.compute.v2.service.Service`
                        instance.
        :param str host: The host where the service runs.
        :param str binary: The name of the service.
        :param str disabled_reason: The reason for disabling the service.

        :returns: None
        """
        service = self._get_resource(_service.Service, service)
        service.disable(self, host, binary, disabled_reason)

    def enable_service(self, service, host, binary):
        """Enable a service

        :param service: Either the ID of a service or a
                        :class:`~openstack.compute.v2.service.Service`
                        instance.
        :param str host: The host where the service runs.
        :param str binary: The name of the service.

        :returns: None
        """
        service = self._get_resource(_service.Service, service)
        service.enable(self, host, binary)

    def services(self):
        """Return a generator of services

        :returns: A generator of services
        :rtype: :class:`~openstack.compute.v2.service.Service`
        """
        return self._list(_service.Service, paginated=False)

    def create_volume_attachment(self, server, **attrs):
        """Create a new volume attachment from attributes

        :param server: The server can be either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.
        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`,
            comprised of the properties on the VolumeAttachment class.

        :returns: The results of volume attachment creation
        :rtype:
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
        """
        server_id = resource.Resource._get_id(server)
        return self._create(_volume_attachment.VolumeAttachment,
                            server_id=server_id, **attrs)

    def update_volume_attachment(self, volume_attachment, server, **attrs):
        """Update a volume attachment

        :param volume_attachment:
            The value can be either the ID of a volume attachment or a
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
            instance.
        :param server: This parameter needs to be specified when a
                       VolumeAttachment ID is given as the value. It can be
                       either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance
                       that the attachment belongs to.
        :param dict attrs: The attributes to update on the volume attachment.

        :returns: The updated volume attachment
        :rtype:
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
        """
        server_id = self._get_uri_attribute(volume_attachment, server,
                                            "server_id")
        volume_attachment = resource.Resource._get_id(volume_attachment)
        return self._update(_volume_attachment.VolumeAttachment,
                            attachment_id=volume_attachment,
                            server_id=server_id, **attrs)

    def delete_volume_attachment(self, volume_attachment, server,
                                 ignore_missing=True):
        """Delete a volume attachment

        :param volume_attachment:
            The value can be either the ID of a volume attachment or a
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
            instance.
        :param server: This parameter needs to be specified when a
                       VolumeAttachment ID is given as the value. It can be
                       either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance
                       that the attachment belongs to.
        :param bool ignore_missing: When set to ``False``
                    :class:`~openstack.exceptions.ResourceNotFound` will be
                    raised when the volume attachment does not exist. When set
                    to ``True``, no exception will be raised when attempting
                    to delete a nonexistent volume attachment.

        :returns: ``None``
        """
        server_id = self._get_uri_attribute(volume_attachment, server,
                                            "server_id")
        volume_attachment = resource.Resource._get_id(volume_attachment)
        self._delete(_volume_attachment.VolumeAttachment,
                     attachment_id=volume_attachment,
                     server_id=server_id,
                     ignore_missing=ignore_missing)

    def get_volume_attachment(self, volume_attachment, server,
                              ignore_missing=True):
        """Get a single volume attachment

        :param volume_attachment:
            The value can be the ID of a volume attachment or a
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
            instance.
        :param server: This parameter needs to be specified when a
                       VolumeAttachment ID is given as the value. It can be
                       either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance
                       that the attachment belongs to.
        :param bool ignore_missing: When set to ``False``
                    :class:`~openstack.exceptions.ResourceNotFound` will be
                    raised when the volume attachment does not exist. When set
                    to ``True``, no exception will be raised when attempting
                    to find a nonexistent volume attachment.

        :returns: One
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
                 when no resource can be found.
        """
        server_id = self._get_uri_attribute(volume_attachment, server,
                                            "server_id")
        volume_attachment = resource.Resource._get_id(volume_attachment)
        return self._get(_volume_attachment.VolumeAttachment,
                         server_id=server_id,
                         attachment_id=volume_attachment,
                         ignore_missing=ignore_missing)

    def volume_attachments(self, server):
        """Return a generator of volume attachments

        :param server: The server can be either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server`.
        :returns: A generator of VolumeAttachment objects
        :rtype:
            :class:`~openstack.compute.v2.volume_attachment.VolumeAttachment`
        """
        server_id = resource.Resource._get_id(server)
        return self._list(_volume_attachment.VolumeAttachment,
                          paginated=False, server_id=server_id)

    def migrate_server(self, server):
        """Migrate a server from one host to another

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.migrate(self)

    def live_migrate_server(self, server, host=None, force=False):
        """Live-migrate a server from one host to a target host

        :param server: Either the ID of a server or a
                       :class:`~openstack.compute.v2.server.Server` instance.
        :param host: The host to which to migrate the server
        :param force: Force a live-migration by not verifying the provided
                      destination host by the scheduler.

        :returns: None
        """
        server = self._get_resource(_server.Server, server)
        server.live_migrate(self, host, force)
openstacksdk-0.11.3/openstack/compute/__init__.py0000666000175100017510000000000013236151340022053 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/compute/compute_service.py0000666000175100017510000000165013236151340023524 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
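# Illustrative sketch (not part of the original module): the ComputeService
# filter defined below is normally instantiated by the SDK's profile
# machinery rather than by end users, but constructing it directly would
# look something like this:
#
#     from openstack.compute import compute_service
#
#     service = compute_service.ComputeService(version='v2')
#     # The base ServiceFilter is expected to record
#     # service_type='compute' on the instance.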
from openstack import service_filter


class ComputeService(service_filter.ServiceFilter):
    """The compute service."""

    valid_versions = [service_filter.ValidVersion('v2')]

    def __init__(self, version=None):
        """Create a compute service."""
        super(ComputeService, self).__init__(service_type='compute',
                                             version=version)
openstacksdk-0.11.3/openstack/task_manager.py0000666000175100017510000001506413236151340021314 0ustar zuulzuul00000000000000# Copyright (C) 2011-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

import concurrent.futures
import sys
import threading
import time

import keystoneauth1.exceptions
import six

import openstack._log
from openstack import exceptions

_log = openstack._log.setup_logging('openstack.task_manager')


class Task(object):
    """Represent a remote task to be performed on an OpenStack Cloud.

    Some consumers need to inject things like rate-limiting or auditing
    around each external REST interaction. Task provides an interface
    to encapsulate each such interaction. Also, although shade itself
    operates normally in a single-threaded direct action manner, consuming
    programs may provide a multi-threaded TaskManager themselves. For that
    reason, Task uses threading events to ensure appropriate wait conditions.
    These should be a no-op in single-threaded applications.

    A consumer is expected to overload the main method.

    :param dict kw: Any args that are expected to be passed to something in
                    the main payload at execution time.
""" def __init__(self, main=None, name=None, run_async=False, *args, **kwargs): self._exception = None self._traceback = None self._result = None self._response = None self._finished = threading.Event() self._main = main self._run_async = run_async self.args = args self.kwargs = kwargs self.name = name or type(self).__name__ def main(self): return self._main(*self.args, **self.kwargs) @property def run_async(self): return self._run_async def done(self, result): self._result = result self._finished.set() def exception(self, e, tb): self._exception = e self._traceback = tb self._finished.set() def wait(self, raw=False): self._finished.wait() if self._exception: six.reraise(type(self._exception), self._exception, self._traceback) return self._result def run(self): try: # Retry one time if we get a retriable connection failure try: self.done(self.main()) except keystoneauth1.exceptions.RetriableConnectionFailure as e: self.done(self.main()) except Exception as e: self.exception(e, sys.exc_info()[2]) class TaskManager(object): def __init__(self, name, log=_log, workers=5, **kwargs): self.name = name self._executor = None self._log = log self._workers = workers @property def executor(self): if not self._executor: self._executor = concurrent.futures.ThreadPoolExecutor( max_workers=self._workers) return self._executor def stop(self): """ This is a direct action passthrough TaskManager """ if self._executor: self._executor.shutdown() def run(self): """ This is a direct action passthrough TaskManager """ pass def submit_task(self, task): """Submit and execute the given task. :param task: The task to execute. :param bool raw: If True, return the raw result as received from the underlying client call. """ return self.run_task(task=task) def submit_function(self, method, name=None, *args, **kwargs): """ Allows submitting an arbitrary method for work. :param method: Callable to run in the TaskManager. :param str name: Name to use for the generated Task object. 
        :param args: positional arguments to pass to the method when it runs.
        :param kwargs: keyword arguments to pass to the method when it runs.
        """
        task = Task(main=method, name=name, *args, **kwargs)
        return self.submit_task(task)

    def submit_function_async(self, method, name=None, *args, **kwargs):
        """Allows submitting an arbitrary method for async work scheduling.

        :param method: Callable to run in the TaskManager.
        :param str name: Name to use for the generated Task object.
        :param args: positional arguments to pass to the method when it runs.
        :param kwargs: keyword arguments to pass to the method when it runs.
        """
        task = Task(main=method, name=name, run_async=True, *args, **kwargs)
        return self.submit_task(task)

    def pre_run_task(self, task):
        self._log.debug(
            "Manager %s running task %s", self.name, task.name)

    def run_task(self, task):
        if task.run_async:
            return self._run_task_async(task)
        else:
            return self._run_task(task)

    def post_run_task(self, elapsed_time, task):
        self._log.debug(
            "Manager %s ran task %s in %ss", self.name, task.name,
            elapsed_time)

    def _run_task_async(self, task):
        self._log.debug(
            "Manager %s submitting task %s", self.name, task.name)
        return self.executor.submit(self._run_task, task)

    def _run_task(self, task):
        self.pre_run_task(task)
        start = time.time()
        task.run()
        end = time.time()
        dt = end - start
        self.post_run_task(dt, task)
        return task.wait()


def wait_for_futures(futures, raise_on_error=True, log=_log):
    '''Collect results or failures from a list of running future tasks.'''

    results = []
    retries = []

    # Check on each result as its thread finishes
    for completed in concurrent.futures.as_completed(futures):
        try:
            result = completed.result()
            exceptions.raise_from_response(result)
            results.append(result)
        except (keystoneauth1.exceptions.RetriableConnectionFailure,
                exceptions.HttpException) as e:
            log.exception(
                "Exception processing async task: {e}".format(e=str(e)))
            if raise_on_error:
                raise
            # If we get an exception, put the result into a list so we
            # can try again
            retries.append(result)

    return results, retries
openstacksdk-0.11.3/openstack/resource.py0000666000175100017510000012747713236151340020513 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
The :class:`~openstack.resource.Resource` class is a base
class that represents a remote resource. The attributes that
comprise a request or response for this resource are specified
as class members on the Resource subclass where their values
are of a component type, including :class:`~openstack.resource.Body`,
:class:`~openstack.resource.Header`, and :class:`~openstack.resource.URI`.

For update management, :class:`~openstack.resource.Resource` employs
a series of :class:`~openstack.resource._ComponentManager` instances
to look after the attributes of that particular component type. This is
particularly useful for Body and Header types, so that only the values
necessary are sent in requests to the server.

When making requests, each of the managers are looked at to gather the
necessary URI, body, and header data to build a request to be sent
via keystoneauth's sessions. Responses from keystoneauth are then
converted into this Resource class' appropriate components and types
and then returned to the caller.
""" import collections import itertools from keystoneauth1 import adapter from requests import structures from openstack import exceptions from openstack import format from openstack import utils def _convert_type(value, data_type, list_type=None): # This should allow handling list of dicts that have their own # Component type directly. See openstack/compute/v2/limits.py # and the RateLimit type for an example. if not data_type: return value if issubclass(data_type, list): if isinstance(value, (list, tuple, set)): if not list_type: return value ret = [] for raw in value: ret.append(_convert_type(raw, list_type)) return ret elif list_type: return [_convert_type(value, list_type)] # "if-match" in Object is a good example of the need here return [value] elif isinstance(value, data_type): return value if not isinstance(value, data_type): if issubclass(data_type, format.Formatter): return data_type.deserialize(value) # This should allow handling sub-dicts that have their own # Component type directly. See openstack/compute/v2/limits.py # and the AbsoluteLimits type for an example. if isinstance(value, dict): return data_type(**value) return data_type(value) class _BaseComponent(object): # The name this component is being tracked as in the Resource key = None # The class to be used for mappings _map_cls = dict def __init__(self, name, type=None, default=None, alias=None, alternate_id=False, list_type=None, **kwargs): """A typed descriptor for a component that makes up a Resource :param name: The name this component exists as on the server :param type: The type this component is expected to be by the server. By default this is None, meaning any value you specify will work. If you specify type=dict and then set a component to a string, __set__ will fail, for example. :param default: Typically None, but any other default can be set. :param alias: If set, alternative attribute on object to return. 
        :param alternate_id: When `True`, this property is known internally
                             as a value that can be sent with requests that
                             require an ID but when `id` is not a name the
                             Resource has. This is a relatively uncommon case,
                             and this setting should only be used once per
                             Resource.
        :param list_type: If type is `list`, list_type designates what the
                          type of the elements of the list should be.
        """
        self.name = name
        self.type = type
        self.default = default
        self.alias = alias
        self.alternate_id = alternate_id
        self.list_type = list_type

    def __get__(self, instance, owner):
        if instance is None:
            return None

        attributes = getattr(instance, self.key)

        try:
            value = attributes[self.name]
        except KeyError:
            if self.alias:
                return getattr(instance, self.alias)
            return self.default

        # self.type() should not be called on None objects.
        if value is None:
            return None

        return _convert_type(value, self.type, self.list_type)

    def __set__(self, instance, value):
        if value != self.default:
            value = _convert_type(value, self.type, self.list_type)

        attributes = getattr(instance, self.key)
        attributes[self.name] = value

    def __delete__(self, instance):
        try:
            attributes = getattr(instance, self.key)
            del attributes[self.name]
        except KeyError:
            pass


class Body(_BaseComponent):
    """Body attributes"""

    key = "_body"


class Header(_BaseComponent):
    """Header attributes"""

    key = "_header"
    _map_cls = structures.CaseInsensitiveDict


class URI(_BaseComponent):
    """URI attributes"""

    key = "_uri"


class _ComponentManager(collections.MutableMapping):
    """Storage of a component type"""

    def __init__(self, attributes=None, synchronized=False):
        self.attributes = dict() if attributes is None else attributes.copy()
        self._dirty = set() if synchronized else set(self.attributes.keys())

    def __getitem__(self, key):
        return self.attributes[key]

    def __setitem__(self, key, value):
        try:
            orig = self.attributes[key]
        except KeyError:
            changed = True
        else:
            changed = orig != value

        if changed:
            self.attributes[key] = value
            self._dirty.add(key)

    def __delitem__(self, key):
        del self.attributes[key]
        self._dirty.add(key)

    def __iter__(self):
        return iter(self.attributes)

    def __len__(self):
        return len(self.attributes)

    @property
    def dirty(self):
        """Return a dict of modified attributes"""
        return dict((key, self.attributes.get(key, None))
                    for key in self._dirty)

    def clean(self):
        """Signal that the resource no longer has modified attributes"""
        self._dirty = set()


class _Request(object):
    """Prepared components that go into a KSA request"""

    def __init__(self, url, body, headers):
        self.url = url
        self.body = body
        self.headers = headers


class QueryParameters(object):
    def __init__(self, *names, **mappings):
        """Create a dict of accepted query parameters

        :param names: List of strings containing client-side query parameter
                      names. Each name in the list maps directly to the name
                      expected by the server.
        :param mappings: Key-value pairs where the key is the client-side
                         name we'll accept here and the value is the name
                         the server expects, e.g., changes_since=changes-since

        By default, both limit and marker are included in the initial mapping
        as they're the most common query parameters used for listing
        resources.
        """
        self._mapping = {"limit": "limit", "marker": "marker"}
        self._mapping.update(dict({name: name for name in names}, **mappings))

    def _validate(self, query, base_path=None):
        """Check that supplied query keys match known query mappings

        :param dict query: Collection of key-value pairs where each key is
                           the client-side parameter name or server side
                           name.
        :param base_path: Formatted python string of the base url path for
                          the resource.
""" expected_params = list(self._mapping.keys()) expected_params += self._mapping.values() if base_path: expected_params += utils.get_string_format_keys(base_path) invalid_keys = set(query.keys()) - set(expected_params) if invalid_keys: raise exceptions.InvalidResourceQuery( message="Invalid query params: %s" % ",".join(invalid_keys), extra_data=invalid_keys) def _transpose(self, query): """Transpose the keys in query based on the mapping If a query is supplied with its server side name, we will still use it, but take preference to the client-side name when both are supplied. :param dict query: Collection of key-value pairs where each key is the client-side parameter name to be transposed to its server side name. """ result = {} for key, value in self._mapping.items(): if key in query: result[value] = query[key] elif value in query: result[value] = query[value] return result class Resource(object): #: Singular form of key for resource. resource_key = None #: Plural form of key for resource. resources_key = None #: Key used for pagination links pagination_key = None #: The ID of this resource. id = Body("id") #: The name of this resource. name = Body("name") #: The location of this resource. location = Header("Location") #: Mapping of accepted query parameter names. _query_mapping = QueryParameters() #: The base part of the URI for this resource. base_path = "" #: The service associated with this resource to find the service URL. service = None #: Allow create operation for this resource. allow_create = False #: Allow get operation for this resource. allow_get = False #: Allow update operation for this resource. allow_update = False #: Allow delete operation for this resource. allow_delete = False #: Allow list operation for this resource. allow_list = False #: Allow head operation for this resource. 
    allow_head = False

    #: Method for updating a resource (PUT, PATCH, POST)
    update_method = "PUT"
    #: Method for creating a resource (POST, PUT)
    create_method = "POST"

    #: Do calls for this resource require an id
    requires_id = True

    #: Do responses for this resource have bodies
    has_body = True

    def __init__(self, _synchronized=False, **attrs):
        """The base resource

        :param bool _synchronized: This is not intended to be used directly.
                    See :meth:`~openstack.resource.Resource.new` and
                    :meth:`~openstack.resource.Resource.existing`.
        """
        # NOTE: _collect_attrs modifies **attrs in place, removing
        # items as they match up with any of the body, header,
        # or uri mappings.
        body, header, uri = self._collect_attrs(attrs)
        # TODO(briancurtin): at this point if attrs has anything left
        # they're not being set anywhere. Log this? Raise exception?
        # How strict should we be here? Should strict be an option?

        self._body = _ComponentManager(attributes=body,
                                       synchronized=_synchronized)
        self._header = _ComponentManager(attributes=header,
                                         synchronized=_synchronized)
        self._uri = _ComponentManager(attributes=uri,
                                      synchronized=_synchronized)

    def __repr__(self):
        pairs = ["%s=%s" % (k, v) for k, v in dict(itertools.chain(
            self._body.attributes.items(),
            self._header.attributes.items(),
            self._uri.attributes.items())).items()]
        args = ", ".join(pairs)

        return "%s.%s(%s)" % (
            self.__module__, self.__class__.__name__, args)

    def __eq__(self, comparand):
        """Return True if another resource has the same contents"""
        return all([self._body.attributes == comparand._body.attributes,
                    self._header.attributes == comparand._header.attributes,
                    self._uri.attributes == comparand._uri.attributes])

    def __getattribute__(self, name):
        """Return an attribute on this instance

        This is mostly a pass-through except for a specialization on
        the 'id' name, as this can exist under a different name via the
        `alternate_id` argument to resource.Body.
""" if name == "id": if name in self._body: return self._body[name] else: try: return self._body[self._alternate_id()] except KeyError: return None else: return object.__getattribute__(self, name) def _update(self, **attrs): """Given attributes, update them on this instance This is intended to be used from within the proxy layer when updating instances that may have already been created. """ body, header, uri = self._collect_attrs(attrs) self._body.update(body) self._header.update(header) self._uri.update(uri) def _collect_attrs(self, attrs): """Given attributes, return a dict per type of attribute This method splits up **attrs into separate dictionaries that correspond to the relevant body, header, and uri attributes that exist on this class. """ body = self._consume_body_attrs(attrs) header = self._consume_header_attrs(attrs) uri = self._consume_uri_attrs(attrs) return body, header, uri def _consume_body_attrs(self, attrs): return self._consume_mapped_attrs(Body, attrs) def _consume_header_attrs(self, attrs): return self._consume_mapped_attrs(Header, attrs) def _consume_uri_attrs(self, attrs): return self._consume_mapped_attrs(URI, attrs) def _update_from_body_attrs(self, attrs): body = self._consume_body_attrs(attrs) self._body.attributes.update(body) self._body.clean() def _update_from_header_attrs(self, attrs): headers = self._consume_header_attrs(attrs) self._header.attributes.update(headers) self._header.clean() def _update_uri_from_attrs(self, attrs): uri = self._consume_uri_attrs(attrs) self._uri.attributes.update(uri) self._uri.clean() def _consume_mapped_attrs(self, mapping_cls, attrs): mapping = self._get_mapping(mapping_cls) return self._consume_attrs(mapping, attrs) def _consume_attrs(self, mapping, attrs): """Given a mapping and attributes, return relevant matches This method finds keys in attrs that exist in the mapping, then both transposes them to their server-side equivalent key name to be returned, and finally pops them out of attrs. 
This allows us to only calculate their place and existence in a particular type of Resource component one time, rather than looking at the same source dict several times. """ relevant_attrs = {} consumed_keys = [] for key, value in attrs.items(): # We want the key lookup in mapping to be case insensitive if the # mapping is, thus the use of get. We want value to be exact. # If we find a match, we then have to loop over the mapping for # to find the key to return, as there isn't really a "get me the # key that matches this other key". We lower() in the inner loop # because we've already done case matching in the outer loop. if key in mapping.values() or mapping.get(key): for map_key, map_value in mapping.items(): if key.lower() in (map_key.lower(), map_value.lower()): relevant_attrs[map_key] = value consumed_keys.append(key) continue for key in consumed_keys: attrs.pop(key) return relevant_attrs @classmethod def _get_mapping(cls, component): """Return a dict of attributes of a given component on the class""" mapping = component._map_cls() ret = component._map_cls() # Since we're looking at class definitions we need to include # subclasses, so check the whole MRO. for klass in cls.__mro__: for key, value in klass.__dict__.items(): if isinstance(value, component): # Make sure base classes don't end up overwriting # mappings we've found previously in subclasses. if key not in mapping: # Make it this way first, to get MRO stuff correct. 
mapping[key] = value.name for k, v in mapping.items(): ret[v] = k return ret @classmethod def _body_mapping(cls): """Return all Body members of this class""" return cls._get_mapping(Body) @classmethod def _header_mapping(cls): """Return all Header members of this class""" return cls._get_mapping(Header) @classmethod def _uri_mapping(cls): """Return all URI members of this class""" return cls._get_mapping(URI) @classmethod def _alternate_id(cls): """Return the name of any value known as an alternate_id NOTE: This will only ever return the first such alternate_id. Only one alternate_id should be specified. Returns an empty string if no name exists, as this method is consumed by _get_id and passed to getattr. """ for value in cls.__dict__.values(): if isinstance(value, Body): if value.alternate_id: return value.name return "" @staticmethod def _get_id(value): """If a value is a Resource, return the canonical ID This will return either the value specified by `id` or `alternate_id` in that order if `value` is a Resource. If `value` is anything other than a Resource, likely to be a string already representing an ID, it is returned. """ if isinstance(value, Resource): return value.id else: return value @classmethod def new(cls, **kwargs): """Create a new instance of this resource. When creating the instance set the ``_synchronized`` parameter of :class:`Resource` to ``False`` to indicate that the resource does not yet exist on the server side. This marks all attributes passed in ``**kwargs`` as "dirty" on the resource, and thusly tracked as necessary in subsequent calls such as :meth:`update`. :param dict kwargs: Each of the named arguments will be set as attributes on the resulting Resource object. """ return cls(_synchronized=False, **kwargs) @classmethod def existing(cls, **kwargs): """Create an instance of an existing remote resource. 
        When creating the instance set the ``_synchronized`` parameter
        of :class:`Resource` to ``True`` to indicate that it represents the
        state of an existing server-side resource. As such, all attributes
        passed in ``**kwargs`` are considered "clean", such that an immediate
        :meth:`update` call would not generate a body of attributes to be
        modified on the server.

        :param dict kwargs: Each of the named arguments will be set as
                            attributes on the resulting Resource object.
        """
        return cls(_synchronized=True, **kwargs)

    def to_dict(self, body=True, headers=True, ignore_none=False):
        """Return a dictionary of this resource's contents

        :param bool body: Include the :class:`~openstack.resource.Body`
                          attributes in the returned dictionary.
        :param bool headers: Include the :class:`~openstack.resource.Header`
                             attributes in the returned dictionary.
        :param bool ignore_none: When True, exclude key/value pairs where
                                 the value is None. This will exclude
                                 attributes that the server hasn't returned.

        :return: A dictionary of key/value pairs where keys are named
                 as they exist as attributes of this class.
        """
        mapping = {}

        components = []
        if body:
            components.append(Body)
        if headers:
            components.append(Header)
        if not components:
            raise ValueError(
                "At least one of `body` or `headers` must be True")

        # isinstance strictly requires this to be a tuple
        components = tuple(components)

        # NOTE: This is similar to the implementation in _get_mapping
        # but is slightly different in that we're looking at an instance
        # and we're mapping names on this class to their actual stored
        # values.

        # Since we're looking at class definitions we need to include
        # subclasses, so check the whole MRO.
        for klass in self.__class__.__mro__:
            for key, value in klass.__dict__.items():
                if isinstance(value, components):
                    # Make sure base classes don't end up overwriting
                    # mappings we've found previously in subclasses.
                    if key not in mapping:
                        value = getattr(self, key, None)
                        if ignore_none and value is None:
                            continue
                        if isinstance(value, Resource):
                            mapping[key] = value.to_dict()
                        elif (value and isinstance(value, list)
                                and isinstance(value[0], Resource)):
                            converted = []
                            for raw in value:
                                converted.append(raw.to_dict())
                            mapping[key] = converted
                        else:
                            mapping[key] = value

        return mapping

    def _prepare_request(self, requires_id=None, prepend_key=False):
        """Prepare a request to be sent to the server

        Create operations don't require an ID, but all others do,
        so only try to append an ID when it's needed with requires_id.

        Create and update operations sometimes require their bodies to be
        contained within a dict -- if the instance contains a resource_key
        and prepend_key=True, the body will be wrapped in a dict with that
        key.

        Return a _Request object that contains the constructed URI as well
        as a body and headers that are ready to send.
        Only dirty body and header contents will be returned.
        """
        if requires_id is None:
            requires_id = self.requires_id

        body = self._body.dirty
        if prepend_key and self.resource_key is not None:
            body = {self.resource_key: body}

        # TODO(mordred) Ensure headers have string values better than this
        headers = {}
        for k, v in self._header.dirty.items():
            if isinstance(v, list):
                headers[k] = ", ".join(v)
            else:
                headers[k] = str(v)

        uri = self.base_path % self._uri.attributes
        if requires_id:
            if self.id is None:
                raise exceptions.InvalidRequest(
                    "Request requires an ID but none was found")

            uri = utils.urljoin(uri, self.id)

        return _Request(uri, body, headers)

    def _translate_response(self, response, has_body=None, error_message=None):
        """Given a KSA response, inflate this instance with its data

        DELETE operations don't return a body, so only try to work
        with a body when has_body is True.

        This method updates attributes that correspond to headers
        and body on this instance and clears the dirty set.
""" if has_body is None: has_body = self.has_body exceptions.raise_from_response(response, error_message=error_message) if has_body: body = response.json() if self.resource_key and self.resource_key in body: body = body[self.resource_key] body = self._consume_body_attrs(body) self._body.attributes.update(body) self._body.clean() headers = self._consume_header_attrs(response.headers) self._header.attributes.update(headers) self._header.clean() @classmethod def _get_session(cls, session): """Attempt to get an Adapter from a raw session. Some older code used conn.session has the session argument to Resource methods. That does not work anymore, as Resource methods expect an Adapter not a session. We've hidden an _sdk_connection on the Session stored on the connection. If we get something that isn't an Adapter, pull the connection from the Session and look up the adapter by service_type. """ # TODO(mordred) We'll need to do this for every method in every # Resource class that is calling session.$something to be complete. if isinstance(session, adapter.Adapter): return session if hasattr(session, '_sdk_connection'): service_type = cls.service['service_type'] return getattr(session._sdk_connection, service_type) raise ValueError( "The session argument to Resource methods requires either an" " instance of an openstack.proxy.Proxy object or at the very least" " a raw keystoneauth1.adapter.Adapter.") def create(self, session, prepend_key=True): """Create a remote resource based on this instance. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :param prepend_key: A boolean indicating whether the resource_key should be prepended in a resource creation request. Default to True. :return: This :class:`Resource` instance. :raises: :exc:`~openstack.exceptions.MethodNotSupported` if :data:`Resource.allow_create` is not set to ``True``. 
""" if not self.allow_create: raise exceptions.MethodNotSupported(self, "create") session = self._get_session(session) if self.create_method == 'PUT': request = self._prepare_request(requires_id=True, prepend_key=prepend_key) response = session.put(request.url, json=request.body, headers=request.headers) elif self.create_method == 'POST': request = self._prepare_request(requires_id=False, prepend_key=prepend_key) response = session.post(request.url, json=request.body, headers=request.headers) else: raise exceptions.ResourceFailure( msg="Invalid create method: %s" % self.create_method) self._translate_response(response) return self def get(self, session, requires_id=True, error_message=None): """Get a remote resource based on this instance. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :param boolean requires_id: A boolean indicating whether resource ID should be part of the requested URI. :return: This :class:`Resource` instance. :raises: :exc:`~openstack.exceptions.MethodNotSupported` if :data:`Resource.allow_get` is not set to ``True``. """ if not self.allow_get: raise exceptions.MethodNotSupported(self, "get") request = self._prepare_request(requires_id=requires_id) session = self._get_session(session) response = session.get(request.url) kwargs = {} if error_message: kwargs['error_message'] = error_message self._translate_response(response, **kwargs) return self def head(self, session): """Get headers from a remote resource based on this instance. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :return: This :class:`Resource` instance. :raises: :exc:`~openstack.exceptions.MethodNotSupported` if :data:`Resource.allow_head` is not set to ``True``. 
""" if not self.allow_head: raise exceptions.MethodNotSupported(self, "head") request = self._prepare_request() session = self._get_session(session) response = session.head(request.url, headers={"Accept": ""}) self._translate_response(response, has_body=False) return self def update(self, session, prepend_key=True, has_body=True): """Update the remote resource based on this instance. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :param prepend_key: A boolean indicating whether the resource_key should be prepended in a resource update request. Default to True. :return: This :class:`Resource` instance. :raises: :exc:`~openstack.exceptions.MethodNotSupported` if :data:`Resource.allow_update` is not set to ``True``. """ # The id cannot be dirty for an update self._body._dirty.discard("id") # Only try to update if we actually have anything to update. if not any([self._body.dirty, self._header.dirty]): return self if not self.allow_update: raise exceptions.MethodNotSupported(self, "update") request = self._prepare_request(prepend_key=prepend_key) session = self._get_session(session) if self.update_method == 'PATCH': response = session.patch( request.url, json=request.body, headers=request.headers) elif self.update_method == 'POST': response = session.post( request.url, json=request.body, headers=request.headers) elif self.update_method == 'PUT': response = session.put( request.url, json=request.body, headers=request.headers) else: raise exceptions.ResourceFailure( msg="Invalid update method: %s" % self.update_method) self._translate_response(response, has_body=has_body) return self def delete(self, session, error_message=None): """Delete the remote resource based on this instance. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :return: This :class:`Resource` instance. 
        :raises: :exc:`~openstack.exceptions.MethodNotSupported` if
                 :data:`Resource.allow_delete` is not set to ``True``.
        """
        if not self.allow_delete:
            raise exceptions.MethodNotSupported(self, "delete")

        request = self._prepare_request()
        session = self._get_session(session)

        response = session.delete(request.url,
                                  headers={"Accept": ""})
        kwargs = {}
        if error_message:
            kwargs['error_message'] = error_message

        self._translate_response(response, has_body=False, **kwargs)
        return self

    @classmethod
    def list(cls, session, paginated=False, **params):
        """This method is a generator which yields resource objects.

        This resource object list generator handles pagination and takes query
        params for response filtering.

        :param session: The session to use for making this request.
        :type session: :class:`~keystoneauth1.adapter.Adapter`
        :param bool paginated: ``True`` if a GET to this resource returns
                               a paginated series of responses, or ``False``
                               if a GET returns only one page of data.
                               **When paginated is False only one
                               page of data will be returned regardless
                               of the API's support of pagination.**
        :param dict params: These keyword arguments are passed through the
            :meth:`~openstack.resource.QueryParameters._transpose` method
            to find if any of them match expected query parameters to be
            sent in the *params* argument to
            :meth:`~keystoneauth1.adapter.Adapter.get`. They are additionally
            checked against the
            :data:`~openstack.resource.Resource.base_path` format string
            to see if any path fragments need to be filled in by the contents
            of this argument.

        :return: A generator of :class:`Resource` objects.
        :raises: :exc:`~openstack.exceptions.MethodNotSupported` if
                 :data:`Resource.allow_list` is not set to ``True``.
        :raises: :exc:`~openstack.exceptions.InvalidResourceQuery` if query
                 contains invalid params.
""" if not cls.allow_list: raise exceptions.MethodNotSupported(cls, "list") session = cls._get_session(session) cls._query_mapping._validate(params, base_path=cls.base_path) query_params = cls._query_mapping._transpose(params) uri = cls.base_path % params limit = query_params.get('limit') # Track the total number of resources yielded so we can paginate # swift objects total_yielded = 0 while uri: # Copy query_params due to weird mock unittest interactions response = session.get( uri, headers={"Accept": "application/json"}, params=query_params.copy()) exceptions.raise_from_response(response) data = response.json() # Discard any existing pagination keys query_params.pop('marker', None) query_params.pop('limit', None) if cls.resources_key: resources = data[cls.resources_key] else: resources = data if not isinstance(resources, list): resources = [resources] marker = None for raw_resource in resources: # Do not allow keys called "self" through. Glance chose # to name a key "self", so we need to pop it out because # we can't send it through cls.existing and into the # Resource initializer. "self" is already the first # argument and is practically a reserved word. 
raw_resource.pop("self", None) value = cls.existing(**raw_resource) marker = value.id yield value total_yielded += 1 if resources and paginated: uri, next_params = cls._get_next_link( uri, response, data, marker, limit, total_yielded) query_params.update(next_params) else: return @classmethod def _get_next_link(cls, uri, response, data, marker, limit, total_yielded): next_link = None params = {} if isinstance(data, dict): pagination_key = cls.pagination_key if not pagination_key and 'links' in data: # api-wg guidelines are for a links dict in the main body pagination_key == 'links' if not pagination_key and cls.resources_key: # Nova has a {key}_links dict in the main body pagination_key = '{key}_links'.format(key=cls.resources_key) if pagination_key: links = data.get(pagination_key, {}) for item in links: if item.get('rel') == 'next' and 'href' in item: next_link = item['href'] break # Glance has a next field in the main body next_link = next_link or data.get('next') if not next_link and 'next' in response.links: # RFC5988 specifies Link headers and requests parses them if they # are there. We prefer link dicts in resource body, but if those # aren't there and Link headers are, use them. next_link = response.links['next']['uri'] # Swift provides a count of resources in a header and a list body if not next_link and cls.pagination_key: total_count = response.headers.get(cls.pagination_key) if total_count: total_count = int(total_count) if total_count > total_yielded: params['marker'] = marker if limit: params['limit'] = limit next_link = uri # If we still have no link, and limit was given and is non-zero, # and the number of records yielded equals the limit, then the user # is playing pagination ball so we should go ahead and try once more. 
if not next_link and limit: next_link = uri params['marker'] = marker params['limit'] = limit return next_link, params @classmethod def _get_one_match(cls, name_or_id, results): """Given a list of results, return the match""" the_result = None for maybe_result in results: id_value = cls._get_id(maybe_result) name_value = maybe_result.name if (id_value == name_or_id) or (name_value == name_or_id): # Only allow one resource to be found. If we already # found a match, raise an exception to show it. if the_result is None: the_result = maybe_result else: msg = "More than one %s exists with the name '%s'." msg = (msg % (cls.__name__, name_or_id)) raise exceptions.DuplicateResource(msg) return the_result @classmethod def find(cls, session, name_or_id, ignore_missing=True, **params): """Find a resource by its name or id. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :param name_or_id: This resource's identifier, if needed by the request. The default is ``None``. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict params: Any additional parameters to be passed into underlying methods, such as to :meth:`~openstack.resource.Resource.existing` in order to pass on URI parameters. :return: The :class:`Resource` object matching the given name or id or None if nothing matches. :raises: :class:`openstack.exceptions.DuplicateResource` if more than one resource is found for this request. :raises: :class:`openstack.exceptions.ResourceNotFound` if nothing is found and ignore_missing is ``False``. """ # Try to short-circuit by looking directly for a matching ID. 
try: match = cls.existing(id=name_or_id, **params) return match.get(session) except exceptions.NotFoundException: pass data = cls.list(session, **params) result = cls._get_one_match(name_or_id, data) if result is not None: return result if ignore_missing: return None raise exceptions.ResourceNotFound( "No %s found for %s" % (cls.__name__, name_or_id)) def wait_for_status(session, resource, status, failures, interval, wait): """Wait for the resource to be in a particular status. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :param resource: The resource to wait on to reach the status. The resource must have a status attribute. :type resource: :class:`~openstack.resource.Resource` :param status: Desired status of the resource. :param list failures: Statuses that would indicate the transition failed such as 'ERROR'. Defaults to ['ERROR']. :param interval: Number of seconds to wait between checks. :param wait: Maximum number of seconds to wait for transition. :return: Method returns self on success. :raises: :class:`~openstack.exceptions.ResourceTimeout` transition to status failed to occur in wait seconds. :raises: :class:`~openstack.exceptions.ResourceFailure` resource transitioned to one of the failure states. 
    :raises: :class:`~AttributeError` if the resource does not have a status
             attribute
    """
    if resource.status == status:
        return resource

    if failures is None:
        failures = ['ERROR']

    failures = [f.lower() for f in failures]
    name = "{res}:{id}".format(res=resource.__class__.__name__,
                               id=resource.id)
    msg = "Timeout waiting for {name} to transition to {status}".format(
        name=name, status=status)

    for count in utils.iterate_timeout(
            timeout=wait,
            message=msg,
            wait=interval):
        resource = resource.get(session)
        if not resource:
            raise exceptions.ResourceFailure(
                "{name} went away while waiting for {status}".format(
                    name=name, status=status))
        new_status = resource.status
        if new_status.lower() == status.lower():
            return resource
        if resource.status.lower() in failures:
            raise exceptions.ResourceFailure(
                "{name} transitioned to failure state {status}".format(
                    name=name, status=resource.status))


def wait_for_delete(session, resource, interval, wait):
    """Wait for the resource to be deleted.

    :param session: The session to use for making this request.
    :type session: :class:`~keystoneauth1.adapter.Adapter`
    :param resource: The resource to wait on to be deleted.
    :type resource: :class:`~openstack.resource.Resource`
    :param interval: Number of seconds to wait between checks.
    :param wait: Maximum number of seconds to wait for the delete.

    :return: Method returns self on success.
    :raises: :class:`~openstack.exceptions.ResourceTimeout` transition
             to status failed to occur in wait seconds.
""" orig_resource = resource for count in utils.iterate_timeout( timeout=wait, message="Timeout waiting for {res}:{id} to delete".format( res=resource.__class__.__name__, id=resource.id), wait=interval): try: resource = resource.get(session) if not resource: return orig_resource if resource.status.lower() == 'deleted': return resource except exceptions.NotFoundException: return orig_resource openstacksdk-0.11.3/openstack/_log.py0000666000175100017510000000760713236151340017604 0ustar zuulzuul00000000000000# Copyright (c) 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import sys def setup_logging(name, handlers=None, level=None): """Set up logging for a named logger. Gets and initializes a named logger, ensuring it at least has a `logging.NullHandler` attached. :param str name: Name of the logger. :param list handlers: A list of `logging.Handler` objects to attach to the logger. :param int level: Log level to set the logger at. :returns: A `logging.Logger` object that can be used to emit log messages. """ handlers = handlers or [] log = logging.getLogger(name) if len(log.handlers) == 0 and not handlers: h = logging.NullHandler() log.addHandler(h) for h in handlers: log.addHandler(h) if level: log.setLevel(level) return log def enable_logging( debug=False, http_debug=False, path=None, stream=None, format_stream=False, format_template='%(asctime)s %(levelname)s: %(name)s %(message)s'): """Enable logging output. Helper function to enable logging. 
    This function is available for debugging purposes and for
    folks doing simple applications who want an easy 'just make it work for
    me'. For more complex applications or for those who want more flexibility,
    the standard library ``logging`` package will receive these messages in
    any handlers you create.

    :param bool debug: Set this to ``True`` to receive debug messages.
    :param bool http_debug: Set this to ``True`` to receive debug messages
        including HTTP requests and responses. This implies ``debug=True``.
    :param str path: If a *path* is specified, logging output will be written
        to that file in addition to sys.stderr.
        The path is passed to logging.FileHandler,
        which will append messages to the file (and create it if needed).
    :param stream: One of ``None`` or ``sys.stdout`` or ``sys.stderr``.
        If it is ``None``, nothing is logged to a stream.
        If it isn't ``None``, console output is logged to this stream.
    :param bool format_stream: If format_stream is False, the default,
        apply ``format_template`` to ``path`` but not to ``stream``
        outputs. If True, apply ``format_template`` to ``stream`` outputs
        as well.
    :param str format_template: Template to pass to
        :class:`logging.Formatter`.
:rtype: None """ if not stream and not path: stream = sys.stdout if http_debug: debug = True if debug: level = logging.DEBUG else: level = logging.INFO formatter = logging.Formatter(format_template) handlers = [] if stream is not None: console = logging.StreamHandler(stream) if format_stream: console.setFormatter(formatter) handlers.append(console) if path is not None: file_handler = logging.FileHandler(path) file_handler.setFormatter(formatter) handlers.append(file_handler) if http_debug: # Enable HTTP level tracing setup_logging('keystoneauth', handlers=handlers, level=level) setup_logging('openstack', handlers=handlers, level=level) # Suppress warning about keystoneauth loggers setup_logging('keystoneauth.discovery') setup_logging('keystoneauth.identity.base') setup_logging('keystoneauth.identity.generic.base') openstacksdk-0.11.3/openstack/message/0000775000175100017510000000000013236151501017721 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/message/version.py0000666000175100017510000000172613236151340021771 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
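The `setup_logging` helper above follows the standard-library convention for library loggers: attach a `logging.NullHandler` when nothing else is configured, so records are swallowed silently instead of triggering "no handlers could be found" warnings. A minimal stdlib-only sketch of that pattern (the `openstack.demo` logger name is illustrative, not part of the SDK):

```python
import logging

# Stand-in for what setup_logging does for a named logger: ensure at
# least a NullHandler is present, then set the requested level.
log = logging.getLogger("openstack.demo")
if not log.handlers:
    log.addHandler(logging.NullHandler())  # silence "no handler" warnings
log.setLevel(logging.DEBUG)

# With only a NullHandler attached, this record is discarded; it only
# becomes visible once a real handler (stream or file) is added, which
# is exactly what enable_logging wires up.
log.debug("this message goes nowhere until a real handler is added")
```

Applications wanting console or file output would then add a `StreamHandler` or `FileHandler`, mirroring the `handlers` list built in `enable_logging`.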
from openstack.message import message_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = message_service.MessageService( version=message_service.MessageService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') openstacksdk-0.11.3/openstack/message/v2/0000775000175100017510000000000013236151501020250 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/message/v2/queue.py0000666000175100017510000001206713236151340021757 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from openstack.message import message_service from openstack import resource class Queue(resource.Resource): # FIXME(anyone): The name string of `location` field of Zaqar API response # is lower case. That is inconsistent with the guide from API-WG. This is # a workaround for this issue. location = resource.Header("location") resources_key = "queues" base_path = "/queues" service = message_service.MessageService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True # Properties #: The default TTL of messages defined for a queue, which will take effect #: for any messages posted to the queue. default_message_ttl = resource.Body("_default_message_ttl") #: Description of the queue.
description = resource.Body("description") #: The max post size of messages defined for a queue, which will take #: effect for any messages posted to the queue. max_messages_post_size = resource.Body("_max_messages_post_size") #: Name of the queue. The name is the unique identity of a queue. It #: must not exceed 64 bytes in length, and it is limited to US-ASCII #: letters, digits, underscores, and hyphens. name = resource.Body("name", alternate_id=True) #: The ID to identify the client accessing Zaqar API. Must be specified #: in header for each API request. client_id = resource.Header("Client-ID") #: The ID to identify the project accessing Zaqar API. Must be specified #: in case keystone auth is not enabled in Zaqar service. project_id = resource.Header("X-PROJECT-ID") def create(self, session, prepend_key=True): request = self._prepare_request(requires_id=True, prepend_key=prepend_key) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.put(request.url, json=request.body, headers=request.headers) self._translate_response(response, has_body=False) return self @classmethod def list(cls, session, paginated=False, **params): """This method is a generator which yields queue objects. This is almost a copy of the list method of the resource.Resource class. The only difference is that the request header now includes `Client-ID` and `X-PROJECT-ID` fields, which are required by the Zaqar v2 API.
""" more_data = True query_params = cls._query_mapping._transpose(params) uri = cls.base_path % params headers = { "Client-ID": params.get('client_id', None) or str(uuid.uuid4()), "X-PROJECT-ID": params.get('project_id', None ) or session.get_project_id() } while more_data: resp = session.get(uri, headers=headers, params=query_params) resp = resp.json() resp = resp[cls.resources_key] if not resp: more_data = False yielded = 0 new_marker = None for data in resp: value = cls.existing(**data) new_marker = value.id yielded += 1 yield value if not paginated: return if "limit" in query_params and yielded < query_params["limit"]: return query_params["limit"] = yielded query_params["marker"] = new_marker def get(self, session, requires_id=True, error_message=None): request = self._prepare_request(requires_id=requires_id) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.get(request.url, headers=headers) self._translate_response(response) return self def delete(self, session): request = self._prepare_request() headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.delete(request.url, headers=headers) self._translate_response(response, has_body=False) return self openstacksdk-0.11.3/openstack/message/v2/subscription.py0000666000175100017510000001315713236151340023360 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import uuid from openstack.message import message_service from openstack import resource class Subscription(resource.Resource): # FIXME(anyone): The name string of `location` field of Zaqar API response # is lower case. That is inconsistent with the guide from API-WG. This is # a workaround for this issue. location = resource.Header("location") resources_key = 'subscriptions' base_path = '/queues/%(queue_name)s/subscriptions' service = message_service.MessageService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True # Properties #: The value in seconds indicating how long the subscription has existed. age = resource.Body("age") #: Alternate id of the subscription. This key is used in the response of #: the subscription create API to return the id of the subscription created. subscription_id = resource.Body("subscription_id", alternate_id=True) #: The extra metadata for the subscription. The value must be a dict. #: If the subscriber is `mailto`, the options can contain `from` and #: `subject` to indicate the email's author and subject. options = resource.Body("options", type=dict) #: The queue name which the subscription is registered on. source = resource.Body("source") #: The destination of the message. Two kinds of subscribers are supported: #: http/https and email. The http/https subscriber should start with #: `http/https`. The email subscriber should start with `mailto`. subscriber = resource.Body("subscriber") #: Number of seconds the subscription remains alive. The ttl value must #: be greater than 60 seconds. The default value is 3600 seconds. ttl = resource.Body("ttl") #: The queue name which the subscription is registered on. queue_name = resource.URI("queue_name") #: The ID to identify the client accessing Zaqar API. Must be specified #: in header for each API request.
client_id = resource.Header("Client-ID") #: The ID to identify the project. Must be provided when keystone #: authentication is not enabled in Zaqar service. project_id = resource.Header("X-PROJECT-ID") def create(self, session, prepend_key=True): request = self._prepare_request(requires_id=False, prepend_key=prepend_key) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.post(request.url, json=request.body, headers=request.headers) self._translate_response(response) return self @classmethod def list(cls, session, paginated=True, **params): """This method is a generator which yields subscription objects. This is almost the copy of list method of resource.Resource class. The only difference is the request header now includes `Client-ID` and `X-PROJECT-ID` fields which are required by Zaqar v2 API. """ more_data = True uri = cls.base_path % params headers = { "Client-ID": params.get('client_id', None) or str(uuid.uuid4()), "X-PROJECT-ID": params.get('project_id', None ) or session.get_project_id() } query_params = cls._query_mapping._transpose(params) while more_data: resp = session.get(uri, headers=headers, params=query_params) resp = resp.json() resp = resp[cls.resources_key] if not resp: more_data = False yielded = 0 new_marker = None for data in resp: value = cls.existing(**data) new_marker = value.id yielded += 1 yield value if not paginated: return if "limit" in query_params and yielded < query_params["limit"]: return query_params["limit"] = yielded query_params["marker"] = new_marker def get(self, session, requires_id=True, error_message=None): request = self._prepare_request(requires_id=requires_id) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.get(request.url, headers=request.headers) 
self._translate_response(response) return self def delete(self, session): request = self._prepare_request() headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.delete(request.url, headers=request.headers) self._translate_response(response, has_body=False) return self openstacksdk-0.11.3/openstack/message/v2/__init__.py0000666000175100017510000000000013236151340022352 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/message/v2/message.py0000666000175100017510000001254213236151340022255 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from openstack.message import message_service from openstack import resource class Message(resource.Resource): # FIXME(anyone): The name string of `location` field of Zaqar API response # is lower case. That is inconsistent with the guide from API-WG. This is # a workaround for this issue. location = resource.Header("location") resources_key = 'messages' base_path = '/queues/%(queue_name)s/messages' service = message_service.MessageService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True _query_mapping = resource.QueryParameters("echo", "include_claimed") # Properties #: The value in seconds indicating how long the message has been #: posted to the queue.
age = resource.Body("age") #: A dictionary specifying an arbitrary document that constitutes the #: body of the message being sent. body = resource.Body("body") #: A URI string describing the location of the message resource. href = resource.Body("href") #: The value in seconds to specify how long the server waits before #: marking the message as expired and removing it from the queue. ttl = resource.Body("ttl") #: The name of the target queue the message is posted to or retrieved from. queue_name = resource.URI("queue_name") #: The ID to identify the client accessing Zaqar API. Must be specified #: in header for each API request. client_id = resource.Header("Client-ID") #: The ID to identify the project accessing Zaqar API. Must be specified #: in case keystone auth is not enabled in Zaqar service. project_id = resource.Header("X-PROJECT-ID") def post(self, session, messages): request = self._prepare_request(requires_id=False, prepend_key=True) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) request.body = {'messages': messages} response = session.post(request.url, json=request.body, headers=request.headers) return response.json()['resources'] @classmethod def list(cls, session, paginated=True, **params): """This method is a generator which yields message objects. This is almost a copy of the list method of the resource.Resource class. The only difference is that the request header now includes `Client-ID` and `X-PROJECT-ID` fields, which are required by the Zaqar v2 API.
""" more_data = True uri = cls.base_path % params headers = { "Client-ID": params.get('client_id', None) or str(uuid.uuid4()), "X-PROJECT-ID": params.get('project_id', None ) or session.get_project_id() } query_params = cls._query_mapping._transpose(params) while more_data: resp = session.get(uri, headers=headers, params=query_params) resp = resp.json() resp = resp[cls.resources_key] if not resp: more_data = False yielded = 0 new_marker = None for data in resp: value = cls.existing(**data) new_marker = value.id yielded += 1 yield value if not paginated: return if "limit" in query_params and yielded < query_params["limit"]: return query_params["limit"] = yielded query_params["marker"] = new_marker def get(self, session, requires_id=True, error_message=None): request = self._prepare_request(requires_id=requires_id) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.get(request.url, headers=headers) self._translate_response(response) return self def delete(self, session): request = self._prepare_request() headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) # The Zaqar v2 API requires the client to specify claim_id as a query # parameter when deleting a message that has been claimed, so we # rebuild the request URI if claim_id is not None. if self.claim_id: request.url += '?claim_id=%s' % self.claim_id response = session.delete(request.url, headers=headers) self._translate_response(response, has_body=False) return self openstacksdk-0.11.3/openstack/message/v2/claim.py0000666000175100017510000001141313236151340021712 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
 You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from openstack.message import message_service from openstack import resource class Claim(resource.Resource): # FIXME(anyone): The name string of `location` field of Zaqar API response # is lower case. That is inconsistent with the guide from API-WG. This is # a workaround for this issue. location = resource.Header("location") resources_key = 'claims' base_path = '/queues/%(queue_name)s/claims' service = message_service.MessageService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True update_method = 'PATCH' # Properties #: The value in seconds indicating how long the claim has existed. age = resource.Body("age") #: In case the worker stops responding for a long time, the server will #: extend the lifetime of claimed messages to be at least as long as #: the lifetime of the claim itself, plus the specified grace period. #: Must be between 60 and 43200 seconds (12 hours). grace = resource.Body("grace") #: The number of messages to claim. Default 10, up to 20. limit = resource.Body("limit") #: The messages that have been successfully claimed. messages = resource.Body("messages") #: Number of seconds the server waits before releasing the claim. Must #: be between 60 and 43200 seconds (12 hours). ttl = resource.Body("ttl") #: The name of the queue to claim messages from. queue_name = resource.URI("queue_name") #: The ID to identify the client accessing Zaqar API. Must be specified #: in header for each API request. client_id = resource.Header("Client-ID") #: The ID to identify the project.
 Must be provided when keystone #: authentication is not enabled in Zaqar service. project_id = resource.Header("X-PROJECT-ID") def _translate_response(self, response, has_body=True): super(Claim, self)._translate_response(response, has_body=has_body) if has_body and self.location: # Extract claim ID from location self.id = self.location.split("claims/")[1] def create(self, session, prepend_key=False): request = self._prepare_request(requires_id=False, prepend_key=prepend_key) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.post(request.url, json=request.body, headers=request.headers) # In case no message was claimed successfully, a 204 No Content # response will be returned. In other cases, we translate the response # body, which has a `messages` field (list) included. if response.status_code != 204: self._translate_response(response) return self def get(self, session, requires_id=True, error_message=None): request = self._prepare_request(requires_id=requires_id) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.get(request.url, headers=request.headers) self._translate_response(response) return self def update(self, session, prepend_key=False, has_body=False): request = self._prepare_request(prepend_key=prepend_key) headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) session.patch(request.url, json=request.body, headers=request.headers) return self def delete(self, session): request = self._prepare_request() headers = { "Client-ID": self.client_id or str(uuid.uuid4()), "X-PROJECT-ID": self.project_id or session.get_project_id() } request.headers.update(headers) response = session.delete(request.url, headers=request.headers)
self._translate_response(response, has_body=False) return self openstacksdk-0.11.3/openstack/message/v2/_proxy.py0000666000175100017510000003117513236151340022154 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.message.v2 import claim as _claim from openstack.message.v2 import message as _message from openstack.message.v2 import queue as _queue from openstack.message.v2 import subscription as _subscription from openstack import proxy from openstack import resource class Proxy(proxy.BaseProxy): def create_queue(self, **attrs): """Create a new queue from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.message.v2.queue.Queue`, comprised of the properties on the Queue class. :returns: The results of queue creation :rtype: :class:`~openstack.message.v2.queue.Queue` """ return self._create(_queue.Queue, **attrs) def get_queue(self, queue): """Get a queue :param queue: The value can be the name of a queue or a :class:`~openstack.message.v2.queue.Queue` instance. :returns: One :class:`~openstack.message.v2.queue.Queue` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no queue matching the name could be found. """ return self._get(_queue.Queue, queue) def queues(self, **query): """Retrieve a generator of queues :param kwargs \*\*query: Optional query parameters to be sent to restrict the queues to be returned. 
Available parameters include: * limit: Requests at most the specified number of items to be returned from the query. * marker: Specifies the ID of the last-seen queue. Use the limit parameter to make an initial limited request and use the ID of the last-seen queue from the response as the marker parameter value in a subsequent limited request. :returns: A generator of queue instances. """ return self._list(_queue.Queue, paginated=True, **query) def delete_queue(self, value, ignore_missing=True): """Delete a queue :param value: The value can be either the name of a queue or a :class:`~openstack.message.v2.queue.Queue` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the queue does not exist. When set to ``True``, no exception will be thrown when attempting to delete a nonexistent queue. :returns: ``None`` """ return self._delete(_queue.Queue, value, ignore_missing=ignore_missing) def post_message(self, queue_name, messages): """Post messages to a given queue :param queue_name: The name of the target queue to post messages to. :param messages: List of message bodies and TTLs to post. :type messages: :py:class:`list` :returns: A string including the location of the messages successfully posted. """ message = self._get_resource(_message.Message, None, queue_name=queue_name) return message.post(self, messages) def messages(self, queue_name, **query): """Retrieve a generator of messages :param queue_name: The name of the target queue to query messages from. :param kwargs \*\*query: Optional query parameters to be sent to restrict the messages to be returned. Available parameters include: * limit: Requests at most the specified number of items to be returned from the query. * marker: Specifies the ID of the last-seen message. Use the limit parameter to make an initial limited request and use the ID of the last-seen message from the response as the marker parameter value in a subsequent limited request.
* echo: Indicate whether the messages can be echoed back to the client that posted them. * include_claimed: Indicate whether the messages list should include the claimed messages. :returns: A generator of message instances. """ query["queue_name"] = queue_name return self._list(_message.Message, paginated=True, **query) def get_message(self, queue_name, message): """Get a message :param queue_name: The name of the target queue to get the message from. :param message: The value can be the name of a message or a :class:`~openstack.message.v2.message.Message` instance. :returns: One :class:`~openstack.message.v2.message.Message` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no message matching the criteria could be found. """ message = self._get_resource(_message.Message, message, queue_name=queue_name) return self._get(_message.Message, message) def delete_message(self, queue_name, value, claim=None, ignore_missing=True): """Delete a message :param queue_name: The name of the target queue to delete the message from. :param value: The value can be either the name of a message or a :class:`~openstack.message.v2.message.Message` instance. :param claim: The value can be the ID or a :class:`~openstack.message.v2.claim.Claim` instance of the claim seizing the message. If None, the message has not been claimed. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the message does not exist. When set to ``True``, no exception will be thrown when attempting to delete a nonexistent message. :returns: ``None`` """ message = self._get_resource(_message.Message, value, queue_name=queue_name) message.claim_id = resource.Resource._get_id(claim) return self._delete(_message.Message, message, ignore_missing=ignore_missing) def create_subscription(self, queue_name, **attrs): """Create a new subscription from attributes :param queue_name: The name of the target queue to subscribe on.
:param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.message.v2.subscription.Subscription`, comprised of the properties on the Subscription class. :returns: The results of subscription creation :rtype: :class:`~openstack.message.v2.subscription.Subscription` """ return self._create(_subscription.Subscription, queue_name=queue_name, **attrs) def subscriptions(self, queue_name, **query): """Retrieve a generator of subscriptions :param queue_name: The name of the target queue to subscribe on. :param kwargs \*\*query: Optional query parameters to be sent to restrict the subscriptions to be returned. Available parameters include: * limit: Requests at most the specified number of items to be returned from the query. * marker: Specifies the ID of the last-seen subscription. Use the limit parameter to make an initial limited request and use the ID of the last-seen subscription from the response as the marker parameter value in a subsequent limited request. :returns: A generator of subscription instances. """ query["queue_name"] = queue_name return self._list(_subscription.Subscription, paginated=True, **query) def get_subscription(self, queue_name, subscription): """Get a subscription :param queue_name: The name of the target queue of the subscription. :param subscription: The value can be the ID of a subscription or a :class:`~openstack.message.v2.subscription.Subscription` instance. :returns: One :class:`~openstack.message.v2.subscription.Subscription` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no subscription matching the criteria could be found. """ subscription = self._get_resource(_subscription.Subscription, subscription, queue_name=queue_name) return self._get(_subscription.Subscription, subscription) def delete_subscription(self, queue_name, value, ignore_missing=True): """Delete a subscription :param queue_name: The name of the target queue to delete the subscription from.
:param value: The value can be either the name of a subscription or a :class:`~openstack.message.v2.subscription.Subscription` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the subscription does not exist. When set to ``True``, no exception will be thrown when attempting to delete a nonexistent subscription. :returns: ``None`` """ subscription = self._get_resource(_subscription.Subscription, value, queue_name=queue_name) return self._delete(_subscription.Subscription, subscription, ignore_missing=ignore_missing) def create_claim(self, queue_name, **attrs): """Create a new claim from attributes :param queue_name: The name of target queue to claim message from. :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.message.v2.claim.Claim`, comprised of the properties on the Claim class. :returns: The results of claim creation :rtype: :class:`~openstack.message.v2.claim.Claim` """ return self._create(_claim.Claim, queue_name=queue_name, **attrs) def get_claim(self, queue_name, claim): """Get a claim :param queue_name: The name of target queue to claim message from. :param claim: The value can be either the ID of a claim or a :class:`~openstack.message.v2.claim.Claim` instance. :returns: One :class:`~openstack.message.v2.claim.Claim` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no claim matching the criteria could be found. """ return self._get(_claim.Claim, claim, queue_name=queue_name) def update_claim(self, queue_name, claim, **attrs): """Update an existing claim from attributes :param queue_name: The name of target queue to claim message from. :param claim: The value can be either the ID of a claim or a :class:`~openstack.message.v2.claim.Claim` instance. :param dict attrs: Keyword arguments which will be used to update a :class:`~openstack.message.v2.claim.Claim`, comprised of the properties on the Claim class. 
:returns: The results of claim update :rtype: :class:`~openstack.message.v2.claim.Claim` """ return self._update(_claim.Claim, claim, queue_name=queue_name, **attrs) def delete_claim(self, queue_name, claim, ignore_missing=True): """Delete a claim :param queue_name: The name of target queue to claim messages from. :param claim: The value can be either the ID of a claim or a :class:`~openstack.message.v2.claim.Claim` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the claim does not exist. When set to ``True``, no exception will be thrown when attempting to delete a nonexistent claim. :returns: ``None`` """ return self._delete(_claim.Claim, claim, queue_name=queue_name, ignore_missing=ignore_missing) openstacksdk-0.11.3/openstack/message/message_service.py0000666000175100017510000000163713236151340023451 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
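The marker/limit pagination loop shared by the `list()` classmethods of Queue, Message, and Subscription above can be reduced to a standalone sketch. Here `fetch_page` is a hypothetical stand-in for `session.get()` plus JSON decoding; it is not part of the SDK:

```python
# Standalone sketch of the marker/limit pagination loop used by the
# Zaqar list() classmethods above. fetch_page returns one page of items
# for the given query parameters.
def paginate(fetch_page, limit=2):
    query = {"limit": limit}
    while True:
        page = fetch_page(dict(query))
        if not page:
            return
        for item in page:
            yield item
        # A short page means the server has run out of results.
        if len(page) < limit:
            return
        # Ask for the next page starting after the last item seen.
        query["marker"] = page[-1]

# Hypothetical in-memory backend standing in for the Zaqar service.
data = ["msg-%d" % i for i in range(5)]

def fetch_page(query):
    start = data.index(query["marker"]) + 1 if "marker" in query else 0
    return data[start:start + query["limit"]]

assert list(paginate(fetch_page)) == data
```

The real methods differ mainly in that they also thread the `Client-ID` and `X-PROJECT-ID` headers through every request and shrink `limit` to the count actually yielded.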
from openstack import service_filter class MessageService(service_filter.ServiceFilter): """The message service.""" valid_versions = [service_filter.ValidVersion('v2')] def __init__(self, version=None): """Create a message service.""" super(MessageService, self).__init__( service_type='messaging', version=version ) openstacksdk-0.11.3/openstack/message/__init__.py0000666000175100017510000000000013236151340022023 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/object_store/0000775000175100017510000000000013236151501020757 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/object_store/object_store_service.py0000666000175100017510000000170413236151340025540 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class ObjectStoreService(service_filter.ServiceFilter): """The object store service.""" valid_versions = [service_filter.ValidVersion('v1')] def __init__(self, version=None): """Create an object store service.""" super(ObjectStoreService, self).__init__(service_type='object-store', version=version) openstacksdk-0.11.3/openstack/object_store/v1/0000775000175100017510000000000013236151501021305 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/object_store/v1/_base.py0000666000175100017510000000540113236151340022733 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import exceptions from openstack.object_store import object_store_service from openstack import resource class BaseResource(resource.Resource): service = object_store_service.ObjectStoreService() update_method = 'POST' create_method = 'PUT' #: Metadata stored for this resource. *Type: dict* metadata = dict() _custom_metadata_prefix = None _system_metadata = dict() def _calculate_headers(self, metadata): headers = {} for key in metadata: if key in self._system_metadata.keys(): header = self._system_metadata[key] elif key in self._system_metadata.values(): header = key else: if key.startswith(self._custom_metadata_prefix): header = key else: header = self._custom_metadata_prefix + key headers[header] = metadata[key] return headers def set_metadata(self, session, metadata): request = self._prepare_request() response = session.post( request.url, headers=self._calculate_headers(metadata)) self._translate_response(response, has_body=False) response = session.head(request.url) self._translate_response(response, has_body=False) return self def delete_metadata(self, session, keys): request = self._prepare_request() headers = {key: '' for key in keys} response = session.post( request.url, headers=self._calculate_headers(headers)) exceptions.raise_from_response( response, error_message="Error deleting metadata keys") return self def _set_metadata(self, headers): self.metadata = dict() for header in headers: if header.startswith(self._custom_metadata_prefix): key = header[len(self._custom_metadata_prefix):].lower() self.metadata[key] = headers[header] def 
_translate_response(self, response, has_body=None, error_message=None): super(BaseResource, self)._translate_response( response, has_body=has_body, error_message=error_message) self._set_metadata(response.headers) openstacksdk-0.11.3/openstack/object_store/v1/account.py0000666000175100017510000000334413236151340023322 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.object_store.v1 import _base from openstack import resource class Account(_base.BaseResource): _custom_metadata_prefix = "X-Account-Meta-" base_path = "/" allow_get = True allow_update = True allow_head = True #: The total number of bytes that are stored in Object Storage for #: the account. account_bytes_used = resource.Header("x-account-bytes-used", type=int) #: The number of containers. account_container_count = resource.Header("x-account-container-count", type=int) #: The number of objects in the account. account_object_count = resource.Header("x-account-object-count", type=int) #: The secret key value for temporary URLs. If not set, #: this header is not returned by this operation. meta_temp_url_key = resource.Header("x-account-meta-temp-url-key") #: A second secret key value for temporary URLs. If not set, #: this header is not returned by this operation. meta_temp_url_key_2 = resource.Header("x-account-meta-temp-url-key-2") #: The timestamp of the transaction. 
timestamp = resource.Header("x-timestamp") has_body = False requires_id = False openstacksdk-0.11.3/openstack/object_store/v1/container.py0000666000175100017510000001267413236151340023656 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.object_store.v1 import _base from openstack import resource class Container(_base.BaseResource): _custom_metadata_prefix = "X-Container-Meta-" _system_metadata = { "content_type": "content-type", "is_content_type_detected": "x-detect-content-type", "versions_location": "x-versions-location", "read_ACL": "x-container-read", "write_ACL": "x-container-write", "sync_to": "x-container-sync-to", "sync_key": "x-container-sync-key" } base_path = "/" pagination_key = 'X-Account-Container-Count' allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True allow_head = True # Container body data (when id=None) #: The name of the container. name = resource.Body("name", alternate_id=True, alias='id') #: The number of objects in the container. count = resource.Body("count", type=int, alias='object_count') #: The total number of bytes that are stored in Object Storage #: for the container. bytes = resource.Body("bytes", type=int, alias='bytes_used') # Container metadata (when id=name) #: The number of objects. object_count = resource.Header( "x-container-object-count", type=int, alias='count') #: The count of bytes used in total. 
bytes_used = resource.Header( "x-container-bytes-used", type=int, alias='bytes') #: The timestamp of the transaction. timestamp = resource.Header("x-timestamp") # Request headers (when id=None) #: If set to True, Object Storage queries all replicas to return the #: most recent one. If you omit this header, Object Storage responds #: faster after it finds one valid replica. Because setting this #: header to True is more expensive for the back end, use it only #: when it is absolutely needed. *Type: bool* is_newest = resource.Header("x-newest", type=bool) # Request headers (when id=name) #: The ACL that grants read access. If not set, this header is not #: returned by this operation. read_ACL = resource.Header("x-container-read") #: The ACL that grants write access. If not set, this header is not #: returned by this operation. write_ACL = resource.Header("x-container-write") #: The destination for container synchronization. If not set, #: this header is not returned by this operation. sync_to = resource.Header("x-container-sync-to") #: The secret key for container synchronization. If not set, #: this header is not returned by this operation. sync_key = resource.Header("x-container-sync-key") #: Enables versioning on this container. The value is the name #: of another container. You must UTF-8-encode and then URL-encode #: the name before you include it in the header. To disable #: versioning, set the header to an empty string. versions_location = resource.Header("x-versions-location") #: The MIME type of the list of names. content_type = resource.Header("content-type") #: If set to true, Object Storage guesses the content type based #: on the file extension and ignores the value sent in the #: Content-Type header, if present. *Type: bool* is_content_type_detected = resource.Header("x-detect-content-type", type=bool) # TODO(mordred) Shouldn't if-none-match be handled more systemically? 
#: In combination with Expect: 100-Continue, specify an #: "If-None-Match: \*" header to query whether the server already #: has a copy of the object before any data is sent. if_none_match = resource.Header("if-none-match") @classmethod def new(cls, **kwargs): # Container uses name as id. Proxy._get_resource calls # Resource.new(id=name) but then we need to do container.name # It's the same thing for Container - make it be the same. name = kwargs.pop('id', None) if name: kwargs.setdefault('name', name) return Container(_synchronized=True, **kwargs) def create(self, session, prepend_key=True): """Create a remote resource based on this instance. :param session: The session to use for making this request. :type session: :class:`~keystoneauth1.adapter.Adapter` :param prepend_key: A boolean indicating whether the resource_key should be prepended in a resource creation request. Default to True. :return: This :class:`Resource` instance. :raises: :exc:`~openstack.exceptions.MethodNotSupported` if :data:`Resource.allow_create` is not set to ``True``. """ request = self._prepare_request( requires_id=True, prepend_key=prepend_key) response = session.put( request.url, json=request.body, headers=request.headers) self._translate_response(response, has_body=False) return self openstacksdk-0.11.3/openstack/object_store/v1/obj.py0000666000175100017510000003143213236151340022437 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
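The header-mapping rule implemented by `BaseResource._calculate_headers` in `_base.py` above, parameterized by per-resource prefixes such as `Container._custom_metadata_prefix = "X-Container-Meta-"`, can be sketched standalone. The function name and sample values below are illustrative only, not part of the SDK:

```python
# Illustrative sketch of the metadata-to-header mapping in
# BaseResource._calculate_headers: known system-metadata keys translate to
# their Swift header names, keys that already look like headers pass through,
# and everything else gets the resource's custom-metadata prefix.
def calculate_headers(metadata, system_metadata, prefix):
    headers = {}
    for key, value in metadata.items():
        if key in system_metadata:             # system key -> header name
            header = system_metadata[key]
        elif key in system_metadata.values():  # already a system header name
            header = key
        elif key.startswith(prefix):           # already-prefixed custom key
            header = key
        else:                                  # bare custom key -> add prefix
            header = prefix + key
        headers[header] = value
    return headers

system = {"content_type": "content-type", "sync_to": "x-container-sync-to"}
print(calculate_headers({"color": "blue", "content_type": "text/plain"},
                        system, "X-Container-Meta-"))
# -> {'X-Container-Meta-color': 'blue', 'content-type': 'text/plain'}
```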
import copy from openstack import exceptions from openstack.object_store import object_store_service from openstack.object_store.v1 import _base from openstack import resource class Object(_base.BaseResource): _custom_metadata_prefix = "X-Object-Meta-" _system_metadata = { "content_disposition": "content-disposition", "content_encoding": "content-encoding", "content_type": "content-type", "delete_after": "x-delete-after", "delete_at": "x-delete-at", "is_content_type_detected": "x-detect-content-type", } base_path = "/%(container)s" pagination_key = 'X-Container-Object-Count' service = object_store_service.ObjectStoreService() allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True allow_head = True # Data to be passed during a POST call to create an object on the server. # TODO(mordred) Make a base class BaseDataResource that can be used here # and with glance images that has standard overrides for dealing with # binary data. data = None # URL parameters #: The unique name for the container. container = resource.URI("container") #: The unique name for the object. name = resource.Body("name", alternate_id=True) # Object details # Make these private because they should only matter in the case where # we have a Body with no headers (like if someone programmatically is # creating an Object) _hash = resource.Body("hash") _bytes = resource.Body("bytes", type=int) _last_modified = resource.Body("last_modified") _content_type = resource.Body("content_type") # Headers for HEAD and GET requests #: If set to True, Object Storage queries all replicas to return #: the most recent one. If you omit this header, Object Storage #: responds faster after it finds one valid replica. Because #: setting this header to True is more expensive for the back end, #: use it only when it is absolutely needed. *Type: bool* is_newest = resource.Header("x-newest", type=bool) #: TODO(briancurtin) there's a lot of content here... 
range = resource.Header("range", type=dict) #: See http://www.ietf.org/rfc/rfc2616.txt. if_match = resource.Header("if-match", type=list) #: In combination with Expect: 100-Continue, specify an #: "If-None-Match: \*" header to query whether the server already #: has a copy of the object before any data is sent. if_none_match = resource.Header("if-none-match", type=list) #: See http://www.ietf.org/rfc/rfc2616.txt. if_modified_since = resource.Header("if-modified-since", type=str) #: See http://www.ietf.org/rfc/rfc2616.txt. if_unmodified_since = resource.Header("if-unmodified-since", type=str) # Query parameters #: Used with temporary URLs to sign the request. For more #: information about temporary URLs, see OpenStack Object Storage #: API v1 Reference. signature = resource.Header("signature") #: Used with temporary URLs to specify the expiry time of the #: signature. For more information about temporary URLs, see #: OpenStack Object Storage API v1 Reference. expires_at = resource.Header("expires") #: If you include the multipart-manifest=get query parameter and #: the object is a large object, the object contents are not #: returned. Instead, the manifest is returned in the #: X-Object-Manifest response header for dynamic large objects #: or in the response body for static large objects. multipart_manifest = resource.Header("multipart-manifest") # Response headers from HEAD and GET #: HEAD operations do not return content. However, in this #: operation the value in the Content-Length header is not the #: size of the response body. Instead it contains the size of #: the object, in bytes. content_length = resource.Header( "content-length", type=int, alias='_bytes') #: The MIME type of the object. content_type = resource.Header("content-type", alias="_content_type") #: The type of ranges that the object accepts. accept_ranges = resource.Header("accept-ranges") #: For objects smaller than 5 GB, this value is the MD5 checksum #: of the object content. 
The value is not quoted. #: For manifest objects, this value is the MD5 checksum of the #: concatenated string of MD5 checksums and ETags for each of #: the segments in the manifest, and not the MD5 checksum of #: the content that was downloaded. Also the value is enclosed #: in double-quote characters. #: You are strongly recommended to compute the MD5 checksum of #: the response body as it is received and compare this value #: with the one in the ETag header. If they differ, the content #: was corrupted, so retry the operation. etag = resource.Header("etag", alias='_hash') #: Set to True if this object is a static large object manifest object. #: *Type: bool* is_static_large_object = resource.Header("x-static-large-object", type=bool) #: If set, the value of the Content-Encoding metadata. #: If not set, this header is not returned by this operation. content_encoding = resource.Header("content-encoding") #: If set, specifies the override behavior for the browser. #: For example, this header might specify that the browser use #: a download program to save this file rather than show the file, #: which is the default. #: If not set, this header is not returned by this operation. content_disposition = resource.Header("content-disposition") #: Specifies the number of seconds after which the object is #: removed. Internally, the Object Storage system stores this #: value in the X-Delete-At metadata item. delete_after = resource.Header("x-delete-after", type=int) #: If set, the time when the object will be deleted by the system #: in the format of a UNIX Epoch timestamp. #: If not set, this header is not returned by this operation. delete_at = resource.Header("x-delete-at") #: If set, this is a dynamic large object manifest object. #: The value is the container and object name prefix of the #: segment objects in the form container/prefix. object_manifest = resource.Header("x-object-manifest") #: The timestamp of the transaction.
timestamp = resource.Header("x-timestamp") #: The date and time that the object was created or the last #: time that the metadata was changed. last_modified_at = resource.Header("last-modified", alias='_last_modified') # Headers for PUT and POST requests #: Set to chunked to enable chunked transfer encoding. If used, #: do not set the Content-Length header to a non-zero value. transfer_encoding = resource.Header("transfer-encoding") #: If set to true, Object Storage guesses the content type based #: on the file extension and ignores the value sent in the #: Content-Type header, if present. *Type: bool* is_content_type_detected = resource.Header("x-detect-content-type", type=bool) #: If set, this is the name of an object used to create the new #: object by copying the X-Copy-From object. The value is in form #: {container}/{object}. You must UTF-8-encode and then URL-encode #: the names of the container and object before you include them #: in the header. #: Using PUT with X-Copy-From has the same effect as using the #: COPY operation to copy an object. copy_from = resource.Header("x-copy-from") has_body = False def __init__(self, data=None, **attrs): super(_base.BaseResource, self).__init__(**attrs) self.data = data # The Object Store treats the metadata for its resources inconsistently so # Object.set_metadata must override the BaseResource.set_metadata to # account for it. def set_metadata(self, session, metadata): # Filter out items with empty values so the create metadata behaviour # is the same as account and container filtered_metadata = \ {key: value for key, value in metadata.items() if value} # Update from remote if we only have locally created information if not self.last_modified_at: self.head(session) # Get a copy of the original metadata so it doesn't get erased on POST # and update it with the new metadata values. 
metadata = copy.deepcopy(self.metadata) metadata.update(filtered_metadata) # Include any original system metadata so it doesn't get erased on POST for key in self._system_metadata: value = getattr(self, key) if value and key not in metadata: metadata[key] = value request = self._prepare_request() headers = self._calculate_headers(metadata) response = session.post(request.url, headers=headers) self._translate_response(response, has_body=False) self.metadata.update(metadata) return self # The Object Store treats the metadata for its resources inconsistently so # Object.delete_metadata must override the BaseResource.delete_metadata to # account for it. def delete_metadata(self, session, keys): if not keys: return # If we have an empty object, update it from the remote side so that # we have a copy of the original metadata. Deleting metadata requires # POSTing and overwriting all of the metadata. If we already have # metadata locally, assume this is an existing object. if not self.metadata: self.head(session) metadata = copy.deepcopy(self.metadata) # Include any original system metadata so it doesn't get erased on POST for key in self._system_metadata: value = getattr(self, key) if value: metadata[key] = value # Remove the requested metadata keys # TODO(mordred) Why don't we just look at self._header_mapping() # instead of having system_metadata? deleted = False attr_keys_to_delete = set() for key in keys: if key == 'delete_after': del(metadata['delete_at']) else: if key in metadata: del(metadata[key]) # Delete the attribute from the local copy of the object. 
# Metadata that doesn't have Component attributes is # handled by self.metadata being reset when we run # self.head if hasattr(self, key): attr_keys_to_delete.add(key) deleted = True # Nothing to delete, skip the POST if not deleted: return self request = self._prepare_request() response = session.post( request.url, headers=self._calculate_headers(metadata)) exceptions.raise_from_response( response, error_message="Error deleting metadata keys") # Only delete from local object if the remote delete was successful for key in attr_keys_to_delete: delattr(self, key) # Just update ourselves from remote again. return self.head(session) def _download(self, session, error_message=None, stream=False): request = self._prepare_request() request.headers['Accept'] = 'bytes' response = session.get( request.url, headers=request.headers, stream=stream) exceptions.raise_from_response(response, error_message=error_message) return response def download(self, session, error_message=None): response = self._download(session, error_message=error_message) return response.content def stream(self, session, error_message=None, chunk_size=1024): response = self._download( session, error_message=error_message, stream=True) return response.iter_content(chunk_size, decode_unicode=False) def create(self, session): request = self._prepare_request() request.headers['Accept'] = '' response = session.put( request.url, data=self.data, headers=request.headers) self._translate_response(response, has_body=False) return self openstacksdk-0.11.3/openstack/object_store/v1/__init__.py0000666000175100017510000000000013236151340023407 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/object_store/v1/_proxy.py0000666000175100017510000003460313236151340023210 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.object_store.v1 import account as _account from openstack.object_store.v1 import container as _container from openstack.object_store.v1 import obj as _obj from openstack import proxy class Proxy(proxy.BaseProxy): Account = _account.Account Container = _container.Container Object = _obj.Object def get_account_metadata(self): """Get metadata for this account. :rtype: :class:`~openstack.object_store.v1.account.Account` """ return self._head(_account.Account) def set_account_metadata(self, **metadata): """Set metadata for this account. :param kwargs metadata: Key/value pairs to be set as metadata on the account. Custom metadata can be set. Custom metadata are keys and values defined by the user. """ account = self._get_resource(_account.Account, None) account.set_metadata(self, metadata) def delete_account_metadata(self, keys): """Delete metadata for this account. :param keys: The keys of metadata to be deleted. """ account = self._get_resource(_account.Account, None) account.delete_metadata(self, keys) def containers(self, **query): """Obtain Container objects for this account. :param kwargs query: Optional query parameters to be sent to limit the resources being returned. :rtype: A generator of :class:`~openstack.object_store.v1.container.Container` objects. """ return self._list(_container.Container, paginated=True, **query) def create_container(self, name, **attrs): """Create a new container from attributes :param name: Name of the container to create.
:param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.object_store.v1.container.Container`, comprised of the properties on the Container class. :returns: The results of container creation :rtype: :class:`~openstack.object_store.v1.container.Container` """ return self._create(_container.Container, name=name, **attrs) def delete_container(self, container, ignore_missing=True): """Delete a container :param container: The value can be either the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the container does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent container. :returns: ``None`` """ self._delete(_container.Container, container, ignore_missing=ignore_missing) def get_container_metadata(self, container): """Get metadata for a container :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :returns: One :class:`~openstack.object_store.v1.container.Container` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._head(_container.Container, container) def set_container_metadata(self, container, **metadata): """Set metadata for a container. :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param kwargs metadata: Key/value pairs to be set as metadata on the container. Both custom and system metadata can be set. Custom metadata are keys and values defined by the user. System metadata are keys defined by the Object Store and values defined by the user.
The system metadata keys are: - `content_type` - `is_content_type_detected` - `versions_location` - `read_ACL` - `write_ACL` - `sync_to` - `sync_key` """ res = self._get_resource(_container.Container, container) res.set_metadata(self, metadata) return res def delete_container_metadata(self, container, keys): """Delete metadata for a container. :param container: The value can be the ID of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param keys: The keys of metadata to be deleted. """ res = self._get_resource(_container.Container, container) res.delete_metadata(self, keys) return res def objects(self, container, **query): """Return a generator that yields the Container's objects. :param container: A container object or the name of a container that you want to retrieve objects from. :type container: :class:`~openstack.object_store.v1.container.Container` :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :rtype: A generator of :class:`~openstack.object_store.v1.obj.Object` objects. """ container = self._get_container_name(container=container) for obj in self._list( _obj.Object, container=container, paginated=True, **query): obj.container = container yield obj def _get_container_name(self, obj=None, container=None): if obj is not None: obj = self._get_resource(_obj.Object, obj) if obj.container is not None: return obj.container if container is not None: container = self._get_resource(_container.Container, container) return container.name raise ValueError("container must be specified") def get_object(self, obj, container=None): """Get the data associated with an object :param obj: The value can be the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :returns: The contents of the object. 
Use the :func:`~get_object_metadata` method if you want an object resource. :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ container_name = self._get_container_name( obj=obj, container=container) return self._get(_obj.Object, obj, container=container_name) def download_object(self, obj, container=None, **attrs): """Download the data contained inside an object. :param obj: The value can be the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ container_name = self._get_container_name( obj=obj, container=container) obj = self._get_resource( _obj.Object, obj, container=container_name, **attrs) return obj.download(self) def stream_object(self, obj, container=None, chunk_size=1024, **attrs): """Stream the data contained inside an object. :param obj: The value can be the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. :returns: An iterator that iterates over chunk_size bytes """ container_name = self._get_container_name( obj=obj, container=container) obj = self._get_resource( _obj.Object, obj, container=container_name, **attrs) return obj.stream(self, chunk_size=chunk_size) def create_object(self, container, name, **attrs): """Upload a new object from attributes :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param name: Name of the object to create.
:param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.object_store.v1.obj.Object`, comprised of the properties on the Object class. :returns: The results of object creation :rtype: :class:`~openstack.object_store.v1.obj.Object` """ # TODO(mordred) Add ability to stream data from a file # TODO(mordred) Use create_object from OpenStackCloud container_name = self._get_container_name(container=container) return self._create( _obj.Object, container=container_name, name=name, **attrs) # Backwards compat upload_object = create_object def copy_object(self): """Copy an object.""" raise NotImplementedError def delete_object(self, obj, ignore_missing=True, container=None): """Delete an object :param obj: The value can be either the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the ID of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the object does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent object. :returns: ``None`` """ container_name = self._get_container_name(obj, container) self._delete(_obj.Object, obj, ignore_missing=ignore_missing, container=container_name) def get_object_metadata(self, obj, container=None): """Get metadata for an object. :param obj: The value can be the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the ID of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :returns: One :class:`~openstack.object_store.v1.obj.Object` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found.
""" container_name = self._get_container_name(obj, container) return self._head(_obj.Object, obj, container=container_name) def set_object_metadata(self, obj, container=None, **metadata): """Set metadata for an object. Note: This method will do an extra HEAD call. :param obj: The value can be the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the name of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param kwargs metadata: Key/value pairs to be set as metadata on the container. Both custom and system metadata can be set. Custom metadata are keys and values defined by the user. System metadata are keys defined by the Object Store and values defined by the user. The system metadata keys are: - `content_type` - `content_encoding` - `content_disposition` - `delete_after` - `delete_at` - `is_content_type_detected` """ container_name = self._get_container_name(obj, container) res = self._get_resource(_obj.Object, obj, container=container_name) res.set_metadata(self, metadata) return res def delete_object_metadata(self, obj, container=None, keys=None): """Delete metadata for an object. :param obj: The value can be the name of an object or a :class:`~openstack.object_store.v1.obj.Object` instance. :param container: The value can be the ID of a container or a :class:`~openstack.object_store.v1.container.Container` instance. :param keys: The keys of metadata to be deleted. 
""" container_name = self._get_container_name(obj, container) res = self._get_resource(_obj.Object, obj, container=container_name) res.delete_metadata(self, keys) return res openstacksdk-0.11.3/openstack/object_store/__init__.py0000666000175100017510000000000013236151340023061 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/utils.py0000666000175100017510000001110013236151340020003 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import string import time import deprecation from openstack import _log from openstack import exceptions from openstack import version def deprecated(deprecated_in=None, removed_in=None, details=""): """Mark a method as deprecated :param deprecated_in: The version string where this method is deprecated. Generally this is the next version to be released. :param removed_in: The version where this method will be removed from the code base. Generally this is the next major version. This argument is helpful for the tests when using ``deprecation.fail_if_not_removed``. :param str details: Helpful details to callers and the documentation. This will usually be a recommendation for alternate code to use. """ # As all deprecations within this library have the same current_version, # return a partial function with the library version always set. partial = functools.partial(deprecation.deprecated, current_version=version.__version__) # TODO(shade) shade's tags break these - so hard override them for now. 
# We'll want a patch fixing this before we cut any releases. removed_in = '2.0.0' return partial(deprecated_in=deprecated_in, removed_in=removed_in, details=details) @deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.enable_logging instead") def enable_logging(*args, **kwargs): """Backwards compatibility wrapper function. openstacksdk has had enable_logging in utils. It's in _log now and exposed directly at openstack.enable_logging. """ return _log.enable_logging(*args, **kwargs) def urljoin(*args): """A custom version of urljoin that simply joins strings into a path. The real urljoin takes into account web semantics like when joining a url like /path this should be joined to http://host/path as it is an anchored link. We generally won't care about that in a client. """ return '/'.join(str(a or '').strip('/') for a in args) def iterate_timeout(timeout, message, wait=2): """Iterate and raise an exception on timeout. This is a generator that will continually yield and sleep for wait seconds, and if the timeout is reached, will raise an exception with the given message. """ log = _log.setup_logging('openstack.iterate_timeout') try: # None as a wait winds up flowing well in the per-resource cache # flow. We could spread this logic around to all of the calling # points, but just having this treat None as "I don't have a value" # seems friendlier if wait is None: wait = 2 elif wait == 0: # wait should be < timeout, unless timeout is None wait = 0.1 if timeout is None else min(0.1, timeout) wait = float(wait) except ValueError: raise exceptions.SDKException( "Wait value must be an int or float value.
{wait} given" " instead".format(wait=wait)) start = time.time() count = 0 while (timeout is None) or (time.time() < start + timeout): count += 1 yield count log.debug('Waiting %s seconds', wait) time.sleep(wait) raise exceptions.ResourceTimeout(message) def get_string_format_keys(fmt_string, old_style=True): """Gets a list of required keys from a format string Required mostly for parsing base_path urls for required keys, which use the old style string formatting. """ if old_style: class AccessSaver(object): def __init__(self): self.keys = [] def __getitem__(self, key): self.keys.append(key) a = AccessSaver() fmt_string % a return a.keys else: keys = [] for t in string.Formatter().parse(fmt_string): if t[1] is not None: keys.append(t[1]) return keys openstacksdk-0.11.3/openstack/clustering/0000775000175100017510000000000013236151501020454 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/clustering/version.py0000666000175100017510000000175413236151340022525 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
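The helpers in `utils.py` above are small enough to exercise directly. The copies below are reproduced from the module above (docstrings trimmed) to show the behavior of `urljoin` and the `AccessSaver` trick, where `%`-formatting calls `__getitem__` once per `%(name)s` placeholder:

```python
import string


def urljoin(*args):
    # Strip surrounding slashes from each segment and join with '/'.
    # Falsy segments (e.g. None) become empty strings, and any leading
    # slash on the first segment is lost.
    return '/'.join(str(a or '').strip('/') for a in args)


def get_string_format_keys(fmt_string, old_style=True):
    if old_style:
        # Let the % operator ask AccessSaver for each mapping key.
        class AccessSaver(object):
            def __init__(self):
                self.keys = []

            def __getitem__(self, key):
                self.keys.append(key)

        a = AccessSaver()
        fmt_string % a
        return a.keys
    # New style: string.Formatter().parse() yields
    # (literal_text, field_name, format_spec, conversion) tuples.
    return [t[1] for t in string.Formatter().parse(fmt_string)
            if t[1] is not None]


print(urljoin('/clusters/', 'c-1', 'actions'))
# clusters/c-1/actions
print(get_string_format_keys('/clusters/%(cluster_id)s/attrs/%(path)s'))
# ['cluster_id', 'path']
```

Note that `urljoin('a', None, 'b')` yields `'a//b'`: the helper does not collapse empty segments, so callers are expected to pass meaningful path parts.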
from openstack.clustering import clustering_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = clustering_service.ClusteringService( version=clustering_service.ClusteringService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') openstacksdk-0.11.3/openstack/clustering/clustering_service.py0000666000175100017510000000170313236151340024731 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class ClusteringService(service_filter.ServiceFilter): """The clustering service.""" valid_versions = [service_filter.ValidVersion('v1')] UNVERSIONED = None def __init__(self, version=None): """Create a clustering service.""" super(ClusteringService, self).__init__( service_type='clustering', version=version ) openstacksdk-0.11.3/openstack/clustering/v1/0000775000175100017510000000000013236151501021002 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/clustering/v1/policy_type.py0000666000175100017510000000220613236151340023717 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class PolicyType(resource.Resource): resource_key = 'policy_type' resources_key = 'policy_types' base_path = '/policy-types' service = clustering_service.ClusteringService() # Capabilities allow_list = True allow_get = True # Properties #: Name of policy type. name = resource.Body('name', alternate_id=True) #: The schema of the policy type. schema = resource.Body('schema') #: The support status of the policy type support_status = resource.Body('support_status') openstacksdk-0.11.3/openstack/clustering/v1/build_info.py0000666000175100017510000000177713236151340023505 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
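The resource classes above are mostly declarative: `resource.Body('schema')` binds a Python attribute to a key in the JSON body. A toy data descriptor sketches the idea (this is an illustration only, not the SDK's real `resource.Body`, which also handles `alternate_id`, defaults, and write-back):

```python
class Body(object):
    """Toy data descriptor: maps an attribute to a JSON body key."""

    def __init__(self, name, type=None):
        self.name = name
        self.type = type

    def __get__(self, obj, owner):
        if obj is None:
            return self
        value = obj._body.get(self.name)
        # Coerce to the declared type, mirroring Body('...', type=dict).
        if self.type is not None and value is not None:
            value = self.type(value)
        return value


class PolicyType(object):
    # Attribute name on the left, wire name in the JSON body on the right,
    # as in the resource declarations above.
    name = Body('name')
    schema = Body('schema', type=dict)

    def __init__(self, body):
        self._body = body


pt = PolicyType({'name': 'senlin.policy.deletion-1.0', 'schema': {}})
print(pt.name)  # senlin.policy.deletion-1.0
```

The policy-type name used here is only sample data for the sketch.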
from openstack.clustering import clustering_service from openstack import resource class BuildInfo(resource.Resource): base_path = '/build-info' resource_key = 'build_info' service = clustering_service.ClusteringService() # Capabilities allow_get = True # Properties #: String representation of the API build version api = resource.Body('api') #: String representation of the engine build version engine = resource.Body('engine') openstacksdk-0.11.3/openstack/clustering/v1/cluster_attr.py0000666000175100017510000000231213236151340024070 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class ClusterAttr(resource.Resource): resources_key = 'cluster_attributes' base_path = '/clusters/%(cluster_id)s/attrs/%(path)s' service = clustering_service.ClusteringService() # capabilities allow_list = True # Properties #: The identity of the cluster cluster_id = resource.URI('cluster_id') #: The json path string for attribute retrieval path = resource.URI('path') #: The id of the node that carries the attribute value. node_id = resource.Body('id') #: The value of the attribute requested. attr_value = resource.Body('value') openstacksdk-0.11.3/openstack/clustering/v1/action.py0000666000175100017510000000562313236151340022642 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class Action(resource.Resource): resource_key = 'action' resources_key = 'actions' base_path = '/actions' service = clustering_service.ClusteringService() # Capabilities allow_list = True allow_get = True _query_mapping = resource.QueryParameters( 'name', 'action', 'status', 'sort', 'global_project', target_id='target') # Properties #: Name of the action. name = resource.Body('name') #: ID of the target object, which can be a cluster or a node. target_id = resource.Body('target') #: Built-in type name of action. action = resource.Body('action') #: A string representation of the reason why the action was created. cause = resource.Body('cause') #: The owning engine that is currently running the action. owner_id = resource.Body('owner') #: The ID of the user who created this action. user_id = resource.Body('user') #: The ID of the project this profile belongs to. project_id = resource.Body('project') #: The domain ID of the action. domain_id = resource.Body('domain') #: Interval in seconds between two consecutive executions. interval = resource.Body('interval') #: The time the action was started. start_at = resource.Body('start_time') #: The time the action completed execution. end_at = resource.Body('end_time') #: The timeout in seconds. timeout = resource.Body('timeout') #: Current status of the action. status = resource.Body('status') #: A string describing the reason that brought the action to its current # status. 
status_reason = resource.Body('status_reason') #: A dictionary containing the inputs to the action. inputs = resource.Body('inputs', type=dict) #: A dictionary containing the outputs to the action. outputs = resource.Body('outputs', type=dict) #: A list of actions that must finish before this action starts execution. depends_on = resource.Body('depends_on', type=list) #: A list of actions that can start only after this action has finished. depended_by = resource.Body('depended_by', type=list) #: Timestamp when the action is created. created_at = resource.Body('created_at') #: Timestamp when the action was last updated. updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/clustering/v1/profile_type.py0000666000175100017510000000222113236151340024055 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class ProfileType(resource.Resource): resource_key = 'profile_type' resources_key = 'profile_types' base_path = '/profile-types' service = clustering_service.ClusteringService() # Capabilities allow_list = True allow_get = True # Properties #: Name of the profile type. name = resource.Body('name', alternate_id=True) #: The schema of the profile type. 
schema = resource.Body('schema') #: The support status of the profile type support_status = resource.Body('support_status') openstacksdk-0.11.3/openstack/clustering/v1/cluster.py0000666000175100017510000001513513236151340023045 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource from openstack import utils class Cluster(resource.Resource): resource_key = 'cluster' resources_key = 'clusters' base_path = '/clusters' service = clustering_service.ClusteringService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'name', 'status', 'sort', 'global_project') # Properties #: The name of the cluster. name = resource.Body('name') #: The ID of the profile used by this cluster. profile_id = resource.Body('profile_id') #: The ID of the user who created this cluster, thus the owner of it. user_id = resource.Body('user') #: The ID of the project this cluster belongs to. project_id = resource.Body('project') #: The domain ID of the cluster owner. domain_id = resource.Body('domain') #: Timestamp of when the cluster was initialized. #: *Type: datetime object parsed from ISO 8601 formatted string* init_at = resource.Body('init_at') #: Timestamp of when the cluster was created. 
#: *Type: datetime object parsed from ISO 8601 formatted string* created_at = resource.Body('created_at') #: Timestamp of when the cluster was last updated. #: *Type: datetime object parsed from ISO 8601 formatted string* updated_at = resource.Body('updated_at') #: Lower bound (inclusive) for the size of the cluster. min_size = resource.Body('min_size', type=int) #: Upper bound (inclusive) for the size of the cluster. A value of #: -1 indicates that there is no upper limit of cluster size. max_size = resource.Body('max_size', type=int) #: Desired capacity for the cluster. A cluster would be created at the #: scale specified by this value. desired_capacity = resource.Body('desired_capacity', type=int) #: Default timeout (in seconds) for cluster operations. timeout = resource.Body('timeout') #: A string representation of the cluster status. status = resource.Body('status') #: A string describing the reason why the cluster in current status. status_reason = resource.Body('status_reason') #: A dictionary configuration for cluster. config = resource.Body('config', type=dict) #: A collection of key-value pairs that are attached to the cluster. metadata = resource.Body('metadata', type=dict) #: A dictionary with some runtime data associated with the cluster. data = resource.Body('data', type=dict) #: A list IDs of nodes that are members of the cluster. node_ids = resource.Body('nodes') #: Name of the profile used by the cluster. profile_name = resource.Body('profile_name') #: Specify whether the cluster update should only pertain to the profile. 
is_profile_only = resource.Body('profile_only', type=bool) #: A dictionary with dependency information of the cluster dependents = resource.Body('dependents', type=dict) def action(self, session, body): url = utils.urljoin(self.base_path, self._get_id(self), 'actions') resp = session.post(url, json=body) return resp.json() def add_nodes(self, session, nodes): body = { 'add_nodes': { 'nodes': nodes, } } return self.action(session, body) def del_nodes(self, session, nodes, **params): data = {'nodes': nodes} data.update(params) body = { 'del_nodes': data } return self.action(session, body) def replace_nodes(self, session, nodes): body = { 'replace_nodes': { 'nodes': nodes, } } return self.action(session, body) def scale_out(self, session, count=None): body = { 'scale_out': { 'count': count, } } return self.action(session, body) def scale_in(self, session, count=None): body = { 'scale_in': { 'count': count, } } return self.action(session, body) def resize(self, session, **params): body = { 'resize': params } return self.action(session, body) def policy_attach(self, session, policy_id, **params): data = {'policy_id': policy_id} data.update(params) body = { 'policy_attach': data } return self.action(session, body) def policy_detach(self, session, policy_id): body = { 'policy_detach': { 'policy_id': policy_id, } } return self.action(session, body) def policy_update(self, session, policy_id, **params): data = {'policy_id': policy_id} data.update(params) body = { 'policy_update': data } return self.action(session, body) def check(self, session, **params): body = { 'check': params } return self.action(session, body) def recover(self, session, **params): body = { 'recover': params } return self.action(session, body) def op(self, session, operation, **params): """Perform an operation on the cluster. :param session: A session object used for sending request. :param operation: A string representing the operation to be performed. 
:param dict params: An optional dict providing the parameters for the operation. :returns: A dictionary containing the action ID. """ url = utils.urljoin(self.base_path, self.id, 'ops') resp = session.post(url, json={operation: params}) return resp.json() def force_delete(self, session): """Force delete a cluster.""" body = {'force': True} url = utils.urljoin(self.base_path, self.id) resp = session.delete(url, json=body) self._translate_response(resp) return self openstacksdk-0.11.3/openstack/clustering/v1/policy.py0000666000175100017510000000405513236151340022662 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class Policy(resource.Resource): resource_key = 'policy' resources_key = 'policies' base_path = '/policies' service = clustering_service.ClusteringService() # Capabilities allow_list = True allow_get = True allow_create = True allow_delete = True allow_update = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'name', 'type', 'sort', 'global_project') # Properties #: The name of the policy. name = resource.Body('name') #: The type name of the policy. type = resource.Body('type') #: The ID of the project this policy belongs to. project_id = resource.Body('project') # The domain ID of the policy. domain_id = resource.Body('domain') #: The ID of the user who created this policy. 
user_id = resource.Body('user') #: The timestamp when the policy is created. created_at = resource.Body('created_at') #: The timestamp when the policy was last updated. updated_at = resource.Body('updated_at') #: The specification of the policy. spec = resource.Body('spec', type=dict) #: A dictionary containing runtime data of the policy. data = resource.Body('data', type=dict) class PolicyValidate(Policy): base_path = '/policies/validate' # Capabilities allow_list = False allow_get = False allow_create = True allow_delete = False allow_update = False update_method = 'PUT' openstacksdk-0.11.3/openstack/clustering/v1/node.py0000666000175100017510000001462413236151340022313 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource from openstack import utils class Node(resource.Resource): resource_key = 'node' resources_key = 'nodes' base_path = '/nodes' service = clustering_service.ClusteringService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'show_details', 'name', 'sort', 'global_project', 'cluster_id', 'status') # Properties #: The name of the node. name = resource.Body('name') #: The ID of the physical object that backs the node. physical_id = resource.Body('physical_id') #: The ID of the cluster in which this node is a member. 
#: A node is an orphan node if this field is empty. cluster_id = resource.Body('cluster_id') #: The ID of the profile used by this node. profile_id = resource.Body('profile_id') #: The domain ID of the node. domain_id = resource.Body('domain') #: The ID of the user who created this node. user_id = resource.Body('user') #: The ID of the project this node belongs to. project_id = resource.Body('project') #: The name of the profile used by this node. profile_name = resource.Body('profile_name') #: An integer that is unique inside the owning cluster. #: A value of -1 means this node is an orphan node. index = resource.Body('index', type=int) #: A string indicating the role the node plays in a cluster. role = resource.Body('role') #: The timestamp of the node object's initialization. #: *Type: datetime object parsed from ISO 8601 formatted string* init_at = resource.Body('init_at') #: The timestamp of the node's creation, i.e. the physical object #: represented by this node is also created. #: *Type: datetime object parsed from ISO 8601 formatted string* created_at = resource.Body('created_at') #: The timestamp the node was last updated. #: *Type: datetime object parsed from ISO 8601 formatted string* updated_at = resource.Body('updated_at') #: A string indicating the node's status. status = resource.Body('status') #: A string describing why the node entered its current status. status_reason = resource.Body('status_reason') #: A map containing key-value pairs attached to the node. metadata = resource.Body('metadata', type=dict) #: A map containing some runtime data for this node. data = resource.Body('data', type=dict) #: A map containing the details of the physical object this node #: represents details = resource.Body('details', type=dict) #: A map containing the dependency of nodes dependents = resource.Body('dependents', type=dict) def _action(self, session, body): """Procedure to invoke an action API. :param session: A session object used for sending request.
:param body: The body of action to be sent. """ url = utils.urljoin(self.base_path, self.id, 'actions') resp = session.post(url, json=body) return resp.json() def check(self, session, **params): """An action procedure for the node to check its health status. :param session: A session object used for sending request. :returns: A dictionary containing the action ID. """ body = { 'check': params } return self._action(session, body) def recover(self, session, **params): """An action procedure for the node to recover. :param session: A session object used for sending request. :returns: A dictionary containing the action ID. """ body = { 'recover': params } return self._action(session, body) def op(self, session, operation, **params): """Perform an operation on the specified node. :param session: A session object used for sending request. :param operation: A string representing the operation to be performed. :param dict params: An optional dict providing the parameters for the operation. :returns: A dictionary containing the action ID. """ url = utils.urljoin(self.base_path, self.id, 'ops') resp = session.post(url, json={operation: params}) return resp.json() def adopt(self, session, preview=False, **params): """Adopt a node for management. :param session: A session object used for sending request. :param preview: A boolean indicating whether the adoption is a preview. A "preview" does not create the node object. :param dict params: A dict providing the details of a node to be adopted. 
""" if preview: path = 'adopt-preview' attrs = { 'identity': params.get('identity'), 'overrides': params.get('overrides'), 'type': params.get('type'), 'snapshot': params.get('snapshot') } else: path = 'adopt' attrs = params url = utils.urljoin(self.base_path, path) resp = session.post(url, json=attrs) if preview: return resp.json() self._translate_response(resp) return self def force_delete(self, session): """Force delete a node.""" body = {'force': True} url = utils.urljoin(self.base_path, self.id) resp = session.delete(url, json=body) self._translate_response(resp) return self class NodeDetail(Node): base_path = '/nodes/%(node_id)s?show_details=True' allow_create = False allow_get = True allow_update = False allow_delete = False allow_list = False node_id = resource.URI('node_id') openstacksdk-0.11.3/openstack/clustering/v1/service.py0000666000175100017510000000253013236151340023017 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.clustering import clustering_service from openstack import resource class Service(resource.Resource): resource_key = 'service' resources_key = 'services' base_path = '/services' service = clustering_service.ClusteringService() # Capabilities allow_list = True # Properties #: Status of service status = resource.Body('status') #: State of service state = resource.Body('state') #: Name of service binary = resource.Body('binary') #: Disabled reason of service disabled_reason = resource.Body('disabled_reason') #: Host where service runs host = resource.Body('host') #: The timestamp the service was last updated. #: *Type: datetime object parsed from ISO 8601 formatted string* updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/clustering/v1/__init__.py0000666000175100017510000000000013236151340023104 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/clustering/v1/profile.py0000666000175100017510000000405413236151340023022 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
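Several of the `_query_mapping` declarations above rename parameters (e.g. `user_id='user'` on Receiver, `obj_id='oid'` on Event): the SDK-side name is translated to the server's wire name when the query string is built. A simplified sketch of that remapping (not the real `resource.QueryParameters`, which also handles limit/marker paging keys):

```python
class QueryParameters(object):
    """Sketch: map client-side query parameter names to wire names."""

    def __init__(self, *names, **renames):
        # Plain names map to themselves; keyword arguments rename
        # client-side name -> server-side name.
        self._mapping = {name: name for name in names}
        self._mapping.update(renames)

    def transpose(self, query):
        # Keep only known parameters, translated to their wire names;
        # anything unrecognized is silently dropped.
        return {self._mapping[k]: v for k, v in query.items()
                if k in self._mapping}


qp = QueryParameters('name', 'type', user_id='user')
print(qp.transpose({'name': 'r1', 'user_id': 'u-42', 'bogus': 1}))
# {'name': 'r1', 'user': 'u-42'}
```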
from openstack.clustering import clustering_service from openstack import resource class Profile(resource.Resource): resource_key = 'profile' resources_key = 'profiles' base_path = '/profiles' service = clustering_service.ClusteringService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'sort', 'global_project', 'type', 'name') # Properties #: The name of the profile name = resource.Body('name') #: The type of the profile. type = resource.Body('type') #: The ID of the project this profile belongs to. project_id = resource.Body('project') #: The domain ID of the profile. domain_id = resource.Body('domain') #: The ID of the user who created this profile. user_id = resource.Body('user') #: The spec of the profile. spec = resource.Body('spec', type=dict) #: A collection of key-value pairs that are attached to the profile. metadata = resource.Body('metadata', type=dict) #: Timestamp of when the profile was created. created_at = resource.Body('created_at') #: Timestamp of when the profile was last updated. updated_at = resource.Body('updated_at') class ProfileValidate(Profile): base_path = '/profiles/validate' allow_create = True allow_get = False allow_update = False allow_delete = False allow_list = False update_method = 'PUT' openstacksdk-0.11.3/openstack/clustering/v1/event.py0000666000175100017510000000410613236151340022501 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class Event(resource.Resource): resource_key = 'event' resources_key = 'events' base_path = '/events' service = clustering_service.ClusteringService() # Capabilities allow_list = True allow_get = True _query_mapping = resource.QueryParameters( 'cluster_id', 'action', 'level', 'sort', 'global_project', obj_id='oid', obj_name='oname', obj_type='otype', ) # Properties #: Timestamp string (in ISO8601 format) when the event was generated. generated_at = resource.Body('timestamp') #: The UUID of the object related to this event. obj_id = resource.Body('oid') #: The name of the object related to this event. obj_name = resource.Body('oname') #: The type name of the object related to this event. obj_type = resource.Body('otype') #: The UUID of the cluster related to this event, if any. cluster_id = resource.Body('cluster_id') #: The event level (priority). level = resource.Body('level') #: The ID of the user. user_id = resource.Body('user') #: The ID of the project (tenant). project_id = resource.Body('project') #: The string representation of the action associated with the event. action = resource.Body('action') #: The status of the associated object. status = resource.Body('status') #: A string description of the reason that brought the object into its #: current status. status_reason = resource.Body('status_reason') openstacksdk-0.11.3/openstack/clustering/v1/receiver.py0000666000175100017510000000442113236151340023164 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.clustering import clustering_service from openstack import resource class Receiver(resource.Resource): resource_key = 'receiver' resources_key = 'receivers' base_path = '/receivers' service = clustering_service.ClusteringService() # Capabilities allow_list = True allow_get = True allow_create = True allow_update = True allow_delete = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'name', 'type', 'cluster_id', 'action', 'sort', 'global_project', user_id='user') # Properties #: The name of the receiver. name = resource.Body('name') #: The type of the receiver. type = resource.Body('type') #: The ID of the user who created the receiver, thus the owner of it. user_id = resource.Body('user') #: The ID of the project this receiver belongs to. project_id = resource.Body('project') #: The domain ID of the receiver. domain_id = resource.Body('domain') #: The ID of the targeted cluster. cluster_id = resource.Body('cluster_id') #: The name of the targeted action. action = resource.Body('action') #: Timestamp of when the receiver was created. created_at = resource.Body('created_at') #: Timestamp of when the receiver was last updated. updated_at = resource.Body('updated_at') #: The credential of the impersonated user. actor = resource.Body('actor', type=dict) #: A dictionary containing key-value pairs that are provided to the #: targeted action. params = resource.Body('params', type=dict) #: The information about the channel through which you can trigger the #: receiver hence the associated action. 
    channel = resource.Body('channel', type=dict)

openstacksdk-0.11.3/openstack/clustering/v1/cluster_policy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.clustering import clustering_service
from openstack import resource


class ClusterPolicy(resource.Resource):
    resource_key = 'cluster_policy'
    resources_key = 'cluster_policies'
    base_path = '/clusters/%(cluster_id)s/policies'
    service = clustering_service.ClusteringService()

    # Capabilities
    allow_list = True
    allow_get = True

    _query_mapping = resource.QueryParameters(
        'sort', 'policy_name', 'policy_type', is_enabled='enabled')

    # Properties
    #: ID of the policy object.
    policy_id = resource.Body('policy_id', alternate_id=True)
    #: Name of the policy object.
    policy_name = resource.Body('policy_name')
    #: ID of the cluster object.
    cluster_id = resource.URI('cluster_id')
    #: Name of the cluster object.
    cluster_name = resource.Body('cluster_name')
    #: Type string of the policy.
    policy_type = resource.Body('policy_type')
    #: Whether the policy is enabled on the cluster. *Type: bool*
    is_enabled = resource.Body('enabled', type=bool)
    #: Data associated with the cluster-policy binding.
    data = resource.Body('data', type=dict)

openstacksdk-0.11.3/openstack/clustering/v1/_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.clustering.v1 import action as _action
from openstack.clustering.v1 import build_info
from openstack.clustering.v1 import cluster as _cluster
from openstack.clustering.v1 import cluster_attr as _cluster_attr
from openstack.clustering.v1 import cluster_policy as _cluster_policy
from openstack.clustering.v1 import event as _event
from openstack.clustering.v1 import node as _node
from openstack.clustering.v1 import policy as _policy
from openstack.clustering.v1 import policy_type as _policy_type
from openstack.clustering.v1 import profile as _profile
from openstack.clustering.v1 import profile_type as _profile_type
from openstack.clustering.v1 import receiver as _receiver
from openstack.clustering.v1 import service as _service
from openstack import proxy
from openstack import resource
from openstack import utils


class Proxy(proxy.BaseProxy):

    def get_build_info(self):
        """Get build info for service engine and API.

        :returns: A dictionary containing the API and engine revision string.
        """
        return self._get(build_info.BuildInfo, requires_id=False)

    def profile_types(self, **query):
        """Get a generator of profile types.

        :returns: A generator of objects that are of type
            :class:`~openstack.clustering.v1.profile_type.ProfileType`
        """
        return self._list(_profile_type.ProfileType, paginated=False, **query)

    def get_profile_type(self, profile_type):
        """Get the details about a profile_type.

        :param profile_type: The name of the profile_type to retrieve or an
            object of
            :class:`~openstack.clustering.v1.profile_type.ProfileType`.
        :returns: A :class:`~openstack.clustering.v1.profile_type.ProfileType`
            object.
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            profile_type matching the name could be found.
        """
        return self._get(_profile_type.ProfileType, profile_type)

    def policy_types(self, **query):
        """Get a generator of policy types.

        :returns: A generator of objects that are of type
            :class:`~openstack.clustering.v1.policy_type.PolicyType`
        """
        return self._list(_policy_type.PolicyType, paginated=False, **query)

    def get_policy_type(self, policy_type):
        """Get the details about a policy_type.

        :param policy_type: The name of a policy_type or an object of
            :class:`~openstack.clustering.v1.policy_type.PolicyType`.
        :returns: A :class:`~openstack.clustering.v1.policy_type.PolicyType`
            object.
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            policy_type matching the name could be found.
        """
        return self._get(_policy_type.PolicyType, policy_type)

    def create_profile(self, **attrs):
        """Create a new profile from attributes.

        :param dict attrs: Keyword arguments that will be used to create a
            :class:`~openstack.clustering.v1.profile.Profile`, it is
            comprised of the properties on the Profile class.
        :returns: The results of profile creation.
        :rtype: :class:`~openstack.clustering.v1.profile.Profile`.
        """
        return self._create(_profile.Profile, **attrs)

    def delete_profile(self, profile, ignore_missing=True):
        """Delete a profile.

        :param profile: The value can be either the name or ID of a profile
            or a :class:`~openstack.clustering.v1.profile.Profile` instance.
        :param bool ignore_missing: When set to ``False``, an exception
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the profile could not be found. When set to ``True``, no
            exception will be raised when attempting to delete a non-existent
            profile.
        :returns: ``None``
        """
        self._delete(_profile.Profile, profile, ignore_missing=ignore_missing)

    def find_profile(self, name_or_id, ignore_missing=True):
        """Find a single profile.

        :param str name_or_id: The name or ID of a profile.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.clustering.v1.profile.Profile`
            object or None
        """
        return self._find(_profile.Profile, name_or_id,
                          ignore_missing=ignore_missing)

    def get_profile(self, profile):
        """Get a single profile.

        :param profile: The value can be the name or ID of a profile or a
            :class:`~openstack.clustering.v1.profile.Profile` instance.
        :returns: One :class:`~openstack.clustering.v1.profile.Profile`
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            profile matching the criteria could be found.
        """
        return self._get(_profile.Profile, profile)

    def profiles(self, **query):
        """Retrieve a generator of profiles.

        :param kwargs \*\*query: Optional query parameters to be sent to
            restrict the profiles to be returned. Available parameters
            include:

            * name: The name of a profile.
            * type: The type name of a profile.
            * metadata: A list of key-value pairs that are associated with a
              profile.
            * sort: A list of sorting keys separated by commas. Each sorting
              key can optionally be attached with a sorting direction
              modifier which can be ``asc`` or ``desc``.
            * limit: Requests a specified size of returned items from the
              query. Returns a number of items up to the specified limit
              value.
            * marker: Specifies the ID of the last-seen item.
              Use the limit parameter to make an initial limited request and
              use the ID of the last-seen item from the response as the
              marker parameter value in a subsequent limited request.
            * global_project: A boolean value indicating whether profiles
              from all projects will be returned.

        :returns: A generator of profile instances.
        """
        return self._list(_profile.Profile, paginated=True, **query)

    def update_profile(self, profile, **attrs):
        """Update a profile.

        :param profile: Either the name or the ID of the profile, or an
            instance of :class:`~openstack.clustering.v1.profile.Profile`.
        :param attrs: The attributes to update on the profile represented by
            the ``profile`` parameter.
        :returns: The updated profile.
        :rtype: :class:`~openstack.clustering.v1.profile.Profile`
        """
        return self._update(_profile.Profile, profile, **attrs)

    def validate_profile(self, **attrs):
        """Validate a profile spec.

        :param dict attrs: Keyword arguments that will be used to create a
            :class:`~openstack.clustering.v1.profile.ProfileValidate`, it is
            comprised of the properties on the Profile class.
        :returns: The results of profile validation.
        :rtype: :class:`~openstack.clustering.v1.profile.ProfileValidate`.
        """
        return self._create(_profile.ProfileValidate, **attrs)

    def create_cluster(self, **attrs):
        """Create a new cluster from attributes.

        :param dict attrs: Keyword arguments that will be used to create a
            :class:`~openstack.clustering.v1.cluster.Cluster`, it is
            comprised of the properties on the Cluster class.
        :returns: The results of cluster creation.
        :rtype: :class:`~openstack.clustering.v1.cluster.Cluster`.
        """
        return self._create(_cluster.Cluster, **attrs)

    def delete_cluster(self, cluster, ignore_missing=True,
                       force_delete=False):
        """Delete a cluster.

        :param cluster: The value can be either the name or ID of a cluster
            or a :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :param bool ignore_missing: When set to ``False``, an exception
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the cluster could not be found. When set to ``True``, no
            exception will be raised when attempting to delete a non-existent
            cluster.
        :param bool force_delete: When set to ``True``, the cluster deletion
            will be forced immediately.
        :returns: The instance of the Cluster which was deleted.
        :rtype: :class:`~openstack.clustering.v1.cluster.Cluster`.
        """
        if force_delete:
            cluster_obj = self._get_resource(_cluster.Cluster, cluster)
            return cluster_obj.force_delete(self)
        else:
            return self._delete(_cluster.Cluster, cluster,
                                ignore_missing=ignore_missing)

    def find_cluster(self, name_or_id, ignore_missing=True):
        """Find a single cluster.

        :param str name_or_id: The name or ID of a cluster.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.clustering.v1.cluster.Cluster`
            object or None
        """
        return self._find(_cluster.Cluster, name_or_id,
                          ignore_missing=ignore_missing)

    def get_cluster(self, cluster):
        """Get a single cluster.

        :param cluster: The value can be the name or ID of a cluster or a
            :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :returns: One :class:`~openstack.clustering.v1.cluster.Cluster`
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            cluster matching the criteria could be found.
        """
        return self._get(_cluster.Cluster, cluster)

    def clusters(self, **query):
        """Retrieve a generator of clusters.

        :param kwargs \*\*query: Optional query parameters to be sent to
            restrict the clusters to be returned. Available parameters
            include:

            * name: The name of a cluster.
            * status: The current status of a cluster.
            * sort: A list of sorting keys separated by commas.
              Each sorting key can optionally be attached with a sorting
              direction modifier which can be ``asc`` or ``desc``.
            * limit: Requests a specified size of returned items from the
              query. Returns a number of items up to the specified limit
              value.
            * marker: Specifies the ID of the last-seen item. Use the limit
              parameter to make an initial limited request and use the ID of
              the last-seen item from the response as the marker parameter
              value in a subsequent limited request.
            * global_project: A boolean value indicating whether clusters
              from all projects will be returned.

        :returns: A generator of cluster instances.
        """
        return self._list(_cluster.Cluster, paginated=True, **query)

    def update_cluster(self, cluster, **attrs):
        """Update a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param attrs: The attributes to update on the cluster represented by
            the ``cluster`` parameter.
        :returns: The updated cluster.
        :rtype: :class:`~openstack.clustering.v1.cluster.Cluster`
        """
        return self._update(_cluster.Cluster, cluster, **attrs)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details="Use add_nodes_to_cluster instead")
    def cluster_add_nodes(self, cluster, nodes):
        """Add nodes to a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param nodes: List of nodes to be added to the cluster.
        :returns: A dict containing the action initiated by this operation.
        """
        return self.add_nodes_to_cluster(cluster, nodes)

    def add_nodes_to_cluster(self, cluster, nodes):
        """Add nodes to a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param nodes: List of nodes to be added to the cluster.
        :returns: A dict containing the action initiated by this operation.
""" if isinstance(cluster, _cluster.Cluster): obj = cluster else: obj = self._find(_cluster.Cluster, cluster, ignore_missing=False) return obj.add_nodes(self, nodes) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use remove_nodes_from_cluster instead") def cluster_del_nodes(self, cluster, nodes, **params): """Remove nodes from a cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param nodes: List of nodes to be removed from the cluster. :param kwargs \*\*params: Optional query parameters to be sent to restrict the nodes to be returned. Available parameters include: * destroy_after_deletion: A boolean value indicating whether the deleted nodes to be destroyed right away. :returns: A dict containing the action initiated by this operation. """ return self.remove_nodes_from_cluster(cluster, nodes, **params) def remove_nodes_from_cluster(self, cluster, nodes, **params): """Remove nodes from a cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param nodes: List of nodes to be removed from the cluster. :param kwargs \*\*params: Optional query parameters to be sent to restrict the nodes to be returned. Available parameters include: * destroy_after_deletion: A boolean value indicating whether the deleted nodes to be destroyed right away. :returns: A dict containing the action initiated by this operation. """ if isinstance(cluster, _cluster.Cluster): obj = cluster else: obj = self._find(_cluster.Cluster, cluster, ignore_missing=False) return obj.del_nodes(self, nodes, **params) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use replace_nodes_in_cluster instead") def cluster_replace_nodes(self, cluster, nodes): """Replace the nodes in a cluster with specified nodes. 

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param nodes: List of nodes to be deleted/added to the cluster.
        :returns: A dict containing the action initiated by this operation.
        """
        return self.replace_nodes_in_cluster(cluster, nodes)

    def replace_nodes_in_cluster(self, cluster, nodes):
        """Replace the nodes in a cluster with specified nodes.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param nodes: List of nodes to be deleted/added to the cluster.
        :returns: A dict containing the action initiated by this operation.
        """
        if isinstance(cluster, _cluster.Cluster):
            obj = cluster
        else:
            obj = self._find(_cluster.Cluster, cluster, ignore_missing=False)
        return obj.replace_nodes(self, nodes)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details="Use scale_out_cluster instead")
    def cluster_scale_out(self, cluster, count=None):
        """Inflate the size of a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param count: Optional parameter specifying the number of nodes to
            be added.
        :returns: A dict containing the action initiated by this operation.
        """
        return self.scale_out_cluster(cluster, count)

    def scale_out_cluster(self, cluster, count=None):
        """Inflate the size of a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param count: Optional parameter specifying the number of nodes to
            be added.
        :returns: A dict containing the action initiated by this operation.
""" if isinstance(cluster, _cluster.Cluster): obj = cluster else: obj = self._find(_cluster.Cluster, cluster, ignore_missing=False) return obj.scale_out(self, count) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use scale_in_cluster instead") def cluster_scale_in(self, cluster, count=None): """Shrink the size of a cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param count: Optional parameter specifying the number of nodes to be removed. :returns: A dict containing the action initiated by this operation. """ return self.scale_in_cluster(cluster, count) def scale_in_cluster(self, cluster, count=None): """Shrink the size of a cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param count: Optional parameter specifying the number of nodes to be removed. :returns: A dict containing the action initiated by this operation. """ if isinstance(cluster, _cluster.Cluster): obj = cluster else: obj = self._find(_cluster.Cluster, cluster, ignore_missing=False) return obj.scale_in(self, count) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use resize_cluster instead") def cluster_resize(self, cluster, **params): """Resize of cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param dict \*\*params: A dictionary providing the parameters for the resize action. :returns: A dict containing the action initiated by this operation. """ return self.resize_cluster(cluster, **params) def resize_cluster(self, cluster, **params): """Resize of cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param dict \*\*params: A dictionary providing the parameters for the resize action. 
        :returns: A dict containing the action initiated by this operation.
        """
        if isinstance(cluster, _cluster.Cluster):
            obj = cluster
        else:
            obj = self._find(_cluster.Cluster, cluster, ignore_missing=False)
        return obj.resize(self, **params)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details="Use attach_policy_to_cluster instead")
    def cluster_attach_policy(self, cluster, policy, **params):
        """Attach a policy to a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param policy: Either the name or the ID of a policy.
        :param dict \*\*params: A dictionary containing the properties for
            the policy to be attached.
        :returns: A dict containing the action initiated by this operation.
        """
        return self.attach_policy_to_cluster(cluster, policy, **params)

    def attach_policy_to_cluster(self, cluster, policy, **params):
        """Attach a policy to a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param policy: Either the name or the ID of a policy.
        :param dict \*\*params: A dictionary containing the properties for
            the policy to be attached.
        :returns: A dict containing the action initiated by this operation.
        """
        if isinstance(cluster, _cluster.Cluster):
            obj = cluster
        else:
            obj = self._find(_cluster.Cluster, cluster, ignore_missing=False)
        return obj.policy_attach(self, policy, **params)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details="Use detach_policy_from_cluster instead")
    def cluster_detach_policy(self, cluster, policy):
        """Detach a policy from a cluster.

        :param cluster: Either the name or the ID of the cluster, or an
            instance of :class:`~openstack.clustering.v1.cluster.Cluster`.
        :param policy: Either the name or the ID of a policy.
        :returns: A dict containing the action initiated by this operation.
""" return self.detach_policy_from_cluster(cluster, policy) def detach_policy_from_cluster(self, cluster, policy): """Detach a policy from a cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param policy: Either the name or the ID of a policy. :returns: A dict containing the action initiated by this operation. """ if isinstance(cluster, _cluster.Cluster): obj = cluster else: obj = self._find(_cluster.Cluster, cluster, ignore_missing=False) return obj.policy_detach(self, policy) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use update_cluster_policy instead") def cluster_update_policy(self, cluster, policy, **params): """Change properties of a policy which is bound to the cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param policy: Either the name or the ID of a policy. :param dict \*\*params: A dictionary containing the new properties for the policy. :returns: A dict containing the action initiated by this operation. """ return self.update_cluster_policy(cluster, policy, **params) def update_cluster_policy(self, cluster, policy, **params): """Change properties of a policy which is bound to the cluster. :param cluster: Either the name or the ID of the cluster, or an instance of :class:`~openstack.clustering.v1.cluster.Cluster`. :param policy: Either the name or the ID of a policy. :param dict \*\*params: A dictionary containing the new properties for the policy. :returns: A dict containing the action initiated by this operation. """ if isinstance(cluster, _cluster.Cluster): obj = cluster else: obj = self._find(_cluster.Cluster, cluster, ignore_missing=False) return obj.policy_update(self, policy, **params) def collect_cluster_attrs(self, cluster, path): """Collect attribute values across a cluster. 

        :param cluster: The value can be either the ID of a cluster or a
            :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :param path: A JSON path string specifying the attribute to collect.
        :returns: A dictionary containing the list of attribute values.
        """
        return self._list(_cluster_attr.ClusterAttr, paginated=False,
                          cluster_id=cluster, path=path)

    def check_cluster(self, cluster, **params):
        """Check a cluster.

        :param cluster: The value can be either the ID of a cluster or a
            :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :param dict params: A dictionary providing the parameters for the
            check action.
        :returns: A dictionary containing the action ID.
        """
        obj = self._get_resource(_cluster.Cluster, cluster)
        return obj.check(self, **params)

    def recover_cluster(self, cluster, **params):
        """Recover a cluster.

        :param cluster: The value can be either the ID of a cluster or a
            :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :param dict params: A dictionary providing the parameters for the
            recover action.
        :returns: A dictionary containing the action ID.
        """
        obj = self._get_resource(_cluster.Cluster, cluster)
        return obj.recover(self, **params)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details="Use perform_operation_on_cluster instead")
    def cluster_operation(self, cluster, operation, **params):
        """Perform an operation on the specified cluster.

        :param cluster: The value can be either the ID of a cluster or a
            :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :param operation: A string specifying the operation to be performed.
        :param dict params: A dictionary providing the parameters for the
            operation.
        :returns: A dictionary containing the action ID.
        """
        return self.perform_operation_on_cluster(cluster, operation,
                                                 **params)

    def perform_operation_on_cluster(self, cluster, operation, **params):
        """Perform an operation on the specified cluster.

        :param cluster: The value can be either the ID of a cluster or a
            :class:`~openstack.clustering.v1.cluster.Cluster` instance.
        :param operation: A string specifying the operation to be performed.
        :param dict params: A dictionary providing the parameters for the
            operation.
        :returns: A dictionary containing the action ID.
        """
        obj = self._get_resource(_cluster.Cluster, cluster)
        return obj.op(self, operation, **params)

    def create_node(self, **attrs):
        """Create a new node from attributes.

        :param dict attrs: Keyword arguments that will be used to create a
            :class:`~openstack.clustering.v1.node.Node`, it is comprised of
            the properties on the ``Node`` class.
        :returns: The results of node creation.
        :rtype: :class:`~openstack.clustering.v1.node.Node`.
        """
        return self._create(_node.Node, **attrs)

    def delete_node(self, node, ignore_missing=True, force_delete=False):
        """Delete a node.

        :param node: The value can be either the name or ID of a node or a
            :class:`~openstack.clustering.v1.node.Node` instance.
        :param bool ignore_missing: When set to ``False``, an exception
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the node could not be found. When set to ``True``, no
            exception will be raised when attempting to delete a non-existent
            node.
        :param bool force_delete: When set to ``True``, the node deletion
            will be forced immediately.
        :returns: The instance of the Node which was deleted.
        :rtype: :class:`~openstack.clustering.v1.node.Node`.
        """
        if force_delete:
            node_obj = self._get_resource(_node.Node, node)
            return node_obj.force_delete(self)
        else:
            return self._delete(_node.Node, node,
                                ignore_missing=ignore_missing)

    def find_node(self, name_or_id, ignore_missing=True):
        """Find a single node.

        :param str name_or_id: The name or ID of a node.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the node does not exist. When set to ``True``, None will be
            returned when attempting to find a nonexistent node.
        :returns: One :class:`~openstack.clustering.v1.node.Node` object or
            None.
        """
        return self._find(_node.Node, name_or_id,
                          ignore_missing=ignore_missing)

    def get_node(self, node, details=False):
        """Get a single node.

        :param node: The value can be the name or ID of a node or a
            :class:`~openstack.clustering.v1.node.Node` instance.
        :param details: An optional argument that indicates whether the
            server should return more details when retrieving the node data.
        :returns: One :class:`~openstack.clustering.v1.node.Node`
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            node matching the name or ID could be found.
        """
        # NOTE: When retrieving node with details (using NodeDetail
        # resource), the `node_id` is treated as part of the base_path thus
        # a URI property rather than a resource ID as assumed by the _get()
        # method in base proxy.
        if details:
            return self._get(_node.NodeDetail, requires_id=False,
                             node_id=node)
        return self._get(_node.Node, node)

    def nodes(self, **query):
        """Retrieve a generator of nodes.

        :param kwargs \*\*query: Optional query parameters to be sent to
            restrict the nodes to be returned. Available parameters include:

            * cluster_id: A string including the name or ID of a cluster of
              which the resulting node(s) are members.
            * name: The name of a node.
            * status: The current status of a node.
            * sort: A list of sorting keys separated by commas. Each sorting
              key can optionally be attached with a sorting direction
              modifier which can be ``asc`` or ``desc``.
            * limit: Requests at most the specified number of items be
              returned from the query.
            * marker: Specifies the ID of the last-seen node. Use the limit
              parameter to make an initial limited request and use the ID of
              the last-seen node from the response as the marker parameter
              value in a subsequent limited request.
            * global_project: A boolean value indicating whether nodes from
              all projects will be returned.

        :returns: A generator of node instances.
        """
        return self._list(_node.Node, paginated=True, **query)

    def update_node(self, node, **attrs):
        """Update a node.

        :param node: Either the name or the ID of the node, or an instance
            of :class:`~openstack.clustering.v1.node.Node`.
        :param attrs: The attributes to update on the node represented by
            the ``node`` parameter.
        :returns: The updated node.
        :rtype: :class:`~openstack.clustering.v1.node.Node`
        """
        return self._update(_node.Node, node, **attrs)

    def check_node(self, node, **params):
        """Check the health of the specified node.

        :param node: The value can be either the ID of a node or a
            :class:`~openstack.clustering.v1.node.Node` instance.
        :param dict params: A dictionary providing the parameters to the
            check action.
        :returns: A dictionary containing the action ID.
        """
        obj = self._get_resource(_node.Node, node)
        return obj.check(self, **params)

    def recover_node(self, node, **params):
        """Recover the specified node into healthy status.

        :param node: The value can be either the ID of a node or a
            :class:`~openstack.clustering.v1.node.Node` instance.
        :param dict params: A dict supplying parameters to the recover
            action.
        :returns: A dictionary containing the action ID.
        """
        obj = self._get_resource(_node.Node, node)
        return obj.recover(self, **params)

    def adopt_node(self, preview=False, **attrs):
        """Adopt an existing resource as a node.

        :param preview: A boolean indicating whether this is a "preview"
            operation which means only the profile to be used is returned
            rather than creating a node object using that profile.
        :param dict attrs: Keyword parameters for node adoption. Valid
            parameters include:

            * type: (Required) A string containing the profile type and
              version to be used for node adoption. For example,
              ``os.nova.server-1.0``.
            * identity: (Required) A string including the name or ID of an
              OpenStack resource to be adopted as a Senlin node.
            * name: (Optional) The name of the node to be created. Omitting
              this parameter will have the node named automatically.
            * snapshot: (Optional) A boolean indicating whether a snapshot
              of the target resource should be created if possible. Default
              is False.
            * metadata: (Optional) A dictionary of arbitrary key-value pairs
              to be associated with the adopted node.
            * overrides: (Optional) A dictionary of key-value pairs to be
              used to override attributes derived from the target resource.

        :returns: The result of node adoption. If `preview` is set to False
            (default), returns a :class:`~openstack.clustering.v1.node.Node`
            object, otherwise a dict is returned containing the profile to
            be used for the new node.
        """
        node = self._get_resource(_node.Node, None)
        return node.adopt(self, preview=preview, **attrs)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details="Use perform_operation_on_node instead")
    def node_operation(self, node, operation, **params):
        """Perform an operation on the specified node.

        :param node: The value can be either the ID of a node or a
            :class:`~openstack.clustering.v1.node.Node` instance.
        :param operation: A string specifying the operation to be performed.
        :param dict params: A dictionary providing the parameters for the
            operation.
        :returns: A dictionary containing the action ID.
        """
        return self.perform_operation_on_node(node, operation, **params)

    def perform_operation_on_node(self, node, operation, **params):
        """Perform an operation on the specified node.

        :param node: The value can be either the ID of a node or a
            :class:`~openstack.clustering.v1.node.Node` instance.
        :param operation: A string specifying the operation to be performed.
        :param dict params: A dictionary providing the parameters for the
            operation.
        :returns: A dictionary containing the action ID.
        """
        obj = self._get_resource(_node.Node, node)
        return obj.op(self, operation, **params)

    def create_policy(self, **attrs):
        """Create a new policy from attributes.

        :param dict attrs: Keyword arguments that will be used to create a
            :class:`~openstack.clustering.v1.policy.Policy`, it is comprised
            of the properties on the ``Policy`` class.
        :returns: The results of policy creation.
        :rtype: :class:`~openstack.clustering.v1.policy.Policy`.
""" return self._create(_policy.Policy, **attrs) def delete_policy(self, policy, ignore_missing=True): """Delete a policy. :param policy: The value can be either the name or ID of a policy or a :class:`~openstack.clustering.v1.policy.Policy` instance. :param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the policy could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent policy. :returns: ``None`` """ self._delete(_policy.Policy, policy, ignore_missing=ignore_missing) def find_policy(self, name_or_id, ignore_missing=True): """Find a single policy. :param str name_or_id: The name or ID of a policy. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the specified policy does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent policy. :returns: A policy object or None. :rtype: :class:`~openstack.clustering.v1.policy.Policy` """ return self._find(_policy.Policy, name_or_id, ignore_missing=ignore_missing) def get_policy(self, policy): """Get a single policy. :param policy: The value can be the name or ID of a policy or a :class:`~openstack.clustering.v1.policy.Policy` instance. :returns: A policy object. :rtype: :class:`~openstack.clustering.v1.policy.Policy` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no policy matching the criteria could be found. """ return self._get(_policy.Policy, policy) def policies(self, **query): """Retrieve a generator of policies. :param kwargs \*\*query: Optional query parameters to be sent to restrict the policies to be returned. Available parameters include: * name: The name of a policy. * type: The type name of a policy. * sort: A list of sorting keys separated by commas. Each sorting key can optionally be attached with a sorting direction modifier which can be ``asc`` or ``desc``. 
* limit: Requests a specified size of returned items from the query. Returns a number of items up to the specified limit value. * marker: Specifies the ID of the last-seen item. Use the limit parameter to make an initial limited request and use the ID of the last-seen item from the response as the marker parameter value in a subsequent limited request. * global_project: A boolean value indicating whether policies from all projects will be returned. :returns: A generator of policy instances. """ return self._list(_policy.Policy, paginated=True, **query) def update_policy(self, policy, **attrs): """Update a policy. :param policy: Either the name or the ID of a policy, or an instance of :class:`~openstack.clustering.v1.policy.Policy`. :param attrs: The attributes to update on the policy represented by the ``value`` parameter. :returns: The updated policy. :rtype: :class:`~openstack.clustering.v1.policy.Policy` """ return self._update(_policy.Policy, policy, **attrs) def validate_policy(self, **attrs): """Validate a policy spec. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.clustering.v1.policy.PolicyValidate`, it is comprised of the properties on the Policy class. :returns: The results of Policy validation. :rtype: :class:`~openstack.clustering.v1.policy.PolicyValidate`. """ return self._create(_policy.PolicyValidate, **attrs) def cluster_policies(self, cluster, **query): """Retrieve a generator of cluster-policy bindings. :param cluster: The value can be the name or ID of a cluster or a :class:`~openstack.clustering.v1.cluster.Cluster` instance. :param kwargs \*\*query: Optional query parameters to be sent to restrict the policies to be returned. Available parameters include: * enabled: A boolean value indicating whether the policy is enabled on the cluster. :returns: A generator of cluster-policy binding instances. 
""" cluster_id = resource.Resource._get_id(cluster) return self._list(_cluster_policy.ClusterPolicy, paginated=False, cluster_id=cluster_id, **query) def get_cluster_policy(self, cluster_policy, cluster): """Get a cluster-policy binding. :param cluster_policy: The value can be the name or ID of a policy or a :class:`~openstack.clustering.v1.policy.Policy` instance. :param cluster: The value can be the name or ID of a cluster or a :class:`~openstack.clustering.v1.cluster.Cluster` instance. :returns: a cluster-policy binding object. :rtype: :class:`~openstack.clustering.v1.cluster_policy.CLusterPolicy` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no cluster-policy binding matching the criteria could be found. """ return self._get(_cluster_policy.ClusterPolicy, cluster_policy, cluster_id=cluster) def create_receiver(self, **attrs): """Create a new receiver from attributes. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.clustering.v1.receiver.Receiver`, it is comprised of the properties on the Receiver class. :returns: The results of receiver creation. :rtype: :class:`~openstack.clustering.v1.receiver.Receiver`. """ return self._create(_receiver.Receiver, **attrs) def update_receiver(self, receiver, **attrs): """Update a receiver. :param receiver: The value can be either the name or ID of a receiver or a :class:`~openstack.clustering.v1.receiver.Receiver` instance. :param attrs: The attributes to update on the receiver parameter. Valid attribute names include ``name``, ``action`` and ``params``. :returns: The updated receiver. :rtype: :class:`~openstack.clustering.v1.receiver.Receiver` """ return self._update(_receiver.Receiver, receiver, **attrs) def delete_receiver(self, receiver, ignore_missing=True): """Delete a receiver. :param receiver: The value can be either the name or ID of a receiver or a :class:`~openstack.clustering.v1.receiver.Receiver` instance. 
:param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the receiver could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent receiver. :returns: ``None`` """ self._delete(_receiver.Receiver, receiver, ignore_missing=ignore_missing) def find_receiver(self, name_or_id, ignore_missing=True): """Find a single receiver. :param str name_or_id: The name or ID of a receiver. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the specified receiver does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent receiver. :returns: A receiver object or None. :rtype: :class:`~openstack.clustering.v1.receiver.Receiver` """ return self._find(_receiver.Receiver, name_or_id, ignore_missing=ignore_missing) def get_receiver(self, receiver): """Get a single receiver. :param receiver: The value can be the name or ID of a receiver or a :class:`~openstack.clustering.v1.receiver.Receiver` instance. :returns: A receiver object. :rtype: :class:`~openstack.clustering.v1.receiver.Receiver` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no receiver matching the criteria could be found. """ return self._get(_receiver.Receiver, receiver) def receivers(self, **query): """Retrieve a generator of receivers. :param kwargs \*\*query: Optional query parameters for restricting the receivers to be returned. Available parameters include: * name: The name of a receiver object. * type: The type of receiver objects. * cluster_id: The ID of the associated cluster. * action: The name of the associated action. * sort: A list of sorting keys separated by commas. Each sorting key can optionally be attached with a sorting direction modifier which can be ``asc`` or ``desc``. 
* global_project: A boolean value indicating whether receivers from all projects will be returned. :returns: A generator of receiver instances. """ return self._list(_receiver.Receiver, paginated=True, **query) def get_action(self, action): """Get a single action. :param action: The value can be the name or ID of an action or a :class:`~openstack.clustering.v1.action.Action` instance. :returns: An action object. :rtype: :class:`~openstack.clustering.v1.action.Action` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no action matching the criteria could be found. """ return self._get(_action.Action, action) def actions(self, **query): """Retrieve a generator of actions. :param kwargs \*\*query: Optional query parameters to be sent to restrict the actions to be returned. Available parameters include: * name: The name of the action for the query. * target: ID of the target object for which the actions should be returned. * action: The built-in action type for the query. * sort: A list of sorting keys separated by commas. Each sorting key can optionally be attached with a sorting direction modifier which can be ``asc`` or ``desc``. * limit: Requests a specified size of returned items from the query. Returns a number of items up to the specified limit value. * marker: Specifies the ID of the last-seen item. Use the limit parameter to make an initial limited request and use the ID of the last-seen item from the response as the marker parameter value in a subsequent limited request. :returns: A generator of action instances. """ return self._list(_action.Action, paginated=True, **query) def get_event(self, event): """Get a single event. :param event: The value can be the name or ID of an event or a :class:`~openstack.clustering.v1.event.Event` instance. :returns: An event object. :rtype: :class:`~openstack.clustering.v1.event.Event` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no event matching the criteria could be found. 
""" return self._get(_event.Event, event) def events(self, **query): """Retrieve a generator of events. :param kwargs \*\*query: Optional query parameters to be sent to restrict the events to be returned. Available parameters include: * obj_name: name string of the object associated with an event. * obj_type: type string of the object related to an event. The value can be ``cluster``, ``node``, ``policy`` etc. * obj_id: ID of the object associated with an event. * cluster_id: ID of the cluster associated with the event, if any. * action: name of the action associated with an event. * sort: A list of sorting keys separated by commas. Each sorting key can optionally be attached with a sorting direction modifier which can be ``asc`` or ``desc``. * limit: Requests a specified size of returned items from the query. Returns a number of items up to the specified limit value. * marker: Specifies the ID of the last-seen item. Use the limit parameter to make an initial limited request and use the ID of the last-seen item from the response as the marker parameter value in a subsequent limited request. * global_project: A boolean specifying whether events from all projects should be returned. This option is subject to access control checking. :returns: A generator of event instances. """ return self._list(_event.Event, paginated=True, **query) def wait_for_status(self, res, status, failures=None, interval=2, wait=120): """Wait for a resource to be in a particular status. :param res: The resource to wait on to reach the specified status. The resource must have a ``status`` attribute. :type resource: A :class:`~openstack.resource.Resource` object. :param status: Desired status. :param failures: Statuses that would be interpreted as failures. :type failures: :py:class:`list` :param interval: Number of seconds to wait before to consecutive checks. Default to 2. :param wait: Maximum number of seconds to wait before the change. Default to 120. 
:returns: The resource is returned on success. :raises: :class:`~openstack.exceptions.ResourceTimeout` if transition to the desired status failed to occur in the specified seconds. :raises: :class:`~openstack.exceptions.ResourceFailure` if the resource has transitioned to one of the failure statuses. :raises: :class:`AttributeError` if the resource does not have a ``status`` attribute. """ failures = [] if failures is None else failures return resource.wait_for_status( self, res, status, failures, interval, wait) def wait_for_delete(self, res, interval=2, wait=120): """Wait for a resource to be deleted. :param res: The resource to wait on to be deleted. :type resource: A :class:`~openstack.resource.Resource` object. :param interval: Number of seconds to wait between two consecutive checks. Defaults to 2. :param wait: Maximum number of seconds to wait for the change. Defaults to 120. :returns: The resource is returned on success. :raises: :class:`~openstack.exceptions.ResourceTimeout` if transition to delete failed to occur in the specified seconds. """ return resource.wait_for_delete(self, res, interval, wait) def services(self, **query): """Get a generator of services. :returns: A generator of objects that are of type :class:`~openstack.clustering.v1.service.Service` """ return self._list(_service.Service, paginated=False, **query) openstacksdk-0.11.3/openstack/clustering/__init__.py0000666000175100017510000000000013236151340022556 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/image/0000775000175100017510000000000013236151501017357 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/image/v2/0000775000175100017510000000000013236151501017706 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/image/v2/image.py0000666000175100017510000003310513236151340021347 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import hashlib import jsonpatch from openstack import _log from openstack import exceptions from openstack.image import image_service from openstack import resource from openstack import utils _logger = _log.setup_logging('openstack') class Image(resource.Resource): resources_key = 'images' base_path = '/images' service = image_service.ImageService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( "name", "visibility", "member_status", "owner", "status", "size_min", "size_max", "sort_key", "sort_dir", "sort", "tag", "created_at", "updated_at") # NOTE: Do not add "self" support here. If you've used Python before, # you know that self, while not being a reserved word, has special # meaning. You can't call a class initializer with the self name # as the first argument and then additionally in kwargs, as we # do when we're constructing instances from the JSON body. # Resource.list explicitly pops off any "self" keys from bodies so # that we don't end up getting the following: # TypeError: __init__() got multiple values for argument 'self' # The image data (bytes or a file-like object) data = None # Properties #: Hash of the image data used. The Image service uses this value #: for verification. checksum = resource.Body('checksum') #: The container format refers to whether the VM image is in a file #: format that also contains metadata about the actual VM. #: Container formats include OVF and Amazon AMI. 
In addition, #: a VM image might not have a container format - instead, #: the image is just a blob of unstructured data. container_format = resource.Body('container_format') #: The date and time when the image was created. created_at = resource.Body('created_at') #: Valid values are: aki, ari, ami, raw, iso, vhd, vdi, qcow2, or vmdk. #: The disk format of a VM image is the format of the underlying #: disk image. Virtual appliance vendors have different formats #: for laying out the information contained in a VM disk image. disk_format = resource.Body('disk_format') #: Defines whether the image can be deleted. #: *Type: bool* is_protected = resource.Body('protected', type=bool) #: The minimum disk size in GB that is required to boot the image. min_disk = resource.Body('min_disk') #: The minimum amount of RAM in MB that is required to boot the image. min_ram = resource.Body('min_ram') #: The name of the image. name = resource.Body('name') #: The ID of the owner, or project, of the image. owner_id = resource.Body('owner') #: Properties, if any, that are associated with the image. properties = resource.Body('properties', type=dict) #: The size of the image data, in bytes. size = resource.Body('size', type=int) #: When present, Glance will attempt to store the disk image data in the #: backing store indicated by the value of the header. When not present, #: Glance will store the disk image data in the backing store that is #: marked default. Valid values are: file, s3, rbd, swift, cinder, #: gridfs, sheepdog, or vsphere. store = resource.Body('store') #: The image status. status = resource.Body('status') #: Tags, if any, that are associated with the image. tags = resource.Body('tags') #: The date and time when the image was updated. updated_at = resource.Body('updated_at') #: The virtual size of the image. virtual_size = resource.Body('virtual_size') #: The image visibility. visibility = resource.Body('visibility') #: The URL for the virtual machine image file. 
file = resource.Body('file') #: A list of URLs to access the image file in external store. #: This list appears if the show_multiple_locations option is set #: to true in the Image service's configuration file. locations = resource.Body('locations') #: The URL to access the image file kept in external store. It appears #: when you set the show_image_direct_url option to true in the #: Image service's configuration file. direct_url = resource.Body('direct_url') #: An image property. path = resource.Body('path') #: Value of image property used in add or replace operations expressed #: in JSON notation. For example, you must enclose strings in quotation #: marks, and you do not enclose numeric values in quotation marks. value = resource.Body('value') #: The URL to access the image file kept in external store. url = resource.Body('url') #: The location metadata. metadata = resource.Body('metadata', type=dict) # Additional Image Properties # https://docs.openstack.org/glance/latest/user/common-image-properties.html # http://docs.openstack.org/cli-reference/glance-property-keys.html #: The CPU architecture that must be supported by the hypervisor. architecture = resource.Body("architecture") #: The hypervisor type. Note that qemu is used for both QEMU and #: KVM hypervisor types. hypervisor_type = resource.Body("hypervisor-type") #: Optional property allows created servers to have a different bandwidth #: cap than that defined in the network they are attached to. instance_type_rxtx_factor = resource.Body( "instance_type_rxtx_factor", type=float) # For snapshot images, this is the UUID of the server used to #: create this image. instance_uuid = resource.Body('instance_uuid') #: Specifies whether the image needs a config drive. #: `mandatory` or `optional` (default if property is not used). needs_config_drive = resource.Body('img_config_drive') #: The ID of an image stored in the Image service that should be used #: as the kernel when booting an AMI-style image. 
kernel_id = resource.Body('kernel_id') #: The common name of the operating system distribution in lowercase os_distro = resource.Body('os_distro') #: The operating system version as specified by the distributor. os_version = resource.Body('os_version') #: Secure Boot is a security standard. When the instance starts, #: Secure Boot first examines software such as firmware and OS by #: their signature and only allows them to run if the signatures are valid. needs_secure_boot = resource.Body('os_secure_boot') #: The ID of image stored in the Image service that should be used as #: the ramdisk when booting an AMI-style image. ramdisk_id = resource.Body('ramdisk_id') #: The virtual machine mode. This represents the host/guest ABI #: (application binary interface) used for the virtual machine. vm_mode = resource.Body('vm_mode') #: The preferred number of sockets to expose to the guest. hw_cpu_sockets = resource.Body('hw_cpu_sockets', type=int) #: The preferred number of cores to expose to the guest. hw_cpu_cores = resource.Body('hw_cpu_cores', type=int) #: The preferred number of threads to expose to the guest. hw_cpu_threads = resource.Body('hw_cpu_threads', type=int) #: Specifies the type of disk controller to attach disk devices to. #: One of scsi, virtio, uml, xen, ide, or usb. hw_disk_bus = resource.Body('hw_disk_bus') #: Adds a random-number generator device to the image's instances. hw_rng_model = resource.Body('hw_rng_model') #: For libvirt: Enables booting an ARM system using the specified #: machine type. #: For Hyper-V: Specifies whether the Hyper-V instance will be a #: generation 1 or generation 2 VM. hw_machine_type = resource.Body('hw_machine_type') #: Enables the use of VirtIO SCSI (virtio-scsi) to provide block device #: access for compute instances; by default, instances use VirtIO Block #: (virtio-blk). hw_scsi_model = resource.Body('hw_scsi_model') #: Specifies the count of serial ports that should be provided. 
hw_serial_port_count = resource.Body('hw_serial_port_count', type=int) #: The video image driver used. hw_video_model = resource.Body('hw_video_model') #: Maximum RAM for the video image. hw_video_ram = resource.Body('hw_video_ram', type=int) #: Enables a virtual hardware watchdog device that carries out the #: specified action if the server hangs. hw_watchdog_action = resource.Body('hw_watchdog_action') #: The kernel command line to be used by the libvirt driver, instead #: of the default. os_command_line = resource.Body('os_command_line') #: Specifies the model of virtual network interface device to use. hw_vif_model = resource.Body('hw_vif_model') #: If true, this enables the virtio-net multiqueue feature. #: In this case, the driver sets the number of queues equal to the #: number of guest vCPUs. This makes the network performance scale #: across a number of vCPUs. is_hw_vif_multiqueue_enabled = resource.Body( 'hw_vif_multiqueue_enabled', type=bool) #: If true, enables the BIOS bootmenu. is_hw_boot_menu_enabled = resource.Body('hw_boot_menu', type=bool) #: The virtual SCSI or IDE controller used by the hypervisor. vmware_adaptertype = resource.Body('vmware_adaptertype') #: A VMware GuestID which describes the operating system installed #: in the image. vmware_ostype = resource.Body('vmware_ostype') #: If true, the root partition on the disk is automatically resized #: before the instance boots. has_auto_disk_config = resource.Body('auto_disk_config', type=bool) #: The operating system installed on the image. os_type = resource.Body('os_type') def _action(self, session, action): """Call an action on an image ID.""" url = utils.urljoin(self.base_path, self.id, 'actions', action) return session.post(url,) def deactivate(self, session): """Deactivate an image Note: Only administrative users can view image locations for deactivated images. 
""" self._action(session, "deactivate") def reactivate(self, session): """Reactivate an image Note: The image must exist in order to be reactivated. """ self._action(session, "reactivate") def add_tag(self, session, tag): """Add a tag to an image""" url = utils.urljoin(self.base_path, self.id, 'tags', tag) session.put(url,) def remove_tag(self, session, tag): """Remove a tag from an image""" url = utils.urljoin(self.base_path, self.id, 'tags', tag) session.delete(url,) def upload(self, session): """Upload data into an existing image""" url = utils.urljoin(self.base_path, self.id, 'file') session.put(url, data=self.data, headers={"Content-Type": "application/octet-stream", "Accept": ""}) def download(self, session, stream=False): """Download the data contained in an image""" # TODO(briancurtin): This method should probably offload the get # operation into another thread or something of that nature. url = utils.urljoin(self.base_path, self.id, 'file') resp = session.get(url, stream=stream) # See the following bug report for details on why the checksum # code may sometimes depend on a second GET call. # https://bugs.launchpad.net/python-openstacksdk/+bug/1619675 checksum = resp.headers.get("Content-MD5") if checksum is None: # If we don't receive the Content-MD5 header with the download, # make an additional call to get the image details and look at # the checksum attribute. details = self.get(session) checksum = details.checksum # if we are returning the repsonse object, ensure that it # has the content-md5 header so that the caller doesn't # need to jump through the same hoops through which we # just jumped. 
if stream: resp.headers['content-md5'] = checksum return resp if checksum is not None: digest = hashlib.md5(resp.content).hexdigest() if digest != checksum: raise exceptions.InvalidResponse( "checksum mismatch: %s != %s" % (checksum, digest)) else: _logger.warn( "Unable to verify the integrity of image %s" % (self.id)) return resp.content def update(self, session, **attrs): url = utils.urljoin(self.base_path, self.id) headers = { 'Content-Type': 'application/openstack-images-v2.1-json-patch', 'Accept': '' } original = self.to_dict() # Update values from **attrs so they can be passed to jsonpatch new = self.to_dict() new.update(**attrs) patch_string = jsonpatch.make_patch(original, new).to_string() resp = session.patch(url, data=patch_string, headers=headers) self._translate_response(resp, has_body=True) return self openstacksdk-0.11.3/openstack/image/v2/__init__.py0000666000175100017510000000000013236151340022010 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/image/v2/member.py0000666000175100017510000000330213236151340021530 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
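The integrity check performed by ``Image.download`` above can be sketched in isolation. This is a minimal standalone illustration of the same MD5 comparison, not part of the SDK; the function name is invented for the example:

```python
import hashlib

def verify_image_checksum(content, expected_md5):
    """Compare the MD5 digest of downloaded image bytes against the
    checksum reported by the Image service (the Content-MD5 header, or
    the image's ``checksum`` attribute when the header is absent).
    Returns None when no checksum is available, mirroring the
    'unable to verify' warning path in Image.download."""
    if expected_md5 is None:
        return None
    return hashlib.md5(content).hexdigest() == expected_md5
```

Note that ``Image.download`` raises ``InvalidResponse`` on a mismatch rather than returning ``False``; the boolean form here just isolates the comparison.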
from openstack.image import image_service from openstack import resource class Member(resource.Resource): resources_key = 'members' base_path = '/images/%(image_id)s/members' service = image_service.ImageService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # See https://bugs.launchpad.net/glance/+bug/1526991 for member/member_id # 'member' is documented incorrectly as being deprecated but it's the # only thing that works. 'member_id' is not accepted. #: The ID of the image member. An image member is a tenant #: with whom the image is shared. member_id = resource.Body('member', alternate_id=True) #: The date and time when the member was created. created_at = resource.Body('created_at') #: Image ID stored through the image API. Typically a UUID. image_id = resource.URI('image_id') #: The status of the image. status = resource.Body('status') #: The URL for schema of the member. schema = resource.Body('schema') #: The date and time when the member was updated. updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/image/v2/_proxy.py0000666000175100017510000003204413236151340021606 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
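The ``%(image_id)s`` segment in ``Member.base_path`` is a standard Python string-interpolation placeholder that the resource layer fills from the ``image_id`` URI attribute before issuing a request. A minimal sketch of that expansion, with a made-up image ID purely for illustration:

```python
# base_path as declared on the Member resource; the URI parameter
# supplies the image_id value at request time.
base_path = '/images/%(image_id)s/members'

def expand_base_path(image_id):
    """Fill the URI template the way the resource layer does."""
    return base_path % {'image_id': image_id}

# Hypothetical image UUID, for illustration only.
url = expand_base_path('dcf6e7a8-0000-0000-0000-000000000000')
```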
from openstack import exceptions from openstack.image.v2 import image as _image from openstack.image.v2 import member as _member from openstack import proxy from openstack import resource class Proxy(proxy.BaseProxy): def upload_image(self, container_format=None, disk_format=None, data=None, **attrs): """Upload a new image from attributes :param container_format: Format of the container. A valid value is ami, ari, aki, bare, ovf, ova, or docker. :param disk_format: The format of the disk. A valid value is ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, or iso. :param data: The data to be uploaded as an image. :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.image.v2.image.Image`, comprised of the properties on the Image class. :returns: The results of image creation :rtype: :class:`~openstack.image.v2.image.Image` """ # container_format and disk_format are required to be set # on the image by the time upload_image is called, but they're not # required by the _create call. Enforce them here so that we don't # need to handle a failure in _create, as upload_image will # return a 400 with a message about disk_format and container_format # not being set. if not all([container_format, disk_format]): raise exceptions.InvalidRequest( "Both container_format and disk_format are required") img = self._create(_image.Image, disk_format=disk_format, container_format=container_format, **attrs) # TODO(briancurtin): Perhaps we should run img.upload_image # in a background thread and just return what is called by # self._create, especially because the upload_image call doesn't # return anything anyway. Otherwise this blocks while uploading # significant amounts of image data. img.data = data img.upload(self) return img def download_image(self, image, stream=False): """Download an image This will download an image to memory when ``stream=False``, or allow streaming downloads using an iterator when ``stream=True``. 
For examples of working with streamed responses, see :ref:`download_image-stream-true`. :param image: The value can be either the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :param bool stream: When ``True``, return a :class:`requests.Response` instance allowing you to iterate over the response data stream instead of storing its entire contents in memory. See :meth:`requests.Response.iter_content` for more details. *NOTE*: If you do not consume the entirety of the response you must explicitly call :meth:`requests.Response.close` or otherwise risk inefficiencies with the ``requests`` library's handling of connections. When ``False``, return the entire contents of the response. :returns: The bytes comprising the given Image when stream is False, otherwise a :class:`requests.Response` instance. """ image = self._get_resource(_image.Image, image) return image.download(self, stream=stream) def delete_image(self, image, ignore_missing=True): """Delete an image :param image: The value can be either the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the image does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent image. :returns: ``None`` """ self._delete(_image.Image, image, ignore_missing=ignore_missing) def find_image(self, name_or_id, ignore_missing=True): """Find a single image :param name_or_id: The name or ID of an image. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. 
:returns: One :class:`~openstack.image.v2.image.Image` or None """ return self._find(_image.Image, name_or_id, ignore_missing=ignore_missing) def get_image(self, image): """Get a single image :param image: The value can be the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :returns: One :class:`~openstack.image.v2.image.Image` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_image.Image, image) def images(self, **query): """Return a generator of images :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of image objects :rtype: :class:`~openstack.image.v2.image.Image` """ return self._list(_image.Image, paginated=True, **query) def update_image(self, image, **attrs): """Update an image :param image: Either the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :attrs kwargs: The attributes to update on the image represented by ``value``. :returns: The updated image :rtype: :class:`~openstack.image.v2.image.Image` """ img = self._get_resource(_image.Image, image) return img.update(self, **attrs) def deactivate_image(self, image): """Deactivate an image :param image: Either the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :returns: None """ image = self._get_resource(_image.Image, image) image.deactivate(self) def reactivate_image(self, image): """Reactivate an image :param image: Either the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :returns: None """ image = self._get_resource(_image.Image, image) image.reactivate(self) def add_tag(self, image, tag): """Add a tag to an image :param image: The value can be the ID of an image or a :class:`~openstack.image.v2.image.Image` instance.
:param str tag: The tag to be added :returns: None """ image = self._get_resource(_image.Image, image) image.add_tag(self, tag) def remove_tag(self, image, tag): """Remove a tag from an image :param image: The value can be the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :param str tag: The tag to be removed :returns: None """ image = self._get_resource(_image.Image, image) image.remove_tag(self, tag) def add_member(self, image, **attrs): """Create a new member from attributes :param image: The value can be the ID of an image or a :class:`~openstack.image.v2.image.Image` instance that the member will be created for. :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.image.v2.member.Member`, comprised of the properties on the Member class. :returns: The results of member creation :rtype: :class:`~openstack.image.v2.member.Member` """ image_id = resource.Resource._get_id(image) return self._create(_member.Member, image_id=image_id, **attrs) def remove_member(self, member, image, ignore_missing=True): """Delete a member :param member: The value can be either the ID of a member or a :class:`~openstack.image.v2.member.Member` instance. :param image: The image that the member is part of. The value can be the ID of an image or a :class:`~openstack.image.v2.image.Image` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the member does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent member. :returns: ``None`` """ image_id = resource.Resource._get_id(image) member_id = resource.Resource._get_id(member) self._delete(_member.Member, member_id=member_id, image_id=image_id, ignore_missing=ignore_missing) def find_member(self, name_or_id, image, ignore_missing=True): """Find a single member :param name_or_id: The name or ID of a member. :param image: This is the image that the member belongs to. The value can be the ID of an image or a :class:`~openstack.image.v2.image.Image` instance.
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.image.v2.member.Member` or None """ image_id = resource.Resource._get_id(image) return self._find(_member.Member, name_or_id, image_id=image_id, ignore_missing=ignore_missing) def get_member(self, member, image): """Get a single member on an image :param member: The value can be the ID of a member or a :class:`~openstack.image.v2.member.Member` instance. :param image: This is the image that the member belongs to. The value can be the ID of a image or a :class:`~openstack.image.v2.image.Image` instance. :returns: One :class:`~openstack.image.v2.member.Member` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ member_id = resource.Resource._get_id(member) image_id = resource.Resource._get_id(image) return self._get(_member.Member, member_id=member_id, image_id=image_id) def members(self, image): """Return a generator of members :param image: This is the image that the member belongs to, the value can be the ID of a image or a :class:`~openstack.image.v2.image.Image` instance. :returns: A generator of member objects :rtype: :class:`~openstack.image.v2.member.Member` """ image_id = resource.Resource._get_id(image) return self._list(_member.Member, paginated=False, image_id=image_id) def update_member(self, member, image, **attrs): """Update the member of an image :param member: Either the ID of a member or a :class:`~openstack.image.v2.member.Member` instance. :param image: This is the image that the member belongs to. The value can be the ID of a image or a :class:`~openstack.image.v2.image.Image` instance. :attrs kwargs: The attributes to update on the member represented by ``value``. 
:returns: The updated member :rtype: :class:`~openstack.image.v2.member.Member` """ member_id = resource.Resource._get_id(member) image_id = resource.Resource._get_id(image) return self._update(_member.Member, member_id=member_id, image_id=image_id, **attrs) openstacksdk-0.11.3/openstack/image/image_service.py0000666000175100017510000000172613236151340022544 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class ImageService(service_filter.ServiceFilter): """The image service.""" valid_versions = [ service_filter.ValidVersion('v2'), service_filter.ValidVersion('v1') ] def __init__(self, version=None): """Create an image service.""" super(ImageService, self).__init__(service_type='image', version=version) openstacksdk-0.11.3/openstack/image/v1/0000775000175100017510000000000013236151501017705 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/image/v1/image.py0000666000175100017510000000610213236151340021343 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from openstack.image import image_service from openstack import resource class Image(resource.Resource): resource_key = 'image' resources_key = 'images' base_path = '/images' service = image_service.ImageService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True #: Hash of the image data used. The Image service uses this value #: for verification. checksum = resource.Body('checksum') #: The container format refers to whether the VM image is in a file #: format that also contains metadata about the actual VM. #: Container formats include OVF and Amazon AMI. In addition, #: a VM image might not have a container format - instead, #: the image is just a blob of unstructured data. container_format = resource.Body('container_format') #: A URL to copy an image from copy_from = resource.Body('copy_from') #: The timestamp when this image was created. created_at = resource.Body('created_at') #: Valid values are: aki, ari, ami, raw, iso, vhd, vdi, qcow2, or vmdk. #: The disk format of a VM image is the format of the underlying #: disk image. Virtual appliance vendors have different formats for #: laying out the information contained in a VM disk image. disk_format = resource.Body('disk_format') #: Defines whether the image can be deleted. #: *Type: bool* is_protected = resource.Body('protected', type=bool) #: ``True`` if this is a public image. #: *Type: bool* is_public = resource.Body('is_public', type=bool) #: A location for the image identified by a URI location = resource.Body('location') #: The minimum disk size in GB that is required to boot the image. min_disk = resource.Body('min_disk') #: The minimum amount of RAM in MB that is required to boot the image. min_ram = resource.Body('min_ram') #: Name for the image. Note that the name of an image is not unique #: to a Glance node. 
The API cannot expect users to know the names #: of images owned by others. name = resource.Body('name') #: The ID of the owner, or project, of the image. owner_id = resource.Body('owner') #: Properties, if any, that are associated with the image. properties = resource.Body('properties') #: The size of the image data, in bytes. size = resource.Body('size') #: The image status. status = resource.Body('status') #: The timestamp when this image was last updated. updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/image/v1/__init__.py0000666000175100017510000000000013236151340022007 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/image/v1/_proxy.py0000666000175100017510000000726213236151340021611 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.image.v1 import image as _image from openstack import proxy class Proxy(proxy.BaseProxy): def upload_image(self, **attrs): """Upload a new image from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.image.v1.image.Image`, comprised of the properties on the Image class. :returns: The results of image creation :rtype: :class:`~openstack.image.v1.image.Image` """ return self._create(_image.Image, **attrs) def delete_image(self, image, ignore_missing=True): """Delete an image :param image: The value can be either the ID of an image or a :class:`~openstack.image.v1.image.Image` instance. 
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the image does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent image. :returns: ``None`` """ self._delete(_image.Image, image, ignore_missing=ignore_missing) def find_image(self, name_or_id, ignore_missing=True): """Find a single image :param name_or_id: The name or ID of a image. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.image.v1.image.Image` or None """ return self._find(_image.Image, name_or_id, ignore_missing=ignore_missing) def get_image(self, image): """Get a single image :param image: The value can be the ID of an image or a :class:`~openstack.image.v1.image.Image` instance. :returns: One :class:`~openstack.image.v1.image.Image` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_image.Image, image) def images(self, **query): """Return a generator of images :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of image objects :rtype: :class:`~openstack.image.v1.image.Image` """ return self._list(_image.Image, paginated=True, **query) def update_image(self, image, **attrs): """Update a image :param image: Either the ID of a image or a :class:`~openstack.image.v1.image.Image` instance. :attrs kwargs: The attributes to update on the image represented by ``value``. 
:returns: The updated image :rtype: :class:`~openstack.image.v1.image.Image` """ return self._update(_image.Image, image, **attrs) openstacksdk-0.11.3/openstack/image/__init__.py0000666000175100017510000000000013236151340021461 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/__init__.py0000666000175100017510000000411313236151364020416 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. __all__ = [ 'connect', 'enable_logging', ] from openstack._log import enable_logging # noqa import openstack.config import openstack.connection def connect( cloud=None, app_name=None, app_version=None, options=None, load_yaml_config=True, load_envvars=True, **kwargs): """Create a :class:`~openstack.connection.Connection` :param string cloud: The name of the configuration to load from clouds.yaml. Defaults to 'envvars' which will load :param argparse.Namespace options: An argparse Namespace object. allows direct passing in of argparse options to be added to the cloud config. Values of None and '' will be removed. :param bool load_yaml_config: Whether or not to load config settings from clouds.yaml files. Defaults to True. :param bool load_envvars: Whether or not to load config settings from environment variables. Defaults to True. :param kwargs: Additional configuration options. 
:returns: openstack.connection.Connection :raises: keystoneauth1.exceptions.MissingRequiredOptions on missing required auth parameters """ cloud_region = openstack.config.get_cloud_region( cloud=cloud, app_name=app_name, app_version=app_version, load_yaml_config=load_yaml_config, load_envvars=load_envvars, options=options, **kwargs) return openstack.connection.Connection(config=cloud_region) openstacksdk-0.11.3/openstack/key_manager/0000775000175100017510000000000013236151501020557 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/key_manager/v1/0000775000175100017510000000000013236151501021105 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/key_manager/v1/order.py0000666000175100017510000000413213236151340022575 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
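The connect() docstring above notes that values of None and '' in the argparse Namespace are removed before being added to the cloud config. A minimal standalone sketch of that filtering step (the helper name and option set below are illustrative, not part of the SDK):

```python
import argparse

def namespace_to_config(options):
    """Drop None and empty-string values from an argparse Namespace,
    mirroring the filtering connect() documents (hypothetical helper)."""
    return {k: v for k, v in vars(options).items()
            if v is not None and v != ''}

parser = argparse.ArgumentParser()
parser.add_argument('--os-region-name', dest='region_name', default=None)
parser.add_argument('--os-interface', dest='interface', default='')
parser.add_argument('--os-cloud', dest='cloud', default='devstack')
opts = parser.parse_args([])

# region_name (None) and interface ('') are dropped; cloud survives.
print(namespace_to_config(opts))  # {'cloud': 'devstack'}
```

This keeps unset CLI flags from clobbering values that clouds.yaml or environment variables would otherwise supply.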
from openstack.key_manager import key_manager_service from openstack.key_manager.v1 import _format from openstack import resource class Order(resource.Resource): resources_key = 'orders' base_path = '/orders' service = key_manager_service.KeyManagerService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True #: Timestamp in ISO8601 format of when the order was created created_at = resource.Body('created') #: Keystone Id of the user who created the order creator_id = resource.Body('creator_id') #: A dictionary containing key-value parameters which specify the #: details of an order request meta = resource.Body('meta', type=dict) #: A URI for this order order_ref = resource.Body('order_ref') #: The ID of this order order_id = resource.Body( 'order_ref', alternate_id=True, type=_format.HREFToUUID) #: Secret href associated with the order secret_ref = resource.Body('secret_ref') #: Secret ID associated with the order secret_id = resource.Body('secret_ref', type=_format.HREFToUUID) # The status of this order status = resource.Body('status') #: Metadata associated with the order sub_status = resource.Body('sub_status') #: Metadata associated with the order sub_status_message = resource.Body('sub_status_message') # The type of order type = resource.Body('type') #: Timestamp in ISO8601 format of the last time the order was updated. updated_at = resource.Body('updated') openstacksdk-0.11.3/openstack/key_manager/v1/container.py0000666000175100017510000000344013236151340023445 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.key_manager import key_manager_service from openstack.key_manager.v1 import _format from openstack import resource class Container(resource.Resource): resources_key = 'containers' base_path = '/containers' service = key_manager_service.KeyManagerService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: A URI for this container container_ref = resource.Body('container_ref') #: The ID for this container container_id = resource.Body( 'container_ref', alternate_id=True, type=_format.HREFToUUID) #: The timestamp when this container was created. created_at = resource.Body('created') #: The name of this container name = resource.Body('name') #: A list of references to secrets in this container secret_refs = resource.Body('secret_refs', type=list) #: The status of this container status = resource.Body('status') #: The type of this container type = resource.Body('type') #: The timestamp when this container was updated. updated_at = resource.Body('updated') #: A party interested in this container. consumers = resource.Body('consumers', type=list) openstacksdk-0.11.3/openstack/key_manager/v1/_format.py0000666000175100017510000000266113236151340023116 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import format from six.moves.urllib import parse class HREFToUUID(format.Formatter): @classmethod def deserialize(cls, value): """Convert a HREF to the UUID portion""" parts = parse.urlsplit(value) # Only try to proceed if we have an actual URI. # Just check that we have a scheme, netloc, and path. if not all(parts[:3]): raise ValueError("Unable to convert %s to an ID" % value) # The UUID will be the last portion of the URI. return parts.path.split("/")[-1] @classmethod def serialize(cls, value): # NOTE(briancurtin): If we had access to the session to get # the endpoint we could do something smart here like take an ID # and give back an HREF, but this will just have to be something # that works different because Barbican does what it does... return value openstacksdk-0.11.3/openstack/key_manager/v1/__init__.py0000666000175100017510000000000013236151340023207 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/key_manager/v1/secret.py0000666000175100017510000001032513236151340022750 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
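HREFToUUID.deserialize above reduces a Barbican href to its trailing UUID segment, rejecting values that are not real URIs. The same logic sketched standalone with the stdlib (Python 3 spelling of the six.moves import used above; the example href is made up):

```python
from urllib import parse

def href_to_uuid(value):
    """Return the trailing ID segment of a resource href, as
    HREFToUUID.deserialize does."""
    parts = parse.urlsplit(value)
    # Only proceed for an actual URI: scheme, netloc, and path required.
    if not all(parts[:3]):
        raise ValueError("Unable to convert %s to an ID" % value)
    # The UUID is the last portion of the path.
    return parts.path.split("/")[-1]

href = "https://barbican.example.com/v1/secrets/6b3f4b3a-9f14-4c0f-8e7a-000000000000"
print(href_to_uuid(href))  # 6b3f4b3a-9f14-4c0f-8e7a-000000000000
```

This is why Secret, Order, and Container can expose a UUID-like `*_id` attribute even though Barbican itself only returns hrefs.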
from openstack.key_manager import key_manager_service from openstack.key_manager.v1 import _format from openstack import resource from openstack import utils class Secret(resource.Resource): resources_key = 'secrets' base_path = '/secrets' service = key_manager_service.KeyManagerService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( "name", "mode", "bits", "secret_type", "acl_only", "created", "updated", "expiration", "sort", algorithm="alg") # Properties #: Metadata provided by a user or system for informational purposes algorithm = resource.Body('algorithm') #: Metadata provided by a user or system for informational purposes. #: Value must be greater than zero. bit_length = resource.Body('bit_length') #: A list of content types content_types = resource.Body('content_types', type=dict) #: Once this timestamp has passed, the secret will no longer be available. expires_at = resource.Body('expiration') #: Timestamp of when the secret was created. created_at = resource.Body('created') #: Timestamp of when the secret was last updated. updated_at = resource.Body('updated') #: The type/mode of the algorithm associated with the secret information. mode = resource.Body('mode') #: The name of the secret set by the user name = resource.Body('name') #: A URI to the secret secret_ref = resource.Body('secret_ref') #: The ID of the secret # NOTE: This is not really how alternate IDs are supposed to work and # ultimately means this has to work differently than all other services # in all of OpenStack because of the departure from using actual IDs # that this service can't even use itself. secret_id = resource.Body( 'secret_ref', alternate_id=True, type=_format.HREFToUUID) #: Used to indicate the type of secret being stored.
secret_type = resource.Body('secret_type') #: The status of this secret status = resource.Body('status') #: A timestamp when this secret was updated. updated_at = resource.Body('updated') #: The secret's data to be stored. payload_content_type must also #: be supplied if payload is included. (optional) payload = resource.Body('payload') #: The media type for the content of the payload. #: (required if payload is included) payload_content_type = resource.Body('payload_content_type') #: The encoding used for the payload to be able to include it in #: the JSON request. Currently only base64 is supported. #: (required if payload is encoded) payload_content_encoding = resource.Body('payload_content_encoding') def get(self, session, requires_id=True, error_message=None): request = self._prepare_request(requires_id=requires_id) response = session.get(request.url).json() content_type = None if self.payload_content_type is not None: content_type = self.payload_content_type elif "content_types" in response: content_type = response["content_types"]["default"] # Only try to get the payload if a content type has been explicitly # specified or if one was found in the metadata response if content_type is not None: payload = session.get(utils.urljoin(request.url, "payload"), headers={"Accept": content_type}) response["payload"] = payload.text # We already have the JSON here so don't call into _translate_response self._update_from_body_attrs(response) return self openstacksdk-0.11.3/openstack/key_manager/v1/_proxy.py0000666000175100017510000002505013236151340023004 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.key_manager.v1 import container as _container from openstack.key_manager.v1 import order as _order from openstack.key_manager.v1 import secret as _secret from openstack import proxy class Proxy(proxy.BaseProxy): def create_container(self, **attrs): """Create a new container from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.key_manager.v1.container.Container`, comprised of the properties on the Container class. :returns: The results of container creation :rtype: :class:`~openstack.key_manager.v1.container.Container` """ return self._create(_container.Container, **attrs) def delete_container(self, container, ignore_missing=True): """Delete a container :param container: The value can be either the ID of a container or a :class:`~openstack.key_manager.v1.container.Container` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the container does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent container. :returns: ``None`` """ self._delete(_container.Container, container, ignore_missing=ignore_missing) def find_container(self, name_or_id, ignore_missing=True): """Find a single container :param name_or_id: The name or ID of a container. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. 
:returns: One :class:`~openstack.key_manager.v1.container.Container` or None """ return self._find(_container.Container, name_or_id, ignore_missing=ignore_missing) def get_container(self, container): """Get a single container :param container: The value can be the ID of a container or a :class:`~openstack.key_manager.v1.container.Container` instance. :returns: One :class:`~openstack.key_manager.v1.container.Container` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_container.Container, container) def containers(self, **query): """Return a generator of containers :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of container objects :rtype: :class:`~openstack.key_manager.v1.container.Container` """ return self._list(_container.Container, paginated=False, **query) def update_container(self, container, **attrs): """Update a container :param container: Either the id of a container or a :class:`~openstack.key_manager.v1.container.Container` instance. :attrs kwargs: The attributes to update on the container represented by ``value``. :returns: The updated container :rtype: :class:`~openstack.key_manager.v1.container.Container` """ return self._update(_container.Container, container, **attrs) def create_order(self, **attrs): """Create a new order from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.key_manager.v1.order.Order`, comprised of the properties on the Order class. :returns: The results of order creation :rtype: :class:`~openstack.key_manager.v1.order.Order` """ return self._create(_order.Order, **attrs) def delete_order(self, order, ignore_missing=True): """Delete an order :param order: The value can be either the ID of a order or a :class:`~openstack.key_manager.v1.order.Order` instance. 
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the order does not exist. When set to ``True``, no exception
            will be raised when attempting to delete a nonexistent order.

        :returns: ``None``
        """
        self._delete(_order.Order, order, ignore_missing=ignore_missing)

    def find_order(self, name_or_id, ignore_missing=True):
        """Find a single order

        :param name_or_id: The name or ID of an order.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.key_manager.v1.order.Order` or None
        """
        return self._find(_order.Order, name_or_id,
                          ignore_missing=ignore_missing)

    def get_order(self, order):
        """Get a single order

        :param order: The value can be the ID of an order or a
            :class:`~openstack.key_manager.v1.order.Order` instance.

        :returns: One :class:`~openstack.key_manager.v1.order.Order`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_order.Order, order)

    def orders(self, **query):
        """Return a generator of orders

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned.

        :returns: A generator of order objects
        :rtype: :class:`~openstack.key_manager.v1.order.Order`
        """
        return self._list(_order.Order, paginated=False, **query)

    def update_order(self, order, **attrs):
        """Update an order

        :param order: Either the ID of an order or a
            :class:`~openstack.key_manager.v1.order.Order` instance.
        :param dict attrs: The attributes to update on the order represented
            by ``order``.

        :returns: The updated order
        :rtype: :class:`~openstack.key_manager.v1.order.Order`
        """
        return self._update(_order.Order, order, **attrs)

    def create_secret(self, **attrs):
        """Create a new secret from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.key_manager.v1.secret.Secret`, comprised of
            the properties on the Secret class.

        :returns: The results of secret creation
        :rtype: :class:`~openstack.key_manager.v1.secret.Secret`
        """
        return self._create(_secret.Secret, **attrs)

    def delete_secret(self, secret, ignore_missing=True):
        """Delete a secret

        :param secret: The value can be either the ID of a secret or a
            :class:`~openstack.key_manager.v1.secret.Secret` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the secret does not exist. When set to ``True``, no exception
            will be raised when attempting to delete a nonexistent secret.

        :returns: ``None``
        """
        self._delete(_secret.Secret, secret, ignore_missing=ignore_missing)

    def find_secret(self, name_or_id, ignore_missing=True):
        """Find a single secret

        :param name_or_id: The name or ID of a secret.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.key_manager.v1.secret.Secret` or None
        """
        return self._find(_secret.Secret, name_or_id,
                          ignore_missing=ignore_missing)

    def get_secret(self, secret):
        """Get a single secret

        :param secret: The value can be the ID of a secret or a
            :class:`~openstack.key_manager.v1.secret.Secret` instance.

        :returns: One :class:`~openstack.key_manager.v1.secret.Secret`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_secret.Secret, secret)

    def secrets(self, **query):
        """Return a generator of secrets

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned.

        :returns: A generator of secret objects
        :rtype: :class:`~openstack.key_manager.v1.secret.Secret`
        """
        return self._list(_secret.Secret, paginated=False, **query)

    def update_secret(self, secret, **attrs):
        """Update a secret

        :param secret: Either the ID of a secret or a
            :class:`~openstack.key_manager.v1.secret.Secret` instance.
        :param dict attrs: The attributes to update on the secret represented
            by ``secret``.

        :returns: The updated secret
        :rtype: :class:`~openstack.key_manager.v1.secret.Secret`
        """
        return self._update(_secret.Secret, secret, **attrs)

openstacksdk-0.11.3/openstack/key_manager/__init__.py0000666000175100017510000000000013236151340022661 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/key_manager/key_manager_service.py0000666000175100017510000000167513236151340025147 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
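The ``ignore_missing`` contract shared by the ``find_*``/``delete_*`` proxy methods above can be sketched with a stand-in in-memory store. This is a minimal illustration of the behavior, not the SDK's real ``Proxy._find`` implementation; ``ResourceNotFound`` and ``_store`` here are hypothetical stand-ins.

```python
class ResourceNotFound(Exception):
    """Stand-in for openstack.exceptions.ResourceNotFound."""


# Hypothetical resource store keyed by ID.
_store = {"abc123": {"id": "abc123", "name": "my-secret"}}


def find(name_or_id, ignore_missing=True):
    # Match on either ID or name, like the proxy's find_* methods.
    for res in _store.values():
        if name_or_id in (res["id"], res["name"]):
            return res
    # Not found: either return None quietly or raise, per the flag.
    if ignore_missing:
        return None
    raise ResourceNotFound(name_or_id)


print(find("my-secret")["id"])  # abc123
print(find("nope"))             # None
```

Deleting follows the same convention: with ``ignore_missing=True`` a delete of a nonexistent resource is a silent no-op rather than an error.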
from openstack import service_filter


class KeyManagerService(service_filter.ServiceFilter):
    """The key manager service."""

    valid_versions = [service_filter.ValidVersion('v1')]

    def __init__(self, version=None):
        """Create a key manager service."""
        super(KeyManagerService, self).__init__(
            service_type='key-manager',
            version=version)

openstacksdk-0.11.3/openstack/config/0000775000175100017510000000000013236151501017542 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/config/defaults.py0000666000175100017510000000340413236151340021727 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import os
import threading

_json_path = os.path.join(
    os.path.dirname(os.path.realpath(__file__)),
    'defaults.json')
_defaults = None
_defaults_lock = threading.Lock()


def get_defaults():
    global _defaults
    if _defaults is not None:
        return _defaults.copy()
    with _defaults_lock:
        if _defaults is not None:
            # Did someone else just finish filling it?
            return _defaults.copy()
        # Python language specific defaults
        # These are defaults related to use of python libraries, they are
        # not qualities of a cloud.
        #
        # NOTE(harlowja): update an in-memory dict, before updating
        # the global one so that other callers of get_defaults do not
        # see the partially filled one.
        tmp_defaults = dict(
            api_timeout=None,
            verify=True,
            cacert=None,
            cert=None,
            key=None,
        )
        with open(_json_path, 'r') as json_file:
            updates = json.load(json_file)
            if updates is not None:
                tmp_defaults.update(updates)
        _defaults = tmp_defaults
        return tmp_defaults.copy()

openstacksdk-0.11.3/openstack/config/exceptions.py0000666000175100017510000000173713236151340022310 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


class OpenStackConfigException(Exception):
    """Something went wrong with parsing your OpenStack Config."""


class OpenStackConfigVersionException(OpenStackConfigException):
    """A version was requested that is different than what was found."""

    def __init__(self, version):
        super(OpenStackConfigVersionException, self).__init__()
        self.version = version

openstacksdk-0.11.3/openstack/config/defaults.json0000666000175100017510000000140313236151340022245 0ustar zuulzuul00000000000000{
    "application_catalog_api_version": "1",
    "auth_type": "password",
    "baremetal_api_version": "1",
    "container_api_version": "1",
    "container_infra_api_version": "1",
    "compute_api_version": "2",
    "database_api_version": "1.0",
    "disable_vendor_agent": {},
    "dns_api_version": "2",
    "interface": "public",
    "floating_ip_source": "neutron",
    "identity_api_version": "2.0",
    "image_api_use_tasks": false,
    "image_api_version": "2",
    "image_format": "qcow2",
    "key_manager_api_version": "v1",
    "message": "",
    "metering_api_version": "2",
"network_api_version": "2", "object_store_api_version": "1", "orchestration_api_version": "1", "secgroup_source": "neutron", "status": "active", "volume_api_version": "2", "workflow_api_version": "2" } openstacksdk-0.11.3/openstack/config/vendor-schema.json0000666000175100017510000001617313236151340023203 0ustar zuulzuul00000000000000{ "$schema": "http://json-schema.org/draft-04/schema#", "id": "https://git.openstack.org/cgit/openstack/cloud-data/plain/vendor-schema.json#", "type": "object", "properties": { "name": { "type": "string" }, "profile": { "type": "object", "properties": { "auth": { "type": "object", "properties": { "auth_url": { "name": "Auth URL", "description": "URL of the primary Keystone endpoint", "type": "string" } } }, "auth_type": { "name": "Auth Type", "description": "Name of authentication plugin to be used", "default": "password", "type": "string" }, "disable_vendor_agent": { "name": "Disable Vendor Agent Properties", "description": "Image properties required to disable vendor agent", "type": "object", "properties": {} }, "floating_ip_source": { "name": "Floating IP Source", "description": "Which service provides Floating IPs", "enum": [ "neutron", "nova", "None" ], "default": "neutron" }, "image_api_use_tasks": { "name": "Image Task API", "description": "Does the cloud require the Image Task API", "default": false, "type": "boolean" }, "image_format": { "name": "Image Format", "description": "Format for uploaded Images", "default": "qcow2", "type": "string" }, "interface": { "name": "API Interface", "description": "Which API Interface should connections hit", "default": "public", "enum": [ "public", "internal", "admin" ] }, "message": { "name": "Status message", "description": "Optional message with information related to status", "type": "string" }, "requires_floating_ip": { "name": "Requires Floating IP", "description": "Whether the cloud requires a floating IP to route traffic off of the cloud", "default": null, "type": ["boolean", "null"] 
}, "secgroup_source": { "name": "Security Group Source", "description": "Which service provides security groups", "enum": [ "neutron", "nova", "None" ], "default": "neutron" }, "status": { "name": "Vendor status", "description": "Status of the vendor's cloud", "enum": [ "active", "deprecated", "shutdown"], "default": "active" }, "compute_service_name": { "name": "Compute API Service Name", "description": "Compute API Service Name", "type": "string" }, "database_service_name": { "name": "Database API Service Name", "description": "Database API Service Name", "type": "string" }, "dns_service_name": { "name": "DNS API Service Name", "description": "DNS API Service Name", "type": "string" }, "identity_service_name": { "name": "Identity API Service Name", "description": "Identity API Service Name", "type": "string" }, "image_service_name": { "name": "Image API Service Name", "description": "Image API Service Name", "type": "string" }, "volume_service_name": { "name": "Volume API Service Name", "description": "Volume API Service Name", "type": "string" }, "network_service_name": { "name": "Network API Service Name", "description": "Network API Service Name", "type": "string" }, "object_service_name": { "name": "Object Storage API Service Name", "description": "Object Storage API Service Name", "type": "string" }, "baremetal_service_name": { "name": "Baremetal API Service Name", "description": "Baremetal API Service Name", "type": "string" }, "compute_service_type": { "name": "Compute API Service Type", "description": "Compute API Service Type", "type": "string" }, "database_service_type": { "name": "Database API Service Type", "description": "Database API Service Type", "type": "string" }, "dns_service_type": { "name": "DNS API Service Type", "description": "DNS API Service Type", "type": "string" }, "identity_service_type": { "name": "Identity API Service Type", "description": "Identity API Service Type", "type": "string" }, "image_service_type": { "name": "Image API 
Service Type", "description": "Image API Service Type", "type": "string" }, "volume_service_type": { "name": "Volume API Service Type", "description": "Volume API Service Type", "type": "string" }, "network_service_type": { "name": "Network API Service Type", "description": "Network API Service Type", "type": "string" }, "object_service_type": { "name": "Object Storage API Service Type", "description": "Object Storage API Service Type", "type": "string" }, "baremetal_service_type": { "name": "Baremetal API Service Type", "description": "Baremetal API Service Type", "type": "string" }, "compute_api_version": { "name": "Compute API Version", "description": "Compute API Version", "type": "string" }, "database_api_version": { "name": "Database API Version", "description": "Database API Version", "type": "string" }, "dns_api_version": { "name": "DNS API Version", "description": "DNS API Version", "type": "string" }, "identity_api_version": { "name": "Identity API Version", "description": "Identity API Version", "type": "string" }, "image_api_version": { "name": "Image API Version", "description": "Image API Version", "type": "string" }, "volume_api_version": { "name": "Volume API Version", "description": "Volume API Version", "type": "string" }, "network_api_version": { "name": "Network API Version", "description": "Network API Version", "type": "string" }, "object_api_version": { "name": "Object Storage API Version", "description": "Object Storage API Version", "type": "string" }, "baremetal_api_version": { "name": "Baremetal API Version", "description": "Baremetal API Version", "type": "string" } } } }, "required": [ "name", "profile" ] } openstacksdk-0.11.3/openstack/config/cloud_config.py0000666000175100017510000000157613236151340022563 0ustar zuulzuul00000000000000# Copyright (c) 2018 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(mordred) This is only here to ease the OSC transition from openstack.config import cloud_region class CloudConfig(cloud_region.CloudRegion): def __init__(self, name, region, config, **kwargs): super(CloudConfig, self).__init__(name, region, config, **kwargs) self.region = region openstacksdk-0.11.3/openstack/config/cloud_region.py0000666000175100017510000004274013236151364022605 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
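The thin compatibility shim in ``cloud_config.py`` above — keeping the old ``CloudConfig`` name alive as a subclass of ``CloudRegion`` that restores a legacy attribute — is a general migration pattern. A minimal sketch, using hypothetical class names rather than the SDK's own:

```python
# Sketch of the compatibility-subclass pattern used by CloudConfig above.
# NewClient/OldClient are hypothetical stand-ins, not SDK classes.
class NewClient:
    """The renamed, canonical class."""

    def __init__(self, name, region):
        self.name = name
        self.region_name = region


class OldClient(NewClient):
    """Deprecated alias kept so existing imports keep working."""

    def __init__(self, name, region):
        super().__init__(name, region)
        # Preserve the legacy attribute that old callers expect.
        self.region = region


c = OldClient("mycloud", "RegionOne")
print(c.region, c.region_name)  # RegionOne RegionOne
```

Because the alias is a subclass, ``isinstance`` checks against the new class still pass for code holding the old type, which is what lets consumers such as OSC migrate gradually.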
import math import warnings from keystoneauth1 import adapter import keystoneauth1.exceptions.catalog from keystoneauth1 import session as ks_session import requestsexceptions from six.moves import urllib from openstack import version as openstack_version from openstack import _log from openstack.config import defaults as config_defaults from openstack.config import exceptions def _make_key(key, service_type): if not service_type: return key else: service_type = service_type.lower().replace('-', '_') return "_".join([service_type, key]) def from_session(session, name=None, region_name=None, force_ipv4=False, app_name=None, app_version=None, **kwargs): """Construct a CloudRegion from an existing `keystoneauth1.session.Session` When a Session already exists, we don't actually even need to go through the OpenStackConfig.get_one_cloud dance. We have a Session with Auth info. The only parameters that are really needed are adapter/catalog related. :param keystoneauth1.session.session session: An existing authenticated Session to use. :param str name: A name to use for this cloud region in logging. If left empty, the hostname of the auth_url found in the Session will be used. :param str region_name: The region name to connect to. :param bool force_ipv4: Whether or not to disable IPv6 support. Defaults to False. :param str app_name: Name of the application to be added to User Agent. :param str app_version: Version of the application to be added to User Agent. :param kwargs: Config settings for this cloud region. """ # If someone is constructing one of these from a Session, then they are # not using a named config. Use the hostname of their auth_url instead. 
name = name or urllib.parse.urlparse(session.auth.auth_url).hostname config_dict = config_defaults.get_defaults() config_dict.update(**kwargs) return CloudRegion( name=name, session=session, config=config_dict, region_name=region_name, force_ipv4=force_ipv4, app_name=app_name, app_version=app_version) class CloudRegion(object): """The configuration for a Region of an OpenStack Cloud. A CloudRegion encapsulates the config information needed for connections to all of the services in a Region of a Cloud. """ def __init__(self, name, region_name=None, config=None, force_ipv4=False, auth_plugin=None, openstack_config=None, session_constructor=None, app_name=None, app_version=None, session=None): self.name = name self.region_name = region_name self.config = config self.log = _log.setup_logging('openstack.config') self._force_ipv4 = force_ipv4 self._auth = auth_plugin self._openstack_config = openstack_config self._keystone_session = session self._session_constructor = session_constructor or ks_session.Session self._app_name = app_name self._app_version = app_version def __getattr__(self, key): """Return arbitrary attributes.""" if key.startswith('os_'): key = key[3:] if key in [attr.replace('-', '_') for attr in self.config]: return self.config[key] else: return None def __iter__(self): return self.config.__iter__() def __eq__(self, other): return ( self.name == other.name and self.region_name == other.region_name and self.config == other.config) def __ne__(self, other): return not self == other def set_session_constructor(self, session_constructor): """Sets the Session constructor.""" self._session_constructor = session_constructor def get_requests_verify_args(self): """Return the verify and cert values for the requests library.""" if self.config['verify'] and self.config['cacert']: verify = self.config['cacert'] else: verify = self.config['verify'] if self.config['cacert']: warnings.warn( "You are specifying a cacert for the cloud {0} but " "also to ignore the host 
verification. The host SSL cert " "will not be verified.".format(self.name)) cert = self.config.get('cert', None) if cert: if self.config['key']: cert = (cert, self.config['key']) return (verify, cert) def get_services(self): """Return a list of service types we know something about.""" services = [] for key, val in self.config.items(): if (key.endswith('api_version') or key.endswith('service_type') or key.endswith('service_name')): services.append("_".join(key.split('_')[:-2])) return list(set(services)) def get_auth_args(self): return self.config.get('auth', {}) def get_interface(self, service_type=None): key = _make_key('interface', service_type) interface = self.config.get('interface') return self.config.get(key, interface) def get_api_version(self, service_type): key = _make_key('api_version', service_type) return self.config.get(key, None) def get_service_type(self, service_type): key = _make_key('service_type', service_type) # Cinder did an evil thing where they defined a second service # type in the catalog. Of course, that's insane, so let's hide this # atrocity from the as-yet-unsullied eyes of our users. # Of course, if the user requests a volumev2, that structure should # still work. # What's even more amazing is that they did it AGAIN with cinder v3 # And then I learned that mistral copied it. # TODO(shade) This should get removed when we have os-service-types # alias support landed in keystoneauth. 
if service_type in ('volume', 'block-storage'): vol_ver = self.get_api_version('volume') if vol_ver and vol_ver.startswith('2'): service_type = 'volumev2' elif vol_ver and vol_ver.startswith('3'): service_type = 'volumev3' elif service_type == 'workflow': wk_ver = self.get_api_version(service_type) if wk_ver and wk_ver.startswith('2'): service_type = 'workflowv2' return self.config.get(key, service_type) def get_service_name(self, service_type): key = _make_key('service_name', service_type) return self.config.get(key, None) def get_endpoint(self, service_type): key = _make_key('endpoint_override', service_type) old_key = _make_key('endpoint', service_type) return self.config.get(key, self.config.get(old_key, None)) @property def prefer_ipv6(self): return not self._force_ipv4 @property def force_ipv4(self): return self._force_ipv4 def get_auth(self): """Return a keystoneauth plugin from the auth credentials.""" return self._auth def get_session(self): """Return a keystoneauth session based on the auth credentials.""" if self._keystone_session is None: if not self._auth: raise exceptions.OpenStackConfigException( "Problem with auth parameters") (verify, cert) = self.get_requests_verify_args() # Turn off urllib3 warnings about insecure certs if we have # explicitly configured requests to tell it we do not want # cert verification if not verify: self.log.debug( "Turning off SSL warnings for {cloud}:{region}" " since verify=False".format( cloud=self.name, region=self.region_name)) requestsexceptions.squelch_warnings(insecure_requests=not verify) self._keystone_session = self._session_constructor( auth=self._auth, verify=verify, cert=cert, timeout=self.config['api_timeout']) if hasattr(self._keystone_session, 'additional_user_agent'): self._keystone_session.additional_user_agent.append( ('openstacksdk', openstack_version.__version__)) # Using old keystoneauth with new os-client-config fails if # we pass in app_name and app_version. 
Those are not essential, # nor a reason to bump our minimum, so just test for the session # having the attribute post creation and set them then. if hasattr(self._keystone_session, 'app_name'): self._keystone_session.app_name = self._app_name if hasattr(self._keystone_session, 'app_version'): self._keystone_session.app_version = self._app_version return self._keystone_session def get_service_catalog(self): """Helper method to grab the service catalog.""" return self._auth.get_access(self.get_session()).service_catalog def _get_version_args(self, service_key, version): """Translate OCC version args to those needed by ksa adapter. If no version is requested explicitly and we have a configured version, set the version parameter and let ksa deal with expanding that to min=ver.0, max=ver.latest. If version is set, pass it through. If version is not set and we don't have a configured version, default to latest. """ if version == 'latest': return None, None, 'latest' if not version: version = self.get_api_version(service_key) if not version: return None, None, 'latest' return version, None, None def get_session_client(self, service_key, version=None): """Return a prepped requests adapter for a given service. This is useful for making direct requests calls against a 'mounted' endpoint. That is, if you do: client = get_session_client('compute') then you can do: client.get('/flavors') and it will work like you think. 
""" (version, min_version, max_version) = self._get_version_args( service_key, version) return adapter.Adapter( session=self.get_session(), service_type=self.get_service_type(service_key), service_name=self.get_service_name(service_key), interface=self.get_interface(service_key), region_name=self.region_name, version=version, min_version=min_version, max_version=max_version) def _get_highest_endpoint(self, service_types, kwargs): session = self.get_session() for service_type in service_types: kwargs['service_type'] = service_type try: # Return the highest version we find that matches # the request return session.get_endpoint(**kwargs) except keystoneauth1.exceptions.catalog.EndpointNotFound: pass def get_session_endpoint( self, service_key, min_version=None, max_version=None): """Return the endpoint from config or the catalog. If a configuration lists an explicit endpoint for a service, return that. Otherwise, fetch the service catalog from the keystone session and return the appropriate endpoint. 
:param service_key: Generic key for service, such as 'compute' or 'network' """ override_endpoint = self.get_endpoint(service_key) if override_endpoint: return override_endpoint endpoint = None kwargs = { 'service_name': self.get_service_name(service_key), 'region_name': self.region_name } kwargs['interface'] = self.get_interface(service_key) if service_key == 'volume' and not self.get_api_version('volume'): # If we don't have a configured cinder version, we can't know # to request a different service_type min_version = float(min_version or 1) max_version = float(max_version or 3) min_major = math.trunc(float(min_version)) max_major = math.trunc(float(max_version)) versions = range(int(max_major) + 1, int(min_major), -1) service_types = [] for version in versions: if version == 1: service_types.append('volume') else: service_types.append('volumev{v}'.format(v=version)) else: service_types = [self.get_service_type(service_key)] endpoint = self._get_highest_endpoint(service_types, kwargs) if not endpoint: self.log.warning( "Keystone catalog entry not found (" "service_type=%s,service_name=%s" "interface=%s,region_name=%s)", service_key, kwargs['service_name'], kwargs['interface'], kwargs['region_name']) return endpoint def get_cache_expiration_time(self): if self._openstack_config: return self._openstack_config.get_cache_expiration_time() def get_cache_path(self): if self._openstack_config: return self._openstack_config.get_cache_path() def get_cache_class(self): if self._openstack_config: return self._openstack_config.get_cache_class() def get_cache_arguments(self): if self._openstack_config: return self._openstack_config.get_cache_arguments() def get_cache_expiration(self): if self._openstack_config: return self._openstack_config.get_cache_expiration() def get_cache_resource_expiration(self, resource, default=None): """Get expiration time for a resource :param resource: Name of the resource type :param default: Default value to return if not found (optional, 
defaults to None) :returns: Expiration time for the resource type as float or default """ if self._openstack_config: expiration = self._openstack_config.get_cache_expiration() if resource not in expiration: return default return float(expiration[resource]) def requires_floating_ip(self): """Return whether or not this cloud requires floating ips. :returns: True or False if known, None if discovery is needed. If requires_floating_ip is not configured but the cloud is known to not provide floating ips, will return False. """ if self.config['floating_ip_source'] == "None": return False return self.config.get('requires_floating_ip') def get_external_networks(self): """Get list of network names for external networks.""" return [ net['name'] for net in self.config['networks'] if net['routes_externally']] def get_external_ipv4_networks(self): """Get list of network names for external IPv4 networks.""" return [ net['name'] for net in self.config['networks'] if net['routes_ipv4_externally']] def get_external_ipv6_networks(self): """Get list of network names for external IPv6 networks.""" return [ net['name'] for net in self.config['networks'] if net['routes_ipv6_externally']] def get_internal_networks(self): """Get list of network names for internal networks.""" return [ net['name'] for net in self.config['networks'] if not net['routes_externally']] def get_internal_ipv4_networks(self): """Get list of network names for internal IPv4 networks.""" return [ net['name'] for net in self.config['networks'] if not net['routes_ipv4_externally']] def get_internal_ipv6_networks(self): """Get list of network names for internal IPv6 networks.""" return [ net['name'] for net in self.config['networks'] if not net['routes_ipv6_externally']] def get_default_network(self): """Get network used for default interactions.""" for net in self.config['networks']: if net['default_interface']: return net['name'] return None def get_nat_destination(self): """Get network used for NAT destination.""" for
net in self.config['networks']: if net['nat_destination']: return net['name'] return None def get_nat_source(self): """Get network used for NAT source.""" for net in self.config['networks']: if net.get('nat_source'): return net['name'] return None openstacksdk-0.11.3/openstack/config/__init__.py0000666000175100017510000000233213236151364021664 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from openstack.config.loader import OpenStackConfig # noqa def get_cloud_region( service_key=None, options=None, app_name=None, app_version=None, load_yaml_config=True, load_envvars=True, **kwargs): config = OpenStackConfig( load_yaml_config=load_yaml_config, app_name=app_name, app_version=app_version) if options: config.register_argparse_arguments(options, sys.argv, service_key) parsed_options = options.parse_known_args(sys.argv) else: parsed_options = None return config.get_one(options=parsed_options, **kwargs) openstacksdk-0.11.3/openstack/config/schema.json0000666000175100017510000000652413236151340021707 0ustar zuulzuul00000000000000{ "$schema": "http://json-schema.org/draft-04/schema#", "id": "https://git.openstack.org/cgit/openstack/cloud-data/plain/schema.json#", "type": "object", "properties": { "auth_type": { "name": "Auth Type", "description": "Name of authentication plugin to be used", "default": "password", "type": "string" }, "disable_vendor_agent": { "name": "Disable Vendor 
Agent Properties", "description": "Image properties required to disable vendor agent", "type": "object", "properties": {} }, "floating_ip_source": { "name": "Floating IP Source", "description": "Which service provides Floating IPs", "enum": [ "neutron", "nova", "None" ], "default": "neutron" }, "image_api_use_tasks": { "name": "Image Task API", "description": "Does the cloud require the Image Task API", "default": false, "type": "boolean" }, "image_format": { "name": "Image Format", "description": "Format for uploaded Images", "default": "qcow2", "type": "string" }, "interface": { "name": "API Interface", "description": "Which API Interface should connections hit", "default": "public", "enum": [ "public", "internal", "admin" ] }, "secgroup_source": { "name": "Security Group Source", "description": "Which service provides security groups", "default": "neutron", "enum": [ "neutron", "nova", "None" ] }, "baremetal_api_version": { "name": "Baremetal API Service Type", "description": "Baremetal API Service Type", "default": "1", "type": "string" }, "compute_api_version": { "name": "Compute API Version", "description": "Compute API Version", "default": "2", "type": "string" }, "database_api_version": { "name": "Database API Version", "description": "Database API Version", "default": "1.0", "type": "string" }, "dns_api_version": { "name": "DNS API Version", "description": "DNS API Version", "default": "2", "type": "string" }, "identity_api_version": { "name": "Identity API Version", "description": "Identity API Version", "default": "2", "type": "string" }, "image_api_version": { "name": "Image API Version", "description": "Image API Version", "default": "1", "type": "string" }, "network_api_version": { "name": "Network API Version", "description": "Network API Version", "default": "2", "type": "string" }, "object_store_api_version": { "name": "Object Storage API Version", "description": "Object Storage API Version", "default": "1", "type": "string" }, 
"volume_api_version": { "name": "Volume API Version", "description": "Volume API Version", "default": "2", "type": "string" } }, "required": [ "auth_type", "baremetal_api_version", "compute_api_version", "database_api_version", "disable_vendor_agent", "dns_api_version", "floating_ip_source", "identity_api_version", "image_api_use_tasks", "image_api_version", "image_format", "interface", "network_api_version", "object_store_api_version", "secgroup_source", "volume_api_version" ] } openstacksdk-0.11.3/openstack/config/vendors/0000775000175100017510000000000013236151501021222 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/config/vendors/dreamhost.json0000666000175100017510000000051313236151340024105 0ustar zuulzuul00000000000000{ "name": "dreamhost", "profile": { "status": "deprecated", "message": "The dreamhost profile is deprecated. Please use the dreamcompute profile instead", "auth": { "auth_url": "https://keystone.dream.io" }, "identity_api_version": "3", "region_name": "RegionOne", "image_format": "raw" } } openstacksdk-0.11.3/openstack/config/vendors/elastx.json0000666000175100017510000000026013236151340023416 0ustar zuulzuul00000000000000{ "name": "elastx", "profile": { "auth": { "auth_url": "https://ops.elastx.net:5000" }, "identity_api_version": "3", "region_name": "regionOne" } } openstacksdk-0.11.3/openstack/config/vendors/ovh.json0000666000175100017510000000036013236151340022713 0ustar zuulzuul00000000000000{ "name": "ovh", "profile": { "auth": { "auth_url": "https://auth.cloud.ovh.net/" }, "regions": [ "BHS1", "GRA1", "SBG1" ], "identity_api_version": "3", "floating_ip_source": "None" } } openstacksdk-0.11.3/openstack/config/vendors/entercloudsuite.json0000666000175100017510000000044513236151340025341 0ustar zuulzuul00000000000000{ "name": "entercloudsuite", "profile": { "auth": { "auth_url": "https://api.entercloudsuite.com/" }, "identity_api_version": "3", "image_api_version": "1", "volume_api_version": "1", "regions": [ "it-mil1", 
"nl-ams1", "de-fra1" ] } } openstacksdk-0.11.3/openstack/config/vendors/switchengines.json0000666000175100017510000000041713236151340024774 0ustar zuulzuul00000000000000{ "name": "switchengines", "profile": { "auth": { "auth_url": "https://keystone.cloud.switch.ch:5000/v2.0" }, "regions": [ "LS", "ZH" ], "volume_api_version": "1", "image_api_use_tasks": true, "image_format": "raw" } } openstacksdk-0.11.3/openstack/config/vendors/internap.json0000666000175100017510000000043513236151340023742 0ustar zuulzuul00000000000000{ "name": "internap", "profile": { "auth": { "auth_url": "https://identity.api.cloud.iweb.com" }, "regions": [ "ams01", "da01", "nyj01", "sin01", "sjc01" ], "identity_api_version": "3", "floating_ip_source": "None" } } openstacksdk-0.11.3/openstack/config/vendors/fuga.json0000666000175100017510000000045013236151340023041 0ustar zuulzuul00000000000000{ "name": "fuga", "profile": { "auth": { "auth_url": "https://identity.api.fuga.io:5000", "user_domain_name": "Default", "project_domain_name": "Default" }, "regions": [ "cystack" ], "identity_api_version": "3", "volume_api_version": "3" } } openstacksdk-0.11.3/openstack/config/vendors/unitedstack.json0000666000175100017510000000045113236151340024436 0ustar zuulzuul00000000000000{ "name": "unitedstack", "profile": { "auth": { "auth_url": "https://identity.api.ustack.com/v3" }, "regions": [ "bj1", "gd1" ], "volume_api_version": "1", "identity_api_version": "3", "image_format": "raw", "floating_ip_source": "None" } } openstacksdk-0.11.3/openstack/config/vendors/catalyst.json0000666000175100017510000000042413236151340023744 0ustar zuulzuul00000000000000{ "name": "catalyst", "profile": { "auth": { "auth_url": "https://api.cloud.catalyst.net.nz:5000/v2.0" }, "regions": [ "nz-por-1", "nz_wlg_2" ], "image_api_version": "1", "volume_api_version": "1", "image_format": "raw" } } openstacksdk-0.11.3/openstack/config/vendors/betacloud.json0000666000175100017510000000037513236151340024067 0ustar 
zuulzuul00000000000000{ "name": "betacloud", "profile": { "auth": { "auth_url": "https://api-1.betacloud.io:5000" }, "regions": [ "betacloud-1" ], "identity_api_version": "3", "image_format": "raw", "volume_api_version": "3" } } openstacksdk-0.11.3/openstack/config/vendors/bluebox.json0000666000175100017510000000015213236151340023556 0ustar zuulzuul00000000000000{ "name": "bluebox", "profile": { "volume_api_version": "1", "region_name": "RegionOne" } } openstacksdk-0.11.3/openstack/config/vendors/datacentred.json0000666000175100017510000000032713236151340024400 0ustar zuulzuul00000000000000{ "name": "datacentred", "profile": { "auth": { "auth_url": "https://compute.datacentred.io:5000" }, "region-name": "sal01", "identity_api_version": "3", "image_api_version": "2" } } openstacksdk-0.11.3/openstack/config/vendors/citycloud.json0000666000175100017510000000051313236151340024116 0ustar zuulzuul00000000000000{ "name": "citycloud", "profile": { "auth": { "auth_url": "https://identity1.citycloud.com:5000/v3/" }, "regions": [ "Buf1", "La1", "Fra1", "Lon1", "Sto2", "Kna1" ], "requires_floating_ip": true, "volume_api_version": "1", "identity_api_version": "3" } } openstacksdk-0.11.3/openstack/config/vendors/__init__.py0000666000175100017510000000255413236151340023344 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import glob import json import os import yaml _vendors_path = os.path.dirname(os.path.realpath(__file__)) _vendor_defaults = None def get_profile(profile_name): global _vendor_defaults if _vendor_defaults is None: _vendor_defaults = {} for vendor in glob.glob(os.path.join(_vendors_path, '*.yaml')): with open(vendor, 'r') as f: vendor_data = yaml.safe_load(f) _vendor_defaults[vendor_data['name']] = vendor_data['profile'] for vendor in glob.glob(os.path.join(_vendors_path, '*.json')): with open(vendor, 'r') as f: vendor_data = json.load(f) _vendor_defaults[vendor_data['name']] = vendor_data['profile'] return _vendor_defaults.get(profile_name) openstacksdk-0.11.3/openstack/config/vendors/dreamcompute.json0000666000175100017510000000032013236151340024600 0ustar zuulzuul00000000000000{ "name": "dreamcompute", "profile": { "auth": { "auth_url": "https://iad2.dream.io:5000" }, "identity_api_version": "3", "region_name": "RegionOne", "image_format": "raw" } } openstacksdk-0.11.3/openstack/config/vendors/auro.json0000666000175100017510000000032213236151340023063 0ustar zuulzuul00000000000000{ "name": "auro", "profile": { "auth": { "auth_url": "https://api.van1.auro.io:5000/v2.0" }, "identity_api_version": "2", "region_name": "van1", "requires_floating_ip": true } } openstacksdk-0.11.3/openstack/config/vendors/ibmcloud.json0000666000175100017510000000034013236151340023713 0ustar zuulzuul00000000000000{ "name": "ibmcloud", "profile": { "auth": { "auth_url": "https://identity.open.softlayer.com" }, "volume_api_version": "2", "identity_api_version": "3", "regions": [ "london" ] } } openstacksdk-0.11.3/openstack/config/vendors/conoha.json0000666000175100017510000000033613236151340023371 0ustar zuulzuul00000000000000{ "name": "conoha", "profile": { "auth": { "auth_url": "https://identity.{region_name}.conoha.io" }, "regions": [ "sin1", "sjc1", "tyo1" ], "identity_api_version": "2" } } 
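The `get_profile()` helper above lazily builds a name-to-profile map from the JSON (and YAML) files in this vendors directory. A minimal standalone sketch of the same lookup, using a temporary directory and a hypothetical vendor named `example` (not one of the shipped profiles), could look like:

```python
import glob
import json
import os
import tempfile


def load_vendor_defaults(vendors_path):
    """Build a {name: profile} map from every vendor JSON file in a
    directory, mirroring the lookup that get_profile() above performs."""
    profiles = {}
    for path in glob.glob(os.path.join(vendors_path, '*.json')):
        with open(path, 'r') as f:
            data = json.load(f)
        profiles[data['name']] = data['profile']
    return profiles


# "example" is a made-up vendor, shaped like the shipped files above.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'example.json'), 'w') as f:
        json.dump({
            'name': 'example',
            'profile': {
                'auth': {'auth_url': 'https://keystone.example.com'},
                'identity_api_version': '3',
            },
        }, f)
    profiles = load_vendor_defaults(d)
    print(profiles['example']['auth']['auth_url'])
```

The real module also caches the map in a module-level `_vendor_defaults` so the directory is only scanned once per process.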
openstacksdk-0.11.3/openstack/config/vendors/otc.json0000666000175100017510000000034313236151340022705 0ustar zuulzuul00000000000000{ "name": "otc", "profile": { "auth": { "auth_url": "https://iam.%(region_name)s.otc.t-systems.com/v3" }, "regions": [ "eu-de" ], "identity_api_version": "3", "image_format": "vhd" } } openstacksdk-0.11.3/openstack/config/vendors/ultimum.json0000666000175100017510000000033413236151340023614 0ustar zuulzuul00000000000000{ "name": "ultimum", "profile": { "auth": { "auth_url": "https://console.ultimum-cloud.com:5000/" }, "identity_api_version": "3", "volume_api_version": "1", "region-name": "RegionOne" } } openstacksdk-0.11.3/openstack/config/vendors/zetta.json0000666000175100017510000000033013236151340023243 0ustar zuulzuul00000000000000{ "name": "zetta", "profile": { "auth": { "auth_url": "https://identity.api.zetta.io/v3" }, "regions": [ "no-osl1" ], "identity_api_version": "3", "dns_api_version": "2" } } openstacksdk-0.11.3/openstack/config/vendors/rackspace.json0000666000175100017510000000120113236151340024046 0ustar zuulzuul00000000000000{ "name": "rackspace", "profile": { "auth": { "auth_url": "https://identity.api.rackspacecloud.com/v2.0/" }, "regions": [ "DFW", "HKG", "IAD", "ORD", "SYD", "LON" ], "database_service_type": "rax:database", "compute_service_name": "cloudServersOpenStack", "image_api_use_tasks": true, "image_format": "vhd", "floating_ip_source": "None", "secgroup_source": "None", "requires_floating_ip": false, "volume_api_version": "1", "disable_vendor_agent": { "vm_mode": "hvm", "xenapi_use_agent": false }, "has_network": false } } openstacksdk-0.11.3/openstack/config/vendors/vexxhost.json0000666000175100017510000000043213236151340024007 0ustar zuulzuul00000000000000{ "name": "vexxhost", "profile": { "auth": { "auth_url": "https://auth.vexxhost.net" }, "regions": [ "ca-ymq-1" ], "dns_api_version": "1", "identity_api_version": "3", "floating_ip_source": "None", "requires_floating_ip": false } } 
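The profiles above are only half of a working cloud config: at load time, `loader.py` (the next file) lays the user's own `clouds.yaml` entry on top of the named vendor profile, merging the nested `auth` dict key by key rather than replacing it wholesale. A minimal sketch of that merge, modeled on the `_auth_update()` helper defined below and using the shipped vexxhost profile as input (the credentials here are made up):

```python
import copy


def auth_update(old_dict, new_dict_source):
    """Like dict.update(), except the nested 'auth' dict is merged
    key-by-key instead of being replaced (a sketch of _auth_update()
    in loader.py below)."""
    new_dict = copy.deepcopy(new_dict_source)
    for k, v in new_dict.items():
        if k == 'auth' and k in old_dict:
            old_dict[k].update(v)
        else:
            old_dict[k] = v
    return old_dict


# Start from the vendor profile, then apply the user's entry on top so
# user-supplied values win.
cloud = copy.deepcopy({
    'auth': {'auth_url': 'https://auth.vexxhost.net'},
    'identity_api_version': '3',
    'floating_ip_source': 'None',
})
user_entry = {'auth': {'username': 'demo', 'password': 'secret',
                       'project_name': 'demo'}}
auth_update(cloud, user_entry)
print(sorted(cloud['auth']))
```

After the merge, the user's credentials sit alongside the vendor-supplied `auth_url` in the single `auth` dict that is eventually handed to the keystoneauth plugin loader.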
openstacksdk-0.11.3/openstack/config/loader.py0000666000175100017510000014311013236151340021365 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # alias because we already had an option named argparse import argparse as argparse_mod import collections import copy import json import os import re import sys import warnings import appdirs from keystoneauth1 import adapter from keystoneauth1 import loading import yaml from openstack import _log from openstack.config import cloud_region from openstack.config import defaults from openstack.config import exceptions from openstack.config import vendors APPDIRS = appdirs.AppDirs('openstack', 'OpenStack', multipath='/etc') CONFIG_HOME = APPDIRS.user_config_dir CACHE_PATH = APPDIRS.user_cache_dir UNIX_CONFIG_HOME = os.path.join( os.path.expanduser(os.path.join('~', '.config')), 'openstack') UNIX_SITE_CONFIG_HOME = '/etc/openstack' SITE_CONFIG_HOME = APPDIRS.site_config_dir CONFIG_SEARCH_PATH = [ os.getcwd(), CONFIG_HOME, UNIX_CONFIG_HOME, SITE_CONFIG_HOME, UNIX_SITE_CONFIG_HOME ] YAML_SUFFIXES = ('.yaml', '.yml') JSON_SUFFIXES = ('.json',) CONFIG_FILES = [ os.path.join(d, 'clouds' + s) for d in CONFIG_SEARCH_PATH for s in YAML_SUFFIXES + JSON_SUFFIXES ] SECURE_FILES = [ os.path.join(d, 'secure' + s) for d in CONFIG_SEARCH_PATH for s in YAML_SUFFIXES + JSON_SUFFIXES ] VENDOR_FILES = [ os.path.join(d, 'clouds-public' + s) for d in CONFIG_SEARCH_PATH for 
s in YAML_SUFFIXES + JSON_SUFFIXES ] BOOL_KEYS = ('insecure', 'cache') FORMAT_EXCLUSIONS = frozenset(['password']) # NOTE(dtroyer): This turns out to be not the best idea so let's move # overriding defaults to a kwarg to OpenStackConfig.__init__() # Remove this sometime in June 2015 once OSC is comfortably # changed-over and global-defaults is updated. def set_default(key, value): warnings.warn( "Use of set_default() is deprecated. Defaults should be set with the " "`override_defaults` parameter of OpenStackConfig." ) defaults.get_defaults() # make sure the dict is initialized defaults._defaults[key] = value def get_boolean(value): if value is None: return False if type(value) is bool: return value if value.lower() == 'true': return True return False def _get_os_environ(envvar_prefix=None): ret = defaults.get_defaults() if not envvar_prefix: # This makes the or below be OS_ or OS_ which is a no-op envvar_prefix = 'OS_' environkeys = [k for k in os.environ.keys() if (k.startswith('OS_') or k.startswith(envvar_prefix)) and not k.startswith('OS_TEST') # infra CI var and not k.startswith('OS_STD') # infra CI var ] for k in environkeys: newkey = k.split('_', 1)[-1].lower() ret[newkey] = os.environ[k] # If the only environ keys are selectors or behavior modification, don't # return anything selectors = set([ 'OS_CLOUD', 'OS_REGION_NAME', 'OS_CLIENT_CONFIG_FILE', 'OS_CLIENT_SECURE_FILE', 'OS_CLOUD_NAME']) if set(environkeys) - selectors: return ret return None def _merge_clouds(old_dict, new_dict): """Like dict.update, except handling nested dicts.""" ret = old_dict.copy() for (k, v) in new_dict.items(): if isinstance(v, dict): if k in ret: ret[k] = _merge_clouds(ret[k], v) else: ret[k] = v.copy() else: ret[k] = v return ret def _auth_update(old_dict, new_dict_source): """Like dict.update, except handling the nested dict called auth.""" new_dict = copy.deepcopy(new_dict_source) for (k, v) in new_dict.items(): if k == 'auth': if k in old_dict: old_dict[k].update(v) else: 
old_dict[k] = v.copy() else: old_dict[k] = v return old_dict def _fix_argv(argv): # Transform any _ characters in arg names to - so that we don't # have to throw billions of compat argparse arguments around all # over the place. processed = collections.defaultdict(list) for index in range(0, len(argv)): # If the value starts with '--' and has '-' or '_' in it, then # it's worth looking at it if re.match('^--.*(_|-)+.*', argv[index]): split_args = argv[index].split('=') orig = split_args[0] new = orig.replace('_', '-') if orig != new: split_args[0] = new argv[index] = "=".join(split_args) # Save both for later so we can throw an error about dupes processed[new].append(orig) overlap = [] for new, old in processed.items(): if len(old) > 1: overlap.extend(old) if overlap: raise exceptions.OpenStackConfigException( "The following options were given: '{options}' which contain" " duplicates except that one has _ and one has -. There is" " no sane way for us to know what you're doing. Remove the" " duplicate option and try again".format( options=','.join(overlap))) class OpenStackConfig(object): def __init__(self, config_files=None, vendor_files=None, override_defaults=None, force_ipv4=None, envvar_prefix=None, secure_files=None, pw_func=None, session_constructor=None, app_name=None, app_version=None, load_yaml_config=True, load_envvars=True): self.log = _log.setup_logging('openstack.config') self._session_constructor = session_constructor self._app_name = app_name self._app_version = app_version self._load_envvars = load_envvars if load_yaml_config: self._config_files = config_files or CONFIG_FILES self._secure_files = secure_files or SECURE_FILES self._vendor_files = vendor_files or VENDOR_FILES else: self._config_files = [] self._secure_files = [] self._vendor_files = [] config_file_override = self._get_envvar('OS_CLIENT_CONFIG_FILE') if config_file_override: self._config_files.insert(0, config_file_override) secure_file_override = 
self._get_envvar('OS_CLIENT_SECURE_FILE') if secure_file_override: self._secure_files.insert(0, secure_file_override) self.defaults = defaults.get_defaults() if override_defaults: self.defaults.update(override_defaults) # First, use a config file if it exists where expected self.config_filename, self.cloud_config = self._load_config_file() _, secure_config = self._load_secure_file() if secure_config: self.cloud_config = _merge_clouds( self.cloud_config, secure_config) if not self.cloud_config: self.cloud_config = {'clouds': {}} if 'clouds' not in self.cloud_config: self.cloud_config['clouds'] = {} # Grab ipv6 preference settings from env client_config = self.cloud_config.get('client', {}) if force_ipv4 is not None: # If it's passed in to the constructor, honor it. self.force_ipv4 = force_ipv4 else: # Get the backwards compat value prefer_ipv6 = get_boolean( self._get_envvar( 'OS_PREFER_IPV6', client_config.get( 'prefer_ipv6', client_config.get( 'prefer-ipv6', True)))) force_ipv4 = get_boolean( self._get_envvar( 'OS_FORCE_IPV4', client_config.get( 'force_ipv4', client_config.get( 'broken-ipv6', False)))) self.force_ipv4 = force_ipv4 if not prefer_ipv6: # this will only be false if someone set it explicitly # honor their wishes self.force_ipv4 = True # Next, process environment variables and add them to the mix self.envvar_key = self._get_envvar('OS_CLOUD_NAME', 'envvars') if self.envvar_key in self.cloud_config['clouds']: raise exceptions.OpenStackConfigException( '"{0}" defines a cloud named "{1}", but' ' OS_CLOUD_NAME is also set to "{1}". 
Please rename' ' either your environment based cloud, or one of your' ' file-based clouds.'.format(self.config_filename, self.envvar_key)) self.default_cloud = self._get_envvar('OS_CLOUD') if load_envvars: envvars = _get_os_environ(envvar_prefix=envvar_prefix) if envvars: self.cloud_config['clouds'][self.envvar_key] = envvars if not self.default_cloud: self.default_cloud = self.envvar_key if not self.default_cloud and self.cloud_config['clouds']: if len(self.cloud_config['clouds'].keys()) == 1: # If there is only one cloud just use it. This matches envvars # behavior and allows for much less typing. # TODO(mordred) allow someone to mark a cloud as "default" in # clouds.yaml. # The next/iter thing is for python3 compat where dict.keys # returns an iterator but in python2 it's a list. self.default_cloud = next(iter( self.cloud_config['clouds'].keys())) # Finally, fall through and make a cloud that starts with defaults # because we need somewhere to put arguments, and there are neither # config files nor env vars if not self.cloud_config['clouds']: self.cloud_config = dict( clouds=dict(defaults=dict(self.defaults))) self.default_cloud = 'defaults' self._cache_expiration_time = 0 self._cache_path = CACHE_PATH self._cache_class = 'dogpile.cache.null' self._cache_arguments = {} self._cache_expiration = {} if 'cache' in self.cloud_config: cache_settings = self._normalize_keys(self.cloud_config['cache']) # expiration_time used to be 'max_age' but the dogpile setting # is expiration_time. Support max_age for backwards compat. self._cache_expiration_time = cache_settings.get( 'expiration_time', cache_settings.get( 'max_age', self._cache_expiration_time)) # If cache class is given, use that. If not, but if cache time # is given, default to memory. Otherwise, default to nothing.
if self._cache_expiration_time: self._cache_class = 'dogpile.cache.memory' self._cache_class = self.cloud_config['cache'].get( 'class', self._cache_class) self._cache_path = os.path.expanduser( cache_settings.get('path', self._cache_path)) self._cache_arguments = cache_settings.get( 'arguments', self._cache_arguments) self._cache_expiration = cache_settings.get( 'expiration', self._cache_expiration) # Flag location to hold the peeked value of an argparse timeout value self._argv_timeout = False # Save the password callback # password = self._pw_callback(prompt="Password: ") self._pw_callback = pw_func def _get_envvar(self, key, default=None): if not self._load_envvars: return default return os.environ.get(key, default) def get_extra_config(self, key, defaults=None): """Fetch an arbitrary extra chunk of config, laying in defaults. :param string key: name of the config section to fetch :param dict defaults: (optional) default values to merge under the found config """ if not defaults: defaults = {} return _merge_clouds( self._normalize_keys(defaults), self._normalize_keys(self.cloud_config.get(key, {}))) def _load_config_file(self): return self._load_yaml_json_file(self._config_files) def _load_secure_file(self): return self._load_yaml_json_file(self._secure_files) def _load_vendor_file(self): return self._load_yaml_json_file(self._vendor_files) def _load_yaml_json_file(self, filelist): for path in filelist: if os.path.exists(path): with open(path, 'r') as f: if path.endswith('json'): return path, json.load(f) else: return path, yaml.safe_load(f) return (None, {}) def _normalize_keys(self, config): new_config = {} for key, value in config.items(): key = key.replace('-', '_') if isinstance(value, dict): new_config[key] = self._normalize_keys(value) elif isinstance(value, bool): new_config[key] = value elif isinstance(value, int) and key != 'verbose_level': new_config[key] = str(value) elif isinstance(value, float): new_config[key] = str(value) else: new_config[key] = 
value return new_config def get_cache_expiration_time(self): return int(self._cache_expiration_time) def get_cache_interval(self): return self.get_cache_expiration_time() def get_cache_max_age(self): return self.get_cache_expiration_time() def get_cache_path(self): return self._cache_path def get_cache_class(self): return self._cache_class def get_cache_arguments(self): return copy.deepcopy(self._cache_arguments) def get_cache_expiration(self): return copy.deepcopy(self._cache_expiration) def _expand_region_name(self, region_name): return {'name': region_name, 'values': {}} def _expand_regions(self, regions): ret = [] for region in regions: if isinstance(region, dict): ret.append(copy.deepcopy(region)) else: ret.append(self._expand_region_name(region)) return ret def _get_regions(self, cloud): if cloud not in self.cloud_config['clouds']: return [self._expand_region_name('')] regions = self._get_known_regions(cloud) if not regions: # We don't know of any regions use a workable default. regions = [self._expand_region_name('')] return regions def _get_known_regions(self, cloud): config = self._normalize_keys(self.cloud_config['clouds'][cloud]) if 'regions' in config: return self._expand_regions(config['regions']) elif 'region_name' in config: if isinstance(config['region_name'], list): regions = config['region_name'] else: regions = config['region_name'].split(',') if len(regions) > 1: warnings.warn( "Comma separated lists in region_name are deprecated." " Please use a yaml list in the regions" " parameter in {0} instead.".format(self.config_filename)) return self._expand_regions(regions) else: # crappit. we don't have a region defined. 
new_cloud = dict() our_cloud = self.cloud_config['clouds'].get(cloud, dict()) self._expand_vendor_profile(cloud, new_cloud, our_cloud) if 'regions' in new_cloud and new_cloud['regions']: return self._expand_regions(new_cloud['regions']) elif 'region_name' in new_cloud and new_cloud['region_name']: return [self._expand_region_name(new_cloud['region_name'])] def _get_region(self, cloud=None, region_name=''): if region_name is None: region_name = '' if not cloud: return self._expand_region_name(region_name) regions = self._get_known_regions(cloud) if not regions: return self._expand_region_name(region_name) if not region_name: return regions[0] for region in regions: if region['name'] == region_name: return region raise exceptions.OpenStackConfigException( 'Region {region_name} is not a valid region name for cloud' ' {cloud}. Valid choices are {region_list}. Please note that' ' region names are case sensitive.'.format( region_name=region_name, region_list=','.join([r['name'] for r in regions]), cloud=cloud)) def get_cloud_names(self): return self.cloud_config['clouds'].keys() def _get_base_cloud_config(self, name): cloud = dict() # Only validate cloud name if one was given if name and name not in self.cloud_config['clouds']: raise exceptions.OpenStackConfigException( "Cloud {name} was not found.".format( name=name)) our_cloud = self.cloud_config['clouds'].get(name, dict()) # Get the defaults cloud.update(self.defaults) self._expand_vendor_profile(name, cloud, our_cloud) if 'auth' not in cloud: cloud['auth'] = dict() _auth_update(cloud, our_cloud) if 'cloud' in cloud: del cloud['cloud'] return cloud def _expand_vendor_profile(self, name, cloud, our_cloud): # Expand a profile if it exists. 'cloud' is an old confusing name # for this. profile_name = our_cloud.get('profile', our_cloud.get('cloud', None)) if profile_name and profile_name != self.envvar_key: if 'cloud' in our_cloud: warnings.warn( "{0} uses the keyword 'cloud' to reference a known " "vendor profile.
This has been deprecated in favor of the " "'profile' keyword.".format(self.config_filename)) vendor_filename, vendor_file = self._load_vendor_file() if vendor_file and profile_name in vendor_file['public-clouds']: _auth_update(cloud, vendor_file['public-clouds'][profile_name]) else: profile_data = vendors.get_profile(profile_name) if profile_data: status = profile_data.pop('status', 'active') message = profile_data.pop('message', '') if status == 'deprecated': warnings.warn( "{profile_name} is deprecated: {message}".format( profile_name=profile_name, message=message)) elif status == 'shutdown': raise exceptions.OpenStackConfigException( "{profile_name} references a cloud that no longer" " exists: {message}".format( profile_name=profile_name, message=message)) _auth_update(cloud, profile_data) else: # Can't find the requested vendor config, go about business warnings.warn("Couldn't find the vendor profile '{0}', for" " the cloud '{1}'".format(profile_name, name)) def _project_scoped(self, cloud): return ('project_id' in cloud or 'project_name' in cloud or 'project_id' in cloud['auth'] or 'project_name' in cloud['auth']) def _validate_networks(self, networks, key): value = None for net in networks: if value and net[key]: raise exceptions.OpenStackConfigException( "Duplicate network entries for {key}: {net1} and {net2}." " Only one network can be flagged with {key}".format( key=key, net1=value['name'], net2=net['name'])) if not value and net[key]: value = net def _fix_backwards_networks(self, cloud): # Leave the external_network and internal_network keys in the # dict because consuming code might be expecting them. 
networks = [] # Normalize existing network entries for net in cloud.get('networks', []): name = net.get('name') if not name: raise exceptions.OpenStackConfigException( 'Entry in network list is missing required field "name".') network = dict( name=name, routes_externally=get_boolean(net.get('routes_externally')), nat_source=get_boolean(net.get('nat_source')), nat_destination=get_boolean(net.get('nat_destination')), default_interface=get_boolean(net.get('default_interface')), ) # routes_ipv4_externally defaults to the value of routes_externally network['routes_ipv4_externally'] = get_boolean( net.get( 'routes_ipv4_externally', network['routes_externally'])) # routes_ipv6_externally defaults to the value of routes_externally network['routes_ipv6_externally'] = get_boolean( net.get( 'routes_ipv6_externally', network['routes_externally'])) networks.append(network) for key in ('external_network', 'internal_network'): external = key.startswith('external') if key in cloud and 'networks' in cloud: raise exceptions.OpenStackConfigException( "Both {key} and networks were specified in the config." " Please remove {key} from the config and use the network" " list to configure network behavior.".format(key=key)) if key in cloud: warnings.warn( "{key} is deprecated. 
Please replace with an entry in" " a dict inside of the networks list with name: {name}" " and routes_externally: {external}".format( key=key, name=cloud[key], external=external)) networks.append(dict( name=cloud[key], routes_externally=external, nat_destination=not external, default_interface=external)) # Validate that we don't have duplicates self._validate_networks(networks, 'nat_destination') self._validate_networks(networks, 'default_interface') cloud['networks'] = networks return cloud def _handle_domain_id(self, cloud): # Allow people to just specify domain once if it's the same mappings = { 'domain_id': ('user_domain_id', 'project_domain_id'), 'domain_name': ('user_domain_name', 'project_domain_name'), } for target_key, possible_values in mappings.items(): if not self._project_scoped(cloud): if target_key in cloud and target_key not in cloud['auth']: cloud['auth'][target_key] = cloud.pop(target_key) continue for key in possible_values: if target_key in cloud['auth'] and key not in cloud['auth']: cloud['auth'][key] = cloud['auth'][target_key] cloud.pop(target_key, None) cloud['auth'].pop(target_key, None) return cloud def _fix_backwards_project(self, cloud): # Do the lists backwards so that project_name is the ultimate winner # Also handle moving domain names into auth so that domain mapping # is easier mappings = { 'domain_id': ('domain_id', 'domain-id'), 'domain_name': ('domain_name', 'domain-name'), 'user_domain_id': ('user_domain_id', 'user-domain-id'), 'user_domain_name': ('user_domain_name', 'user-domain-name'), 'project_domain_id': ('project_domain_id', 'project-domain-id'), 'project_domain_name': ( 'project_domain_name', 'project-domain-name'), 'token': ('auth-token', 'auth_token', 'token'), } if cloud.get('auth_type', None) == 'v2password': # If v2password is explicitly requested, this is to deal with old # clouds.
That's fine - we need to map settings in the opposite # direction mappings['tenant_id'] = ( 'project_id', 'project-id', 'tenant_id', 'tenant-id') mappings['tenant_name'] = ( 'project_name', 'project-name', 'tenant_name', 'tenant-name') else: mappings['project_id'] = ( 'tenant_id', 'tenant-id', 'project_id', 'project-id') mappings['project_name'] = ( 'tenant_name', 'tenant-name', 'project_name', 'project-name') for target_key, possible_values in mappings.items(): target = None for key in possible_values: if key in cloud: target = str(cloud[key]) del cloud[key] if key in cloud['auth']: target = str(cloud['auth'][key]) del cloud['auth'][key] if target: cloud['auth'][target_key] = target return cloud def _fix_backwards_auth_plugin(self, cloud): # Do the lists backwards so that auth_type is the ultimate winner mappings = { 'auth_type': ('auth_plugin', 'auth_type'), } for target_key, possible_values in mappings.items(): target = None for key in possible_values: if key in cloud: target = cloud[key] del cloud[key] cloud[target_key] = target # Because we force alignment to v3 nouns, we want to force # use of the auth plugin that can do auto-selection and dealing # with that based on auth parameters. v2password is basically # completely broken return cloud def register_argparse_arguments(self, parser, argv, service_keys=None): """Register all of the common argparse options needed. Given an argparse parser, register the keystoneauth Session arguments, the keystoneauth Auth Plugin Options and os-cloud. Also, peek in the argv to see if all of the auth plugin options should be registered or merely the ones already configured. :param argparse.ArgumentParser: parser to attach argparse options to :param argv: the arguments provided to the application :param string service_keys: Service or list of services this argparse should be specialized for, if known. 
The first item in the list will be used as the default value for service_type (optional) :raises exceptions.OpenStackConfigException if an invalid auth-type is requested """ if service_keys is None: service_keys = [] # Fix argv in place - mapping any keys with embedded _ in them to - _fix_argv(argv) local_parser = argparse_mod.ArgumentParser(add_help=False) for p in (parser, local_parser): p.add_argument( '--os-cloud', metavar='', default=self._get_envvar('OS_CLOUD', None), help='Named cloud to connect to') # we need to peek to see if timeout was actually passed, since # the keystoneauth declaration of it has a default, which means # we have no clue if the value we get is from the ksa default # or from the user passing it explicitly. We'll stash it for later local_parser.add_argument('--timeout', metavar='') # We need for get_one to be able to peek at whether a token # was passed so that we can swap the default from password to # token if it was. And we need to also peek for --os-auth-token # for novaclient backwards compat local_parser.add_argument('--os-token') local_parser.add_argument('--os-auth-token') # Peek into the future and see if we have an auth-type set in # config AND a cloud set, so that we know which command line # arguments to register and show to the user (the user may want # to say something like: # openstack --os-cloud=foo --os-oidctoken=bar # although I think that user is the cause of my personal pain options, _args = local_parser.parse_known_args(argv) if options.timeout: self._argv_timeout = True # validate = False because we're not _actually_ loading here # we're only peeking, so it's the wrong time to assert that # the rest of the arguments given are invalid for the plugin # chosen (for instance, --help may be requested, so that the # user can see what options he may want to give) cloud_region = self.get_one(argparse=options, validate=False) default_auth_type = cloud_region.config['auth_type'] try: loading.register_auth_argparse_arguments(
parser, argv, default=default_auth_type) except Exception: # Hiding the keystoneauth exception because we're not actually # loading the auth plugin at this point, so the error message # from it doesn't actually make sense to os-client-config users options, _args = parser.parse_known_args(argv) plugin_names = loading.get_available_plugin_names() raise exceptions.OpenStackConfigException( "An invalid auth-type was specified: {auth_type}." " Valid choices are: {plugin_names}.".format( auth_type=options.os_auth_type, plugin_names=",".join(plugin_names))) if service_keys: primary_service = service_keys[0] else: primary_service = None loading.register_session_argparse_arguments(parser) adapter.register_adapter_argparse_arguments( parser, service_type=primary_service) for service_key in service_keys: # legacy clients have un-prefixed api-version options parser.add_argument( '--{service_key}-api-version'.format( service_key=service_key.replace('_', '-')), help=argparse_mod.SUPPRESS) adapter.register_service_adapter_argparse_arguments( parser, service_type=service_key) # Backwards compat options for legacy clients parser.add_argument('--http-timeout', help=argparse_mod.SUPPRESS) parser.add_argument('--os-endpoint-type', help=argparse_mod.SUPPRESS) parser.add_argument('--endpoint-type', help=argparse_mod.SUPPRESS) def _fix_backwards_interface(self, cloud): new_cloud = {} for key in cloud.keys(): if key.endswith('endpoint_type'): target_key = key.replace('endpoint_type', 'interface') else: target_key = key new_cloud[target_key] = cloud[key] return new_cloud def _fix_backwards_api_timeout(self, cloud): new_cloud = {} # requests can only have one timeout, which means that in a single # cloud there is no point in different timeout values. However, # for some reason many of the legacy clients decided to shove their # service name in to the arg name for reasons surpassing sanity.
        # If we find any values that are not api_timeout, overwrite
        # api_timeout with the value
        service_timeout = None
        for key in cloud.keys():
            if key.endswith('timeout') and not (
                    key == 'timeout' or key == 'api_timeout'):
                service_timeout = cloud[key]
            else:
                new_cloud[key] = cloud[key]
        if service_timeout is not None:
            new_cloud['api_timeout'] = service_timeout

        # The common argparse arg from keystoneauth is called timeout, but
        # os-client-config expects it to be called api_timeout
        if self._argv_timeout:
            if 'timeout' in new_cloud and new_cloud['timeout']:
                new_cloud['api_timeout'] = new_cloud.pop('timeout')

        return new_cloud

    def get_all(self):
        clouds = []

        for cloud in self.get_cloud_names():
            for region in self._get_regions(cloud):
                if region:
                    clouds.append(self.get_one(
                        cloud, region_name=region['name']))

        return clouds

    # TODO(mordred) Backwards compat for OSC transition
    get_all_clouds = get_all

    def _fix_args(self, args=None, argparse=None):
        """Massage the passed-in options

        Replace - with _ and strip os_ prefixes.

        Convert an argparse Namespace object to a dict, removing values
        that are either None or ''.
""" if not args: args = {} if argparse: # Convert the passed-in Namespace o_dict = vars(argparse) parsed_args = dict() for k in o_dict: if o_dict[k] is not None and o_dict[k] != '': parsed_args[k] = o_dict[k] args.update(parsed_args) os_args = dict() new_args = dict() for (key, val) in iter(args.items()): if type(args[key]) == dict: # dive into the auth dict new_args[key] = self._fix_args(args[key]) continue key = key.replace('-', '_') if key.startswith('os_'): os_args[key[3:]] = val else: new_args[key] = val new_args.update(os_args) return new_args def _find_winning_auth_value(self, opt, config): opt_name = opt.name.replace('-', '_') if opt_name in config: return config[opt_name] else: deprecated = getattr(opt, 'deprecated', getattr( opt, 'deprecated_opts', [])) for d_opt in deprecated: d_opt_name = d_opt.name.replace('-', '_') if d_opt_name in config: return config[d_opt_name] def auth_config_hook(self, config): """Allow examination of config values before loading auth plugin OpenStackClient will override this to perform additional checks on auth_type. """ return config def _get_auth_loader(self, config): # Re-use the admin_token plugin for the "None" plugin # since it does not look up endpoints or tokens but rather # does a passthrough. This is useful for things like Ironic # that have a keystoneless operational mode, but means we're # still dealing with a keystoneauth Session object, so all the # _other_ things (SSL arg handling, timeout) all work consistently if config['auth_type'] in (None, "None", ''): config['auth_type'] = 'admin_token' # Set to notused rather than None because validate_auth will # strip the value if it's actually python None config['auth']['token'] = 'notused' elif config['auth_type'] == 'token_endpoint': # Humans have been trained to use a thing called token_endpoint # That it does not exist in keystoneauth is irrelvant- it not # doing what they want causes them sorrow. 
config['auth_type'] = 'admin_token' return loading.get_plugin_loader(config['auth_type']) def _validate_auth(self, config, loader): # May throw a keystoneauth1.exceptions.NoMatchingPlugin plugin_options = loader.get_options() for p_opt in plugin_options: # if it's in config.auth, win, kill it from config dict # if it's in config and not in config.auth, move it # deprecated loses to current # provided beats default, deprecated or not winning_value = self._find_winning_auth_value( p_opt, config['auth'], ) if not winning_value: winning_value = self._find_winning_auth_value( p_opt, config, ) config = self._clean_up_after_ourselves( config, p_opt, winning_value, ) if winning_value: # Prefer the plugin configuration dest value if the value's key # is marked as deprecated. if p_opt.dest is None: good_name = p_opt.name.replace('-', '_') config['auth'][good_name] = winning_value else: config['auth'][p_opt.dest] = winning_value # See if this needs a prompting config = self.option_prompt(config, p_opt) return config def _validate_auth_correctly(self, config, loader): # May throw a keystoneauth1.exceptions.NoMatchingPlugin plugin_options = loader.get_options() for p_opt in plugin_options: # if it's in config, win, move it and kill it from config dict # if it's in config.auth but not in config it's good # deprecated loses to current # provided beats default, deprecated or not winning_value = self._find_winning_auth_value( p_opt, config, ) if not winning_value: winning_value = self._find_winning_auth_value( p_opt, config['auth'], ) config = self._clean_up_after_ourselves( config, p_opt, winning_value, ) # See if this needs a prompting config = self.option_prompt(config, p_opt) return config def option_prompt(self, config, p_opt): """Prompt user for option that requires a value""" if ( getattr(p_opt, 'prompt', None) is not None and p_opt.dest not in config['auth'] and self._pw_callback is not None ): config['auth'][p_opt.dest] = self._pw_callback(p_opt.prompt) return config def 
_clean_up_after_ourselves(self, config, p_opt, winning_value):
        # Clean up after ourselves
        for opt in [p_opt.name] + [o.name for o in p_opt.deprecated]:
            opt = opt.replace('-', '_')
            config.pop(opt, None)
            config['auth'].pop(opt, None)

        if winning_value:
            # Prefer the plugin configuration dest value if the value's key
            # is marked as deprecated.
            if p_opt.dest is None:
                config['auth'][p_opt.name.replace('-', '_')] = (
                    winning_value)
            else:
                config['auth'][p_opt.dest] = winning_value
        return config

    def magic_fixes(self, config):
        """Perform the set of magic argument fixups"""

        # Infer token plugin if a token was given
        if (('auth' in config and 'token' in config['auth'])
                or ('auth_token' in config and config['auth_token'])
                or ('token' in config and config['token'])):
            config.setdefault('token', config.pop('auth_token', None))

        # These backwards compat values are only set via argparse. If it's
        # there, it's because it was passed in explicitly, and should win
        config = self._fix_backwards_api_timeout(config)
        if 'endpoint_type' in config:
            config['interface'] = config.pop('endpoint_type')

        config = self._fix_backwards_auth_plugin(config)
        config = self._fix_backwards_project(config)
        config = self._fix_backwards_interface(config)
        config = self._fix_backwards_networks(config)
        config = self._handle_domain_id(config)

        for key in BOOL_KEYS:
            if key in config:
                if type(config[key]) is not bool:
                    config[key] = get_boolean(config[key])

        # TODO(mordred): Special casing auth_url here. We should
        #                come back to this betterer later so that it's
        #                more generalized
        if 'auth' in config and 'auth_url' in config['auth']:
            config['auth']['auth_url'] = config['auth']['auth_url'].format(
                **config)

        return config

    def get_one(
            self, cloud=None, validate=True, argparse=None, **kwargs):
        """Retrieve a single CloudRegion and merge additional options

        :param string cloud:
            The name of the configuration to load from clouds.yaml
        :param boolean validate:
            Validate the config.
Setting this to False causes no auth plugin to be created. It's really only useful for testing. :param Namespace argparse: An argparse Namespace object; allows direct passing in of argparse options to be added to the cloud config. Values of None and '' will be removed. :param region_name: Name of the region of the cloud. :param kwargs: Additional configuration options :returns: openstack.config.cloud_region.CloudRegion :raises: keystoneauth1.exceptions.MissingRequiredOptions on missing required auth parameters """ args = self._fix_args(kwargs, argparse=argparse) if cloud is None: if 'cloud' in args: cloud = args['cloud'] else: cloud = self.default_cloud config = self._get_base_cloud_config(cloud) # Get region specific settings if 'region_name' not in args: args['region_name'] = '' region = self._get_region(cloud=cloud, region_name=args['region_name']) args['region_name'] = region['name'] region_args = copy.deepcopy(region['values']) # Regions is a list that we can use to create a list of cloud/region # objects. 
It does not belong in the single-cloud dict config.pop('regions', None) # Can't just do update, because None values take over for arg_list in region_args, args: for (key, val) in iter(arg_list.items()): if val is not None: if key == 'auth' and config[key] is not None: config[key] = _auth_update(config[key], val) else: config[key] = val config = self.magic_fixes(config) config = self._normalize_keys(config) # NOTE(dtroyer): OSC needs a hook into the auth args before the # plugin is loaded in order to maintain backward- # compatible behaviour config = self.auth_config_hook(config) if validate: loader = self._get_auth_loader(config) config = self._validate_auth(config, loader) auth_plugin = loader.load_from_options(**config['auth']) else: auth_plugin = None # If any of the defaults reference other values, we need to expand for (key, value) in config.items(): if hasattr(value, 'format') and key not in FORMAT_EXCLUSIONS: config[key] = value.format(**config) force_ipv4 = config.pop('force_ipv4', self.force_ipv4) prefer_ipv6 = config.pop('prefer_ipv6', True) if not prefer_ipv6: force_ipv4 = True if cloud is None: cloud_name = '' else: cloud_name = str(cloud) return cloud_region.CloudRegion( name=cloud_name, region_name=config['region_name'], config=config, force_ipv4=force_ipv4, auth_plugin=auth_plugin, openstack_config=self, session_constructor=self._session_constructor, app_name=self._app_name, app_version=self._app_version, ) # TODO(mordred) Backwards compat for OSC transition get_one_cloud = get_one def get_one_cloud_osc( self, cloud=None, validate=True, argparse=None, **kwargs ): """Retrieve a single CloudRegion and merge additional options :param string cloud: The name of the configuration to load from clouds.yaml :param boolean validate: Validate the config. Setting this to False causes no auth plugin to be created. It's really only useful for testing. 
:param Namespace argparse: An argparse Namespace object; allows direct passing in of argparse options to be added to the cloud config. Values of None and '' will be removed. :param region_name: Name of the region of the cloud. :param kwargs: Additional configuration options :raises: keystoneauth1.exceptions.MissingRequiredOptions on missing required auth parameters """ args = self._fix_args(kwargs, argparse=argparse) if cloud is None: if 'cloud' in args: cloud = args['cloud'] else: cloud = self.default_cloud config = self._get_base_cloud_config(cloud) # Get region specific settings if 'region_name' not in args: args['region_name'] = '' region = self._get_region(cloud=cloud, region_name=args['region_name']) args['region_name'] = region['name'] region_args = copy.deepcopy(region['values']) # Regions is a list that we can use to create a list of cloud/region # objects. It does not belong in the single-cloud dict config.pop('regions', None) # Can't just do update, because None values take over for arg_list in region_args, args: for (key, val) in iter(arg_list.items()): if val is not None: if key == 'auth' and config[key] is not None: config[key] = _auth_update(config[key], val) else: config[key] = val config = self.magic_fixes(config) # NOTE(dtroyer): OSC needs a hook into the auth args before the # plugin is loaded in order to maintain backward- # compatible behaviour config = self.auth_config_hook(config) if validate: loader = self._get_auth_loader(config) config = self._validate_auth_correctly(config, loader) auth_plugin = loader.load_from_options(**config['auth']) else: auth_plugin = None # If any of the defaults reference other values, we need to expand for (key, value) in config.items(): if hasattr(value, 'format') and key not in FORMAT_EXCLUSIONS: config[key] = value.format(**config) force_ipv4 = config.pop('force_ipv4', self.force_ipv4) prefer_ipv6 = config.pop('prefer_ipv6', True) if not prefer_ipv6: force_ipv4 = True if cloud is None: cloud_name = '' else: 
            cloud_name = str(cloud)
        return cloud_region.CloudRegion(
            name=cloud_name,
            region_name=config['region_name'],
            config=self._normalize_keys(config),
            force_ipv4=force_ipv4,
            auth_plugin=auth_plugin,
            openstack_config=self,
        )

    @staticmethod
    def set_one_cloud(config_file, cloud, set_config=None):
        """Set a single cloud configuration.

        :param string config_file:
            The path to the config file to edit. If this file does not exist
            it will be created.
        :param string cloud:
            The name of the configuration to save to clouds.yaml
        :param dict set_config: Configuration options to be set
        """
        set_config = set_config or {}
        cur_config = {}
        try:
            with open(config_file) as fh:
                cur_config = yaml.safe_load(fh)
        except IOError as e:
            # Ignore 'no such file' (ENOENT == 2); re-raise anything else
            if e.errno != 2:
                raise
        clouds_config = cur_config.get('clouds', {})
        cloud_config = _auth_update(clouds_config.get(cloud, {}), set_config)
        clouds_config[cloud] = cloud_config
        cur_config['clouds'] = clouds_config

        with open(config_file, 'w') as fh:
            yaml.safe_dump(cur_config, fh, default_flow_style=False)


if __name__ == '__main__':
    config = OpenStackConfig().get_all_clouds()
    for cloud in config:
        print_cloud = False
        if len(sys.argv) == 1:
            print_cloud = True
        elif len(sys.argv) == 3 and (
                sys.argv[1] == cloud.name and sys.argv[2] == cloud.region):
            print_cloud = True
        elif len(sys.argv) == 2 and (
                sys.argv[1] == cloud.name):
            print_cloud = True
        if print_cloud:
            print(cloud.name, cloud.region, cloud.config)
openstacksdk-0.11.3/openstack/format.py0000666000175100017510000000312513236151340020143 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. class Formatter(object): @classmethod def serialize(cls, value): """Return a string representing the formatted value""" raise NotImplementedError @classmethod def deserialize(cls, value): """Return a formatted object representing the value""" raise NotImplementedError class BoolStr(Formatter): @classmethod def deserialize(cls, value): """Convert a boolean string to a boolean""" expr = str(value).lower() if "true" == expr: return True elif "false" == expr: return False else: raise ValueError("Unable to deserialize boolean string: %s" % value) @classmethod def serialize(cls, value): """Convert a boolean to a boolean string""" if isinstance(value, bool): if value: return "true" else: return "false" else: raise ValueError("Unable to serialize boolean string: %s" % value) openstacksdk-0.11.3/openstack/proxy.py0000666000175100017510000003465013236151340020043 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import _adapter from openstack import exceptions from openstack import resource from openstack import utils # The _check_resource decorator is used on BaseProxy methods to ensure that # the `actual` argument is in fact the type of the `expected` argument. # It does so under two cases: # 1. When strict=False, if and only if `actual` is a Resource instance, # it is checked to see that it's an instance of the `expected` class. 
# This allows `actual` to be other types, such as strings, when it makes # sense to accept a raw id value. # 2. When strict=True, `actual` must be an instance of the `expected` class. def _check_resource(strict=False): def wrap(method): def check(self, expected, actual=None, *args, **kwargs): if (strict and actual is not None and not isinstance(actual, resource.Resource)): raise ValueError("A %s must be passed" % expected.__name__) elif (isinstance(actual, resource.Resource) and not isinstance(actual, expected)): raise ValueError("Expected %s but received %s" % ( expected.__name__, actual.__class__.__name__)) return method(self, expected, actual, *args, **kwargs) return check return wrap class BaseProxy(_adapter.OpenStackSDKAdapter): """Represents a service.""" def _get_resource(self, resource_type, value, **attrs): """Get a resource object to work on :param resource_type: The type of resource to operate on. This should be a subclass of :class:`~openstack.resource.Resource` with a ``from_id`` method. :param value: The ID of a resource or an object of ``resource_type`` class if using an existing instance, or None to create a new instance. :param path_args: A dict containing arguments for forming the request URL, if needed. """ if value is None: # Create a bare resource res = resource_type.new(**attrs) elif not isinstance(value, resource_type): # Create from an ID res = resource_type.new(id=value, **attrs) else: # An existing resource instance res = value res._update(**attrs) return res def _get_uri_attribute(self, child, parent, name): """Get a value to be associated with a URI attribute `child` will not be None here as it's a required argument on the proxy method. `parent` is allowed to be None if `child` is an actual resource, but when an ID is given for the child one must also be provided for the parent. An example of this is that a parent is a Server and a child is a ServerInterface. 
""" if parent is None: value = getattr(child, name) else: value = resource.Resource._get_id(parent) return value def _find(self, resource_type, name_or_id, ignore_missing=True, **attrs): """Find a resource :param name_or_id: The name or ID of a resource to find. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :param dict attrs: Attributes to be passed onto the :meth:`~openstack.resource.Resource.find` method, such as query parameters. :returns: An instance of ``resource_type`` or None """ return resource_type.find(self, name_or_id, ignore_missing=ignore_missing, **attrs) @_check_resource(strict=False) def _delete(self, resource_type, value, ignore_missing=True, **attrs): """Delete a resource :param resource_type: The type of resource to delete. This should be a :class:`~openstack.resource.Resource` subclass with a ``from_id`` method. :param value: The value to delete. Can be either the ID of a resource or a :class:`~openstack.resource.Resource` subclass. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent resource. :param dict attrs: Attributes to be passed onto the :meth:`~openstack.resource.Resource.delete` method, such as the ID of a parent resource. :returns: The result of the ``delete`` :raises: ``ValueError`` if ``value`` is a :class:`~openstack.resource.Resource` that doesn't match the ``resource_type``. :class:`~openstack.exceptions.ResourceNotFound` when ignore_missing if ``False`` and a nonexistent resource is attempted to be deleted. 
""" res = self._get_resource(resource_type, value, **attrs) try: rv = res.delete( self, error_message=( "Unable to delete {resource_type} for {value}".format( resource_type=resource_type.__name__, value=value, ) ) ) except exceptions.NotFoundException: if ignore_missing: return None raise return rv @_check_resource(strict=False) def _update(self, resource_type, value, **attrs): """Update a resource :param resource_type: The type of resource to update. :type resource_type: :class:`~openstack.resource.Resource` :param value: The resource to update. This must either be a :class:`~openstack.resource.Resource` or an id that corresponds to a resource. :param dict attrs: Attributes to be passed onto the :meth:`~openstack.resource.Resource.update` method to be updated. These should correspond to either :class:`~openstack.resource.Body` or :class:`~openstack.resource.Header` values on this resource. :returns: The result of the ``update`` :rtype: :class:`~openstack.resource.Resource` """ res = self._get_resource(resource_type, value, **attrs) return res.update(self) def _create(self, resource_type, **attrs): """Create a resource from attributes :param resource_type: The type of resource to create. :type resource_type: :class:`~openstack.resource.Resource` :param path_args: A dict containing arguments for forming the request URL, if needed. :param dict attrs: Attributes to be passed onto the :meth:`~openstack.resource.Resource.create` method to be created. These should correspond to either :class:`~openstack.resource.Body` or :class:`~openstack.resource.Header` values on this resource. :returns: The result of the ``create`` :rtype: :class:`~openstack.resource.Resource` """ res = resource_type.new(**attrs) return res.create(self) @_check_resource(strict=False) def _get(self, resource_type, value=None, requires_id=True, **attrs): """Get a resource :param resource_type: The type of resource to get. 
:type resource_type: :class:`~openstack.resource.Resource` :param value: The value to get. Can be either the ID of a resource or a :class:`~openstack.resource.Resource` subclass. :param dict attrs: Attributes to be passed onto the :meth:`~openstack.resource.Resource.get` method. These should correspond to either :class:`~openstack.resource.Body` or :class:`~openstack.resource.Header` values on this resource. :returns: The result of the ``get`` :rtype: :class:`~openstack.resource.Resource` """ res = self._get_resource(resource_type, value, **attrs) return res.get( self, requires_id=requires_id, error_message="No {resource_type} found for {value}".format( resource_type=resource_type.__name__, value=value)) def _list(self, resource_type, value=None, paginated=False, **attrs): """List a resource :param resource_type: The type of resource to delete. This should be a :class:`~openstack.resource.Resource` subclass with a ``from_id`` method. :param value: The resource to list. It can be the ID of a resource, or a :class:`~openstack.resource.Resource` object. When set to None, a new bare resource is created. :param bool paginated: When set to ``False``, expect all of the data to be returned in one response. When set to ``True``, the resource supports data being returned across multiple pages. :param dict attrs: Attributes to be passed onto the :meth:`~openstack.resource.Resource.list` method. These should correspond to either :class:`~openstack.resource.URI` values or appear in :data:`~openstack.resource.Resource._query_mapping`. :returns: A generator of Resource objects. :raises: ``ValueError`` if ``value`` is a :class:`~openstack.resource.Resource` that doesn't match the ``resource_type``. """ res = self._get_resource(resource_type, value, **attrs) return res.list(self, paginated=paginated, **attrs) def _head(self, resource_type, value=None, **attrs): """Retrieve a resource's header :param resource_type: The type of resource to retrieve. 
        :type resource_type: :class:`~openstack.resource.Resource`
        :param value: The value of a specific resource to retrieve headers
                      for. Can be either the ID of a resource,
                      a :class:`~openstack.resource.Resource` subclass,
                      or ``None``.
        :param dict attrs: Attributes to be passed onto the
                           :meth:`~openstack.resource.Resource.head` method.
                           These should correspond to
                           :class:`~openstack.resource.URI` values.

        :returns: The result of the ``head`` call
        :rtype: :class:`~openstack.resource.Resource`
        """
        res = self._get_resource(resource_type, value, **attrs)
        return res.head(self)

    @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0",
                      details=("This is no longer a part of the proxy base, "
                               "service-specific subclasses should expose "
                               "this as needed. See resource.wait_for_status "
                               "for this behavior"))
    def wait_for_status(self, value, status, failures=None,
                        interval=2, wait=120):
        """Wait for a resource to be in a particular status.

        :param value: The resource to wait on to reach the status. The
                      resource must have a status attribute.
        :type value: :class:`~openstack.resource.Resource`
        :param status: Desired status of the resource.
        :param list failures: Statuses that would indicate the transition
                              failed such as 'ERROR'.
        :param interval: Number of seconds to wait between checks.
        :param wait: Maximum number of seconds to wait for the change.

        :return: Method returns resource on success.
        :raises: :class:`~openstack.exceptions.ResourceTimeout` if the
                 transition to status failed to occur in wait seconds.
        :raises: :class:`~openstack.exceptions.ResourceFailure` if the
                 resource transitioned to one of the failure states.
:raises: :class:`~AttributeError` if the resource does not have a status attribute """ failures = [] if failures is None else failures return resource.wait_for_status( self, value, status, failures, interval, wait) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details=("This is no longer a part of the proxy base, " "service-specific subclasses should expose " "this as needed. See resource.wait_for_delete " "for this behavior")) def wait_for_delete(self, value, interval=2, wait=120): """Wait for the resource to be deleted. :param value: The resource to wait on to be deleted. :type value: :class:`~openstack.resource.Resource` :param interval: Number of seconds to wait between checks. :param wait: Maximum number of seconds to wait for the delete. :return: Method returns resource on success. :raises: :class:`~openstack.exceptions.ResourceTimeout` transition to delete failed to occur in wait seconds. """ return resource.wait_for_delete(self, value, interval, wait) openstacksdk-0.11.3/openstack/baremetal/0000775000175100017510000000000013236151501020231 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/baremetal/version.py0000666000175100017510000000201113236151340022265 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
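The deprecated ``wait_for_status``/``wait_for_delete`` helpers in proxy.py above delegate to ``resource.wait_for_status``. A minimal, self-contained sketch of that polling pattern follows; the ``fetch`` callable and the exception classes here are illustrative stand-ins for this sketch, not the SDK's actual signatures:

```python
import time


class ResourceTimeout(Exception):
    """Stand-in for openstack.exceptions.ResourceTimeout."""


class ResourceFailure(Exception):
    """Stand-in for openstack.exceptions.ResourceFailure."""


def wait_for_status(fetch, status, failures=(), interval=2, wait=120):
    """Poll fetch() until its result reaches ``status``.

    Raises ResourceFailure if a failure state is seen first, or
    ResourceTimeout if ``wait`` seconds elapse without a match.
    """
    deadline = time.monotonic() + wait
    while time.monotonic() < deadline:
        res = fetch()
        if res.status == status:
            return res
        if res.status in failures:
            raise ResourceFailure(
                "resource transitioned to failure state %s" % res.status)
        time.sleep(interval)
    raise ResourceTimeout("timed out waiting for status %s" % status)
```

The real helper additionally short-circuits when the resource already has the desired status and normalizes status comparison; this sketch only shows the poll/compare/sleep loop.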
from openstack.baremetal import baremetal_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = baremetal_service.BaremetalService( version=baremetal_service.BaremetalService.UNVERSIONED ) # Capabilities allow_list = True # Attributes links = resource.Body('links') status = resource.Body('status') updated = resource.Body('updated') openstacksdk-0.11.3/openstack/baremetal/v1/0000775000175100017510000000000013236151501020557 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/baremetal/v1/chassis.py0000666000175100017510000000371613236151340022600 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.baremetal import baremetal_service from openstack import resource class Chassis(resource.Resource): resources_key = 'chassis' base_path = '/chassis' service = baremetal_service.BaremetalService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'fields' ) #: Timestamp at which the chassis was created. created_at = resource.Body('created_at') #: A descriptive text about the service description = resource.Body('description') #: A set of one or more arbitrary metadata key and value pairs. 
extra = resource.Body('extra') #: The UUID for the chassis id = resource.Body('uuid', alternate_id=True) #: A list of relative links, including the self and bookmark links. links = resource.Body('links', type=list) #: Links to the collection of nodes contained in the chassis nodes = resource.Body('nodes', type=list) #: Timestamp at which the chassis was last updated. updated_at = resource.Body('updated_at') class ChassisDetail(Chassis): base_path = '/chassis/detail' # capabilities allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True #: The UUID for the chassis id = resource.Body('uuid', alternate_id=True) openstacksdk-0.11.3/openstack/baremetal/v1/driver.py0000666000175100017510000000247513236151340022437 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.baremetal import baremetal_service from openstack import resource class Driver(resource.Resource): resources_key = 'drivers' base_path = '/drivers' service = baremetal_service.BaremetalService() # capabilities allow_create = False allow_get = True allow_update = False allow_delete = False allow_list = True # NOTE: Query mapping? #: The name of the driver name = resource.Body('name', alternate_id=True) #: A list of active hosts that support this driver. hosts = resource.Body('hosts', type=list) #: A list of relative links, including the self and bookmark links. links = resource.Body('links', type=list) #: A list of links to driver properties. 
    properties = resource.Body('properties', type=list)
openstacksdk-0.11.3/openstack/baremetal/v1/port.py0000666000175100017510000000551013236151340022121 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.baremetal import baremetal_service
from openstack import resource


class Port(resource.Resource):

    resources_key = 'ports'
    base_path = '/ports'
    service = baremetal_service.BaremetalService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True
    update_method = 'PATCH'

    _query_mapping = resource.QueryParameters(
        'fields'
    )

    #: The physical hardware address of the network port, typically the
    #: hardware MAC address.
    address = resource.Body('address')
    #: Timestamp at which the port was created.
    created_at = resource.Body('created_at')
    #: A set of one or more arbitrary metadata key and value pairs.
    extra = resource.Body('extra')
    #: The UUID of the port
    id = resource.Body('uuid', alternate_id=True)
    #: Internal metadata set and stored by the port. This field is read-only.
    #: Added in API microversion 1.18.
    internal_info = resource.Body('internal_info')
    #: Whether PXE is enabled on the port. Added in API microversion 1.19.
    is_pxe_enabled = resource.Body('pxe_enabled', type=bool)
    #: A list of relative links, including the self and bookmark links.
    links = resource.Body('links', type=list)
    #: The port binding profile. If specified, must contain ``switch_id`` and
    #: ``port_id`` fields.
``switch_info`` field is an optional string field #: to be used to store vendor specific information. Added in API #: microversion 1.19. local_link_connection = resource.Body('local_link_connection') #: The UUID of node this port belongs to node_id = resource.Body('node_uuid') #: The UUID of PortGroup this port belongs to. Added in API microversion #: 1.23. port_group_id = resource.Body('portgroup_uuid') #: Timestamp at which the port was last updated. updated_at = resource.Body('updated_at') class PortDetail(Port): base_path = '/ports/detail' # capabilities allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True _query_mapping = resource.QueryParameters( 'address', 'fields', 'node', 'portgroup', node_id='node_uuid', ) #: The UUID of the port id = resource.Body('uuid', alternate_id=True) openstacksdk-0.11.3/openstack/baremetal/v1/node.py0000666000175100017510000001353613236151340022071 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
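The ``resource.Body`` declarations used by ``Port`` above map Python attribute names onto wire-format field names, with ``alternate_id=True`` letting the ``uuid`` field double as the resource's ``id``. A simplified, standalone sketch of that descriptor idea (hypothetical names; the real ``openstack.resource`` implementation is considerably richer):

```python
class Body:
    """Descriptor mapping a Python attribute to a wire-format field name."""

    def __init__(self, name, type=None, alternate_id=False):
        self.name = name          # field name as it appears on the wire
        self.type = type          # optional coercion applied on read
        self.alternate_id = alternate_id

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = instance._body.get(self.name)
        if value is not None and self.type is not None:
            value = self.type(value)
        return value

    def __set__(self, instance, value):
        instance._body[self.name] = value


class MiniPort:
    """Toy resource: ``id`` reads the wire field ``uuid`` (alternate_id)."""

    id = Body('uuid', alternate_id=True)
    address = Body('address')

    def __init__(self, **body):
        self._body = dict(body)


port = MiniPort(uuid='abc-123', address='52:54:00:12:34:56')
print(port.id)       # abc-123
print(port.address)  # 52:54:00:12:34:56
```

The point of the indirection is that server-side field names (``uuid``, ``pxe_enabled``) need not match the Pythonic attribute names (``id``, ``is_pxe_enabled``) exposed to SDK users.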
from openstack.baremetal import baremetal_service from openstack import resource class Node(resource.Resource): resources_key = 'nodes' base_path = '/nodes' service = baremetal_service.BaremetalService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'associated', 'driver', 'fields', 'provision_state', 'resource_class', instance_id='instance_uuid', is_maintenance='maintenance', ) # Properties #: The UUID of the chassis associated with this node. Can be empty or None. chassis_id = resource.Body("chassis_uuid") #: The current clean step. clean_step = resource.Body("clean_step") #: Timestamp at which the node was created. created_at = resource.Body("created_at") #: The name of the driver. driver = resource.Body("driver") #: All the metadata required by the driver to manage this node. List of #: fields varies between drivers, and can be retrieved from the #: :class:`openstack.baremetal.v1.driver.Driver` resource. driver_info = resource.Body("driver_info", type=dict) #: Internal metadata set and stored by the node's driver. This is read-only. driver_internal_info = resource.Body("driver_internal_info", type=dict) #: A set of one or more arbitrary metadata key and value pairs. extra = resource.Body("extra") #: The UUID of the node resource. id = resource.Body("uuid", alternate_id=True) #: Information used to customize the deployed image, e.g. size of root #: partition, config drive in the form of base64 encoded string and other #: metadata. instance_info = resource.Body("instance_info") #: UUID of the nova instance associated with this node. instance_id = resource.Body("instance_uuid") #: Whether console access is enabled on this node. is_console_enabled = resource.Body("console_enabled", type=bool) #: Whether the node is currently in "maintenance mode". Nodes put into #: maintenance mode are removed from the available resource pool.
is_maintenance = resource.Body("maintenance", type=bool) #: Any error from the most recent transaction that started but failed to #: finish. last_error = resource.Body("last_error") #: A list of relative links, including self and bookmark links. links = resource.Body("links", type=list) #: user settable description of the reason why the node was placed into #: maintenance mode. maintenance_reason = resource.Body("maintenance_reason") #: Human readable identifier for the node. May be undefined. Certain words #: are reserved. Added in API microversion 1.5 name = resource.Body("name") #: Network interface provider to use when plumbing the network connections #: for this node. Introduced in API microversion 1.20. network_interface = resource.Body("network_interface") #: Links to the collection of ports on this node. ports = resource.Body("ports", type=list) #: Links to the collection of portgroups on this node. Available since #: API microversion 1.24. port_groups = resource.Body("portgroups", type=list) #: The current power state. Usually "power on" or "power off", but may be #: "None" if service is unable to determine the power state. power_state = resource.Body("power_state") #: Physical characteristics of the node. Content populated by the service #: during inspection. properties = resource.Body("properties", type=dict) #: The current provisioning state of the node. provision_state = resource.Body("provision_state") #: The current RAID configuration of the node. raid_config = resource.Body("raid_config") #: The name of an service conductor host which is holding a lock on this #: node, if a lock is held. reservation = resource.Body("reservation") #: A string to be used by external schedulers to identify this node as a #: unit of a specific type of resource. Added in API microversion 1.21. resource_class = resource.Body("resource_class") #: Links to the collection of states. 
states = resource.Body("states", type=list) #: The requested state if a provisioning action has been requested. For #: example, ``AVAILABLE``, ``DEPLOYING``, ``DEPLOYWAIT``, #: ``ACTIVE``, etc. target_provision_state = resource.Body("target_provision_state") #: The requested state during a state transition. target_power_state = resource.Body("target_power_state") #: The requested RAID configuration of the node which will be applied when #: the node next transitions through the CLEANING state. target_raid_config = resource.Body("target_raid_config") #: Timestamp at which the node was last updated. updated_at = resource.Body("updated_at") class NodeDetail(Node): base_path = '/nodes/detail' # capabilities allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True _query_mapping = resource.QueryParameters( 'associated', 'driver', 'fields', 'provision_state', 'resource_class', instance_id='instance_uuid', is_maintenance='maintenance', ) #: The UUID of the node resource. id = resource.Body("uuid", alternate_id=True) openstacksdk-0.11.3/openstack/baremetal/v1/__init__.py0000666000175100017510000000000013236151340022661 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/baremetal/v1/_proxy.py0000666000175100017510000007421713236151340022467 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
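``Node`` and ``NodeDetail`` above differ only in ``base_path`` and capabilities; the proxy's listing methods select between such base/detail pairs with a ``details`` flag and then delegate to a generic ``_list``. A self-contained sketch of that dispatch pattern (illustrative names only, not the SDK's internals):

```python
class Node:
    base_path = '/nodes'


class NodeDetail(Node):
    base_path = '/nodes/detail'


class MiniProxy:
    def _list(self, cls, paginated=True, **query):
        # The real proxy issues paginated GET requests against
        # cls.base_path; here we just report what would be queried.
        yield (cls.base_path, query)

    def nodes(self, details=False, **query):
        # details=True swaps in the detail resource class.
        cls = NodeDetail if details else Node
        return self._list(cls, paginated=True, **query)


proxy = MiniProxy()
print(next(proxy.nodes()))                             # ('/nodes', {})
print(next(proxy.nodes(details=True, driver='ipmi')))  # ('/nodes/detail', {'driver': 'ipmi'})
```

Because the detail class subclasses the base resource, callers get the same object type back either way; only the endpoint queried changes.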
from openstack.baremetal.v1 import chassis as _chassis from openstack.baremetal.v1 import driver as _driver from openstack.baremetal.v1 import node as _node from openstack.baremetal.v1 import port as _port from openstack.baremetal.v1 import port_group as _portgroup from openstack import proxy from openstack import utils class Proxy(proxy.BaseProxy): def chassis(self, details=False, **query): """Retrieve a generator of chassis. :param details: A boolean indicating whether the detailed information for every chassis should be returned. :param dict query: Optional query parameters to be sent to restrict the chassis to be returned. Available parameters include: * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. * ``limit``: Requests at most the specified number of items be returned from the query. * ``marker``: Specifies the ID of the last-seen chassis. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen chassis from the response as the ``marker`` value in a subsequent limited request. * ``sort_dir``: Sorts the response by the requested sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. * ``sort_key``: Sorts the response by this attribute value. Default is ``id``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. :returns: A generator of chassis instances.
""" cls = _chassis.ChassisDetail if details else _chassis.Chassis return self._list(cls, paginated=True, **query) def create_chassis(self, **attrs): """Create a new chassis from attributes. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.baremetal.v1.chassis.Chassis`, it comprised of the properties on the ``Chassis`` class. :returns: The results of chassis creation. :rtype: :class:`~openstack.baremetal.v1.chassis.Chassis`. """ return self._create(_chassis.Chassis, **attrs) def find_chassis(self, name_or_id, ignore_missing=True): """Find a single chassis. :param str name_or_id: The name or ID of a chassis. :param bool ignore_missing: When set to ``False``, an exception of :class:`~openstack.exceptions.ResourceNotFound` will be raised when the chassis does not exist. When set to `True``, None will be returned when attempting to find a nonexistent chassis. :returns: One :class:`~openstack.baremetal.v1.chassis.Chassis` object or None. """ return self._find(_chassis.Chassis, name_or_id, ignore_missing=ignore_missing) def get_chassis(self, chassis): """Get a specific chassis. :param chassis: The value can be the name or ID of a chassis or a :class:`~openstack.baremetal.v1.chassis.Chassis` instance. :returns: One :class:`~openstack.baremetal.v1.chassis.Chassis` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no chassis matching the name or ID could be found. """ return self._get(_chassis.Chassis, chassis) def update_chassis(self, chassis, **attrs): """Update a chassis. :param chassis: Either the name or the ID of a chassis, or an instance of :class:`~openstack.baremetal.v1.chassis.Chassis`. :param dict attrs: The attributes to update on the chassis represented by the ``chassis`` parameter. :returns: The updated chassis. :rtype: :class:`~openstack.baremetal.v1.chassis.Chassis` """ return self._update(_chassis.Chassis, chassis, **attrs) def delete_chassis(self, chassis, ignore_missing=True): """Delete a chassis. 
:param chassis: The value can be either the name or ID of a chassis or a :class:`~openstack.baremetal.v1.chassis.Chassis` instance. :param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the chassis could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent chassis. :returns: The instance of the chassis which was deleted. :rtype: :class:`~openstack.baremetal.v1.chassis.Chassis`. """ return self._delete(_chassis.Chassis, chassis, ignore_missing=ignore_missing) def drivers(self): """Retrieve a generator of drivers. :returns: A generator of driver instances. """ return self._list(_driver.Driver, paginated=False) def get_driver(self, driver): """Get a specific driver. :param driver: The value can be the name of a driver or a :class:`~openstack.baremetal.v1.driver.Driver` instance. :returns: One :class:`~openstack.baremetal.v1.driver.Driver` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no driver matching the name could be found. """ return self._get(_driver.Driver, driver) def nodes(self, details=False, **query): """Retrieve a generator of nodes. :param details: A boolean indicating whether the detailed information for every node should be returned. :param dict query: Optional query parameters to be sent to restrict the nodes returned. Available parameters include: * ``associated``: Only return those which are, or are not, associated with an ``instance_id``. * ``driver``: Only return those with the specified ``driver``. * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. * ``instance_id``: Only return the node with this specific instance UUID or an empty set if not found. * ``is_maintenance``: Only return those with ``maintenance`` set to ``True`` or ``False``. 
* ``limit``: Requests at most the specified number of nodes be returned from the query. * ``marker``: Specifies the ID of the last-seen node. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen node from the response as the ``marker`` value in a subsequent limited request. * ``provision_state``: Only return those nodes with the specified ``provision_state``. * ``sort_dir``: Sorts the response by the requested sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. * ``sort_key``: Sorts the response by this attribute value. Default is ``id``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. :returns: A generator of node instances. """ cls = _node.NodeDetail if details else _node.Node return self._list(cls, paginated=True, **query) def create_node(self, **attrs): """Create a new node from attributes. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.baremetal.v1.node.Node`, it is comprised of the properties on the ``Node`` class. :returns: The results of node creation. :rtype: :class:`~openstack.baremetal.v1.node.Node`. """ return self._create(_node.Node, **attrs) def find_node(self, name_or_id, ignore_missing=True): """Find a single node. :param str name_or_id: The name or ID of a node. :param bool ignore_missing: When set to ``False``, an exception of :class:`~openstack.exceptions.ResourceNotFound` will be raised when the node does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent node.
:returns: One :class:`~openstack.baremetal.v1.node.Node` object or None. """ return self._find(_node.Node, name_or_id, ignore_missing=ignore_missing) def get_node(self, node): """Get a specific node. :param node: The value can be the name or ID of a node or a :class:`~openstack.baremetal.v1.node.Node` instance. :returns: One :class:`~openstack.baremetal.v1.node.Node` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no node matching the name or ID could be found. """ return self._get(_node.Node, node) def update_node(self, node, **attrs): """Update a node. :param node: Either the name or the ID of a node or an instance of :class:`~openstack.baremetal.v1.node.Node`. :param dict attrs: The attributes to update on the node represented by the ``node`` parameter. :returns: The updated node. :rtype: :class:`~openstack.baremetal.v1.node.Node` """ return self._update(_node.Node, node, **attrs) def delete_node(self, node, ignore_missing=True): """Delete a node. :param node: The value can be either the name or ID of a node or a :class:`~openstack.baremetal.v1.node.Node` instance. :param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the node could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent node. :returns: The instance of the node which was deleted. :rtype: :class:`~openstack.baremetal.v1.node.Node`. """ return self._delete(_node.Node, node, ignore_missing=ignore_missing) def ports(self, details=False, **query): """Retrieve a generator of ports. :param details: A boolean indicating whether the detailed information for every port should be returned. :param dict query: Optional query parameters to be sent to restrict the ports returned. Available parameters include: * ``address``: Only return ports with the specified physical hardware address, typically a MAC address.
* ``driver``: Only return those with the specified ``driver``. * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. * ``limit``: Requests at most the specified number of ports be returned from the query. * ``marker``: Specifies the ID of the last-seen port. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen port from the response as the ``marker`` value in a subsequent limited request. * ``node``: Only return the ones associated with this specific node (name or UUID), or an empty set if not found. * ``node_id``: Only return the ones associated with this specific node UUID, or an empty set if not found. * ``portgroup``: Only return the ports associated with this specific Portgroup (name or UUID), or an empty set if not found. Added in API microversion 1.24. * ``sort_dir``: Sorts the response by the requested sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. * ``sort_key``: Sorts the response by this attribute value. Default is ``id``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. :returns: A generator of port instances. """ cls = _port.PortDetail if details else _port.Port return self._list(cls, paginated=True, **query) def create_port(self, **attrs): """Create a new port from attributes. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.baremetal.v1.port.Port`, it is comprised of the properties on the ``Port`` class.
:returns: The results of port creation. :rtype: :class:`~openstack.baremetal.v1.port.Port`. """ return self._create(_port.Port, **attrs) def find_port(self, name_or_id, ignore_missing=True): """Find a single port. :param str name_or_id: The name or ID of a port. :param bool ignore_missing: When set to ``False``, an exception of :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent port. :returns: One :class:`~openstack.baremetal.v1.port.Port` object or None. """ return self._find(_port.Port, name_or_id, ignore_missing=ignore_missing) def get_port(self, port, **query): """Get a specific port. :param port: The value can be the name or ID of a port or a :class:`~openstack.baremetal.v1.port.Port` instance. :param dict query: Optional query parameters to be sent to restrict the port properties returned. Available parameters include: * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. :returns: One :class:`~openstack.baremetal.v1.port.Port` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no port matching the name or ID could be found. """ return self._get(_port.Port, port, **query) def update_port(self, port, **attrs): """Update a port. :param port: Either the name or the ID of a port or an instance of :class:`~openstack.baremetal.v1.port.Port`. :param dict attrs: The attributes to update on the port represented by the ``port`` parameter. :returns: The updated port. :rtype: :class:`~openstack.baremetal.v1.port.Port` """ return self._update(_port.Port, port, **attrs) def delete_port(self, port, ignore_missing=True): """Delete a port. :param port: The value can be either the name or ID of a port or a :class:`~openstack.baremetal.v1.port.Port` instance.
:param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent port. :returns: The instance of the port which was deleted. :rtype: :class:`~openstack.baremetal.v1.port.Port`. """ return self._delete(_port.Port, port, ignore_missing=ignore_missing) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use port_groups instead") def portgroups(self, details=False, **query): """Retrieve a generator of port groups. :param details: A boolean indicating whether the detailed information for every portgroup should be returned. :param dict query: Optional query parameters to be sent to restrict the portgroups returned. Available parameters include: * ``address``: Only return portgroups with the specified physical hardware address, typically a MAC address. * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. * ``limit``: Requests at most the specified number of portgroups returned from the query. * ``marker``: Specifies the ID of the last-seen portgroup. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen portgroup from the response as the ``marker`` value in a subsequent limited request. * ``node``: Only return the ones associated with this specific node (name or UUID), or an empty set if not found. * ``sort_dir``: Sorts the response by the requested sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``.
* ``sort_key``: Sorts the response by this attribute value. Default is ``id``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. :returns: A generator of portgroup instances. """ return self.port_groups(details=details, **query) def port_groups(self, details=False, **query): """Retrieve a generator of port groups. :param details: A boolean indicating whether the detailed information for every port group should be returned. :param dict query: Optional query parameters to be sent to restrict the port groups returned. Available parameters include: * ``address``: Only return portgroups with the specified physical hardware address, typically a MAC address. * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. * ``limit``: Requests at most the specified number of portgroups returned from the query. * ``marker``: Specifies the ID of the last-seen portgroup. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen portgroup from the response as the ``marker`` value in a subsequent limited request. * ``node``: Only return the ones associated with this specific node (name or UUID), or an empty set if not found. * ``sort_dir``: Sorts the response by the requested sort direction. A valid value is ``asc`` (ascending) or ``desc`` (descending). Default is ``asc``. You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. * ``sort_key``: Sorts the response by this attribute value. Default is ``id``.
You can specify multiple pairs of sort key and sort direction query parameters. If you omit the sort direction in a pair, the API uses the natural sorting direction of the server attribute that is provided as the ``sort_key``. :returns: A generator of port group instances. """ cls = _portgroup.PortGroupDetail if details else _portgroup.PortGroup return self._list(cls, paginated=True, **query) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use create_port_group instead") def create_portgroup(self, **attrs): """Create a new port group from attributes. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.baremetal.v1.port_group.PortGroup`, it is comprised of the properties on the ``PortGroup`` class. :returns: The results of portgroup creation. :rtype: :class:`~openstack.baremetal.v1.port_group.PortGroup`. """ return self.create_port_group(**attrs) def create_port_group(self, **attrs): """Create a new portgroup from attributes. :param dict attrs: Keyword arguments that will be used to create a :class:`~openstack.baremetal.v1.port_group.PortGroup`, it is comprised of the properties on the ``PortGroup`` class. :returns: The results of portgroup creation. :rtype: :class:`~openstack.baremetal.v1.port_group.PortGroup`. """ return self._create(_portgroup.PortGroup, **attrs) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use find_port_group instead") def find_portgroup(self, name_or_id, ignore_missing=True): """Find a single port group. :param str name_or_id: The name or ID of a portgroup. :param bool ignore_missing: When set to ``False``, an exception of :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port group does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent port group. :returns: One :class:`~openstack.baremetal.v1.port_group.PortGroup` object or None.
""" return self.find_port_group(name_or_id, ignore_missing=ignore_missing) def find_port_group(self, name_or_id, ignore_missing=True): """Find a single port group. :param str name_or_id: The name or ID of a portgroup. :param bool ignore_missing: When set to ``False``, an exception of :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port group does not exist. When set to `True``, None will be returned when attempting to find a nonexistent port group. :returns: One :class:`~openstack.baremetal.v1.port_group.PortGroup` object or None. """ return self._find(_portgroup.PortGroup, name_or_id, ignore_missing=ignore_missing) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use get_port_group instead") def get_portgroup(self, portgroup, **query): """Get a specific port group. :param portgroup: The value can be the name or ID of a chassis or a :class:`~openstack.baremetal.v1.port_group.PortGroup` instance. :param dict query: Optional query parameters to be sent to restrict the portgroup properties returned. Available parameters include: * ``fields``: A list containing one or more fields to be returned in the response. This may lead to some performance gain because other fields of the resource are not refreshed. :returns: One :class:`~openstack.baremetal.v1.port_group.PortGroup` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no port group matching the name or ID could be found. """ return self.get_port_group(portgroup, **query) def get_port_group(self, port_group, **query): """Get a specific port group. :param port_group: The value can be the name or ID of a chassis or a :class:`~openstack.baremetal.v1.port_group.PortGroup` instance. :param dict query: Optional query parameters to be sent to restrict the port group properties returned. Available parameters include: * ``fields``: A list containing one or more fields to be returned in the response. 
This may lead to some performance gain because other fields of the resource are not refreshed. :returns: One :class:`~openstack.baremetal.v1.port_group.PortGroup` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no port group matching the name or ID could be found. """ return self._get(_portgroup.PortGroup, port_group, **query) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use update_port_group instead") def update_portgroup(self, portgroup, **attrs): """Update a port group. :param portgroup: Either the name or the ID of a port group or an instance of :class:`~openstack.baremetal.v1.port_group.PortGroup`. :param dict attrs: The attributes to update on the port group represented by the ``portgroup`` parameter. :returns: The updated port group. :rtype: :class:`~openstack.baremetal.v1.port_group.PortGroup` """ return self.update_port_group(portgroup, **attrs) def update_port_group(self, port_group, **attrs): """Update a port group. :param port_group: Either the name or the ID of a port group or an instance of :class:`~openstack.baremetal.v1.port_group.PortGroup`. :param dict attrs: The attributes to update on the port group represented by the ``port_group`` parameter. :returns: The updated port group. :rtype: :class:`~openstack.baremetal.v1.port_group.PortGroup` """ return self._update(_portgroup.PortGroup, port_group, **attrs) @utils.deprecated(deprecated_in="0.9.14", removed_in="1.0", details="Use delete_port_group instead") def delete_portgroup(self, portgroup, ignore_missing=True): """Delete a port group. :param portgroup: The value can be either the name or ID of a port group or a :class:`~openstack.baremetal.v1.port_group.PortGroup` instance. :param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port group could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent port group.
:returns: The instance of the port group which was deleted. :rtype: :class:`~openstack.baremetal.v1.port_group.PortGroup`. """ return self.delete_port_group(portgroup, ignore_missing=ignore_missing) def delete_port_group(self, port_group, ignore_missing=True): """Delete a port group. :param port_group: The value can be either the name or ID of a port group or a :class:`~openstack.baremetal.v1.port_group.PortGroup` instance. :param bool ignore_missing: When set to ``False``, an exception :class:`~openstack.exceptions.ResourceNotFound` will be raised when the port group could not be found. When set to ``True``, no exception will be raised when attempting to delete a non-existent port group. :returns: The instance of the port group which was deleted. :rtype: :class:`~openstack.baremetal.v1.port_group.PortGroup`. """ return self._delete(_portgroup.PortGroup, port_group, ignore_missing=ignore_missing) openstacksdk-0.11.3/openstack/baremetal/v1/port_group.py0000666000175100017510000000532113236151340023335 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
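The ``@utils.deprecated`` decorator used on the ``portgroup*`` methods above wraps the old-name method so that it warns and then delegates to the new-name implementation. A hypothetical, standalone sketch of that pattern using the standard ``warnings`` module (the real ``openstack.utils.deprecated`` may differ in message format and behavior):

```python
import functools
import warnings


def deprecated(deprecated_in, removed_in, details=''):
    """Mark a method deprecated: warn, then call it unchanged."""
    def wrap(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                '%s is deprecated as of %s and will be removed in %s. %s'
                % (func.__name__, deprecated_in, removed_in, details),
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return wrap


class Proxy:
    def port_groups(self):
        return ['pg1']

    @deprecated(deprecated_in='0.9.14', removed_in='1.0',
                details='Use port_groups instead')
    def portgroups(self):
        # Old spelling: warn via the decorator, delegate to the new name.
        return self.port_groups()


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = Proxy().portgroups()
print(result)                       # ['pg1']
print(caught[0].category.__name__)  # DeprecationWarning
```

Keeping the deprecated method as a one-line delegation means both spellings stay behavior-identical until the old one is removed.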
from openstack.baremetal import baremetal_service
from openstack import resource


class PortGroup(resource.Resource):

    resources_key = 'portgroups'
    base_path = '/portgroups'
    service = baremetal_service.BaremetalService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    update_method = 'PATCH'

    _query_mapping = resource.QueryParameters(
        'node', 'address', 'fields',
    )

    #: The physical hardware address of the portgroup, typically the hardware
    #: MAC address. Added in API microversion 1.23.
    address = resource.Body('address')
    #: Timestamp at which the portgroup was created.
    created_at = resource.Body('created_at')
    #: A set of one or more arbitrary metadata key and value pairs.
    extra = resource.Body('extra', type=dict)
    #: The name of the portgroup
    name = resource.Body('name')
    #: The UUID for the portgroup
    id = resource.Body('uuid', alternate_id=True)
    #: Internal metadata set and stored by the portgroup.
    internal_info = resource.Body('internal_info')
    #: Whether ports that are members of this portgroup can be used as
    #: standalone ports. Added in API microversion 1.23.
    is_standalone_ports_supported = resource.Body('standalone_ports_supported',
                                                  type=bool)
    #: A list of relative links, including the self and bookmark links.
    links = resource.Body('links', type=list)
    #: UUID of the node this portgroup belongs to.
    node_id = resource.Body('node_uuid')
    #: A list of links to the collection of ports belonging to this portgroup.
    #: Added in API microversion 1.24.
    ports = resource.Body('ports')
    #: Timestamp at which the portgroup was last updated.
updated_at = resource.Body('updated_at') class PortGroupDetail(PortGroup): base_path = '/portgroups/detail' allow_create = False allow_get = False allow_update = False allow_delete = False allow_list = True _query_mapping = resource.QueryParameters( 'node', 'address', ) #: The UUID for the portgroup id = resource.Body('uuid', alternate_id=True) openstacksdk-0.11.3/openstack/baremetal/__init__.py0000666000175100017510000000000013236151340022333 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/baremetal/baremetal_service.py0000666000175100017510000000166613236151340024273 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class BaremetalService(service_filter.ServiceFilter): """The bare metal service.""" valid_versions = [service_filter.ValidVersion('v1')] def __init__(self, version=None): """Create a bare metal service.""" super(BaremetalService, self).__init__(service_type='baremetal', version=version) openstacksdk-0.11.3/openstack/service_filter.py0000666000175100017510000001433613236151340021666 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
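Declarations like `resource.Body('uuid', alternate_id=True)` in the `PortGroup` class above map a Python attribute onto a differently named key in the JSON body (exposing the server-side `uuid` as `id`). A minimal standalone sketch of such a mapping descriptor (illustrative, not the SDK's real `resource.Body`):

```python
class Body(object):
    """Descriptor mapping an attribute onto a key in the resource body."""

    def __init__(self, name, type=None, alternate_id=False):
        self.name = name          # key used in the JSON body
        self.type = type          # optional coercion applied on read
        self.alternate_id = alternate_id

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = instance._body.get(self.name)
        if self.type is not None and value is not None:
            value = self.type(value)
        return value

    def __set__(self, instance, value):
        instance._body[self.name] = value


class PortGroup(object):
    # Attribute names on the left, wire-format keys on the right.
    id = Body('uuid', alternate_id=True)
    name = Body('name')

    def __init__(self, **attrs):
        self._body = dict(attrs)
```

Reads and writes go through `_body`, so the object can be serialized back to the wire format without any renaming step.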
You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
The :class:`~openstack.service_filter.ServiceFilter` is the base class for
service identifiers and user service preferences. Each
:class:`~openstack.resource.Resource` has a service identifier to associate
the resource with a service. An example of a service identifier would be
``openstack.compute.compute_service.ComputeService``. The service preference
and the service identifier are joined to create a filter to match a service.

Examples
--------

The :class:`~openstack.service_filter.ServiceFilter` class can be built with
a service type, interface, region, name, and version.

Create a service filter
~~~~~~~~~~~~~~~~~~~~~~~

Create a compute service and service preference. Join the services
and match::

    from openstack import service_filter
    from openstack.compute import compute_service

    default = compute_service.ComputeService()
    preference = service_filter.ServiceFilter('compute', version='v2')
    result = preference.join(default)
    matches = (result.match_service_type('compute') and
               result.match_service_name('Hal9000') and
               result.match_region('DiscoveryOne') and
               result.match_interface('public'))
    print(str(result))
    print("matches=" + str(matches))

The resulting output from the code::

    service_type=compute,interface=public,version=v2
    matches=True
"""


class ValidVersion(object):

    def __init__(self, module, path=None):
        """Valid service version.

        :param string module: Module associated with version.
        :param string path: URL path version.
""" self.module = module self.path = path or module class ServiceFilter(dict): UNVERSIONED = '' PUBLIC = 'public' INTERNAL = 'internal' ADMIN = 'admin' valid_versions = [] def __init__(self, service_type, interface=PUBLIC, region=None, service_name=None, version=None, api_version=None, requires_project_id=False): """Create a service identifier. :param string service_type: The desired type of service. :param string interface: The exposure of the endpoint. Should be `public` (default), `internal` or `admin`. :param string region: The desired region (optional). :param string service_name: Name of the service :param string version: Version of service to use. :param string api_version: Microversion of service supported. :param bool requires_project_id: True if this service's endpoint expects project id to be included. """ self['service_type'] = service_type.lower() self['interface'] = interface self['region_name'] = region self['service_name'] = service_name self['version'] = version self['api_version'] = api_version self['requires_project_id'] = requires_project_id @classmethod def _get_proxy_class_names(cls): names = [] module_name = ".".join(cls.__module__.split('.')[:-1]) for version in cls.valid_versions: names.append("{module}.{version}._proxy.Proxy".format( module=module_name, version=version.module)) return names @property def service_type(self): return self['service_type'] @property def interface(self): return self['interface'] @interface.setter def interface(self, value): self['interface'] = value @property def region(self): return self['region_name'] @region.setter def region(self, value): self['region_name'] = value @property def service_name(self): return self['service_name'] @service_name.setter def service_name(self, value): self['service_name'] = value @property def version(self): return self['version'] @version.setter def version(self, value): self['version'] = value @property def api_version(self): return self['api_version'] @api_version.setter def 
api_version(self, value): self['api_version'] = value @property def requires_project_id(self): return self['requires_project_id'] @requires_project_id.setter def requires_project_id(self, value): self['requires_project_id'] = value @property def path(self): return self['path'] @path.setter def path(self, value): self['path'] = value def get_path(self, version=None): if not self.version: self.version = version return self.get('path', self._get_valid_version().path) def get_filter(self): filter = dict(self) del filter['version'] return filter def _get_valid_version(self): if self.valid_versions: if self.version: for valid in self.valid_versions: # NOTE(thowe): should support fuzzy match e.g: v2.1==v2 if self.version.startswith(valid.module): return valid return self.valid_versions[0] return ValidVersion('') def get_module(self): """Get the full module name associated with the service.""" module = self.__class__.__module__.split('.') module = ".".join(module[:-1]) module = module + "." + self._get_valid_version().module return module def get_service_module(self): """Get the module version of the service name. This would often be the same as the service type except in cases like object store where the service type is `object-store` and the module is `object_store`. """ return self.__class__.__module__.split('.')[-2] openstacksdk-0.11.3/openstack/workflow/0000775000175100017510000000000013236151501020147 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/workflow/version.py0000666000175100017510000000174013236151340022213 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import resource from openstack.workflow import workflow_service class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = workflow_service.WorkflowService( version=workflow_service.WorkflowService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') openstacksdk-0.11.3/openstack/workflow/v2/0000775000175100017510000000000013236151501020476 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/workflow/v2/execution.py0000666000175100017510000000465713236151340023072 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
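The `ServiceFilter._get_valid_version` logic shown earlier does a prefix match of the requested version against the service's valid versions (so `v2.1` matches `v2`), falling back to the first valid version. That selection can be sketched standalone (names are illustrative):

```python
class ValidVersion(object):
    def __init__(self, module, path=None):
        self.module = module
        self.path = path or module


def pick_version(requested, valid_versions):
    """Return the ValidVersion whose module prefixes the request, else the first."""
    if not valid_versions:
        return ValidVersion('')
    if requested:
        for valid in valid_versions:
            # Fuzzy match: a request for 'v2.1' is satisfied by module 'v2'.
            if requested.startswith(valid.module):
                return valid
    return valid_versions[0]
```

An unknown or missing request silently falls back to the first valid version, which mirrors how the filter behaves when a user expresses no preference.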
from openstack import resource from openstack.workflow import workflow_service class Execution(resource.Resource): resource_key = 'execution' resources_key = 'executions' base_path = '/executions' service = workflow_service.WorkflowService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True _query_mapping = resource.QueryParameters( 'marker', 'limit', 'sort_keys', 'sort_dirs', 'fields', 'params', 'include_output') #: The name of the workflow workflow_name = resource.Body("workflow_name") #: The ID of the workflow workflow_id = resource.Body("workflow_id") #: A description of the workflow execution description = resource.Body("description") #: A reference to the parent task execution task_execution_id = resource.Body("task_execution_id") #: Status can be one of: IDLE, RUNNING, SUCCESS, ERROR, or PAUSED status = resource.Body("state") #: An optional information string about the status status_info = resource.Body("state_info") #: A JSON structure containing workflow input values # TODO(briancurtin): type=dict input = resource.Body("input") #: The output of the workflow output = resource.Body("output") #: The time at which the Execution was created created_at = resource.Body("created_at") #: The time at which the Execution was updated updated_at = resource.Body("updated_at") def create(self, session, prepend_key=True): request = self._prepare_request(requires_id=False, prepend_key=prepend_key) request_body = request.body["execution"] response = session.post(request.url, json=request_body, headers=request.headers) self._translate_response(response, has_body=True) return self openstacksdk-0.11.3/openstack/workflow/v2/__init__.py0000666000175100017510000000000013236151340022600 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/workflow/v2/workflow.py0000666000175100017510000000462313236151340022732 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file 
except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import resource from openstack.workflow import workflow_service class Workflow(resource.Resource): resource_key = 'workflow' resources_key = 'workflows' base_path = '/workflows' service = workflow_service.WorkflowService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True _query_mapping = resource.QueryParameters( 'marker', 'limit', 'sort_keys', 'sort_dirs', 'fields') #: The name of this Workflow name = resource.Body("name") #: The inputs for this Workflow input = resource.Body("input") #: A Workflow definition using the Mistral v2 DSL definition = resource.Body("definition") #: A list of values associated with a workflow that users can use #: to group workflows by some criteria # TODO(briancurtin): type=list tags = resource.Body("tags") #: Can be either "private" or "public" scope = resource.Body("scope") #: The ID of the associated project project_id = resource.Body("project_id") #: The time at which the workflow was created created_at = resource.Body("created_at") #: The time at which the workflow was created updated_at = resource.Body("updated_at") def create(self, session, prepend_key=True): request = self._prepare_request(requires_id=False, prepend_key=prepend_key) headers = { "Content-Type": 'text/plain' } kwargs = { "data": self.definition, } scope = "?scope=%s" % self.scope uri = request.url + scope request.headers.update(headers) response = session.post(uri, json=None, headers=request.headers, **kwargs) self._translate_response(response, has_body=False) return 
        self
openstacksdk-0.11.3/openstack/workflow/v2/_proxy.py0000666000175100017510000001611613236151340022400 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack import proxy
from openstack.workflow.v2 import execution as _execution
from openstack.workflow.v2 import workflow as _workflow


class Proxy(proxy.BaseProxy):

    def create_workflow(self, **attrs):
        """Create a new workflow from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.workflow.v2.workflow.Workflow`, comprised of
            the properties on the Workflow class.

        :returns: The results of workflow creation
        :rtype: :class:`~openstack.workflow.v2.workflow.Workflow`
        """
        return self._create(_workflow.Workflow, **attrs)

    def get_workflow(self, workflow):
        """Get a workflow

        :param workflow: The value can be the name of a workflow or a
            :class:`~openstack.workflow.v2.workflow.Workflow` instance.

        :returns: One :class:`~openstack.workflow.v2.workflow.Workflow`
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            workflow matching the name could be found.
        """
        return self._get(_workflow.Workflow, workflow)

    def workflows(self, **query):
        """Retrieve a generator of workflows

        :param kwargs \*\*query: Optional query parameters to be sent to
            restrict the workflows to be returned. Available parameters
            include:

            * limit: Requests at most the specified number of items be
              returned from the query.
            * marker: Specifies the ID of the last-seen workflow.
              Use the limit parameter to make an initial limited request and
              use the ID of the last-seen workflow from the response as the
              marker parameter value in a subsequent limited request.

        :returns: A generator of workflow instances.
        """
        return self._list(_workflow.Workflow, paginated=True, **query)

    def delete_workflow(self, value, ignore_missing=True):
        """Delete a workflow

        :param value: The value can be either the name of a workflow or a
            :class:`~openstack.workflow.v2.workflow.Workflow` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the workflow does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            workflow.

        :returns: ``None``
        """
        return self._delete(_workflow.Workflow, value,
                            ignore_missing=ignore_missing)

    def find_workflow(self, name_or_id, ignore_missing=True):
        """Find a single workflow

        :param name_or_id: The name or ID of a workflow.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.workflow.v2.workflow.Workflow`
            or None
        """
        return self._find(_workflow.Workflow, name_or_id,
                          ignore_missing=ignore_missing)

    def create_execution(self, **attrs):
        """Create a new execution from attributes

        :param workflow_name: The name of target workflow to execute.
        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.workflow.v2.execution.Execution`, comprised of
            the properties on the Execution class.

        :returns: The results of execution creation
        :rtype: :class:`~openstack.workflow.v2.execution.Execution`
        """
        return self._create(_execution.Execution, **attrs)

    def get_execution(self, *attrs):
        """Get an execution

        :param workflow_name: The name of target workflow to execute.
        :param execution: The value can be either the ID of an execution or a
            :class:`~openstack.workflow.v2.execution.Execution` instance.

        :returns: One :class:`~openstack.workflow.v2.execution.Execution`
        :raises: :class:`~openstack.exceptions.ResourceNotFound` when no
            execution matching the criteria could be found.
        """
        return self._get(_execution.Execution, *attrs)

    def executions(self, **query):
        """Retrieve a generator of executions

        :param kwargs \*\*query: Optional query parameters to be sent to
            restrict the executions to be returned. Available parameters
            include:

            * limit: Requests at most the specified number of items be
              returned from the query.
            * marker: Specifies the ID of the last-seen execution. Use the
              limit parameter to make an initial limited request and use the
              ID of the last-seen execution from the response as the marker
              parameter value in a subsequent limited request.

        :returns: A generator of execution instances.
        """
        return self._list(_execution.Execution, paginated=True, **query)

    def delete_execution(self, value, ignore_missing=True):
        """Delete an execution

        :param value: The value can be either the name of an execution or a
            :class:`~openstack.workflow.v2.execution.Execution` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the execution does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            execution.

        :returns: ``None``
        """
        return self._delete(_execution.Execution, value,
                            ignore_missing=ignore_missing)

    def find_execution(self, name_or_id, ignore_missing=True):
        """Find a single execution

        :param name_or_id: The name or ID of an execution.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.
        :returns: One :class:`~openstack.workflow.v2.execution.Execution`
            or None
        """
        return self._find(_execution.Execution, name_or_id,
                          ignore_missing=ignore_missing)
openstacksdk-0.11.3/openstack/workflow/workflow_service.py0000666000175100017510000000164413236151340024123 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack import service_filter


class WorkflowService(service_filter.ServiceFilter):
    """The workflow service."""

    valid_versions = [service_filter.ValidVersion('v2')]

    def __init__(self, version=None):
        """Create a workflow service."""
        super(WorkflowService, self).__init__(
            service_type='workflowv2',
            version=version
        )
openstacksdk-0.11.3/openstack/workflow/__init__.py0000666000175100017510000000000013236151340022251 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/profile.py0000666000175100017510000001771613236151340020322 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
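The `workflows()` and `executions()` methods above are paginated listings driven by `limit` and `marker`, where each follow-up request passes the ID of the last-seen item as the new marker. A minimal standalone sketch of that marker-based loop over an in-memory backend (hypothetical helper, not the SDK's `_list`):

```python
def paginate(fetch_page, limit):
    """Yield items, re-requesting with the last-seen ID as the marker."""
    marker = None
    while True:
        page = fetch_page(marker=marker, limit=limit)
        if not page:
            return
        for item in page:
            yield item
        marker = page[-1]['id']


# A fake backend standing in for one GET /executions call per page.
DATA = [{'id': 'e%d' % i} for i in range(5)]


def fetch_page(marker, limit):
    start = 0
    if marker is not None:
        start = next(i for i, d in enumerate(DATA) if d['id'] == marker) + 1
    return DATA[start:start + limit]
```

The generator stops as soon as a page comes back empty, so the caller never has to track markers itself.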
""" :class:`~openstack.profile.Profile` is deprecated. Code should use :class:`~openstack.config.cloud_region.CloudRegion` instead. """ import copy from six.moves import urllib from openstack import _log from openstack.config import cloud_region from openstack.config import defaults as config_defaults from openstack.baremetal import baremetal_service from openstack.block_storage import block_storage_service from openstack.clustering import clustering_service from openstack.compute import compute_service from openstack.database import database_service from openstack import exceptions from openstack.identity import identity_service from openstack.image import image_service from openstack.key_manager import key_manager_service from openstack.load_balancer import load_balancer_service as lb_service from openstack.message import message_service from openstack.network import network_service from openstack.object_store import object_store_service from openstack.orchestration import orchestration_service from openstack import utils from openstack.workflow import workflow_service _logger = _log.setup_logging('openstack') def _get_config_from_profile(profile, authenticator, **kwargs): # TODO(shade) Remove this once we've shifted python-openstackclient # to not use the profile interface. # We don't have a cloud name. Make one up from the auth_url hostname # so that log messages work. 
    name = urllib.parse.urlparse(authenticator.auth_url).hostname

    region_name = None
    for service in profile.get_services():
        if service.region:
            region_name = service.region
        service_type = service.service_type
        if service.interface:
            key = cloud_region._make_key('interface', service_type)
            kwargs[key] = service.interface
        if service.version:
            version = service.version
            if version.startswith('v'):
                version = version[1:]
            key = cloud_region._make_key('api_version', service_type)
            kwargs[key] = version

    config_kwargs = config_defaults.get_defaults()
    config_kwargs.update(kwargs)
    config = cloud_region.CloudRegion(
        name=name, region_name=region_name, config=config_kwargs)
    config._auth = authenticator
    return config


class Profile(object):

    ALL = "*"
    """Wildcard service identifier representing all services."""

    @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0",
                      details="Use openstack.config instead")
    def __init__(self, plugins=None):
        """User preference for each service.

        :param plugins: List of entry point namespaces to load.

        Create a new :class:`~openstack.profile.Profile`
        object with no preferences defined, but knowledge of the services.
        Services are identified by their service type, e.g.: 'identity',
        'compute', etc.
""" self._services = {} self._add_service(baremetal_service.BaremetalService(version="v1")) self._add_service( block_storage_service.BlockStorageService(version="v2")) self._add_service(clustering_service.ClusteringService(version="v1")) self._add_service(compute_service.ComputeService(version="v2")) self._add_service(database_service.DatabaseService(version="v1")) self._add_service(identity_service.IdentityService(version="v3")) self._add_service(image_service.ImageService(version="v2")) self._add_service(key_manager_service.KeyManagerService(version="v1")) self._add_service(lb_service.LoadBalancerService(version="v2")) self._add_service(message_service.MessageService(version="v1")) self._add_service(network_service.NetworkService(version="v2")) self._add_service( object_store_service.ObjectStoreService(version="v1")) self._add_service( orchestration_service.OrchestrationService(version="v1")) self._add_service(workflow_service.WorkflowService(version="v2")) self.service_keys = sorted(self._services.keys()) def __repr__(self): return repr(self._services) def _add_service(self, serv): serv.interface = None self._services[serv.service_type] = serv @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def get_filter(self, service): """Get a service preference. :param str service: Desired service type. """ return copy.copy(self._get_filter(service)) def _get_filter(self, service): """Get a service preference. :param str service: Desired service type. 
""" serv = self._services.get(service, None) if serv is not None: return serv msg = ("Service %s not in list of valid services: %s" % (service, self.service_keys)) raise exceptions.SDKException(msg) def _get_services(self, service): return self.service_keys if service == self.ALL else [service] def _setter(self, service, attr, value): for service in self._get_services(service): setattr(self._get_filter(service), attr, value) @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def get_services(self): """Get a list of all the known services.""" services = [] for name, service in self._services.items(): services.append(service) return services @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def set_name(self, service, name): """Set the desired name for the specified service. :param str service: Service type. :param str name: Desired service name. """ self._setter(service, "service_name", name) @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def set_region(self, service, region): """Set the desired region for the specified service. :param str service: Service type. :param str region: Desired service region. """ self._setter(service, "region", region) @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def set_version(self, service, version): """Set the desired version for the specified service. :param str service: Service type. :param str version: Desired service version. """ self._get_filter(service).version = version @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def set_api_version(self, service, api_version): """Set the desired API micro-version for the specified service. :param str service: Service type. :param str api_version: Desired service API micro-version. 
""" self._setter(service, "api_version", api_version) @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="Use openstack.config instead") def set_interface(self, service, interface): """Set the desired interface for the specified service. :param str service: Service type. :param str interface: Desired service interface. """ self._setter(service, "interface", interface) openstacksdk-0.11.3/openstack/orchestration/0000775000175100017510000000000013236151501021161 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/orchestration/version.py0000666000175100017510000000177213236151340023232 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
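`Profile._setter` above applies a preference either to a single service or, via the `ALL` wildcard, to every known service. A standalone sketch of that dispatch pattern (illustrative class, not the deprecated `Profile` itself):

```python
class Preferences(object):
    ALL = '*'

    def __init__(self, service_types):
        # One preference dict per known service type.
        self._services = {name: {} for name in service_types}

    def _targets(self, service):
        """Expand the ALL wildcard into the full list of service types."""
        return list(self._services) if service == self.ALL else [service]

    def set_region(self, service, region):
        for name in self._targets(service):
            self._services[name]['region'] = region
```

Setting a preference for `Preferences.ALL` touches every service, while a concrete type touches only that one entry.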
from openstack.orchestration import orchestration_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = orchestration_service.OrchestrationService( version=orchestration_service.OrchestrationService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') openstacksdk-0.11.3/openstack/orchestration/v1/0000775000175100017510000000000013236151501021507 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/orchestration/v1/stack_files.py0000666000175100017510000000256613236151340024364 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.orchestration import orchestration_service from openstack import resource class StackFiles(resource.Resource): service = orchestration_service.OrchestrationService() base_path = "/stacks/%(stack_name)s/%(stack_id)s/files" # capabilities allow_create = False allow_list = False allow_get = True allow_delete = False allow_update = False # Properties #: Name of the stack where the template is referenced. stack_name = resource.URI('stack_name') #: ID of the stack where the template is referenced. stack_id = resource.URI('stack_id') def get(self, session): # The stack files response contains a map of filenames and file # contents. 
request = self._prepare_request(requires_id=False) resp = session.get(request.url) return resp.json() openstacksdk-0.11.3/openstack/orchestration/v1/resource.py0000666000175100017510000000450713236151340023721 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.orchestration import orchestration_service from openstack import resource class Resource(resource.Resource): name_attribute = 'resource_name' resource_key = 'resource' resources_key = 'resources' base_path = '/stacks/%(stack_name)s/%(stack_id)s/resources' service = orchestration_service.OrchestrationService() # capabilities allow_create = False allow_list = True allow_retrieve = False allow_delete = False allow_update = False # Properties #: A list of dictionaries containing links relevant to the resource. links = resource.Body('links') #: ID of the logical resource, usually the literal name of the resource #: as it appears in the stack template. logical_resource_id = resource.Body('logical_resource_id', alternate_id=True) #: Name of the resource. name = resource.Body('resource_name') #: ID of the physical resource (if any) that backs up the resource. For #: example, it contains a nova server ID if the resource is a nova #: server. physical_resource_id = resource.Body('physical_resource_id') #: A list of resource names that depend on this resource. This #: property facilitates the deduction of resource dependencies. 
#: *Type: list* required_by = resource.Body('required_by', type=list) #: A string representation of the resource type. resource_type = resource.Body('resource_type') #: A string representing the status the resource is currently in. status = resource.Body('resource_status') #: A string that explains why the resource is in its current status. status_reason = resource.Body('resource_status_reason') #: Timestamp of the last update made to the resource. updated_at = resource.Body('updated_time') openstacksdk-0.11.3/openstack/orchestration/v1/template.py0000666000175100017510000000351213236151340023700 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from six.moves.urllib import parse from openstack.orchestration import orchestration_service from openstack import resource class Template(resource.Resource): service = orchestration_service.OrchestrationService() # capabilities allow_create = False allow_list = False allow_get = False allow_delete = False allow_update = False # Properties #: The description specified in the template description = resource.Body('Description') #: Key and value pairs that contain template parameters parameters = resource.Body('Parameters', type=dict) #: A list of parameter groups, each containing a list of parameter names.
parameter_groups = resource.Body('ParameterGroups', type=list) def validate(self, session, template, environment=None, template_url=None, ignore_errors=None): url = '/validate' body = {'template': template} if environment is not None: body['environment'] = environment if template_url is not None: body['template_url'] = template_url if ignore_errors: qry = parse.urlencode({'ignore_errors': ignore_errors}) url = '?'.join([url, qry]) resp = session.post(url, json=body) self._translate_response(resp) return self openstacksdk-0.11.3/openstack/orchestration/v1/software_config.py0000666000175100017510000000406213236151340025245 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.orchestration import orchestration_service from openstack import resource class SoftwareConfig(resource.Resource): resource_key = 'software_config' resources_key = 'software_configs' base_path = '/software_configs' service = orchestration_service.OrchestrationService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True allow_update = False # Properties #: Configuration script or manifest that defines which configuration is #: performed config = resource.Body('config') #: The date and time when the software config resource was created. created_at = resource.Body('creation_time') #: A string indicating the namespace used for grouping software configs. 
group = resource.Body('group') #: A list of schemas each representing an input this software config #: expects. inputs = resource.Body('inputs') #: Name of the software config. name = resource.Body('name') #: A string that contains options that are specific to the configuration #: management tool that this resource uses. options = resource.Body('options') #: A list of schemas each representing an output this software config #: produces. outputs = resource.Body('outputs') def create(self, session): # This overrides the default behavior of resource creation because # heat doesn't accept resource_key in its request. return super(SoftwareConfig, self).create(session, prepend_key=False) openstacksdk-0.11.3/openstack/orchestration/v1/stack_environment.py0000666000175100017510000000331613236151340025620 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.orchestration import orchestration_service from openstack import resource class StackEnvironment(resource.Resource): service = orchestration_service.OrchestrationService() base_path = "/stacks/%(stack_name)s/%(stack_id)s/environment" # capabilities allow_create = False allow_list = False allow_get = True allow_delete = False allow_update = False # Properties #: Name of the stack where the template is referenced. stack_name = resource.URI('stack_name') #: ID of the stack where the template is referenced.
stack_id = resource.URI('stack_id') #: A list of parameter names whose values are encrypted encrypted_param_names = resource.Body('encrypted_param_names') #: A list of event sinks event_sinks = resource.Body('event_sinks') #: A map of parameters and their default values defined for the stack. parameter_defaults = resource.Body('parameter_defaults') #: A map of parameters defined in the stack template. parameters = resource.Body('parameters', type=dict) #: A map containing customized resource definitions. resource_registry = resource.Body('resource_registry', type=dict) openstacksdk-0.11.3/openstack/orchestration/v1/__init__.py0000666000175100017510000000000013236151340023611 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/orchestration/v1/software_deployment.py0000666000175100017510000000531713236151340026164 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.orchestration import orchestration_service from openstack import resource class SoftwareDeployment(resource.Resource): resource_key = 'software_deployment' resources_key = 'software_deployments' base_path = '/software_deployments' service = orchestration_service.OrchestrationService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True allow_update = True # Properties #: The stack action that triggers this deployment resource.
action = resource.Body('action') #: The UUID of the software config resource that runs when applying to the #: server. config_id = resource.Body('config_id') #: A map containing the names and values of all inputs to the config. input_values = resource.Body('input_values', type=dict) #: A map containing the names and values from the deployment. output_values = resource.Body('output_values', type=dict) #: The UUID of the compute server to which the configuration applies. server_id = resource.Body('server_id') #: The ID of the authentication project which can also perform operations #: on this deployment. stack_user_project_id = resource.Body('stack_user_project_id') #: Current status of the software deployment. status = resource.Body('status') #: Error description for the last status change. status_reason = resource.Body('status_reason') #: The date and time when the software deployment resource was created. created_at = resource.Body('creation_time') #: The date and time when the software deployment resource was last updated. updated_at = resource.Body('updated_time') def create(self, session): # This overrides the default behavior of resource creation because # heat doesn't accept resource_key in its request. return super(SoftwareDeployment, self).create( session, prepend_key=False) def update(self, session): # This overrides the default behavior of resource update because # heat doesn't accept resource_key in its request. return super(SoftwareDeployment, self).update( session, prepend_key=False) openstacksdk-0.11.3/openstack/orchestration/v1/stack.py0000666000175100017510000001123513236151340023173 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import exceptions from openstack.orchestration import orchestration_service from openstack import resource from openstack import utils class Stack(resource.Resource): name_attribute = 'stack_name' resource_key = 'stack' resources_key = 'stacks' base_path = '/stacks' service = orchestration_service.OrchestrationService() # capabilities allow_create = True allow_list = True allow_get = True allow_update = True allow_delete = True # Properties #: Placeholder for AWS compatible template listing capabilities #: required by the stack. capabilities = resource.Body('capabilities') #: Timestamp of the stack creation. created_at = resource.Body('creation_time') #: A text description of the stack. description = resource.Body('description') #: Whether the stack will support a rollback operation on stack #: create/update failures. *Type: bool* is_rollback_disabled = resource.Body('disable_rollback', type=bool) #: A list of dictionaries containing links relevant to the stack. links = resource.Body('links') #: Name of the stack. name = resource.Body('stack_name') #: Placeholder for future extensions where stack related events #: can be published. notification_topics = resource.Body('notification_topics') #: A list containing output keys and values from the stack, if any. outputs = resource.Body('outputs') #: The ID of the owner stack if any. owner_id = resource.Body('stack_owner') #: A dictionary containing the parameter names and values for the stack. 
parameters = resource.Body('parameters', type=dict) #: The ID of the parent stack if any parent_id = resource.Body('parent') #: A string representation of the stack status, e.g. ``CREATE_COMPLETE``. status = resource.Body('stack_status') #: A text explaining how the stack transitioned to its current status. status_reason = resource.Body('stack_status_reason') #: A list of strings used as tags on the stack tags = resource.Body('tags') #: A dict containing the template used for stack creation. template = resource.Body('template', type=dict) #: Stack template description text. Currently contains the same text #: as that of the ``description`` property. template_description = resource.Body('template_description') #: A string containing the URL where a stack template can be found. template_url = resource.Body('template_url') #: Stack operation timeout in minutes. timeout_mins = resource.Body('timeout_mins') #: Timestamp of last update on the stack. updated_at = resource.Body('updated_time') #: The ID of the user project created for this stack. user_project_id = resource.Body('stack_user_project_id') def create(self, session): # This overrides the default behavior of resource creation because # heat doesn't accept resource_key in its request. return super(Stack, self).create(session, prepend_key=False) def update(self, session): # This overrides the default behavior of resource update because # heat doesn't accept resource_key in its request.
return super(Stack, self).update(session, prepend_key=False, has_body=False) def _action(self, session, body): """Perform stack actions""" url = utils.urljoin(self.base_path, self._get_id(self), 'actions') resp = session.post(url, json=body) return resp.json() def check(self, session): return self._action(session, {'check': ''}) def get(self, session, requires_id=True, error_message=None): stk = super(Stack, self).get(session, requires_id=requires_id, error_message=error_message) if stk and stk.status in ['DELETE_COMPLETE', 'ADOPT_COMPLETE']: raise exceptions.NotFoundException( "No stack found for %s" % stk.id) return stk class StackPreview(Stack): base_path = '/stacks/preview' allow_create = True allow_list = False allow_get = False allow_update = False allow_delete = False openstacksdk-0.11.3/openstack/orchestration/v1/stack_template.py0000666000175100017510000000426213236151340025070 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.orchestration import orchestration_service from openstack import resource class StackTemplate(resource.Resource): service = orchestration_service.OrchestrationService() base_path = "/stacks/%(stack_name)s/%(stack_id)s/template" # capabilities allow_create = False allow_list = False allow_get = True allow_delete = False allow_update = False # Properties #: Name of the stack where the template is referenced. stack_name = resource.URI('stack_name') #: ID of the stack where the template is referenced. 
stack_id = resource.URI('stack_id') #: The description specified in the template description = resource.Body('Description') #: The version of the orchestration HOT template. heat_template_version = resource.Body('heat_template_version') #: Key and value that contain output data. outputs = resource.Body('outputs', type=dict) #: Key and value pairs that contain template parameters parameters = resource.Body('parameters', type=dict) #: Key and value pairs that contain definition of resources in the #: template resources = resource.Body('resources', type=dict) # List parameters grouped. parameter_groups = resource.Body('parameter_groups', type=list) # Restrict conditions which supported since '2016-10-14'. conditions = resource.Body('conditions', type=dict) def to_dict(self): mapping = super(StackTemplate, self).to_dict() mapping.pop('location') mapping.pop('id') mapping.pop('name') if self.heat_template_version < '2016-10-14': mapping.pop('conditions') return mapping openstacksdk-0.11.3/openstack/orchestration/v1/_proxy.py0000666000175100017510000003706713236151364023427 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
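The `StackTemplate.to_dict` override above strips SDK-internal fields and drops `conditions` for templates older than HOT version ``2016-10-14``; since `heat_template_version` values are ISO dates, plain string comparison orders them chronologically. A minimal standalone sketch of that filtering (`template_to_dict` is a hypothetical helper for illustration, not part of the SDK):

```python
# Sketch of the version-gated field filtering done by StackTemplate.to_dict.
def template_to_dict(mapping):
    # Drop SDK-side bookkeeping fields that are not part of the template body.
    for key in ('location', 'id', 'name'):
        mapping.pop(key, None)
    # HOT versions are ISO dates, so lexicographic comparison matches
    # chronological order; 'conditions' only exists from 2016-10-14 onward.
    if mapping.get('heat_template_version', '') < '2016-10-14':
        mapping.pop('conditions', None)
    return mapping
```

The string comparison works only because HOT version identifiers are zero-padded ``YYYY-MM-DD`` dates; a semantic-version scheme would need a real parser.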
from openstack import exceptions from openstack.orchestration.v1 import resource as _resource from openstack.orchestration.v1 import software_config as _sc from openstack.orchestration.v1 import software_deployment as _sd from openstack.orchestration.v1 import stack as _stack from openstack.orchestration.v1 import stack_environment as _stack_environment from openstack.orchestration.v1 import stack_files as _stack_files from openstack.orchestration.v1 import stack_template as _stack_template from openstack.orchestration.v1 import template as _template from openstack import proxy class Proxy(proxy.BaseProxy): def create_stack(self, preview=False, **attrs): """Create a new stack from attributes :param bool preview: When ``True``, returns an :class:`~openstack.orchestration.v1.stack.StackPreview` object, otherwise an :class:`~openstack.orchestration.v1.stack.Stack` object. *Default: ``False``* :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.orchestration.v1.stack.Stack`, comprised of the properties on the Stack class. :returns: The results of stack creation :rtype: :class:`~openstack.orchestration.v1.stack.Stack` """ res_type = _stack.StackPreview if preview else _stack.Stack return self._create(res_type, **attrs) def find_stack(self, name_or_id, ignore_missing=True): """Find a single stack :param name_or_id: The name or ID of a stack. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.orchestration.v1.stack.Stack` or None """ return self._find(_stack.Stack, name_or_id, ignore_missing=ignore_missing) def stacks(self, **query): """Return a generator of stacks :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned.
:returns: A generator of stack objects :rtype: :class:`~openstack.orchestration.v1.stack.Stack` """ return self._list(_stack.Stack, paginated=False, **query) def get_stack(self, stack): """Get a single stack :param stack: The value can be the ID of a stack or a :class:`~openstack.orchestration.v1.stack.Stack` instance. :returns: One :class:`~openstack.orchestration.v1.stack.Stack` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_stack.Stack, stack) def update_stack(self, stack, **attrs): """Update a stack :param stack: The value can be the ID of a stack or a :class:`~openstack.orchestration.v1.stack.Stack` instance. :param kwargs \*\*attrs: The attributes to update on the stack represented by ``value``. :returns: The updated stack :rtype: :class:`~openstack.orchestration.v1.stack.Stack` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._update(_stack.Stack, stack, **attrs) def delete_stack(self, stack, ignore_missing=True): """Delete a stack :param stack: The value can be either the ID of a stack or a :class:`~openstack.orchestration.v1.stack.Stack` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the stack does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent stack. :returns: ``None`` """ self._delete(_stack.Stack, stack, ignore_missing=ignore_missing) def check_stack(self, stack): """Check a stack's status Since this is an asynchronous action, the only way to check the result is to track the stack's status. :param stack: The value can be either the ID of a stack or an instance of :class:`~openstack.orchestration.v1.stack.Stack`. 
:returns: ``None`` """ if isinstance(stack, _stack.Stack): stk_obj = stack else: stk_obj = _stack.Stack.existing(id=stack) stk_obj.check(self) def get_stack_template(self, stack): """Get template used by a stack :param stack: The value can be the ID of a stack or an instance of :class:`~openstack.orchestration.v1.stack.Stack` :returns: One object of :class:`~openstack.orchestration.v1.stack_template.StackTemplate` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ if isinstance(stack, _stack.Stack): obj = stack else: obj = self._find(_stack.Stack, stack, ignore_missing=False) return self._get(_stack_template.StackTemplate, requires_id=False, stack_name=obj.name, stack_id=obj.id) def get_stack_environment(self, stack): """Get environment used by a stack :param stack: The value can be the ID of a stack or an instance of :class:`~openstack.orchestration.v1.stack.Stack` :returns: One object of :class:`~openstack.orchestration.v1.stack_environment.\ StackEnvironment` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ if isinstance(stack, _stack.Stack): obj = stack else: obj = self._find(_stack.Stack, stack, ignore_missing=False) return self._get(_stack_environment.StackEnvironment, requires_id=False, stack_name=obj.name, stack_id=obj.id) def get_stack_files(self, stack): """Get files used by a stack :param stack: The value can be the ID of a stack or an instance of :class:`~openstack.orchestration.v1.stack.Stack` :returns: A dictionary containing the names and contents of all files used by the stack. :raises: :class:`~openstack.exceptions.ResourceNotFound` when the stack cannot be found. 
""" if isinstance(stack, _stack.Stack): stk = stack else: stk = self._find(_stack.Stack, stack, ignore_missing=False) obj = _stack_files.StackFiles(stack_name=stk.name, stack_id=stk.id) return obj.get(self) def resources(self, stack, **query): """Return a generator of resources :param stack: This can be a stack object, or the name of a stack for which the resources are to be listed. :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of resource objects if the stack exists and there are resources in it. If the stack cannot be found, an exception is thrown. :rtype: A generator of :class:`~openstack.orchestration.v1.resource.Resource` :raises: :class:`~openstack.exceptions.ResourceNotFound` when the stack cannot be found. """ # first try treat the value as a stack object or an ID if isinstance(stack, _stack.Stack): obj = stack else: obj = self._find(_stack.Stack, stack, ignore_missing=False) return self._list(_resource.Resource, paginated=False, stack_name=obj.name, stack_id=obj.id, **query) def create_software_config(self, **attrs): """Create a new software config from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.orchestration.v1.software_config.SoftwareConfig`, comprised of the properties on the SoftwareConfig class. :returns: The results of software config creation :rtype: :class:`~openstack.orchestration.v1.software_config.SoftwareConfig` """ return self._create(_sc.SoftwareConfig, **attrs) def software_configs(self, **query): """Returns a generator of software configs :param dict query: Optional query parameters to be sent to limit the software configs returned. :returns: A generator of software config objects. 
:rtype: :class:`~openstack.orchestration.v1.software_config.\ SoftwareConfig` """ return self._list(_sc.SoftwareConfig, paginated=True, **query) def get_software_config(self, software_config): """Get details about a specific software config. :param software_config: The value can be the ID of a software config or an instance of :class:`~openstack.orchestration.v1.software_config.SoftwareConfig`. :returns: An object of type :class:`~openstack.orchestration.v1.software_config.SoftwareConfig` """ return self._get(_sc.SoftwareConfig, software_config) def delete_software_config(self, software_config, ignore_missing=True): """Delete a software config :param software_config: The value can be either the ID of a software config or an instance of :class:`~openstack.orchestration.v1.software_config.SoftwareConfig` :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the software config does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent software config. :returns: ``None`` """ self._delete(_sc.SoftwareConfig, software_config, ignore_missing=ignore_missing) def create_software_deployment(self, **attrs): """Create a new software deployment from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.orchestration.v1.software_deployment.SoftwareDeployment`, comprised of the properties on the SoftwareDeployment class. :returns: The results of software deployment creation :rtype: :class:`~openstack.orchestration.v1.software_deployment.SoftwareDeployment` """ return self._create(_sd.SoftwareDeployment, **attrs) def software_deployments(self, **query): """Returns a generator of software deployments :param dict query: Optional query parameters to be sent to limit the software deployments returned. :returns: A generator of software deployment objects.
:rtype: :class:`~openstack.orchestration.v1.software_deployment.\ SoftwareDeployment` """ return self._list(_sd.SoftwareDeployment, paginated=False, **query) def get_software_deployment(self, software_deployment): """Get details about a specific software deployment resource :param software_deployment: The value can be the ID of a software deployment or an instance of :class:`~openstack.orchestration.v1.software_deployment.SoftwareDeployment`. :returns: An object of type :class:`~openstack.orchestration.v1.software_deployment.SoftwareDeployment` """ return self._get(_sd.SoftwareDeployment, software_deployment) def delete_software_deployment(self, software_deployment, ignore_missing=True): """Delete a software deployment :param software_deployment: The value can be either the ID of a software deployment or an instance of :class:`~openstack.orchestration.v1.software_deployment.SoftwareDeployment` :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the software deployment does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent software deployment. :returns: ``None`` """ self._delete(_sd.SoftwareDeployment, software_deployment, ignore_missing=ignore_missing) def update_software_deployment(self, software_deployment, **attrs): """Update a software deployment :param software_deployment: Either the ID of a software deployment or an instance of :class:`~openstack.orchestration.v1.software_deployment.SoftwareDeployment` :param dict attrs: The attributes to update on the software deployment represented by ``software_deployment``. :returns: The updated software deployment :rtype: :class:`~openstack.orchestration.v1.software_deployment.\ SoftwareDeployment` """ return self._update(_sd.SoftwareDeployment, software_deployment, **attrs) def validate_template(self, template, environment=None, template_url=None, ignore_errors=None): """Validates a template.
:param template: The stack template on which the validation is performed. :param environment: A JSON environment for the stack, if provided. :param template_url: A URI to the location containing the stack template for validation. This parameter is only required if the ``template`` parameter is None. This parameter is ignored if ``template`` is specified. :param ignore_errors: A string containing comma-separated error codes to ignore. Currently the only valid error code is '99001'. :returns: The result of template validation. :raises: :class:`~openstack.exceptions.InvalidRequest` if neither `template` nor `template_url` is provided. :raises: :class:`~openstack.exceptions.HttpException` if the template fails the validation. """ if template is None and template_url is None: raise exceptions.InvalidRequest( "'template_url' must be specified when template is None") tmpl = _template.Template.new() return tmpl.validate(self, template, environment=environment, template_url=template_url, ignore_errors=ignore_errors) openstacksdk-0.11.3/openstack/orchestration/__init__.py0000666000175100017510000000000013236151340023263 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/orchestration/orchestration_service.py0000666000175100017510000000174313236151340026147 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
from openstack import service_filter class OrchestrationService(service_filter.ServiceFilter): """The orchestration service.""" valid_versions = [service_filter.ValidVersion('v1')] def __init__(self, version=None): """Create an orchestration service.""" super(OrchestrationService, self).__init__( service_type='orchestration', version=version, requires_project_id=True, ) openstacksdk-0.11.3/openstack/load_balancer/0000775000175100017510000000000013236151501021043 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/load_balancer/version.py0000666000175100017510000000176313236151340023114 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = lb_service.LoadBalancerService( version=lb_service.LoadBalancerService.UNVERSIONED ) # capabilities allow_list = True # Properties links = resource.Body('links') status = resource.Body('status') openstacksdk-0.11.3/openstack/load_balancer/v2/0000775000175100017510000000000013236151501021372 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/load_balancer/v2/l7_policy.py0000666000175100017510000000511013236151340023645 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class L7Policy(resource.Resource): resource_key = 'l7policy' resources_key = 'l7policies' base_path = '/v2.0/lbaas/l7policies' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_list = True allow_get = True allow_update = True allow_delete = True _query_mapping = resource.QueryParameters( 'action', 'description', 'listener_id', 'name', 'position', 'redirect_pool_id', 'redirect_url', 'provisioning_status', 'operating_status', is_admin_state_up='admin_state_up', ) #: Properties #: The action to be taken l7policy is matched action = resource.Body('action') #: Timestamp when the L7 policy was created. created_at = resource.Body('created_at') #: The l7policy description description = resource.Body('description') #: The administrative state of the l7policy *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: The ID of the listener associated with this l7policy listener_id = resource.Body('listener_id') #: The l7policy name name = resource.Body('name') #: Operating status of the member. operating_status = resource.Body('operating_status') #: Sequence number of this l7policy position = resource.Body('position', type=int) #: The ID of the project this l7policy is associated with. 
project_id = resource.Body('project_id') #: The provisioning status of this l7policy provisioning_status = resource.Body('provisioning_status') #: The ID of the pool to which the requests will be redirected redirect_pool_id = resource.Body('redirect_pool_id') #: The URL to which the requests should be redirected redirect_url = resource.Body('redirect_url') #: The list of L7Rules associated with the l7policy rules = resource.Body('rules', type=list) #: Timestamp when the member was last updated. updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/load_balancer/v2/listener.py0000666000175100017510000000706213236151340023601 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
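The L7Policy resource above stores each property under its wire-level field name while exposing a different Python attribute name where they diverge (e.g. `is_admin_state_up` maps to `admin_state_up`). A minimal, self-contained sketch of how such a `Body`-style descriptor can work is shown below; the `Body` and `MiniL7Policy` names here are simplified stand-ins, not the real `openstack.resource` machinery (which also handles dirty tracking, headers, and URI parameters).

```python
# Simplified sketch of a Body-style descriptor that renames a Python
# attribute to its wire-level JSON key, as L7Policy does with
# is_admin_state_up -> admin_state_up. Illustrative only.

class Body:
    """Descriptor storing a value under the server-side field name."""

    def __init__(self, name, type=None):
        self.name = name   # wire-level key, e.g. 'admin_state_up'
        self.type = type   # optional coercion, e.g. bool

    def __get__(self, obj, owner):
        if obj is None:
            return self
        return obj._body.get(self.name)

    def __set__(self, obj, value):
        if self.type is not None and value is not None:
            value = self.type(value)
        obj._body[self.name] = value


class MiniL7Policy:
    is_admin_state_up = Body('admin_state_up', type=bool)
    name = Body('name')

    def __init__(self, **attrs):
        self._body = {}
        for key, value in attrs.items():
            setattr(self, key, value)


policy = MiniL7Policy(name='redirect-policy', is_admin_state_up=1)
print(policy._body)  # values live under wire keys, with '1' coerced to True
```

The descriptor is what lets the SDK present idiomatic attribute names while still serializing the exact keys the Octavia API expects.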
from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class Listener(resource.Resource): resource_key = 'listener' resources_key = 'listeners' base_path = '/v2.0/lbaas/listeners' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'connection_limit', 'default_pool_id', 'default_tls_container_ref', 'description', 'name', 'project_id', 'protocol', 'protocol_port', 'created_at', 'updated_at', 'provisioning_status', 'operating_status', 'sni_container_refs', 'insert_headers', 'load_balancer_id', is_admin_state_up='admin_state_up', ) # Properties #: The maximum number of connections permitted for this load balancer. #: Default is infinite. connection_limit = resource.Body('connection_limit') #: Timestamp when the listener was created. created_at = resource.Body('created_at') #: Default pool to which the requests will be routed. default_pool = resource.Body('default_pool') #: ID of default pool. Must have compatible protocol with listener. default_pool_id = resource.Body('default_pool_id') #: A reference to a container of TLS secrets. default_tls_container_ref = resource.Body('default_tls_container_ref') #: Description for the listener. description = resource.Body('description') #: Dictionary of additional headers insertion into HTTP header. insert_headers = resource.Body('insert_headers', type=dict) #: The administrative state of the listener, which is up #: ``True`` or down ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: List of l7policies associated with this listener. l7_policies = resource.Body('l7policies', type=list) #: The ID of the parent load balancer. load_balancer_id = resource.Body('loadbalancer_id') #: List of load balancers associated with this listener. 
#: *Type: list of dicts which contain the load balancer IDs* load_balancers = resource.Body('loadbalancers', type=list) #: Name of the listener name = resource.Body('name') #: Operating status of the listener. operating_status = resource.Body('operating_status') #: The ID of the project this listener is associated with. project_id = resource.Body('project_id') #: The protocol of the listener, which is TCP, HTTP, HTTPS #: or TERMINATED_HTTPS. protocol = resource.Body('protocol') #: Port the listener will listen to, e.g. 80. protocol_port = resource.Body('protocol_port', type=int) #: The provisioning status of this listener. provisioning_status = resource.Body('provisioning_status') #: A list of references to TLS secrets. #: *Type: list* sni_container_refs = resource.Body('sni_container_refs') #: Timestamp when the listener was last updated. updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/load_balancer/v2/health_monitor.py0000666000175100017510000000620413236151340024765 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
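The Listener class's `_query_mapping` performs the same renaming for list filters: a client-side filter such as `is_admin_state_up` is sent to the server as `admin_state_up`. A rough sketch of that translation step, assuming a plain dict in place of the real `resource.QueryParameters` object (`build_query` is a hypothetical helper, not SDK code):

```python
# Sketch of how a QueryParameters-style mapping can translate
# client-side filter names into server-side query arguments, as
# Listener does for is_admin_state_up -> admin_state_up.
from urllib.parse import urlencode

def build_query(mapping, **filters):
    """Rename known filters per the mapping, then URL-encode them."""
    params = {}
    for key, value in filters.items():
        # Unmapped names pass through unchanged.
        params[mapping.get(key, key)] = value
    return urlencode(sorted(params.items()))

listener_query_mapping = {'is_admin_state_up': 'admin_state_up'}

qs = build_query(listener_query_mapping, name='web', is_admin_state_up=True)
print(qs)  # admin_state_up=True&name=web
```

Keeping the mapping on the resource class means proxy methods like `listeners(**query)` never need per-parameter special cases.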
from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class HealthMonitor(resource.Resource): resource_key = 'healthmonitor' resources_key = 'healthmonitors' base_path = '/v2.0/lbaas/healthmonitors' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True allow_update = True _query_mapping = resource.QueryParameters( 'name', 'created_at', 'updated_at', 'delay', 'expected_codes', 'http_method', 'max_retries', 'max_retries_down', 'pool_id', 'provisioning_status', 'operating_status', 'timeout', 'project_id', 'type', 'url_path', is_admin_state_up='admin_state_up', ) #: Properties #: Timestamp when the health monitor was created. created_at = resource.Body('created_at') #: The time, in seconds, between sending probes to members. delay = resource.Body('delay', type=int) #: The expected HTTP status codes to get from a successful health check expected_codes = resource.Body('expected_codes') #: The HTTP method that the monitor uses for requests http_method = resource.Body('http_method') #: The administrative state of the health monitor *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: The number of successful checks before changing the operating status #: of the member to ONLINE. max_retries = resource.Body('max_retries', type=int) #: The number of allowed check failures before changing the operating #: status of the member to ERROR. max_retries_down = resource.Body('max_retries_down', type=int) #: The health monitor name name = resource.Body('name') #: Operating status of the health monitor. operating_status = resource.Body('operating_status') #: List of associated pools.
#: *Type: list of dicts which contain the pool IDs* pools = resource.Body('pools', type=list) #: The ID of the associated Pool pool_id = resource.Body('pool_id') #: The ID of the project project_id = resource.Body('project_id') #: The provisioning status of this health monitor. provisioning_status = resource.Body('provisioning_status') #: The time, in seconds, after which a health check times out timeout = resource.Body('timeout', type=int) #: The type of health monitor type = resource.Body('type') #: Timestamp when the health monitor was last updated. updated_at = resource.Body('updated_at') #: The HTTP path of the request to test the health of a member url_path = resource.Body('url_path') openstacksdk-0.11.3/openstack/load_balancer/v2/__init__.py0000666000175100017510000000000013236151340023474 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/load_balancer/v2/member.py0000666000175100017510000000550713236151340023225 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
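The `max_retries` / `max_retries_down` fields documented on HealthMonitor above describe streak-based transitions: a member goes ONLINE after `max_retries` consecutive passing checks and ERROR after `max_retries_down` consecutive failures. The sketch below mirrors only that documented behaviour; it is an illustration, not the Octavia health-manager implementation.

```python
# Illustrative model of the documented max_retries / max_retries_down
# semantics: consecutive streaks of check results flip the member's
# operating status. Not the real Octavia health-manager code.

def run_checks(results, max_retries=3, max_retries_down=3, status='ONLINE'):
    """Fold a sequence of check results (True = pass) into a status."""
    ok_streak = fail_streak = 0
    for passed in results:
        if passed:
            ok_streak += 1
            fail_streak = 0
            if ok_streak >= max_retries:
                status = 'ONLINE'
        else:
            fail_streak += 1
            ok_streak = 0
            if fail_streak >= max_retries_down:
                status = 'ERROR'
    return status

print(run_checks([False, False, False]))                # ERROR
print(run_checks([False, False, False, True, True]))    # still ERROR: 2 < max_retries
```

Note that two successes after an outage are not enough to recover with the defaults here; the streak must reach `max_retries` before the status flips back.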
from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class Member(resource.Resource): resource_key = 'member' resources_key = 'members' base_path = '/v2.0/lbaas/pools/%(pool_id)s/members' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'address', 'name', 'protocol_port', 'subnet_id', 'weight', 'created_at', 'updated_at', 'provisioning_status', 'operating_status', 'project_id', 'monitor_address', 'monitor_port', is_admin_state_up='admin_state_up', ) # Properties #: The IP address of the member. address = resource.Body('address') #: Timestamp when the member was created. created_at = resource.Body('created_at') #: The administrative state of the member, which is up ``True`` or #: down ``False``. *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: IP address used to monitor this member monitor_address = resource.Body('monitor_address') #: Port used to monitor this member monitor_port = resource.Body('monitor_port', type=int) #: Name of the member. name = resource.Body('name') #: Operating status of the member. operating_status = resource.Body('operating_status') #: The ID of the owning pool. pool_id = resource.URI('pool_id') #: The provisioning status of this member. provisioning_status = resource.Body('provisioning_status') #: The ID of the project this member is associated with. project_id = resource.Body('project_id') #: The port on which the application is hosted. protocol_port = resource.Body('protocol_port', type=int) #: Subnet ID in which to access this member. subnet_id = resource.Body('subnet_id') #: Timestamp when the member was last updated. updated_at = resource.Body('updated_at') #: A positive integer value that indicates the relative portion of traffic #: that this member should receive from the pool. 
For example, a member #: with a weight of 10 receives five times as much traffic as a member #: with weight of 2. weight = resource.Body('weight', type=int) openstacksdk-0.11.3/openstack/load_balancer/v2/load_balancer.py0000666000175100017510000000532113236151340024516 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class LoadBalancer(resource.Resource): resource_key = 'loadbalancer' resources_key = 'loadbalancers' base_path = '/v2.0/lbaas/loadbalancers' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'description', 'flavor', 'name', 'project_id', 'provider', 'vip_address', 'vip_network_id', 'vip_port_id', 'vip_subnet_id', 'provisioning_status', 'operating_status', is_admin_state_up='admin_state_up' ) #: Properties #: The administrative state of the load balancer *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: Timestamp when the load balancer was created created_at = resource.Body('created_at') #: The load balancer description description = resource.Body('description') #: The load balancer flavor flavor = resource.Body('flavor') #: List of listeners associated with this load balancer listeners = resource.Body('listeners', type=list) #: The load balancer name name = 
resource.Body('name') #: Operating status of the load balancer operating_status = resource.Body('operating_status') #: List of pools associated with this load balancer pools = resource.Body('pools', type=list) #: The ID of the project this load balancer is associated with. project_id = resource.Body('project_id') #: Provider name for the load balancer. provider = resource.Body('provider') #: The provisioning status of this load balancer provisioning_status = resource.Body('provisioning_status') #: Timestamp when the load balancer was last updated updated_at = resource.Body('updated_at') #: VIP address of load balancer vip_address = resource.Body('vip_address') #: VIP network ID vip_network_id = resource.Body('vip_network_id') #: VIP port ID vip_port_id = resource.Body('vip_port_id') #: VIP subnet ID vip_subnet_id = resource.Body('vip_subnet_id') openstacksdk-0.11.3/openstack/load_balancer/v2/l7_rule.py0000666000175100017510000000465113236151340023326 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
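The `weight` property in member.py above states that a member's traffic share is proportional to its weight: a member with weight 10 receives five times as much traffic as a member with weight 2. That arithmetic can be made concrete with a small helper (`traffic_shares` is hypothetical, not part of the SDK):

```python
# Sketch of the proportional-traffic semantics of Member.weight:
# each member's share is its weight over the pool's total weight,
# so weight 10 gets five times the traffic of weight 2.
# Hypothetical helper, not SDK code.
from fractions import Fraction

def traffic_shares(weights):
    """Map member name -> exact fraction of traffic it should receive."""
    total = sum(weights.values())
    return {name: Fraction(w, total) for name, w in weights.items()}

shares = traffic_shares({'m1': 10, 'm2': 2})
print(shares)                        # m1 -> 5/6 of traffic, m2 -> 1/6
print(shares['m1'] / shares['m2'])   # 5, matching the docstring's example
```

Using `Fraction` keeps the ratios exact, which makes the 10-vs-2 example from the docstring directly checkable.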
from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class L7Rule(resource.Resource): resource_key = 'rule' resources_key = 'rules' base_path = '/v2.0/lbaas/l7policies/%(l7policy_id)s/rules' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_list = True allow_get = True allow_update = True allow_delete = True _query_mapping = resource.QueryParameters( 'compare_type', 'created_at', 'invert', 'key', 'project_id', 'provisioning_status', 'type', 'updated_at', 'rule_value', 'operating_status', is_admin_state_up='admin_state_up', l7_policy_id='l7policy_id', ) #: Properties #: The administrative state of the l7rule *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: The comparison type to be used with the value in this L7 rule. compare_type = resource.Body('compare_type') #: Timestamp when the L7 rule was created. created_at = resource.Body('created_at') #: Inverts the logic of the rule if True #: (i.e. performs a logical NOT on the rule) invert = resource.Body('invert', type=bool) #: The key to use for the comparison. key = resource.Body('key') #: The ID of the associated l7 policy l7_policy_id = resource.URI('l7policy_id') #: The operating status of this l7rule operating_status = resource.Body('operating_status') #: The ID of the project this l7rule is associated with. project_id = resource.Body('project_id') #: The provisioning status of this l7rule provisioning_status = resource.Body('provisioning_status') #: The type of L7 rule type = resource.Body('type') #: Timestamp when the L7 rule was updated.
updated_at = resource.Body('updated_at') #: value to be compared with rule_value = resource.Body('value') openstacksdk-0.11.3/openstack/load_balancer/v2/pool.py0000666000175100017510000000553113236151340022724 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.load_balancer import load_balancer_service as lb_service from openstack import resource class Pool(resource.Resource): resource_key = 'pool' resources_key = 'pools' base_path = '/v2.0/lbaas/pools' service = lb_service.LoadBalancerService() # capabilities allow_create = True allow_list = True allow_get = True allow_delete = True allow_update = True _query_mapping = resource.QueryParameters( 'health_monitor_id', 'lb_algorithm', 'listener_id', 'loadbalancer_id', 'description', 'name', 'project_id', 'protocol', 'created_at', 'updated_at', 'provisioning_status', 'operating_status', is_admin_state_up='admin_state_up' ) #: Properties #: Timestamp when the pool was created created_at = resource.Body('created_at') #: Description for the pool. 
description = resource.Body('description') #: Health Monitor ID health_monitor_id = resource.Body('healthmonitor_id') #: The administrative state of the pool *Type: bool* is_admin_state_up = resource.Body('admin_state_up', type=bool) #: The loadbalancing algorithm used in the pool lb_algorithm = resource.Body('lb_algorithm') #: ID of listener associated with this pool listener_id = resource.Body('listener_id') #: List of listeners associated with this pool listeners = resource.Body('listeners', type=list) #: ID of load balancer associated with this pool loadbalancer_id = resource.Body('loadbalancer_id') #: List of loadbalancers associated with this pool loadbalancers = resource.Body('loadbalancers', type=list) #: Members associated with this pool members = resource.Body('members', type=list) #: The pool name name = resource.Body('name') #: Operating status of the pool operating_status = resource.Body('operating_status') #: The ID of the project project_id = resource.Body('project_id') #: The protocol of the pool protocol = resource.Body('protocol') #: Provisioning status of the pool provisioning_status = resource.Body('provisioning_status') #: A JSON object specifying the session persistence for the pool. session_persistence = resource.Body('session_persistence', type=dict) #: Timestamp when the pool was updated updated_at = resource.Body('updated_at') openstacksdk-0.11.3/openstack/load_balancer/v2/_proxy.py0000666000175100017510000007021313236151340023272 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from openstack.load_balancer.v2 import health_monitor as _hm from openstack.load_balancer.v2 import l7_policy as _l7policy from openstack.load_balancer.v2 import l7_rule as _l7rule from openstack.load_balancer.v2 import listener as _listener from openstack.load_balancer.v2 import load_balancer as _lb from openstack.load_balancer.v2 import member as _member from openstack.load_balancer.v2 import pool as _pool from openstack import proxy class Proxy(proxy.BaseProxy): def create_load_balancer(self, **attrs): """Create a new load balancer from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2. load_balancer.LoadBalancer`, comprised of the properties on the LoadBalancer class. :returns: The results of load balancer creation :rtype: :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` """ return self._create(_lb.LoadBalancer, **attrs) def get_load_balancer(self, load_balancer): """Get a load balancer :param load_balancer: The value can be the name of a load balancer or :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` instance. :returns: One :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` """ return self._get(_lb.LoadBalancer, load_balancer) def load_balancers(self, **query): """Retrieve a generator of load balancers :returns: A generator of load balancer instances """ return self._list(_lb.LoadBalancer, paginated=True, **query) def delete_load_balancer(self, load_balancer, ignore_missing=True): """Delete a load balancer :param load_balancer: The load_balancer can be either the name or a :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` instance :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the load balancer does not exist.
When set to ``True``, no exception will be set when attempting to delete a nonexistent load balancer. :returns: ``None`` """ return self._delete(_lb.LoadBalancer, load_balancer, ignore_missing=ignore_missing) def find_load_balancer(self, name_or_id, ignore_missing=True): """Find a single load balancer :param name_or_id: The name or ID of a load balancer :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the load balancer does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent load balancer. :returns: One :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` or None """ return self._find(_lb.LoadBalancer, name_or_id, ignore_missing=ignore_missing) def update_load_balancer(self, load_balancer, **attrs): """Update a load balancer :param load_balancer: The load_balancer can be either the name or a :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` instance :param dict attrs: The attributes to update on the load balancer represented by ``load_balancer``. :returns: The updated load_balancer :rtype: :class:`~openstack.load_balancer.v2.load_balancer.LoadBalancer` """ return self._update(_lb.LoadBalancer, load_balancer, **attrs) def create_listener(self, **attrs): """Create a new listener from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2.listener.Listener`, comprised of the properties on the Listener class. :returns: The results of listener creation :rtype: :class:`~openstack.load_balancer.v2.listener.Listener` """ return self._create(_listener.Listener, **attrs) def delete_listener(self, listener, ignore_missing=True): """Delete a listener :param listener: The value can be either the ID of a listener or a :class:`~openstack.load_balancer.v2.listener.Listener` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the listener does not exist.
When set to ``True``, no exception will be set when attempting to delete a nonexistent listener. :returns: ``None`` """ self._delete(_listener.Listener, listener, ignore_missing=ignore_missing) def find_listener(self, name_or_id, ignore_missing=True): """Find a single listener :param name_or_id: The name or ID of a listener. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.load_balancer.v2.listener.Listener` or None """ return self._find(_listener.Listener, name_or_id, ignore_missing=ignore_missing) def get_listener(self, listener): """Get a single listener :param listener: The value can be the ID of a listener or a :class:`~openstack.load_balancer.v2.listener.Listener` instance. :returns: One :class:`~openstack.load_balancer.v2.listener.Listener` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_listener.Listener, listener) def listeners(self, **query): """Return a generator of listeners :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: :returns: A generator of listener objects :rtype: :class:`~openstack.load_balancer.v2.listener.Listener` """ return self._list(_listener.Listener, paginated=True, **query) def update_listener(self, listener, **attrs): """Update a listener :param listener: Either the id of a listener or a :class:`~openstack.load_balancer.v2.listener.Listener` instance. :param dict attrs: The attributes to update on the listener represented by ``listener``. 
:returns: The updated listener :rtype: :class:`~openstack.load_balancer.v2.listener.Listener` """ return self._update(_listener.Listener, listener, **attrs) def create_pool(self, **attrs): """Create a new pool from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2. pool.Pool`, comprised of the properties on the Pool class. :returns: The results of Pool creation :rtype: :class:`~openstack.load_balancer.v2.pool.Pool` """ return self._create(_pool.Pool, **attrs) def get_pool(self, pool): """Get a pool :param pool: The value can be the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance. :returns: One :class:`~openstack.load_balancer.v2.pool.Pool` """ return self._get(_pool.Pool, pool) def pools(self, **query): """Retrieve a generator of pools :returns: A generator of Pool instances """ return self._list(_pool.Pool, paginated=True, **query) def delete_pool(self, pool, ignore_missing=True): """Delete a pool :param pool: The value can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the pool does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent pool. :returns: ``None`` """ return self._delete(_pool.Pool, pool, ignore_missing=ignore_missing) def find_pool(self, name_or_id, ignore_missing=True): """Find a single pool :param name_or_id: The name or ID of a pool :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the pool does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent pool. :returns: One :class:`~openstack.load_balancer.v2.pool.Pool` or None """ return self._find(_pool.Pool, name_or_id, ignore_missing=ignore_missing) def update_pool(self, pool, **attrs): """Update a pool :param pool: Either the id of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance.
:param dict attrs: The attributes to update on the pool represented by ``pool``. :returns: The updated pool :rtype: :class:`~openstack.load_balancer.v2.pool.Pool` """ return self._update(_pool.Pool, pool, **attrs) def create_member(self, pool, **attrs): """Create a new member from attributes :param pool: The pool can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance that the member will be created in. :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2.member.Member`, comprised of the properties on the Member class. :returns: The results of member creation :rtype: :class:`~openstack.load_balancer.v2.member.Member` """ poolobj = self._get_resource(_pool.Pool, pool) return self._create(_member.Member, pool_id=poolobj.id, **attrs) def delete_member(self, member, pool, ignore_missing=True): """Delete a member :param member: The member can be either the ID of a member or a :class:`~openstack.load_balancer.v2.member.Member` instance. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance that the member belongs to. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the member does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent member. :returns: ``None`` """ poolobj = self._get_resource(_pool.Pool, pool) self._delete(_member.Member, member, ignore_missing=ignore_missing, pool_id=poolobj.id) def find_member(self, name_or_id, pool, ignore_missing=True): """Find a single member :param str name_or_id: The name or ID of a member. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance that the member belongs to. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. 
When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.load_balancer.v2.member.Member` or None """ poolobj = self._get_resource(_pool.Pool, pool) return self._find(_member.Member, name_or_id, ignore_missing=ignore_missing, pool_id=poolobj.id) def get_member(self, member, pool): """Get a single member :param member: The member can be the ID of a member or a :class:`~openstack.load_balancer.v2.member.Member` instance. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance that the member belongs to. :returns: One :class:`~openstack.load_balancer.v2.member.Member` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ poolobj = self._get_resource(_pool.Pool, pool) return self._get(_member.Member, member, pool_id=poolobj.id) def members(self, pool, **query): """Return a generator of members :param pool: The pool can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance that the member belongs to. :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: :returns: A generator of member objects :rtype: :class:`~openstack.load_balancer.v2.member.Member` """ poolobj = self._get_resource(_pool.Pool, pool) return self._list(_member.Member, paginated=True, pool_id=poolobj.id, **query) def update_member(self, member, pool, **attrs): """Update a member :param member: Either the ID of a member or a :class:`~openstack.load_balancer.v2.member.Member` instance. :param pool: The pool can be either the ID of a pool or a :class:`~openstack.load_balancer.v2.pool.Pool` instance that the member belongs to. :param dict attrs: The attributes to update on the member represented by ``member``. 
:returns: The updated member :rtype: :class:`~openstack.load_balancer.v2.member.Member` """ poolobj = self._get_resource(_pool.Pool, pool) return self._update(_member.Member, member, pool_id=poolobj.id, **attrs) def find_health_monitor(self, name_or_id, ignore_missing=True): """Find a single health monitor :param name_or_id: The name or ID of a health monitor :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the health monitor does not exist. When set to ``True``, no exception will be raised when attempting to find a nonexistent health monitor. :returns: The :class:`openstack.load_balancer.v2.healthmonitor.HealthMonitor` object matching the given name or ID, or None if nothing matches. :raises: :class:`openstack.exceptions.DuplicateResource` if more than one resource is found for this request. :raises: :class:`openstack.exceptions.ResourceNotFound` if nothing is found and ignore_missing is ``False``. """ return self._find(_hm.HealthMonitor, name_or_id, ignore_missing=ignore_missing) def create_health_monitor(self, **attrs): """Create a new health monitor from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor`, comprised of the properties on the HealthMonitor class. :returns: The results of HealthMonitor creation :rtype: :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor` """ return self._create(_hm.HealthMonitor, **attrs) def get_health_monitor(self, healthmonitor): """Get a health monitor :param healthmonitor: The value can be the ID of a health monitor or a :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor` instance. :returns: One health monitor :rtype: :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor` """ return self._get(_hm.HealthMonitor, healthmonitor) def health_monitors(self, **query): """Retrieve a generator of health monitors :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: 'name', 'created_at', 'updated_at', 'delay', 'expected_codes', 'http_method', 'max_retries', 'max_retries_down', 'pool_id', 'provisioning_status', 'operating_status', 'timeout', 'project_id', 'type', 'url_path', 'is_admin_state_up'. :returns: A generator of health monitor instances """ return self._list(_hm.HealthMonitor, paginated=True, **query) def delete_health_monitor(self, healthmonitor, ignore_missing=True): """Delete a health monitor :param healthmonitor: The healthmonitor can be either the ID of the health monitor or a :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor` instance :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the health monitor does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent health monitor. :returns: ``None`` """ return self._delete(_hm.HealthMonitor, healthmonitor, ignore_missing=ignore_missing) def update_health_monitor(self, healthmonitor, **attrs): """Update a health monitor :param healthmonitor: The healthmonitor can be either the ID of the health monitor or a :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor` instance :param dict attrs: The attributes to update on the health monitor represented by ``healthmonitor``. :returns: The updated health monitor :rtype: :class:`~openstack.load_balancer.v2.healthmonitor.HealthMonitor` """ return self._update(_hm.HealthMonitor, healthmonitor, **attrs) def create_l7_policy(self, **attrs): """Create a new l7policy from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2.l7_policy.L7Policy`, comprised of the properties on the L7Policy class. :returns: The results of l7policy creation :rtype: :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` """ return self._create(_l7policy.L7Policy, **attrs) def delete_l7_policy(self, l7_policy, ignore_missing=True): """Delete an l7policy :param l7_policy: The value can be either the ID of an l7policy or an :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the l7policy does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent l7policy. :returns: ``None`` """ self._delete(_l7policy.L7Policy, l7_policy, ignore_missing=ignore_missing) def find_l7_policy(self, name_or_id, ignore_missing=True): """Find a single l7policy :param name_or_id: The name or ID of an l7policy. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` or None """ return self._find(_l7policy.L7Policy, name_or_id, ignore_missing=ignore_missing) def get_l7_policy(self, l7_policy): """Get a single l7policy :param l7_policy: The value can be the ID of an l7policy or an :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance. :returns: One :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found.
""" return self._get(_l7policy.L7Policy, l7_policy) def l7_policies(self, **query): """Return a generator of l7policies :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: :returns: A generator of l7policy objects :rtype: :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` """ return self._list(_l7policy.L7Policy, paginated=True, **query) def update_l7_policy(self, l7_policy, **attrs): """Update a l7policy :param l7_policy: Either the id of a l7policy or a :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance. :param dict attrs: The attributes to update on the l7policy represented by ``l7policy``. :returns: The updated l7policy :rtype: :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` """ return self._update(_l7policy.L7Policy, l7_policy, **attrs) def create_l7_rule(self, l7_policy, **attrs): """Create a new l7rule from attributes :param l7_policy: The l7_policy can be either the ID of a l7policy or :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance that the l7rule will be created in. :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.load_balancer.v2.l7_rule.L7Rule`, comprised of the properties on the L7Rule class. :returns: The results of l7rule creation :rtype: :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` """ l7policyobj = self._get_resource(_l7policy.L7Policy, l7_policy) return self._create(_l7rule.L7Rule, l7policy_id=l7policyobj.id, **attrs) def delete_l7_rule(self, l7rule, l7_policy, ignore_missing=True): """Delete a l7rule :param l7rule: The l7rule can be either the ID of a l7rule or a :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` instance. :param l7_policy: The l7_policy can be either the ID of a l7policy or :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance that the l7rule belongs to. 
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the l7rule does not exist. When set to ``True``, no exception will be raised when attempting to delete a nonexistent l7rule. :returns: ``None`` """ l7policyobj = self._get_resource(_l7policy.L7Policy, l7_policy) self._delete(_l7rule.L7Rule, l7rule, ignore_missing=ignore_missing, l7policy_id=l7policyobj.id) def find_l7_rule(self, name_or_id, l7_policy, ignore_missing=True): """Find a single l7rule :param str name_or_id: The name or ID of an l7rule. :param l7_policy: The l7_policy can be either the ID of an l7policy or an :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance that the l7rule belongs to. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` or None """ l7policyobj = self._get_resource(_l7policy.L7Policy, l7_policy) return self._find(_l7rule.L7Rule, name_or_id, ignore_missing=ignore_missing, l7policy_id=l7policyobj.id) def get_l7_rule(self, l7rule, l7_policy): """Get a single l7rule :param l7rule: The l7rule can be the ID of an l7rule or an :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` instance. :param l7_policy: The l7_policy can be either the ID of an l7policy or an :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance that the l7rule belongs to. :returns: One :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found.
""" l7policyobj = self._get_resource(_l7policy.L7Policy, l7_policy) return self._get(_l7rule.L7Rule, l7rule, l7policy_id=l7policyobj.id) def l7_rules(self, l7_policy, **query): """Return a generator of l7rules :param l7_policy: The l7_policy can be either the ID of a l7_policy or :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance that the l7rule belongs to. :param dict query: Optional query parameters to be sent to limit the resources being returned. Valid parameters are: :returns: A generator of l7rule objects :rtype: :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` """ l7policyobj = self._get_resource(_l7policy.L7Policy, l7_policy) return self._list(_l7rule.L7Rule, paginated=True, l7policy_id=l7policyobj.id, **query) def update_l7_rule(self, l7rule, l7_policy, **attrs): """Update a l7rule :param l7rule: Either the ID of a l7rule or a :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` instance. :param l7_policy: The l7_policy can be either the ID of a l7policy or :class:`~openstack.load_balancer.v2.l7_policy.L7Policy` instance that the l7rule belongs to. :param dict attrs: The attributes to update on the l7rule represented by ``l7rule``. :returns: The updated l7rule :rtype: :class:`~openstack.load_balancer.v2.l7_rule.L7Rule` """ l7policyobj = self._get_resource(_l7policy.L7Policy, l7_policy) return self._update(_l7rule.L7Rule, l7rule, l7policy_id=l7policyobj.id, **attrs) openstacksdk-0.11.3/openstack/load_balancer/__init__.py0000666000175100017510000000000013236151340023145 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/load_balancer/load_balancer_service.py0000666000175100017510000000170113236151340025705 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class LoadBalancerService(service_filter.ServiceFilter): """The load balancer service.""" valid_versions = [service_filter.ValidVersion('v2', 'v2.0')] def __init__(self, version=None): """Create a load balancer service.""" super(LoadBalancerService, self).__init__( service_type='load-balancer', version=version ) openstacksdk-0.11.3/openstack/tests/0000775000175100017510000000000013236151501017437 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/examples/0000775000175100017510000000000013236151501021255 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/examples/test_identity.py0000666000175100017510000000247613236151340024533 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from examples import connect from examples.identity import list as identity_list class TestIdentity(testtools.TestCase): """Test the identity examples The purpose of these tests is to ensure the examples run without erring out. 
""" def setUp(self): super(TestIdentity, self).setUp() self.conn = connect.create_connection_from_config() def test_identity(self): identity_list.list_users(self.conn) identity_list.list_credentials(self.conn) identity_list.list_projects(self.conn) identity_list.list_domains(self.conn) identity_list.list_groups(self.conn) identity_list.list_services(self.conn) identity_list.list_endpoints(self.conn) identity_list.list_regions(self.conn) openstacksdk-0.11.3/openstack/tests/examples/test_compute.py0000666000175100017510000000326013236151340024346 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from examples.compute import create from examples.compute import delete from examples.compute import find as compute_find from examples.compute import list as compute_list from examples import connect from examples.network import find as network_find from examples.network import list as network_list class TestCompute(testtools.TestCase): """Test the compute examples The purpose of these tests is to ensure the examples run without erring out. 
""" def setUp(self): super(TestCompute, self).setUp() self.conn = connect.create_connection_from_config() def test_compute(self): compute_list.list_servers(self.conn) compute_list.list_images(self.conn) compute_list.list_flavors(self.conn) compute_list.list_keypairs(self.conn) network_list.list_networks(self.conn) compute_find.find_image(self.conn) compute_find.find_flavor(self.conn) compute_find.find_keypair(self.conn) network_find.find_network(self.conn) create.create_server(self.conn) delete.delete_keypair(self.conn) delete.delete_server(self.conn) openstacksdk-0.11.3/openstack/tests/examples/__init__.py0000666000175100017510000000000013236151340023357 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/examples/test_image.py0000666000175100017510000000224113236151340023752 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from examples import connect from examples.image import create as image_create from examples.image import delete as image_delete from examples.image import list as image_list class TestImage(testtools.TestCase): """Test the image examples The purpose of these tests is to ensure the examples run without erring out. 
""" def setUp(self): super(TestImage, self).setUp() self.conn = connect.create_connection_from_config() def test_image(self): image_list.list_images(self.conn) image_create.upload_image(self.conn) image_delete.delete_image(self.conn) openstacksdk-0.11.3/openstack/tests/examples/test_network.py0000666000175100017510000000273213236151340024366 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from examples import connect from examples.network import create as network_create from examples.network import delete as network_delete from examples.network import find as network_find from examples.network import list as network_list class TestNetwork(testtools.TestCase): """Test the network examples The purpose of these tests is to ensure the examples run without erring out. 
""" def setUp(self): super(TestNetwork, self).setUp() self.conn = connect.create_connection_from_config() def test_network(self): network_list.list_networks(self.conn) network_list.list_subnets(self.conn) network_list.list_ports(self.conn) network_list.list_security_groups(self.conn) network_list.list_routers(self.conn) network_find.find_network(self.conn) network_create.create_network(self.conn) network_delete.delete_network(self.conn) openstacksdk-0.11.3/openstack/tests/fakes.py0000666000175100017510000003541613236151340021116 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.V """ fakes ---------------------------------- Fakes used for testing """ import datetime import json import uuid from openstack.cloud._heat import template_format from openstack.cloud import meta PROJECT_ID = '1c36b64c840a42cd9e9b931a369337f0' FLAVOR_ID = u'0c1d9008-f546-4608-9e8f-f8bdaec8dddd' CHOCOLATE_FLAVOR_ID = u'0c1d9008-f546-4608-9e8f-f8bdaec8ddde' STRAWBERRY_FLAVOR_ID = u'0c1d9008-f546-4608-9e8f-f8bdaec8dddf' COMPUTE_ENDPOINT = 'https://compute.example.com/v2.1' ORCHESTRATION_ENDPOINT = 'https://orchestration.example.com/v1/{p}'.format( p=PROJECT_ID) NO_MD5 = '93b885adfe0da089cdf634904fd59f71' NO_SHA256 = '6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d' FAKE_PUBLIC_KEY = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" # flake8: noqa def make_fake_flavor(flavor_id, name, ram=100, disk=1600, vcpus=24): return { u'OS-FLV-DISABLED:disabled': False, u'OS-FLV-EXT-DATA:ephemeral': 0, u'disk': disk, u'id': flavor_id, u'links': [{ u'href': u'{endpoint}/flavors/{id}'.format( endpoint=COMPUTE_ENDPOINT, id=flavor_id), u'rel': u'self' }, { u'href': u'{endpoint}/flavors/{id}'.format( endpoint=COMPUTE_ENDPOINT, id=flavor_id), u'rel': u'bookmark' }], u'name': name, u'os-flavor-access:is_public': True, u'ram': ram, u'rxtx_factor': 1.0, u'swap': u'', u'vcpus': vcpus } FAKE_FLAVOR = make_fake_flavor(FLAVOR_ID, 'vanilla') FAKE_CHOCOLATE_FLAVOR = make_fake_flavor( CHOCOLATE_FLAVOR_ID, 'chocolate', ram=200) FAKE_STRAWBERRY_FLAVOR = make_fake_flavor( STRAWBERRY_FLAVOR_ID, 'strawberry', ram=300) FAKE_FLAVOR_LIST = [FAKE_FLAVOR, FAKE_CHOCOLATE_FLAVOR, FAKE_STRAWBERRY_FLAVOR] FAKE_TEMPLATE = '''heat_template_version: 2014-10-16 parameters: length: type: number default: 10 resources: my_rand: type: OS::Heat::RandomString properties: length: {get_param: length} outputs: rand: value: get_attr: [my_rand, value] ''' FAKE_TEMPLATE_CONTENT = template_format.parse(FAKE_TEMPLATE) def make_fake_server( server_id, name, status='ACTIVE', admin_pass=None, addresses=None, image=None, flavor=None): if addresses is None: if status == 'ACTIVE': addresses = { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:b0:8d", "version": 6, "addr": "fddb:b018:307:0:f816:3eff:fedf:b08d", "OS-EXT-IPS:type": "fixed"}, { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:b0:8d", "version": 4, "addr": "10.1.0.9", "OS-EXT-IPS:type": "fixed"}, { 
"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:b0:8d", "version": 4, "addr": "172.24.5.5", "OS-EXT-IPS:type": "floating"}]} else: addresses = {} if image is None: image = {"id": "217f3ab1-03e0-4450-bf27-63d52b421e9e", "links": []} if flavor is None: flavor = {"id": "64", "links": []} server = { "OS-EXT-STS:task_state": None, "addresses": addresses, "links": [], "image": image, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-03-23T23:57:38.000000", "flavor": flavor, "id": server_id, "security_groups": [{"name": "default"}], "user_id": "9c119f4beaaa438792ce89387362b3ad", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "metadata": {}, "status": status, "updated": "2017-03-23T23:57:39Z", "hostId": "89d165f04384e3ffa4b6536669eb49104d30d6ca832bba2684605dbc", "OS-SRV-USG:terminated_at": None, "key_name": None, "name": name, "created": "2017-03-23T23:57:12Z", "tenant_id": PROJECT_ID, "os-extended-volumes:volumes_attached": [], "config_drive": "True"} if admin_pass: server['adminPass'] = admin_pass return json.loads(json.dumps(server)) def make_fake_keypair(name): # Note: this is literally taken from: # https://developer.openstack.org/api-ref/compute/ return { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": name, "type": "ssh", "public_key": FAKE_PUBLIC_KEY, "created_at": datetime.datetime.now().isoformat(), } def make_fake_stack(id, name, description=None, status='CREATE_COMPLETE'): return { 'creation_time': '2017-03-23T23:57:12Z', 'deletion_time': '2017-03-23T23:57:12Z', 'description': description, 'id': id, 'links': [], 'parent': None, 'stack_name': name, 'stack_owner': None, 'stack_status': status, 'stack_user_project_id': PROJECT_ID, 'tags': None, 'updated_time': '2017-03-23T23:57:12Z', } def make_fake_stack_event( id, name, status='CREATE_COMPLETED', resource_name='id'): event_id = uuid.uuid4().hex self_url = 
"{endpoint}/stacks/{name}/{id}/resources/{name}/events/{event}" resource_url = "{endpoint}/stacks/{name}/{id}/resources/{name}" return { "resource_name": id if resource_name == 'id' else name, "event_time": "2017-03-26T19:38:18", "links": [ { "href": self_url.format( endpoint=ORCHESTRATION_ENDPOINT, name=name, id=id, event=event_id), "rel": "self" }, { "href": resource_url.format( endpoint=ORCHESTRATION_ENDPOINT, name=name, id=id), "rel": "resource" }, { "href": "{endpoint}/stacks/{name}/{id}".format( endpoint=ORCHESTRATION_ENDPOINT, name=name, id=id), "rel": "stack" }], "logical_resource_id": name, "resource_status": status, "resource_status_reason": "", "physical_resource_id": id, "id": event_id, } def make_fake_image( image_id=None, md5=NO_MD5, sha256=NO_SHA256, status='active', image_name=u'fake_image'): return { u'image_state': u'available', u'container_format': u'bare', u'min_ram': 0, u'ramdisk_id': None, u'updated_at': u'2016-02-10T05:05:02Z', u'file': '/v2/images/' + image_id + '/file', u'size': 3402170368, u'image_type': u'snapshot', u'disk_format': u'qcow2', u'id': image_id, u'schema': u'/v2/schemas/image', u'status': status, u'tags': [], u'visibility': u'private', u'locations': [{ u'url': u'http://127.0.0.1/images/' + image_id, u'metadata': {}}], u'min_disk': 40, u'virtual_size': None, u'name': image_name, u'checksum': u'ee36e35a297980dee1b514de9803ec6d', u'created_at': u'2016-02-10T05:03:11Z', u'owner_specified.openstack.md5': NO_MD5, u'owner_specified.openstack.sha256': NO_SHA256, u'owner_specified.openstack.object': 'images/{name}'.format( name=image_name), u'protected': False} def make_fake_machine(machine_name, machine_id=None): if not machine_id: machine_id = uuid.uuid4().hex return meta.obj_to_munch(FakeMachine( id=machine_id, name=machine_name)) def make_fake_port(address, node_id=None, port_id=None): if not node_id: node_id = uuid.uuid4().hex if not port_id: port_id = uuid.uuid4().hex return meta.obj_to_munch(FakeMachinePort( id=port_id, 
address=address, node_id=node_id)) class FakeFloatingIP(object): def __init__(self, id, pool, ip, fixed_ip, instance_id): self.id = id self.pool = pool self.ip = ip self.fixed_ip = fixed_ip self.instance_id = instance_id def make_fake_server_group(id, name, policies): return json.loads(json.dumps({ 'id': id, 'name': name, 'policies': policies, 'members': [], 'metadata': {}, })) def make_fake_hypervisor(id, name): return json.loads(json.dumps({ 'id': id, 'hypervisor_hostname': name, "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "status": "enabled", "state": "up", "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_type": "fake", "hypervisor_version": 1000, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "host1", "id": 7, "disabled_reason": None }, "vcpus": 1, "vcpus_used": 0 })) class FakeVolume(object): def __init__( self, id, status, name, attachments=None, size=75): self.id = id self.status = status self.name = name self.attachments = attachments or [] self.size = size self.snapshot_id = 'id:snapshot' self.description = 'description' self.volume_type = 'type:volume' self.availability_zone = 'az1' self.created_at = '1900-01-01 12:34:56' self.source_volid = '12345' self.metadata = {} class FakeVolumeSnapshot(object): def __init__( self, id, status, name, description, size=75): self.id = id self.status = status self.name = name self.description = description self.size = size self.created_at = '1900-01-01 12:34:56' self.volume_id = '12345' self.metadata = {} class FakeMachine(object): def __init__(self, id, name=None, driver=None, driver_info=None, chassis_uuid=None, instance_info=None, instance_uuid=None, properties=None, reservation=None, last_error=None, provision_state=None):
self.uuid = id self.name = name self.driver = driver self.driver_info = driver_info self.chassis_uuid = chassis_uuid self.instance_info = instance_info self.instance_uuid = instance_uuid self.properties = properties self.reservation = reservation self.last_error = last_error self.provision_state = provision_state class FakeMachinePort(object): def __init__(self, id, address, node_id): self.uuid = id self.address = address self.node_uuid = node_id def make_fake_neutron_security_group( id, name, description, rules, project_id=None): if not rules: rules = [] if not project_id: project_id = PROJECT_ID return json.loads(json.dumps({ 'id': id, 'name': name, 'description': description, 'project_id': project_id, 'tenant_id': project_id, 'security_group_rules': rules, })) def make_fake_nova_security_group_rule( id, from_port, to_port, ip_protocol, cidr): return json.loads(json.dumps({ 'id': id, 'from_port': int(from_port), 'to_port': int(to_port), 'ip_protocol': ip_protocol, 'ip_range': { 'cidr': cidr } })) def make_fake_nova_security_group(id, name, description, rules): if not rules: rules = [] return json.loads(json.dumps({ 'id': id, 'name': name, 'description': description, 'tenant_id': PROJECT_ID, 'rules': rules, })) class FakeNovaSecgroupRule(object): def __init__(self, id, from_port=None, to_port=None, ip_protocol=None, cidr=None, parent_group_id=None): self.id = id self.from_port = from_port self.to_port = to_port self.ip_protocol = ip_protocol if cidr: self.ip_range = {'cidr': cidr} self.parent_group_id = parent_group_id class FakeHypervisor(object): def __init__(self, id, hostname): self.id = id self.hypervisor_hostname = hostname class FakeZone(object): def __init__(self, id, name, type_, email, description, ttl, masters): self.id = id self.name = name self.type_ = type_ self.email = email self.description = description self.ttl = ttl self.masters = masters class FakeRecordset(object): def __init__(self, zone, id, name, type_, description, ttl, records): self.zone = zone
self.id = id self.name = name self.type_ = type_ self.description = description self.ttl = ttl self.records = records def make_fake_aggregate(id, name, availability_zone='nova', metadata=None, hosts=None): if not metadata: metadata = {} if not hosts: hosts = [] return json.loads(json.dumps({ "availability_zone": availability_zone, "created_at": datetime.datetime.now().isoformat(), "deleted": False, "deleted_at": None, "hosts": hosts, "id": int(id), "metadata": { "availability_zone": availability_zone, }, "name": name, "updated_at": None, })) openstacksdk-0.11.3/openstack/tests/functional/0000775000175100017510000000000013236151501021601 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_store/0000775000175100017510000000000013236151501024107 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_store/v2/0000775000175100017510000000000013236151501024436 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_store/v2/test_stats.py0000666000175100017510000000343413236151340027214 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
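The fake builders in fakes.py above pass their payloads through ``json.loads(json.dumps(...))``. A minimal standalone sketch of that round-trip (``make_fake_resource`` is a hypothetical stand-in, not a helper from fakes.py):

```python
import json


def make_fake_resource(resource_id, name):
    # The round-trip leaves only plain JSON types, so the fake looks exactly
    # like a payload parsed from a real API response, and it fails fast if a
    # non-serializable value (e.g. a datetime object) sneaks in.
    return json.loads(json.dumps({
        'id': resource_id,
        'name': name,
        'metadata': {},
        'deleted': False,
    }))


fake = make_fake_resource('abc123', 'res1')
```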
from openstack.block_storage.v2 import stats as _stats from openstack.tests.functional import base class TestStats(base.BaseFunctionalTest): @classmethod def setUpClass(cls): super(TestStats, cls).setUpClass() sot = cls.conn.block_storage.backend_pools() for pool in sot: assert isinstance(pool, _stats.Pools) def test_list(self): capList = ['volume_backend_name', 'storage_protocol', 'free_capacity_gb', 'driver_version', 'goodness_function', 'QoS_support', 'vendor_name', 'pool_name', 'thin_provisioning_support', 'thick_provisioning_support', 'timestamp', 'max_over_subscription_ratio', 'total_volumes', 'total_capacity_gb', 'filter_function', 'multiattach', 'provisioned_capacity_gb', 'allocated_capacity_gb', 'reserved_percentage', 'location_info'] capList.sort() pools = self.conn.block_storage.backend_pools() for pool in pools: caps = pool.capabilities assert isinstance(caps, dict) self.assertListEqual(sorted(caps.keys()), capList) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_floating_ip.py # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
from openstack.network.v2 import floating_ip from openstack.network.v2 import network from openstack.network.v2 import port from openstack.network.v2 import router from openstack.network.v2 import subnet from openstack.tests.functional import base class TestFloatingIP(base.BaseFunctionalTest): IPV4 = 4 EXT_CIDR = "10.100.0.0/24" INT_CIDR = "10.101.0.0/24" EXT_NET_ID = None INT_NET_ID = None EXT_SUB_ID = None INT_SUB_ID = None ROT_ID = None PORT_ID = None FIP = None def setUp(self): super(TestFloatingIP, self).setUp() self.ROT_NAME = self.getUniqueString() self.EXT_NET_NAME = self.getUniqueString() self.EXT_SUB_NAME = self.getUniqueString() self.INT_NET_NAME = self.getUniqueString() self.INT_SUB_NAME = self.getUniqueString() # Create External Network args = {'router:external': True} net = self._create_network(self.EXT_NET_NAME, **args) self.EXT_NET_ID = net.id sub = self._create_subnet( self.EXT_SUB_NAME, self.EXT_NET_ID, self.EXT_CIDR) self.EXT_SUB_ID = sub.id # Create Internal Network net = self._create_network(self.INT_NET_NAME) self.INT_NET_ID = net.id sub = self._create_subnet( self.INT_SUB_NAME, self.INT_NET_ID, self.INT_CIDR) self.INT_SUB_ID = sub.id # Create Router args = {'external_gateway_info': {'network_id': self.EXT_NET_ID}} sot = self.conn.network.create_router(name=self.ROT_NAME, **args) assert isinstance(sot, router.Router) self.assertEqual(self.ROT_NAME, sot.name) self.ROT_ID = sot.id self.ROT = sot # Add Router's Interface to Internal Network sot = self.ROT.add_interface( self.conn.network, subnet_id=self.INT_SUB_ID) self.assertEqual(sot['subnet_id'], self.INT_SUB_ID) # Create Port in Internal Network prt = self.conn.network.create_port(network_id=self.INT_NET_ID) assert isinstance(prt, port.Port) self.PORT_ID = prt.id # Create Floating IP.
fip = self.conn.network.create_ip(floating_network_id=self.EXT_NET_ID) assert isinstance(fip, floating_ip.FloatingIP) self.FIP = fip def tearDown(self): sot = self.conn.network.delete_ip(self.FIP.id, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_port(self.PORT_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.ROT.remove_interface( self.conn.network, subnet_id=self.INT_SUB_ID) self.assertEqual(sot['subnet_id'], self.INT_SUB_ID) sot = self.conn.network.delete_router( self.ROT_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_subnet( self.EXT_SUB_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_network( self.EXT_NET_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_subnet( self.INT_SUB_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_network( self.INT_NET_ID, ignore_missing=False) self.assertIsNone(sot) super(TestFloatingIP, self).tearDown() def _create_network(self, name, **args): self.name = name net = self.conn.network.create_network(name=name, **args) assert isinstance(net, network.Network) self.assertEqual(self.name, net.name) return net def _create_subnet(self, name, net_id, cidr): self.name = name self.net_id = net_id self.cidr = cidr sub = self.conn.network.create_subnet( name=self.name, ip_version=self.IPV4, network_id=self.net_id, cidr=self.cidr) assert isinstance(sub, subnet.Subnet) self.assertEqual(self.name, sub.name) return sub def test_find_by_id(self): sot = self.conn.network.find_ip(self.FIP.id) self.assertEqual(self.FIP.id, sot.id) def test_find_by_ip_address(self): sot = self.conn.network.find_ip(self.FIP.floating_ip_address) self.assertEqual(self.FIP.floating_ip_address, sot.floating_ip_address) self.assertEqual(self.FIP.floating_ip_address, sot.name) def test_find_available_ip(self): sot = self.conn.network.find_available_ip() self.assertIsNotNone(sot.id) self.assertIsNone(sot.port_id) 
def test_get(self): sot = self.conn.network.get_ip(self.FIP.id) self.assertEqual(self.EXT_NET_ID, sot.floating_network_id) self.assertEqual(self.FIP.id, sot.id) self.assertEqual(self.FIP.floating_ip_address, sot.floating_ip_address) self.assertEqual(self.FIP.fixed_ip_address, sot.fixed_ip_address) self.assertEqual(self.FIP.port_id, sot.port_id) self.assertEqual(self.FIP.router_id, sot.router_id) def test_list(self): ids = [o.id for o in self.conn.network.ips()] self.assertIn(self.FIP.id, ids) def test_update(self): sot = self.conn.network.update_ip(self.FIP.id, port_id=self.PORT_ID) self.assertEqual(self.PORT_ID, sot.port_id) self.assertEqual(self.FIP.id, sot.id) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_dvr_router.py0000666000175100017510000000363213236151340027434 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network.v2 import router from openstack.tests.functional import base class TestDVRRouter(base.BaseFunctionalTest): ID = None def setUp(self): super(TestDVRRouter, self).setUp() self.NAME = self.getUniqueString() self.UPDATE_NAME = self.getUniqueString() sot = self.conn.network.create_router(name=self.NAME, distributed=True) assert isinstance(sot, router.Router) self.assertEqual(self.NAME, sot.name) self.ID = sot.id def tearDown(self): sot = self.conn.network.delete_router(self.ID, ignore_missing=False) self.assertIsNone(sot) super(TestDVRRouter, self).tearDown() def test_find(self): sot = self.conn.network.find_router(self.NAME) self.assertEqual(self.ID, sot.id) def test_get(self): sot = self.conn.network.get_router(self.ID) self.assertEqual(self.NAME, sot.name) self.assertEqual(self.ID, sot.id) self.assertTrue(sot.is_distributed) def test_list(self): names = [o.name for o in self.conn.network.routers()] self.assertIn(self.NAME, names) dvr = [o.is_distributed for o in self.conn.network.routers()] self.assertTrue(dvr) def test_update(self): sot = self.conn.network.update_router(self.ID, name=self.UPDATE_NAME) self.assertEqual(self.UPDATE_NAME, sot.name) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_flavor.py0000666000175100017510000000553413236151340026535 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network.v2 import flavor
from openstack.tests.functional import base


class TestFlavor(base.BaseFunctionalTest):

    UPDATE_NAME = "UPDATED-NAME"
    SERVICE_TYPE = "FLAVORS"
    ID = None
    SERVICE_PROFILE_DESCRIPTION = "DESCRIPTION"
    METAINFO = "FLAVOR_PROFILE_METAINFO"

    def setUp(self):
        super(TestFlavor, self).setUp()
        self.FLAVOR_NAME = self.getUniqueString('flavor')
        flavors = self.conn.network.create_flavor(
            name=self.FLAVOR_NAME, service_type=self.SERVICE_TYPE)
        assert isinstance(flavors, flavor.Flavor)
        self.assertEqual(self.FLAVOR_NAME, flavors.name)
        self.assertEqual(self.SERVICE_TYPE, flavors.service_type)
        self.ID = flavors.id
        self.service_profiles = self.conn.network.create_service_profile(
            description=self.SERVICE_PROFILE_DESCRIPTION,
            metainfo=self.METAINFO,
        )

    def tearDown(self):
        flavors = self.conn.network.delete_flavor(self.ID, ignore_missing=True)
        self.assertIsNone(flavors)
        # Delete the service profile by its own ID, not the flavor's ID.
        service_profiles = self.conn.network.delete_service_profile(
            self.service_profiles.id, ignore_missing=True)
        self.assertIsNone(service_profiles)
        super(TestFlavor, self).tearDown()

    def test_find(self):
        flavors = self.conn.network.find_flavor(self.FLAVOR_NAME)
        self.assertEqual(self.ID, flavors.id)

    def test_get(self):
        flavors = self.conn.network.get_flavor(self.ID)
        self.assertEqual(self.FLAVOR_NAME, flavors.name)
        self.assertEqual(self.ID, flavors.id)

    def test_list(self):
        names = [f.name for f in self.conn.network.flavors()]
        self.assertIn(self.FLAVOR_NAME, names)

    def test_update(self):
        updated = self.conn.network.update_flavor(
            self.ID, name=self.UPDATE_NAME)
        self.assertEqual(self.UPDATE_NAME, updated.name)

    def test_associate_disassociate_flavor_with_service_profile(self):
        response = self.conn.network.associate_flavor_with_service_profile(
            self.ID, self.service_profiles.id)
        self.assertIsNotNone(response)
        response = self.conn.network.disassociate_flavor_from_service_profile(
            self.ID, self.service_profiles.id)
        self.assertIsNone(response)
openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_qos_minimum_bandwidth_rule.py0000666000175100017510000000650413236151340032652 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import (qos_minimum_bandwidth_rule as _qos_minimum_bandwidth_rule) from openstack.tests.functional import base class TestQoSMinimumBandwidthRule(base.BaseFunctionalTest): QOS_POLICY_ID = None QOS_IS_SHARED = False QOS_POLICY_DESCRIPTION = "QoS policy description" RULE_ID = None RULE_MIN_KBPS = 1200 RULE_MIN_KBPS_NEW = 1800 RULE_DIRECTION = 'egress' def setUp(self): super(TestQoSMinimumBandwidthRule, self).setUp() self.QOS_POLICY_NAME = self.getUniqueString() qos_policy = self.conn.network.create_qos_policy( description=self.QOS_POLICY_DESCRIPTION, name=self.QOS_POLICY_NAME, shared=self.QOS_IS_SHARED, ) self.QOS_POLICY_ID = qos_policy.id qos_min_bw_rule = self.conn.network.create_qos_minimum_bandwidth_rule( self.QOS_POLICY_ID, direction=self.RULE_DIRECTION, min_kbps=self.RULE_MIN_KBPS, ) assert isinstance(qos_min_bw_rule, _qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule) self.assertEqual(self.RULE_MIN_KBPS, qos_min_bw_rule.min_kbps) self.assertEqual(self.RULE_DIRECTION, qos_min_bw_rule.direction) self.RULE_ID = qos_min_bw_rule.id def tearDown(self): rule = self.conn.network.delete_qos_minimum_bandwidth_rule( self.RULE_ID, self.QOS_POLICY_ID) qos_policy = self.conn.network.delete_qos_policy(self.QOS_POLICY_ID) self.assertIsNone(rule) 
self.assertIsNone(qos_policy) super(TestQoSMinimumBandwidthRule, self).tearDown() def test_find(self): sot = self.conn.network.find_qos_minimum_bandwidth_rule( self.RULE_ID, self.QOS_POLICY_ID) self.assertEqual(self.RULE_ID, sot.id) self.assertEqual(self.RULE_DIRECTION, sot.direction) self.assertEqual(self.RULE_MIN_KBPS, sot.min_kbps) def test_get(self): sot = self.conn.network.get_qos_minimum_bandwidth_rule( self.RULE_ID, self.QOS_POLICY_ID) self.assertEqual(self.RULE_ID, sot.id) self.assertEqual(self.QOS_POLICY_ID, sot.qos_policy_id) self.assertEqual(self.RULE_DIRECTION, sot.direction) self.assertEqual(self.RULE_MIN_KBPS, sot.min_kbps) def test_list(self): rule_ids = [o.id for o in self.conn.network.qos_minimum_bandwidth_rules( self.QOS_POLICY_ID)] self.assertIn(self.RULE_ID, rule_ids) def test_update(self): sot = self.conn.network.update_qos_minimum_bandwidth_rule( self.RULE_ID, self.QOS_POLICY_ID, min_kbps=self.RULE_MIN_KBPS_NEW) self.assertEqual(self.RULE_MIN_KBPS_NEW, sot.min_kbps) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_service_profile.py0000666000175100017510000000473713236151340030430 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network.v2 import service_profile as _service_profile from openstack.tests.functional import base class TestServiceProfile(base.BaseFunctionalTest): SERVICE_PROFILE_DESCRIPTION = "DESCRIPTION" UPDATE_DESCRIPTION = "UPDATED-DESCRIPTION" METAINFO = "FlAVOR_PROFILE_METAINFO" ID = None def setUp(self): super(TestServiceProfile, self).setUp() service_profiles = self.conn.network.create_service_profile( description=self.SERVICE_PROFILE_DESCRIPTION, metainfo=self.METAINFO,) assert isinstance(service_profiles, _service_profile.ServiceProfile) self.assertEqual( self.SERVICE_PROFILE_DESCRIPTION, service_profiles.description) self.assertEqual(self.METAINFO, service_profiles.meta_info) self.ID = service_profiles.id def tearDown(self): service_profiles = self.conn.network.delete_service_profile( self.ID, ignore_missing=True) self.assertIsNone(service_profiles) super(TestServiceProfile, self).tearDown() def test_find(self): service_profiles = self.conn.network.find_service_profile( self.ID) self.assertEqual(self.METAINFO, service_profiles.meta_info) def test_get(self): service_profiles = self.conn.network.get_service_profile(self.ID) self.assertEqual(self.METAINFO, service_profiles.meta_info) self.assertEqual(self.SERVICE_PROFILE_DESCRIPTION, service_profiles.description) def test_update(self): service_profiles = self.conn.network.update_service_profile( self.ID, description=self.UPDATE_DESCRIPTION) self.assertEqual(self.UPDATE_DESCRIPTION, service_profiles.description) def test_list(self): metainfos = [f.meta_info for f in self.conn.network.service_profiles()] self.assertIn(self.METAINFO, metainfos) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_agent_add_remove_network.py0000666000175100017510000000407613236151340032300 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import network from openstack.tests.functional import base class TestAgentNetworks(base.BaseFunctionalTest): NETWORK_ID = None AGENT = None AGENT_ID = None def setUp(self): super(TestAgentNetworks, self).setUp() self.NETWORK_NAME = self.getUniqueString('network') net = self.conn.network.create_network(name=self.NETWORK_NAME) self.addCleanup(self.conn.network.delete_network, net.id) assert isinstance(net, network.Network) self.NETWORK_ID = net.id agent_list = list(self.conn.network.agents()) agents = [agent for agent in agent_list if agent.agent_type == 'DHCP agent'] self.AGENT = agents[0] self.AGENT_ID = self.AGENT.id def test_add_remove_agent(self): net = self.AGENT.add_agent_to_network(self.conn.network, network_id=self.NETWORK_ID) self._verify_add(net) net = self.AGENT.remove_agent_from_network(self.conn.network, network_id=self.NETWORK_ID) self._verify_remove(net) def _verify_add(self, network): net = self.conn.network.dhcp_agent_hosting_networks(self.AGENT_ID) net_ids = [n.id for n in net] self.assertIn(self.NETWORK_ID, net_ids) def _verify_remove(self, network): net = self.conn.network.dhcp_agent_hosting_networks(self.AGENT_ID) net_ids = [n.id for n in net] self.assertNotIn(self.NETWORK_ID, net_ids) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_qos_policy.py0000666000175100017510000000464013236151340027422 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import qos_policy as _qos_policy from openstack.tests.functional import base class TestQoSPolicy(base.BaseFunctionalTest): QOS_POLICY_ID = None IS_SHARED = False IS_DEFAULT = False RULES = [] QOS_POLICY_DESCRIPTION = "QoS policy description" def setUp(self): super(TestQoSPolicy, self).setUp() self.QOS_POLICY_NAME = self.getUniqueString() self.QOS_POLICY_NAME_UPDATED = self.getUniqueString() qos = self.conn.network.create_qos_policy( description=self.QOS_POLICY_DESCRIPTION, name=self.QOS_POLICY_NAME, shared=self.IS_SHARED, is_default=self.IS_DEFAULT, ) assert isinstance(qos, _qos_policy.QoSPolicy) self.assertEqual(self.QOS_POLICY_NAME, qos.name) self.QOS_POLICY_ID = qos.id def tearDown(self): sot = self.conn.network.delete_qos_policy(self.QOS_POLICY_ID) self.assertIsNone(sot) super(TestQoSPolicy, self).tearDown() def test_find(self): sot = self.conn.network.find_qos_policy(self.QOS_POLICY_NAME) self.assertEqual(self.QOS_POLICY_ID, sot.id) def test_get(self): sot = self.conn.network.get_qos_policy(self.QOS_POLICY_ID) self.assertEqual(self.QOS_POLICY_NAME, sot.name) self.assertEqual(self.IS_SHARED, sot.is_shared) self.assertEqual(self.RULES, sot.rules) self.assertEqual(self.QOS_POLICY_DESCRIPTION, sot.description) self.assertEqual(self.IS_DEFAULT, sot.is_default) def test_list(self): names = [o.name for o in self.conn.network.qos_policies()] self.assertIn(self.QOS_POLICY_NAME, names) def test_update(self): sot = self.conn.network.update_qos_policy( self.QOS_POLICY_ID, name=self.QOS_POLICY_NAME_UPDATED) 
self.assertEqual(self.QOS_POLICY_NAME_UPDATED, sot.name) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_extension.py0000666000175100017510000000211613236151340027251 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from openstack.tests.functional import base class TestExtension(base.BaseFunctionalTest): def test_list(self): extensions = list(self.conn.network.extensions()) self.assertGreater(len(extensions), 0) for ext in extensions: self.assertIsInstance(ext.name, six.string_types) self.assertIsInstance(ext.alias, six.string_types) def test_find(self): extension = self.conn.network.find_extension('external-net') self.assertEqual('Neutron external network', extension.name) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_qos_dscp_marking_rule.py0000666000175100017510000000575313236151340031621 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network.v2 import (qos_dscp_marking_rule as
                                  _qos_dscp_marking_rule)
from openstack.tests.functional import base


class TestQoSDSCPMarkingRule(base.BaseFunctionalTest):

    QOS_POLICY_ID = None
    QOS_IS_SHARED = False
    QOS_POLICY_DESCRIPTION = "QoS policy description"
    RULE_DSCP_MARK = 36
    RULE_DSCP_MARK_NEW = 40

    def setUp(self):
        super(TestQoSDSCPMarkingRule, self).setUp()
        self.QOS_POLICY_NAME = self.getUniqueString()
        qos_policy = self.conn.network.create_qos_policy(
            description=self.QOS_POLICY_DESCRIPTION,
            name=self.QOS_POLICY_NAME,
            shared=self.QOS_IS_SHARED,
        )
        self.QOS_POLICY_ID = qos_policy.id
        qos_rule = self.conn.network.create_qos_dscp_marking_rule(
            self.QOS_POLICY_ID,
            dscp_mark=self.RULE_DSCP_MARK,
        )
        assert isinstance(qos_rule,
                          _qos_dscp_marking_rule.QoSDSCPMarkingRule)
        self.assertEqual(self.RULE_DSCP_MARK, qos_rule.dscp_mark)
        self.RULE_ID = qos_rule.id

    def tearDown(self):
        rule = self.conn.network.delete_qos_dscp_marking_rule(
            self.RULE_ID, self.QOS_POLICY_ID)
        qos_policy = self.conn.network.delete_qos_policy(self.QOS_POLICY_ID)
        self.assertIsNone(rule)
        self.assertIsNone(qos_policy)
        super(TestQoSDSCPMarkingRule, self).tearDown()

    def test_find(self):
        sot = self.conn.network.find_qos_dscp_marking_rule(
            self.RULE_ID, self.QOS_POLICY_ID)
        self.assertEqual(self.RULE_ID, sot.id)
        self.assertEqual(self.RULE_DSCP_MARK, sot.dscp_mark)

    def test_get(self):
        sot = self.conn.network.get_qos_dscp_marking_rule(
            self.RULE_ID, self.QOS_POLICY_ID)
        self.assertEqual(self.RULE_ID, sot.id)
        self.assertEqual(self.QOS_POLICY_ID, sot.qos_policy_id)
        self.assertEqual(self.RULE_DSCP_MARK, sot.dscp_mark)

    def test_list(self):
        rule_ids = [o.id for o in self.conn.network.qos_dscp_marking_rules(
            self.QOS_POLICY_ID)]
        self.assertIn(self.RULE_ID, rule_ids)

    def test_update(self):
        sot = self.conn.network.update_qos_dscp_marking_rule(
            self.RULE_ID,
            self.QOS_POLICY_ID,
            dscp_mark=self.RULE_DSCP_MARK_NEW)
        self.assertEqual(self.RULE_DSCP_MARK_NEW, sot.dscp_mark)
openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_network_ip_availability.py0000666000175100017510000000527613236151340032156 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.network.v2 import network
from openstack.network.v2 import port
from openstack.network.v2 import subnet
from openstack.tests.functional import base


class TestNetworkIPAvailability(base.BaseFunctionalTest):

    IPV4 = 4
    CIDR = "10.100.0.0/24"
    NET_ID = None
    SUB_ID = None
    PORT_ID = None

    def setUp(self):
        super(TestNetworkIPAvailability, self).setUp()
        self.NET_NAME = self.getUniqueString()
        self.SUB_NAME = self.getUniqueString()
        self.PORT_NAME = self.getUniqueString()
        self.UPDATE_NAME = self.getUniqueString()
        net = self.conn.network.create_network(name=self.NET_NAME)
        assert isinstance(net, network.Network)
        self.assertEqual(self.NET_NAME, net.name)
        self.NET_ID = net.id
        sub = self.conn.network.create_subnet(
            name=self.SUB_NAME, ip_version=self.IPV4,
            network_id=self.NET_ID, cidr=self.CIDR)
        assert isinstance(sub, subnet.Subnet)
        self.assertEqual(self.SUB_NAME, sub.name)
        self.SUB_ID = sub.id
        prt = self.conn.network.create_port(
            name=self.PORT_NAME, network_id=self.NET_ID)
        assert isinstance(prt, port.Port)
        self.assertEqual(self.PORT_NAME, prt.name)
        self.PORT_ID = prt.id

    def tearDown(self):
        sot = self.conn.network.delete_port(self.PORT_ID)
        self.assertIsNone(sot)
        sot = self.conn.network.delete_subnet(self.SUB_ID)
        self.assertIsNone(sot)
        sot = self.conn.network.delete_network(self.NET_ID)
        self.assertIsNone(sot)
        super(TestNetworkIPAvailability, self).tearDown()

    def test_find(self):
        sot = self.conn.network.find_network_ip_availability(self.NET_ID)
        self.assertEqual(self.NET_ID, sot.network_id)

    def test_get(self):
        sot = self.conn.network.get_network_ip_availability(self.NET_ID)
        self.assertEqual(self.NET_ID, sot.network_id)
        self.assertEqual(self.NET_NAME, sot.network_name)

    def test_list(self):
        ids = [o.network_id
               for o in self.conn.network.network_ip_availabilities()]
        self.assertIn(self.NET_ID, ids)
openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_qos_bandwidth_limit_rule.py0000666000175100017510000000761413236151340032310 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.network.v2 import (qos_bandwidth_limit_rule as
                                  _qos_bandwidth_limit_rule)
from openstack.tests.functional import base


class TestQoSBandwidthLimitRule(base.BaseFunctionalTest):

    QOS_POLICY_ID = None
    QOS_IS_SHARED = False
    QOS_POLICY_DESCRIPTION = "QoS policy description"
    RULE_MAX_KBPS = 1500
    RULE_MAX_KBPS_NEW = 1800
    RULE_MAX_BURST_KBPS = 1100
    RULE_MAX_BURST_KBPS_NEW = 1300
    RULE_DIRECTION = 'egress'
    RULE_DIRECTION_NEW = 'ingress'

    def setUp(self):
        super(TestQoSBandwidthLimitRule, self).setUp()
        self.QOS_POLICY_NAME = self.getUniqueString()
        qos_policy = self.conn.network.create_qos_policy(
            description=self.QOS_POLICY_DESCRIPTION,
            name=self.QOS_POLICY_NAME,
            shared=self.QOS_IS_SHARED,
        )
        self.QOS_POLICY_ID = qos_policy.id
        qos_rule = self.conn.network.create_qos_bandwidth_limit_rule(
            self.QOS_POLICY_ID,
            max_kbps=self.RULE_MAX_KBPS,
            max_burst_kbps=self.RULE_MAX_BURST_KBPS,
            direction=self.RULE_DIRECTION,
        )
        assert isinstance(qos_rule,
                          _qos_bandwidth_limit_rule.QoSBandwidthLimitRule)
        self.assertEqual(self.RULE_MAX_KBPS, qos_rule.max_kbps)
        self.assertEqual(self.RULE_MAX_BURST_KBPS, qos_rule.max_burst_kbps)
        self.assertEqual(self.RULE_DIRECTION, qos_rule.direction)
        self.RULE_ID = qos_rule.id

    def tearDown(self):
        rule = self.conn.network.delete_qos_bandwidth_limit_rule(
            self.RULE_ID, self.QOS_POLICY_ID)
        qos_policy = self.conn.network.delete_qos_policy(self.QOS_POLICY_ID)
        self.assertIsNone(rule)
        self.assertIsNone(qos_policy)
        super(TestQoSBandwidthLimitRule, self).tearDown()

    def test_find(self):
        sot = self.conn.network.find_qos_bandwidth_limit_rule(
            self.RULE_ID, self.QOS_POLICY_ID)
        self.assertEqual(self.RULE_ID, sot.id)
        self.assertEqual(self.RULE_MAX_KBPS, sot.max_kbps)
        self.assertEqual(self.RULE_MAX_BURST_KBPS, sot.max_burst_kbps)
        self.assertEqual(self.RULE_DIRECTION, sot.direction)

    def test_get(self):
        sot = self.conn.network.get_qos_bandwidth_limit_rule(
            self.RULE_ID, self.QOS_POLICY_ID)
        self.assertEqual(self.RULE_ID, sot.id)
        self.assertEqual(self.QOS_POLICY_ID, sot.qos_policy_id)
        self.assertEqual(self.RULE_MAX_KBPS, sot.max_kbps)
        self.assertEqual(self.RULE_MAX_BURST_KBPS, sot.max_burst_kbps)
        self.assertEqual(self.RULE_DIRECTION, sot.direction)

    def test_list(self):
        rule_ids = [o.id
                    for o in self.conn.network.qos_bandwidth_limit_rules(
                        self.QOS_POLICY_ID)]
        self.assertIn(self.RULE_ID, rule_ids)

    def test_update(self):
        sot = self.conn.network.update_qos_bandwidth_limit_rule(
            self.RULE_ID,
            self.QOS_POLICY_ID,
            max_kbps=self.RULE_MAX_KBPS_NEW,
            max_burst_kbps=self.RULE_MAX_BURST_KBPS_NEW,
            direction=self.RULE_DIRECTION_NEW)
        self.assertEqual(self.RULE_MAX_KBPS_NEW, sot.max_kbps)
        self.assertEqual(self.RULE_MAX_BURST_KBPS_NEW, sot.max_burst_kbps)
        self.assertEqual(self.RULE_DIRECTION_NEW, sot.direction)
openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_router_add_remove_interface.py0000666000175100017510000000532613236151340032770 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.network.v2 import network from openstack.network.v2 import router from openstack.network.v2 import subnet from openstack.tests.functional import base class TestRouterInterface(base.BaseFunctionalTest): CIDR = "10.100.0.0/16" IPV4 = 4 ROUTER_ID = None NET_ID = None SUB_ID = None ROT = None def setUp(self): super(TestRouterInterface, self).setUp() self.ROUTER_NAME = self.getUniqueString() self.NET_NAME = self.getUniqueString() self.SUB_NAME = self.getUniqueString() sot = self.conn.network.create_router(name=self.ROUTER_NAME) assert isinstance(sot, router.Router) self.assertEqual(self.ROUTER_NAME, sot.name) net = self.conn.network.create_network(name=self.NET_NAME) assert isinstance(net, network.Network) self.assertEqual(self.NET_NAME, net.name) sub = self.conn.network.create_subnet( name=self.SUB_NAME, ip_version=self.IPV4, network_id=net.id, cidr=self.CIDR) assert isinstance(sub, subnet.Subnet) self.assertEqual(self.SUB_NAME, sub.name) self.ROUTER_ID = sot.id self.ROT = sot self.NET_ID = net.id self.SUB_ID = sub.id def tearDown(self): sot = self.conn.network.delete_router( self.ROUTER_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_subnet( self.SUB_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_network( self.NET_ID, ignore_missing=False) self.assertIsNone(sot) super(TestRouterInterface, self).tearDown() def test_router_add_remove_interface(self): iface = self.ROT.add_interface(self.conn.network, subnet_id=self.SUB_ID) self._verification(iface) iface = self.ROT.remove_interface(self.conn.network, subnet_id=self.SUB_ID) self._verification(iface) def _verification(self, interface): self.assertEqual(interface['subnet_id'], self.SUB_ID) self.assertIn('port_id', interface) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_rbac_policy.py0000666000175100017510000000416713236151340027533 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); 
you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import network from openstack.network.v2 import rbac_policy from openstack.tests.functional import base class TestRBACPolicy(base.BaseFunctionalTest): ACTION = 'access_as_shared' OBJ_TYPE = 'network' TARGET_TENANT_ID = '*' NET_ID = None ID = None def setUp(self): super(TestRBACPolicy, self).setUp() self.NET_NAME = self.getUniqueString('net') self.UPDATE_NAME = self.getUniqueString() net = self.conn.network.create_network(name=self.NET_NAME) assert isinstance(net, network.Network) self.NET_ID = net.id sot = self.conn.network.create_rbac_policy( action=self.ACTION, object_type=self.OBJ_TYPE, target_tenant=self.TARGET_TENANT_ID, object_id=self.NET_ID) assert isinstance(sot, rbac_policy.RBACPolicy) self.ID = sot.id def tearDown(self): sot = self.conn.network.delete_rbac_policy( self.ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_network( self.NET_ID, ignore_missing=False) self.assertIsNone(sot) super(TestRBACPolicy, self).tearDown() def test_find(self): sot = self.conn.network.find_rbac_policy(self.ID) self.assertEqual(self.ID, sot.id) def test_get(self): sot = self.conn.network.get_rbac_policy(self.ID) self.assertEqual(self.ID, sot.id) def test_list(self): ids = [o.id for o in self.conn.network.rbac_policies()] self.assertIn(self.ID, ids) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_quota.py0000666000175100017510000000276713236151340026402 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the 
"License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.tests.functional import base class TestQuota(base.BaseFunctionalTest): def test_list(self): for qot in self.conn.network.quotas(): self.assertIsNotNone(qot.project_id) self.assertIsNotNone(qot.networks) def test_list_details(self): expected_keys = ['limit', 'used', 'reserved'] project_id = self.conn.session.get_project_id() quota_details = self.conn.network.get_quota(project_id, details=True) for details in quota_details._body.attributes.values(): for expected_key in expected_keys: self.assertTrue(expected_key in details.keys()) def test_set(self): attrs = {'networks': 123456789} for project_quota in self.conn.network.quotas(): self.conn.network.update_quota(project_quota, **attrs) new_quota = self.conn.network.get_quota(project_quota.project_id) self.assertEqual(123456789, new_quota.networks) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_agent.py0000666000175100017510000000315013236151340026332 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import uuid

from openstack.network.v2 import agent
from openstack.tests.functional import base


class TestAgent(base.BaseFunctionalTest):

    AGENT = None
    DESC = 'test description'

    def validate_uuid(self, s):
        try:
            uuid.UUID(s)
        except Exception:
            return False
        return True

    def setUp(self):
        super(TestAgent, self).setUp()
        agent_list = list(self.conn.network.agents())
        self.AGENT = agent_list[0]
        assert isinstance(self.AGENT, agent.Agent)

    def test_list(self):
        agent_list = list(self.conn.network.agents())
        self.AGENT = agent_list[0]
        assert isinstance(self.AGENT, agent.Agent)
        self.assertTrue(self.validate_uuid(self.AGENT.id))

    def test_get(self):
        sot = self.conn.network.get_agent(self.AGENT.id)
        self.assertEqual(self.AGENT.id, sot.id)

    def test_update(self):
        sot = self.conn.network.update_agent(self.AGENT.id,
                                             description=self.DESC)
        self.assertEqual(self.DESC, sot.description)
openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_router.py0000666000175100017510000000356113236151340026562 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.network.v2 import router
from openstack.tests.functional import base


class TestRouter(base.BaseFunctionalTest):

    ID = None

    def setUp(self):
        super(TestRouter, self).setUp()
        self.NAME = self.getUniqueString()
        self.UPDATE_NAME = self.getUniqueString()
        sot = self.conn.network.create_router(name=self.NAME)
        assert isinstance(sot, router.Router)
        self.assertEqual(self.NAME, sot.name)
        self.ID = sot.id

    def tearDown(self):
        sot = self.conn.network.delete_router(self.ID, ignore_missing=False)
        self.assertIsNone(sot)
        super(TestRouter, self).tearDown()

    def test_find(self):
        sot = self.conn.network.find_router(self.NAME)
        self.assertEqual(self.ID, sot.id)

    def test_get(self):
        sot = self.conn.network.get_router(self.ID)
        self.assertEqual(self.NAME, sot.name)
        self.assertEqual(self.ID, sot.id)
        self.assertFalse(sot.is_ha)

    def test_list(self):
        names = [o.name for o in self.conn.network.routers()]
        self.assertIn(self.NAME, names)
        ha = [o.is_ha for o in self.conn.network.routers()]
        self.assertIn(False, ha)

    def test_update(self):
        sot = self.conn.network.update_router(self.ID, name=self.UPDATE_NAME)
        self.assertEqual(self.UPDATE_NAME, sot.name)

openstacksdk-0.11.3/openstack/tests/functional/network/v2/__init__.py

openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_security_group.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.network.v2 import security_group
from openstack.tests.functional import base


class TestSecurityGroup(base.BaseFunctionalTest):

    ID = None

    def setUp(self):
        super(TestSecurityGroup, self).setUp()
        self.NAME = self.getUniqueString()
        sot = self.conn.network.create_security_group(name=self.NAME)
        assert isinstance(sot, security_group.SecurityGroup)
        self.assertEqual(self.NAME, sot.name)
        self.ID = sot.id

    def tearDown(self):
        sot = self.conn.network.delete_security_group(
            self.ID, ignore_missing=False)
        self.assertIsNone(sot)
        super(TestSecurityGroup, self).tearDown()

    def test_find(self):
        sot = self.conn.network.find_security_group(self.NAME)
        self.assertEqual(self.ID, sot.id)

    def test_get(self):
        sot = self.conn.network.get_security_group(self.ID)
        self.assertEqual(self.NAME, sot.name)
        self.assertEqual(self.ID, sot.id)

    def test_list(self):
        names = [o.name for o in self.conn.network.security_groups()]
        self.assertIn(self.NAME, names)

openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_qos_rule_type.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six

from openstack.tests.functional import base


class TestQoSRuleType(base.BaseFunctionalTest):

    QOS_RULE_TYPE = "bandwidth_limit"

    def test_find(self):
        sot = self.conn.network.find_qos_rule_type(self.QOS_RULE_TYPE)
        self.assertEqual(self.QOS_RULE_TYPE, sot.type)
        self.assertIsInstance(sot.drivers, list)

    def test_get(self):
        sot = self.conn.network.get_qos_rule_type(self.QOS_RULE_TYPE)
        self.assertEqual(self.QOS_RULE_TYPE, sot.type)
        self.assertIsInstance(sot.drivers, list)

    def test_list(self):
        rule_types = list(self.conn.network.qos_rule_types())
        self.assertGreater(len(rule_types), 0)
        for rule_type in rule_types:
            self.assertIsInstance(rule_type.type, six.string_types)

openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_security_group_rule.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.network.v2 import security_group from openstack.network.v2 import security_group_rule from openstack.tests.functional import base class TestSecurityGroupRule(base.BaseFunctionalTest): IPV4 = 'IPv4' PROTO = 'tcp' PORT = 22 DIR = 'ingress' ID = None RULE_ID = None def setUp(self): super(TestSecurityGroupRule, self).setUp() self.NAME = self.getUniqueString() sot = self.conn.network.create_security_group(name=self.NAME) assert isinstance(sot, security_group.SecurityGroup) self.assertEqual(self.NAME, sot.name) self.ID = sot.id rul = self.conn.network.create_security_group_rule( direction=self.DIR, ethertype=self.IPV4, port_range_max=self.PORT, port_range_min=self.PORT, protocol=self.PROTO, security_group_id=self.ID) assert isinstance(rul, security_group_rule.SecurityGroupRule) self.assertEqual(self.ID, rul.security_group_id) self.RULE_ID = rul.id def tearDown(self): sot = self.conn.network.delete_security_group_rule( self.RULE_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_security_group( self.ID, ignore_missing=False) self.assertIsNone(sot) super(TestSecurityGroupRule, self).tearDown() def test_find(self): sot = self.conn.network.find_security_group_rule(self.RULE_ID) self.assertEqual(self.RULE_ID, sot.id) def test_get(self): sot = self.conn.network.get_security_group_rule(self.RULE_ID) self.assertEqual(self.RULE_ID, sot.id) self.assertEqual(self.DIR, sot.direction) self.assertEqual(self.PROTO, sot.protocol) self.assertEqual(self.PORT, sot.port_range_min) self.assertEqual(self.PORT, sot.port_range_max) self.assertEqual(self.ID, sot.security_group_id) def test_list(self): ids = [o.id for o in self.conn.network.security_group_rules()] self.assertIn(self.RULE_ID, ids) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_subnet_pool.py0000666000175100017510000000600213236151340027564 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in 
compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import subnet_pool as _subnet_pool from openstack.tests.functional import base class TestSubnetPool(base.BaseFunctionalTest): SUBNET_POOL_ID = None MINIMUM_PREFIX_LENGTH = 8 DEFAULT_PREFIX_LENGTH = 24 MAXIMUM_PREFIX_LENGTH = 32 DEFAULT_QUOTA = 24 IS_SHARED = False IP_VERSION = 4 PREFIXES = ['10.100.0.0/24', '10.101.0.0/24'] def setUp(self): super(TestSubnetPool, self).setUp() self.SUBNET_POOL_NAME = self.getUniqueString() self.SUBNET_POOL_NAME_UPDATED = self.getUniqueString() subnet_pool = self.conn.network.create_subnet_pool( name=self.SUBNET_POOL_NAME, min_prefixlen=self.MINIMUM_PREFIX_LENGTH, default_prefixlen=self.DEFAULT_PREFIX_LENGTH, max_prefixlen=self.MAXIMUM_PREFIX_LENGTH, default_quota=self.DEFAULT_QUOTA, shared=self.IS_SHARED, prefixes=self.PREFIXES) assert isinstance(subnet_pool, _subnet_pool.SubnetPool) self.assertEqual(self.SUBNET_POOL_NAME, subnet_pool.name) self.SUBNET_POOL_ID = subnet_pool.id def tearDown(self): sot = self.conn.network.delete_subnet_pool(self.SUBNET_POOL_ID) self.assertIsNone(sot) super(TestSubnetPool, self).tearDown() def test_find(self): sot = self.conn.network.find_subnet_pool(self.SUBNET_POOL_NAME) self.assertEqual(self.SUBNET_POOL_ID, sot.id) def test_get(self): sot = self.conn.network.get_subnet_pool(self.SUBNET_POOL_ID) self.assertEqual(self.SUBNET_POOL_NAME, sot.name) self.assertEqual(self.MINIMUM_PREFIX_LENGTH, sot.minimum_prefix_length) self.assertEqual(self.DEFAULT_PREFIX_LENGTH, sot.default_prefix_length) self.assertEqual(self.MAXIMUM_PREFIX_LENGTH, 
sot.maximum_prefix_length) self.assertEqual(self.DEFAULT_QUOTA, sot.default_quota) self.assertEqual(self.IS_SHARED, sot.is_shared) self.assertEqual(self.IP_VERSION, sot.ip_version) self.assertEqual(self.PREFIXES, sot.prefixes) def test_list(self): names = [o.name for o in self.conn.network.subnet_pools()] self.assertIn(self.SUBNET_POOL_NAME, names) def test_update(self): sot = self.conn.network.update_subnet_pool( self.SUBNET_POOL_ID, name=self.SUBNET_POOL_NAME_UPDATED) self.assertEqual(self.SUBNET_POOL_NAME_UPDATED, sot.name) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_subnet.py0000666000175100017510000000603713236151340026543 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network.v2 import network from openstack.network.v2 import subnet from openstack.tests.functional import base class TestSubnet(base.BaseFunctionalTest): IPV4 = 4 CIDR = "10.100.0.0/24" DNS_SERVERS = ["8.8.4.4", "8.8.8.8"] POOL = [{"start": "10.100.0.2", "end": "10.100.0.253"}] ROUTES = [{"destination": "10.101.0.0/24", "nexthop": "10.100.0.254"}] NET_ID = None SUB_ID = None def setUp(self): super(TestSubnet, self).setUp() self.NET_NAME = self.getUniqueString() self.SUB_NAME = self.getUniqueString() self.UPDATE_NAME = self.getUniqueString() net = self.conn.network.create_network(name=self.NET_NAME) assert isinstance(net, network.Network) self.assertEqual(self.NET_NAME, net.name) self.NET_ID = net.id sub = self.conn.network.create_subnet( name=self.SUB_NAME, ip_version=self.IPV4, network_id=self.NET_ID, cidr=self.CIDR, dns_nameservers=self.DNS_SERVERS, allocation_pools=self.POOL, host_routes=self.ROUTES) assert isinstance(sub, subnet.Subnet) self.assertEqual(self.SUB_NAME, sub.name) self.SUB_ID = sub.id def tearDown(self): sot = self.conn.network.delete_subnet(self.SUB_ID) self.assertIsNone(sot) sot = self.conn.network.delete_network( self.NET_ID, ignore_missing=False) self.assertIsNone(sot) super(TestSubnet, self).tearDown() def test_find(self): sot = self.conn.network.find_subnet(self.SUB_NAME) self.assertEqual(self.SUB_ID, sot.id) def test_get(self): sot = self.conn.network.get_subnet(self.SUB_ID) self.assertEqual(self.SUB_NAME, sot.name) self.assertEqual(self.SUB_ID, sot.id) self.assertEqual(self.DNS_SERVERS, sot.dns_nameservers) self.assertEqual(self.CIDR, sot.cidr) self.assertEqual(self.POOL, sot.allocation_pools) self.assertEqual(self.IPV4, sot.ip_version) self.assertEqual(self.ROUTES, sot.host_routes) self.assertEqual("10.100.0.1", sot.gateway_ip) self.assertTrue(sot.is_dhcp_enabled) def test_list(self): names = [o.name for o in self.conn.network.subnets()] self.assertIn(self.SUB_NAME, names) def test_update(self): sot = 
self.conn.network.update_subnet(self.SUB_ID, name=self.UPDATE_NAME) self.assertEqual(self.UPDATE_NAME, sot.name) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_availability_zone.py0000666000175100017510000000201713236151340030742 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from openstack.tests.functional import base class TestAvailabilityZone(base.BaseFunctionalTest): def test_list(self): availability_zones = list(self.conn.network.availability_zones()) self.assertGreater(len(availability_zones), 0) for az in availability_zones: self.assertIsInstance(az.name, six.string_types) self.assertIsInstance(az.resource, six.string_types) self.assertIsInstance(az.state, six.string_types) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_auto_allocated_topology.py0000666000175100017510000000461113236151340032153 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
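The fixed address layout used by `TestSubnet` above (CIDR, allocation pool, host routes) can be sanity-checked offline with the Python 3 standard-library `ipaddress` module; a minimal sketch, with the constants copied from the test:

```python
import ipaddress

# Constants copied from TestSubnet.
CIDR = "10.100.0.0/24"
POOL = [{"start": "10.100.0.2", "end": "10.100.0.253"}]
ROUTES = [{"destination": "10.101.0.0/24", "nexthop": "10.100.0.254"}]

net = ipaddress.ip_network(CIDR)
for pool in POOL:
    # Every allocation pool must fall inside the subnet's CIDR.
    assert ipaddress.ip_address(pool["start"]) in net
    assert ipaddress.ip_address(pool["end"]) in net
for route in ROUTES:
    # The nexthop of a host route must be an address on the subnet itself.
    assert ipaddress.ip_address(route["nexthop"]) in net
print("address layout is consistent")
```

This also confirms why the test can assert `gateway_ip == "10.100.0.1"`: Neutron defaults the gateway to the first usable address of the CIDR, which the allocation pool above deliberately leaves free.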
from openstack.tests.functional import base class TestAutoAllocatedTopology(base.BaseFunctionalTest): NETWORK_NAME = 'auto_allocated_network' NETWORK_ID = None PROJECT_ID = None def setUp(self): super(TestAutoAllocatedTopology, self).setUp() projects = [o.project_id for o in self.conn.network.networks()] self.PROJECT_ID = projects[0] def tearDown(self): res = self.conn.network.delete_auto_allocated_topology(self.PROJECT_ID) self.assertIsNone(res) super(TestAutoAllocatedTopology, self).tearDown() def test_dry_run_option_pass(self): # Dry run will only pass if there is a public network networks = self.conn.network.networks() self._set_network_external(networks) # Dry run option will return "dry-run=pass" in the 'id' resource top = self.conn.network.validate_auto_allocated_topology( self.PROJECT_ID) self.assertEqual(self.PROJECT_ID, top.project) self.assertEqual('dry-run=pass', top.id) def test_show_no_project_option(self): top = self.conn.network.get_auto_allocated_topology() project = self.conn.session.get_project_id() network = self.conn.network.get_network(top.id) self.assertEqual(top.project_id, project) self.assertEqual(top.id, network.id) def test_show_project_option(self): top = self.conn.network.get_auto_allocated_topology(self.PROJECT_ID) network = self.conn.network.get_network(top.id) self.assertEqual(top.project_id, network.project_id) self.assertEqual(top.id, network.id) self.assertEqual(network.name, 'auto_allocated_network') def _set_network_external(self, networks): for network in networks: if network.name == 'public': self.conn.network.update_network(network, is_default=True) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_segment.py0000666000175100017510000000741313236151340026704 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import network from openstack.network.v2 import segment from openstack.tests.functional import base class TestSegment(base.BaseFunctionalTest): NETWORK_TYPE = None PHYSICAL_NETWORK = None SEGMENTATION_ID = None NETWORK_ID = None SEGMENT_ID = None SEGMENT_EXTENSION = None def setUp(self): super(TestSegment, self).setUp() self.NETWORK_NAME = self.getUniqueString() # NOTE(rtheis): The segment extension is not yet enabled by default. # Skip the tests if not enabled. if not self.conn.network.find_extension('segment'): self.skipTest('Segment extension disabled') # Create a network to hold the segment. net = self.conn.network.create_network(name=self.NETWORK_NAME) assert isinstance(net, network.Network) self.assertEqual(self.NETWORK_NAME, net.name) self.NETWORK_ID = net.id if self.SEGMENT_EXTENSION: # Get the segment for the network. 
for seg in self.conn.network.segments(): assert isinstance(seg, segment.Segment) if self.NETWORK_ID == seg.network_id: self.NETWORK_TYPE = seg.network_type self.PHYSICAL_NETWORK = seg.physical_network self.SEGMENTATION_ID = seg.segmentation_id self.SEGMENT_ID = seg.id break def tearDown(self): sot = self.conn.network.delete_network( self.NETWORK_ID, ignore_missing=False) self.assertIsNone(sot) super(TestSegment, self).tearDown() def test_create_delete(self): sot = self.conn.network.create_segment( description='test description', name='test name', network_id=self.NETWORK_ID, network_type='geneve', segmentation_id=2055, ) self.assertIsInstance(sot, segment.Segment) del_sot = self.conn.network.delete_segment(sot.id) self.assertEqual('test description', sot.description) self.assertEqual('test name', sot.name) self.assertEqual(self.NETWORK_ID, sot.network_id) self.assertEqual('geneve', sot.network_type) self.assertIsNone(sot.physical_network) self.assertEqual(2055, sot.segmentation_id) self.assertIsNone(del_sot) def test_find(self): sot = self.conn.network.find_segment(self.SEGMENT_ID) self.assertEqual(self.SEGMENT_ID, sot.id) def test_get(self): sot = self.conn.network.get_segment(self.SEGMENT_ID) self.assertEqual(self.SEGMENT_ID, sot.id) self.assertIsNone(sot.name) self.assertEqual(self.NETWORK_ID, sot.network_id) self.assertEqual(self.NETWORK_TYPE, sot.network_type) self.assertEqual(self.PHYSICAL_NETWORK, sot.physical_network) self.assertEqual(self.SEGMENTATION_ID, sot.segmentation_id) def test_list(self): ids = [o.id for o in self.conn.network.segments(name=None)] self.assertIn(self.SEGMENT_ID, ids) def test_update(self): sot = self.conn.network.update_segment(self.SEGMENT_ID, description='update') self.assertEqual('update', sot.description) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_address_scope.py0000666000175100017510000000444613236151340030063 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you 
may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import address_scope as _address_scope from openstack.tests.functional import base class TestAddressScope(base.BaseFunctionalTest): ADDRESS_SCOPE_ID = None IS_SHARED = False IP_VERSION = 4 def setUp(self): super(TestAddressScope, self).setUp() self.ADDRESS_SCOPE_NAME = self.getUniqueString() self.ADDRESS_SCOPE_NAME_UPDATED = self.getUniqueString() address_scope = self.conn.network.create_address_scope( ip_version=self.IP_VERSION, name=self.ADDRESS_SCOPE_NAME, shared=self.IS_SHARED, ) assert isinstance(address_scope, _address_scope.AddressScope) self.assertEqual(self.ADDRESS_SCOPE_NAME, address_scope.name) self.ADDRESS_SCOPE_ID = address_scope.id def tearDown(self): sot = self.conn.network.delete_address_scope(self.ADDRESS_SCOPE_ID) self.assertIsNone(sot) super(TestAddressScope, self).tearDown() def test_find(self): sot = self.conn.network.find_address_scope(self.ADDRESS_SCOPE_NAME) self.assertEqual(self.ADDRESS_SCOPE_ID, sot.id) def test_get(self): sot = self.conn.network.get_address_scope(self.ADDRESS_SCOPE_ID) self.assertEqual(self.ADDRESS_SCOPE_NAME, sot.name) self.assertEqual(self.IS_SHARED, sot.is_shared) self.assertEqual(self.IP_VERSION, sot.ip_version) def test_list(self): names = [o.name for o in self.conn.network.address_scopes()] self.assertIn(self.ADDRESS_SCOPE_NAME, names) def test_update(self): sot = self.conn.network.update_address_scope( self.ADDRESS_SCOPE_ID, name=self.ADDRESS_SCOPE_NAME_UPDATED) self.assertEqual(self.ADDRESS_SCOPE_NAME_UPDATED, sot.name) 
openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_agent_add_remove_router.py0000666000175100017510000000342713236151340032126 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import router from openstack.tests.functional import base class TestAgentRouters(base.BaseFunctionalTest): ROUTER = None AGENT = None def setUp(self): super(TestAgentRouters, self).setUp() self.ROUTER_NAME = 'router-name-' + self.getUniqueString('router-name') self.ROUTER = self.conn.network.create_router(name=self.ROUTER_NAME) self.addCleanup(self.conn.network.delete_router, self.ROUTER) assert isinstance(self.ROUTER, router.Router) agent_list = list(self.conn.network.agents()) agents = [agent for agent in agent_list if agent.agent_type == 'L3 agent'] self.AGENT = agents[0] def test_add_router_to_agent(self): self.conn.network.add_router_to_agent(self.AGENT, self.ROUTER) rots = self.conn.network.agent_hosted_routers(self.AGENT) routers = [router.id for router in rots] self.assertIn(self.ROUTER.id, routers) def test_remove_router_from_agent(self): self.conn.network.remove_router_from_agent(self.AGENT, self.ROUTER) rots = self.conn.network.agent_hosted_routers(self.AGENT) routers = [router.id for router in rots] self.assertNotIn(self.ROUTER.id, routers) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_network.py0000666000175100017510000000531613236151340026733 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 
2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.network.v2 import network from openstack.tests.functional import base def create_network(conn, name, cidr): try: network = conn.network.create_network(name=name) subnet = conn.network.create_subnet( name=name, ip_version=4, network_id=network.id, cidr=cidr) return (network, subnet) except Exception as e: print(str(e)) pass return (None, None) def delete_network(conn, network, subnet): if subnet: conn.network.delete_subnet(subnet) if network: conn.network.delete_network(network) class TestNetwork(base.BaseFunctionalTest): ID = None def setUp(self): super(TestNetwork, self).setUp() self.NAME = self.getUniqueString() sot = self.conn.network.create_network(name=self.NAME) assert isinstance(sot, network.Network) self.assertEqual(self.NAME, sot.name) self.ID = sot.id def tearDown(self): sot = self.conn.network.delete_network(self.ID, ignore_missing=False) self.assertIsNone(sot) super(TestNetwork, self).tearDown() def test_find(self): sot = self.conn.network.find_network(self.NAME) self.assertEqual(self.ID, sot.id) def test_find_with_filter(self): project_id_1 = "1" project_id_2 = "2" sot1 = self.conn.network.create_network(name=self.NAME, project_id=project_id_1) sot2 = self.conn.network.create_network(name=self.NAME, project_id=project_id_2) sot = self.conn.network.find_network(self.NAME, project_id=project_id_1) self.assertEqual(project_id_1, sot.project_id) self.conn.network.delete_network(sot1.id) self.conn.network.delete_network(sot2.id) def test_get(self): sot = 
self.conn.network.get_network(self.ID) self.assertEqual(self.NAME, sot.name) self.assertEqual(self.ID, sot.id) def test_list(self): names = [o.name for o in self.conn.network.networks()] self.assertIn(self.NAME, names) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_service_provider.py0000666000175100017510000000163613236151340030615 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.tests.functional import base class TestServiceProvider(base.BaseFunctionalTest): def test_list(self): providers = list(self.conn.network.service_providers()) names = [o.name for o in providers] service_types = [o.service_type for o in providers] self.assertIn('ha', names) self.assertIn('L3_ROUTER_NAT', service_types) openstacksdk-0.11.3/openstack/tests/functional/network/v2/test_port.py0000666000175100017510000000562713236151340026233 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.network.v2 import network from openstack.network.v2 import port from openstack.network.v2 import subnet from openstack.tests.functional import base class TestPort(base.BaseFunctionalTest): IPV4 = 4 CIDR = "10.100.0.0/24" NET_ID = None SUB_ID = None PORT_ID = None def setUp(self): super(TestPort, self).setUp() self.NET_NAME = self.getUniqueString() self.SUB_NAME = self.getUniqueString() self.PORT_NAME = self.getUniqueString() self.UPDATE_NAME = self.getUniqueString() net = self.conn.network.create_network(name=self.NET_NAME) assert isinstance(net, network.Network) self.assertEqual(self.NET_NAME, net.name) self.NET_ID = net.id sub = self.conn.network.create_subnet( name=self.SUB_NAME, ip_version=self.IPV4, network_id=self.NET_ID, cidr=self.CIDR) assert isinstance(sub, subnet.Subnet) self.assertEqual(self.SUB_NAME, sub.name) self.SUB_ID = sub.id prt = self.conn.network.create_port( name=self.PORT_NAME, network_id=self.NET_ID) assert isinstance(prt, port.Port) self.assertEqual(self.PORT_NAME, prt.name) self.PORT_ID = prt.id def tearDown(self): sot = self.conn.network.delete_port( self.PORT_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_subnet( self.SUB_ID, ignore_missing=False) self.assertIsNone(sot) sot = self.conn.network.delete_network( self.NET_ID, ignore_missing=False) self.assertIsNone(sot) super(TestPort, self).tearDown() def test_find(self): sot = self.conn.network.find_port(self.PORT_NAME) self.assertEqual(self.PORT_ID, sot.id) def test_get(self): sot = self.conn.network.get_port(self.PORT_ID) self.assertEqual(self.PORT_ID, sot.id) self.assertEqual(self.PORT_NAME, sot.name) self.assertEqual(self.NET_ID, sot.network_id) def test_list(self): ids = [o.id for o in self.conn.network.ports()] self.assertIn(self.PORT_ID, ids) def test_update(self): sot = self.conn.network.update_port(self.PORT_ID, name=self.UPDATE_NAME) self.assertEqual(self.UPDATE_NAME, sot.name) 
openstacksdk-0.11.3/openstack/tests/functional/network/__init__.py0000666000175100017510000000000013236151340025374 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_storage/0000775000175100017510000000000013236151501024417 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_storage/v2/0000775000175100017510000000000013236151501024746 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_storage/v2/test_snapshot.py0000666000175100017510000000475213236151340030231 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.block_storage.v2 import snapshot as _snapshot from openstack.block_storage.v2 import volume as _volume from openstack.tests.functional import base class TestSnapshot(base.BaseFunctionalTest): def setUp(self): super(TestSnapshot, self).setUp() self.SNAPSHOT_NAME = self.getUniqueString() self.SNAPSHOT_ID = None self.VOLUME_NAME = self.getUniqueString() self.VOLUME_ID = None volume = self.conn.block_storage.create_volume( name=self.VOLUME_NAME, size=1) self.conn.block_storage.wait_for_status( volume, status='available', failures=['error'], interval=2, wait=120) assert isinstance(volume, _volume.Volume) self.assertEqual(self.VOLUME_NAME, volume.name) self.VOLUME_ID = volume.id snapshot = self.conn.block_storage.create_snapshot( name=self.SNAPSHOT_NAME, volume_id=self.VOLUME_ID) self.conn.block_storage.wait_for_status( snapshot, status='available', failures=['error'], interval=2, wait=120) assert isinstance(snapshot, _snapshot.Snapshot) self.assertEqual(self.SNAPSHOT_NAME, snapshot.name) self.SNAPSHOT_ID = snapshot.id def tearDown(self): snapshot = self.conn.block_storage.get_snapshot(self.SNAPSHOT_ID) sot = self.conn.block_storage.delete_snapshot( snapshot, ignore_missing=False) self.conn.block_storage.wait_for_delete( snapshot, interval=2, wait=120) self.assertIsNone(sot) sot = self.conn.block_storage.delete_volume( self.VOLUME_ID, ignore_missing=False) self.assertIsNone(sot) super(TestSnapshot, self).tearDown() def test_get(self): sot = self.conn.block_storage.get_snapshot(self.SNAPSHOT_ID) self.assertEqual(self.SNAPSHOT_NAME, sot.name) openstacksdk-0.11.3/openstack/tests/functional/block_storage/v2/__init__.py0000666000175100017510000000000013236151340027050 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/block_storage/v2/test_volume.py0000666000175100017510000000313113236151340027667 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in 
# compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.block_storage.v2 import volume as _volume
from openstack.tests.functional import base


class TestVolume(base.BaseFunctionalTest):

    def setUp(self):
        super(TestVolume, self).setUp()
        self.VOLUME_NAME = self.getUniqueString()
        self.VOLUME_ID = None
        volume = self.conn.block_storage.create_volume(
            name=self.VOLUME_NAME,
            size=1)
        self.conn.block_storage.wait_for_status(
            volume,
            status='available',
            failures=['error'],
            interval=2,
            wait=120)
        assert isinstance(volume, _volume.Volume)
        self.assertEqual(self.VOLUME_NAME, volume.name)
        self.VOLUME_ID = volume.id

    def tearDown(self):
        sot = self.conn.block_storage.delete_volume(
            self.VOLUME_ID, ignore_missing=False)
        self.assertIsNone(sot)
        super(TestVolume, self).tearDown()

    def test_get(self):
        sot = self.conn.block_storage.get_volume(self.VOLUME_ID)
        self.assertEqual(self.VOLUME_NAME, sot.name)

openstacksdk-0.11.3/openstack/tests/functional/block_storage/v2/test_type.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.block_storage.v2 import type as _type
from openstack.tests.functional import base


class TestType(base.BaseFunctionalTest):

    def setUp(self):
        super(TestType, self).setUp()
        self.TYPE_NAME = self.getUniqueString()
        self.TYPE_ID = None
        sot = self.conn.block_storage.create_type(name=self.TYPE_NAME)
        assert isinstance(sot, _type.Type)
        self.assertEqual(self.TYPE_NAME, sot.name)
        self.TYPE_ID = sot.id

    def tearDown(self):
        sot = self.conn.block_storage.delete_type(
            self.TYPE_ID, ignore_missing=False)
        self.assertIsNone(sot)
        super(TestType, self).tearDown()

    def test_get(self):
        sot = self.conn.block_storage.get_type(self.TYPE_ID)
        self.assertEqual(self.TYPE_NAME, sot.name)

openstacksdk-0.11.3/openstack/tests/functional/block_storage/__init__.py
openstacksdk-0.11.3/openstack/tests/functional/cloud/
openstacksdk-0.11.3/openstack/tests/functional/cloud/test_devstack.py
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_devstack
-------------

Throw errors if we do not actually detect the services we're supposed to.
""" import os from testscenarios import load_tests_apply_scenarios as load_tests # noqa from openstack.tests.functional.cloud import base class TestDevstack(base.BaseFunctionalTestCase): scenarios = [ ('designate', dict(env='DESIGNATE', service='dns')), ('heat', dict(env='HEAT', service='orchestration')), ('magnum', dict(env='MAGNUM', service='container-infra')), ('neutron', dict(env='NEUTRON', service='network')), ('octavia', dict(env='OCTAVIA', service='load-balancer')), ('swift', dict(env='SWIFT', service='object-store')), ] def test_has_service(self): if os.environ.get( 'OPENSTACKSDK_HAS_{env}'.format(env=self.env), '0') == '1': self.assertTrue(self.user_cloud.has_service(self.service)) class TestKeystoneVersion(base.BaseFunctionalTestCase): def test_keystone_version(self): use_keystone_v2 = os.environ.get('OPENSTACKSDK_USE_KEYSTONE_V2', False) if use_keystone_v2 and use_keystone_v2 != '0': self.assertEqual('2.0', self.identity_version) else: self.assertEqual('3', self.identity_version) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_floating_ip.py0000666000175100017510000002610213236151340026617 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip ---------------------------------- Functional tests for floating IP resource. 
""" import pprint from testtools import content from openstack import _adapter from openstack.cloud import meta from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base from openstack.tests.functional.cloud.util import pick_flavor from openstack import utils class TestFloatingIP(base.BaseFunctionalTestCase): timeout = 60 def setUp(self): super(TestFloatingIP, self).setUp() self.flavor = pick_flavor( self.user_cloud.list_flavors(get_extra=False)) if self.flavor is None: self.assertFalse('no sensible flavor available') self.image = self.pick_image() # Generate a random name for these tests self.new_item_name = self.getUniqueString() self.addCleanup(self._cleanup_network) self.addCleanup(self._cleanup_servers) def _cleanup_network(self): exception_list = list() # Delete stale networks as well as networks created for this test if self.user_cloud.has_service('network'): # Delete routers for r in self.user_cloud.list_routers(): try: if r['name'].startswith(self.new_item_name): self.user_cloud.update_router( r['id'], ext_gateway_net_id=None) for s in self.user_cloud.list_subnets(): if s['name'].startswith(self.new_item_name): try: self.user_cloud.remove_router_interface( r, subnet_id=s['id']) except Exception: pass self.user_cloud.delete_router(name_or_id=r['id']) except Exception as e: exception_list.append(str(e)) continue # Delete subnets for s in self.user_cloud.list_subnets(): if s['name'].startswith(self.new_item_name): try: self.user_cloud.delete_subnet(name_or_id=s['id']) except Exception as e: exception_list.append(str(e)) continue # Delete networks for n in self.user_cloud.list_networks(): if n['name'].startswith(self.new_item_name): try: self.user_cloud.delete_network(name_or_id=n['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def 
_cleanup_servers(self): exception_list = list() # Delete stale servers as well as server created for this test for i in self.user_cloud.list_servers(bare=True): if i.name.startswith(self.new_item_name): try: self.user_cloud.delete_server(i, wait=True) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_ips(self, server): exception_list = list() fixed_ip = meta.get_server_private_ip(server) for ip in self.user_cloud.list_floating_ips(): if (ip.get('fixed_ip', None) == fixed_ip or ip.get('fixed_ip_address', None) == fixed_ip): try: self.user_cloud.delete_floating_ip(ip['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _setup_networks(self): if self.user_cloud.has_service('network'): # Create a network self.test_net = self.user_cloud.create_network( name=self.new_item_name + '_net') # Create a subnet on it self.test_subnet = self.user_cloud.create_subnet( subnet_name=self.new_item_name + '_subnet', network_name_or_id=self.test_net['id'], cidr='10.24.4.0/24', enable_dhcp=True ) # Create a router self.test_router = self.user_cloud.create_router( name=self.new_item_name + '_router') # Attach the router to an external network ext_nets = self.user_cloud.search_networks( filters={'router:external': True}) self.user_cloud.update_router( name_or_id=self.test_router['id'], ext_gateway_net_id=ext_nets[0]['id']) # Attach the router to the internal subnet self.user_cloud.add_router_interface( self.test_router, subnet_id=self.test_subnet['id']) # Select the network for creating new servers self.nic = {'net-id': self.test_net['id']} self.addDetail( 'networks-neutron', content.text_content(pprint.pformat( self.user_cloud.list_networks()))) else: # 
Find network names for nova-net data = _adapter._json_response( self.user_cloud._conn.compute.get('/os-tenant-networks')) nets = meta.get_and_munchify('networks', data) self.addDetail( 'networks-nova', content.text_content(pprint.pformat( nets))) self.nic = {'net-id': nets[0].id} def test_private_ip(self): self._setup_networks() new_server = self.user_cloud.get_openstack_vars( self.user_cloud.create_server( wait=True, name=self.new_item_name + '_server', image=self.image, flavor=self.flavor, nics=[self.nic])) self.addDetail( 'server', content.text_content(pprint.pformat(new_server))) self.assertNotEqual(new_server['private_v4'], '') def test_add_auto_ip(self): self._setup_networks() new_server = self.user_cloud.create_server( wait=True, name=self.new_item_name + '_server', image=self.image, flavor=self.flavor, nics=[self.nic]) # ToDo: remove the following iteration when create_server waits for # the IP to be attached ip = None for _ in utils.iterate_timeout( self.timeout, "Timeout waiting for IP address to be attached"): ip = meta.get_server_external_ipv4(self.user_cloud, new_server) if ip is not None: break new_server = self.user_cloud.get_server(new_server.id) self.addCleanup(self._cleanup_ips, new_server) def test_detach_ip_from_server(self): self._setup_networks() new_server = self.user_cloud.create_server( wait=True, name=self.new_item_name + '_server', image=self.image, flavor=self.flavor, nics=[self.nic]) # ToDo: remove the following iteration when create_server waits for # the IP to be attached ip = None for _ in utils.iterate_timeout( self.timeout, "Timeout waiting for IP address to be attached"): ip = meta.get_server_external_ipv4(self.user_cloud, new_server) if ip is not None: break new_server = self.user_cloud.get_server(new_server.id) self.addCleanup(self._cleanup_ips, new_server) f_ip = self.user_cloud.get_floating_ip( id=None, filters={'floating_ip_address': ip}) self.user_cloud.detach_ip_from_server( server_id=new_server.id, 
floating_ip_id=f_ip['id']) def test_list_floating_ips(self): fip_admin = self.operator_cloud.create_floating_ip() self.addCleanup(self.operator_cloud.delete_floating_ip, fip_admin.id) fip_user = self.user_cloud.create_floating_ip() self.addCleanup(self.user_cloud.delete_floating_ip, fip_user.id) # Get all the floating ips. fip_id_list = [ fip.id for fip in self.operator_cloud.list_floating_ips() ] if self.user_cloud.has_service('network'): # Neutron returns all FIP for all projects by default self.assertIn(fip_admin.id, fip_id_list) self.assertIn(fip_user.id, fip_id_list) # Ask Neutron for only a subset of all the FIPs. filtered_fip_id_list = [ fip.id for fip in self.operator_cloud.list_floating_ips( {'tenant_id': self.user_cloud.current_project_id} ) ] self.assertNotIn(fip_admin.id, filtered_fip_id_list) self.assertIn(fip_user.id, filtered_fip_id_list) else: self.assertIn(fip_admin.id, fip_id_list) # By default, Nova returns only the FIPs that belong to the # project which made the listing request. 
self.assertNotIn(fip_user.id, fip_id_list) self.assertRaisesRegex( ValueError, "Nova-network don't support server-side.*", self.operator_cloud.list_floating_ips, filters={'foo': 'bar'} ) def test_search_floating_ips(self): fip_user = self.user_cloud.create_floating_ip() self.addCleanup(self.user_cloud.delete_floating_ip, fip_user.id) self.assertIn( fip_user['id'], [fip.id for fip in self.user_cloud.search_floating_ips( filters={"attached": False})] ) self.assertNotIn( fip_user['id'], [fip.id for fip in self.user_cloud.search_floating_ips( filters={"attached": True})] ) def test_get_floating_ip_by_id(self): fip_user = self.user_cloud.create_floating_ip() self.addCleanup(self.user_cloud.delete_floating_ip, fip_user.id) ret_fip = self.user_cloud.get_floating_ip_by_id(fip_user.id) self.assertEqual(fip_user, ret_fip) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_cluster_templates.py0000666000175100017510000001007213236151340030062 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_cluster_templates ---------------------------------- Functional tests for `openstack.cloud` cluster_template methods. 
""" import fixtures from testtools import content from openstack.tests.functional.cloud import base import subprocess class TestClusterTemplate(base.BaseFunctionalTestCase): def setUp(self): super(TestClusterTemplate, self).setUp() if not self.user_cloud.has_service('container-infra'): self.skipTest('Container service not supported by cloud') self.ct = None self.ssh_directory = self.useFixture(fixtures.TempDir()).path def test_cluster_templates(self): '''Test cluster_templates functionality''' name = 'fake-cluster_template' server_type = 'vm' public = False image_id = 'fedora-atomic-f23-dib' tls_disabled = False registry_enabled = False coe = 'kubernetes' keypair_id = 'testkey' self.addDetail('cluster_template', content.text_content(name)) self.addCleanup(self.cleanup, name) # generate a keypair to add to nova subprocess.call( ['ssh-keygen', '-t', 'rsa', '-N', '', '-f', '%s/id_rsa_sdk' % self.ssh_directory]) # add keypair to nova with open('%s/id_rsa_sdk.pub' % self.ssh_directory) as f: key_content = f.read() self.user_cloud.create_keypair('testkey', key_content) # Test we can create a cluster_template and we get it returned self.ct = self.user_cloud.create_cluster_template( name=name, image_id=image_id, keypair_id=keypair_id, coe=coe) self.assertEqual(self.ct['name'], name) self.assertEqual(self.ct['image_id'], image_id) self.assertEqual(self.ct['keypair_id'], keypair_id) self.assertEqual(self.ct['coe'], coe) self.assertEqual(self.ct['registry_enabled'], registry_enabled) self.assertEqual(self.ct['tls_disabled'], tls_disabled) self.assertEqual(self.ct['public'], public) self.assertEqual(self.ct['server_type'], server_type) # Test that we can list cluster_templates cluster_templates = self.user_cloud.list_cluster_templates() self.assertIsNotNone(cluster_templates) # Test we get the same cluster_template with the # get_cluster_template method cluster_template_get = self.user_cloud.get_cluster_template( self.ct['uuid']) self.assertEqual(cluster_template_get['uuid'], 
self.ct['uuid']) # Test the get method also works by name cluster_template_get = self.user_cloud.get_cluster_template(name) self.assertEqual(cluster_template_get['name'], self.ct['name']) # Test we can update a field on the cluster_template and only that # field is updated cluster_template_update = self.user_cloud.update_cluster_template( self.ct['uuid'], 'replace', tls_disabled=True) self.assertEqual( cluster_template_update['uuid'], self.ct['uuid']) self.assertTrue(cluster_template_update['tls_disabled']) # Test we can delete and get True returned cluster_template_delete = self.user_cloud.delete_cluster_template( self.ct['uuid']) self.assertTrue(cluster_template_delete) def cleanup(self, name): if self.ct: try: self.user_cloud.delete_cluster_template(self.ct['name']) except Exception: pass # delete keypair self.user_cloud.delete_keypair('testkey') openstacksdk-0.11.3/openstack/tests/functional/cloud/test_identity.py0000666000175100017510000002456213236151340026165 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_identity ---------------------------------- Functional tests for `shade` identity methods. 
""" import random import string from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestIdentity(base.KeystoneBaseFunctionalTestCase): def setUp(self): super(TestIdentity, self).setUp() self.role_prefix = 'test_role' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.user_prefix = self.getUniqueString('user') self.group_prefix = self.getUniqueString('group') self.addCleanup(self._cleanup_users) if self.identity_version not in ('2', '2.0'): self.addCleanup(self._cleanup_groups) self.addCleanup(self._cleanup_roles) def _cleanup_groups(self): exception_list = list() for group in self.operator_cloud.list_groups(): if group['name'].startswith(self.group_prefix): try: self.operator_cloud.delete_group(group['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_users(self): exception_list = list() for user in self.operator_cloud.list_users(): if user['name'].startswith(self.user_prefix): try: self.operator_cloud.delete_user(user['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_roles(self): exception_list = list() for role in self.operator_cloud.list_roles(): if role['name'].startswith(self.role_prefix): try: self.operator_cloud.delete_role(role['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _create_user(self, **kwargs): domain_id = None if self.identity_version not in ('2', '2.0'): domain = self.operator_cloud.get_domain('default') domain_id = domain['id'] return self.operator_cloud.create_user(domain_id=domain_id, **kwargs) def test_list_roles(self): roles = self.operator_cloud.list_roles() 
self.assertIsNotNone(roles) self.assertNotEqual([], roles) def test_get_role(self): role = self.operator_cloud.get_role('admin') self.assertIsNotNone(role) self.assertIn('id', role) self.assertIn('name', role) self.assertEqual('admin', role['name']) def test_search_roles(self): roles = self.operator_cloud.search_roles(filters={'name': 'admin'}) self.assertIsNotNone(roles) self.assertEqual(1, len(roles)) self.assertEqual('admin', roles[0]['name']) def test_create_role(self): role_name = self.role_prefix + '_create_role' role = self.operator_cloud.create_role(role_name) self.assertIsNotNone(role) self.assertIn('id', role) self.assertIn('name', role) self.assertEqual(role_name, role['name']) def test_delete_role(self): role_name = self.role_prefix + '_delete_role' role = self.operator_cloud.create_role(role_name) self.assertIsNotNone(role) self.assertTrue(self.operator_cloud.delete_role(role_name)) # TODO(Shrews): Once we can support assigning roles within shade, we # need to make this test a little more specific, and add more for testing # filtering functionality. 
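The `_cleanup_*` helpers in these cloud tests all share one shape: attempt every deletion, collect the stringified exceptions, and raise a single combined error at the end so that one failed delete cannot mask the others. A generic sketch of that pattern (the `cleanup_all` name is illustrative, not an SDK API):

```python
def cleanup_all(items, delete):
    """Call ``delete(item)`` for every item, deferring failures.

    Every deletion is attempted even if earlier ones fail; a single
    RuntimeError carrying all failure messages is raised at the end.
    """
    errors = []
    for item in items:
        try:
            delete(item)
        except Exception as exc:
            errors.append(str(exc))
            continue
    if errors:
        # Raise one combined error so the caller sees every failure,
        # not just the first one.
        raise RuntimeError('\n'.join(errors))
```

This is why the test helpers use `continue` inside the `except` block rather than re-raising immediately: stale resources left by a previous failed run still get swept up before the aggregated error is reported.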
def test_list_role_assignments(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support role assignments") assignments = self.operator_cloud.list_role_assignments() self.assertIsInstance(assignments, list) self.assertGreater(len(assignments), 0) def test_list_role_assignments_v2(self): user = self.operator_cloud.get_user('demo') project = self.operator_cloud.get_project('demo') assignments = self.operator_cloud.list_role_assignments( filters={'user': user['id'], 'project': project['id']}) self.assertIsInstance(assignments, list) self.assertGreater(len(assignments), 0) def test_grant_revoke_role_user_project(self): user_name = self.user_prefix + '_user_project' user_email = 'nobody@nowhere.com' role_name = self.role_prefix + '_grant_user_project' role = self.operator_cloud.create_role(role_name) user = self._create_user(name=user_name, email=user_email, default_project='demo') self.assertTrue(self.operator_cloud.grant_role( role_name, user=user['id'], project='demo', wait=True)) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, user=user['id'], project='demo', wait=True)) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) def test_grant_revoke_role_group_project(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support group") role_name = self.role_prefix + '_grant_group_project' role = self.operator_cloud.create_role(role_name) group_name = self.group_prefix + '_group_project' group = self.operator_cloud.create_group( name=group_name, description='test group', 
domain='default') self.assertTrue(self.operator_cloud.grant_role( role_name, group=group['id'], project='demo')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, group=group['id'], project='demo')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) def test_grant_revoke_role_user_domain(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support domain") role_name = self.role_prefix + '_grant_user_domain' role = self.operator_cloud.create_role(role_name) user_name = self.user_prefix + '_user_domain' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email, default_project='demo') self.assertTrue(self.operator_cloud.grant_role( role_name, user=user['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, user=user['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) def test_grant_revoke_role_group_domain(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support domain or group") role_name = self.role_prefix + '_grant_group_domain' role = 
self.operator_cloud.create_role(role_name) group_name = self.group_prefix + '_group_domain' group = self.operator_cloud.create_group( name=group_name, description='test group', domain='default') self.assertTrue(self.operator_cloud.grant_role( role_name, group=group['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, group=group['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_inventory.py0000666000175100017510000000720113236151340026360 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_inventory ---------------------------------- Functional tests for `shade` inventory methods. 
""" from openstack.cloud import inventory from openstack.tests.functional.cloud import base from openstack.tests.functional.cloud.util import pick_flavor class TestInventory(base.BaseFunctionalTestCase): def setUp(self): super(TestInventory, self).setUp() # This needs to use an admin account, otherwise a public IP # is not allocated from devstack. self.inventory = inventory.OpenStackInventory() self.server_name = self.getUniqueString('inventory') self.flavor = pick_flavor( self.user_cloud.list_flavors(get_extra=False)) if self.flavor is None: self.assertTrue(False, 'no sensible flavor available') self.image = self.pick_image() self.addCleanup(self._cleanup_server) server = self.operator_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True, auto_ip=True, network='public') self.server_id = server['id'] def _cleanup_server(self): self.user_cloud.delete_server(self.server_id, wait=True) def _test_host_content(self, host): self.assertEqual(host['image']['id'], self.image.id) self.assertNotIn('links', host['image']) self.assertEqual(host['flavor']['id'], self.flavor.id) self.assertNotIn('links', host['flavor']) self.assertNotIn('links', host) self.assertIsInstance(host['volumes'], list) self.assertIsInstance(host['metadata'], dict) self.assertIn('interface_ip', host) def _test_expanded_host_content(self, host): self.assertEqual(host['image']['name'], self.image.name) self.assertEqual(host['flavor']['name'], self.flavor.name) def test_get_host(self): host = self.inventory.get_host(self.server_id) self.assertIsNotNone(host) self.assertEqual(host['name'], self.server_name) self._test_host_content(host) self._test_expanded_host_content(host) host_found = False for host in self.inventory.list_hosts(): if host['id'] == self.server_id: host_found = True self._test_host_content(host) self.assertTrue(host_found) def test_get_host_no_detail(self): host = self.inventory.get_host(self.server_id, expand=False) self.assertIsNotNone(host) 
self.assertEqual(host['name'], self.server_name) self.assertEqual(host['image']['id'], self.image.id) self.assertNotIn('links', host['image']) self.assertNotIn('name', host['name']) self.assertEqual(host['flavor']['id'], self.flavor.id) self.assertNotIn('links', host['flavor']) self.assertNotIn('name', host['flavor']) host_found = False for host in self.inventory.list_hosts(expand=False): if host['id'] == self.server_id: host_found = True self._test_host_content(host) self.assertTrue(host_found) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_recordset.py0000666000175100017510000000762413236151340026326 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_recordset ---------------------------------- Functional tests for `shade` recordset methods. """ from testtools import content from openstack.tests.functional.cloud import base class TestRecordset(base.BaseFunctionalTestCase): def setUp(self): super(TestRecordset, self).setUp() if not self.user_cloud.has_service('dns'): self.skipTest('dns service not supported by cloud') def test_recordsets(self): '''Test DNS recordsets functionality''' zone = 'example2.net.' 
email = 'test@example2.net' name = 'www' type_ = 'a' description = 'Test recordset' ttl = 3600 records = ['192.168.1.1'] self.addDetail('zone', content.text_content(zone)) self.addDetail('recordset', content.text_content(name)) self.addCleanup(self.cleanup, zone, name) # Create a zone to hold the tested recordset zone_obj = self.user_cloud.create_zone(name=zone, email=email) # Test we can create a recordset and we get it returned created_recordset = self.user_cloud.create_recordset(zone, name, type_, records, description, ttl) self.assertEqual(created_recordset['zone_id'], zone_obj['id']) self.assertEqual(created_recordset['name'], name + '.' + zone) self.assertEqual(created_recordset['type'], type_.upper()) self.assertEqual(created_recordset['records'], records) self.assertEqual(created_recordset['description'], description) self.assertEqual(created_recordset['ttl'], ttl) # Test that we can list recordsets recordsets = self.user_cloud.list_recordsets(zone) self.assertIsNotNone(recordsets) # Test we get the same recordset with the get_recordset method get_recordset = self.user_cloud.get_recordset(zone, created_recordset['id']) self.assertEqual(get_recordset['id'], created_recordset['id']) # Test the get method also works by name get_recordset = self.user_cloud.get_recordset(zone, name + '.' + zone) self.assertEqual(get_recordset['id'], created_recordset['id']) # Test we can update a field on the recordset and only that field # is updated updated_recordset = self.user_cloud.update_recordset(zone_obj['id'], name + '.' + zone, ttl=7200) self.assertEqual(updated_recordset['id'], created_recordset['id']) self.assertEqual(updated_recordset['name'], name + '.' 
+ zone) self.assertEqual(updated_recordset['type'], type_.upper()) self.assertEqual(updated_recordset['records'], records) self.assertEqual(updated_recordset['description'], description) self.assertEqual(updated_recordset['ttl'], 7200) # Test we can delete and get True returned deleted_recordset = self.user_cloud.delete_recordset( zone, name + '.' + zone) self.assertTrue(deleted_recordset) def cleanup(self, zone_name, recordset_name): self.user_cloud.delete_recordset( zone_name, recordset_name + '.' + zone_name) self.user_cloud.delete_zone(zone_name) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_project.py0000666000175100017510000001132013236151364025774 0ustar zuulzuul00000000000000# Copyright (c) 2016 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_project ---------------------------------- Functional tests for `shade` project resource. 
""" import pprint from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestProject(base.KeystoneBaseFunctionalTestCase): def setUp(self): super(TestProject, self).setUp() self.new_project_name = self.getUniqueString('project') self.addCleanup(self._cleanup_projects) def _cleanup_projects(self): exception_list = list() for p in self.operator_cloud.list_projects(): if p['name'].startswith(self.new_project_name): try: self.operator_cloud.delete_project(p['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_project(self): project_name = self.new_project_name + '_create' params = { 'name': project_name, 'description': 'test_create_project', } if self.identity_version == '3': params['domain_id'] = \ self.operator_cloud.get_domain('default')['id'] project = self.operator_cloud.create_project(**params) self.assertIsNotNone(project) self.assertEqual(project_name, project['name']) self.assertEqual('test_create_project', project['description']) user_id = self.operator_cloud.current_user_id # Grant the current user access to the project self.assertTrue(self.operator_cloud.grant_role( 'Member', user=user_id, project=project['id'], wait=True)) self.addCleanup( self.operator_cloud.revoke_role, 'Member', user=user_id, project=project['id'], wait=True) new_cloud = self.operator_cloud.connect_as_project(project) self.add_info_on_exception( 'new_cloud_config', pprint.pformat(new_cloud.cloud_config.config)) location = new_cloud.current_location self.assertEqual(project_name, location['project']['name']) def test_update_project(self): project_name = self.new_project_name + '_update' params = { 'name': project_name, 'description': 'test_update_project', 'enabled': True } if self.identity_version == '3': params['domain_id'] = \ self.operator_cloud.get_domain('default')['id'] project = 
self.operator_cloud.create_project(**params) updated_project = self.operator_cloud.update_project( project_name, enabled=False, description='new') self.assertIsNotNone(updated_project) self.assertEqual(project['id'], updated_project['id']) self.assertEqual(project['name'], updated_project['name']) self.assertEqual(updated_project['description'], 'new') self.assertTrue(project['enabled']) self.assertFalse(updated_project['enabled']) # Revert the description and verify the project is still disabled updated_project = self.operator_cloud.update_project( project_name, description=params['description']) self.assertIsNotNone(updated_project) self.assertEqual(project['id'], updated_project['id']) self.assertEqual(project['name'], updated_project['name']) self.assertEqual(project['description'], updated_project['description']) self.assertTrue(project['enabled']) self.assertFalse(updated_project['enabled']) def test_delete_project(self): project_name = self.new_project_name + '_delete' params = {'name': project_name} if self.identity_version == '3': params['domain_id'] = \ self.operator_cloud.get_domain('default')['id'] project = self.operator_cloud.create_project(**params) self.assertIsNotNone(project) self.assertTrue(self.operator_cloud.delete_project(project['id'])) def test_delete_project_not_found(self): self.assertFalse(self.operator_cloud.delete_project('doesNotExist')) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_flavor.py0000666000175100017510000001475713236151364025640 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_flavor ---------------------------------- Functional tests for `shade` flavor resource. """ from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestFlavor(base.BaseFunctionalTestCase): def setUp(self): super(TestFlavor, self).setUp() # Generate a random name for flavors in this test self.new_item_name = self.getUniqueString('flavor') self.addCleanup(self._cleanup_flavors) def _cleanup_flavors(self): exception_list = list() for f in self.operator_cloud.list_flavors(get_extra=False): if f['name'].startswith(self.new_item_name): try: self.operator_cloud.delete_flavor(f['id']) except Exception as e: # We were unable to delete a flavor, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_flavor(self): flavor_name = self.new_item_name + '_create' flavor_kwargs = dict( name=flavor_name, ram=1024, vcpus=2, disk=10, ephemeral=5, swap=100, rxtx_factor=1.5, is_public=True ) flavor = self.operator_cloud.create_flavor(**flavor_kwargs) self.assertIsNotNone(flavor['id']) # When properly normalized, we should always get an extra_specs # and expect empty dict on create. 
self.assertIn('extra_specs', flavor) self.assertEqual({}, flavor['extra_specs']) # We should also always have ephemeral and public attributes self.assertIn('ephemeral', flavor) self.assertIn('OS-FLV-EXT-DATA:ephemeral', flavor) self.assertEqual(5, flavor['ephemeral']) self.assertIn('is_public', flavor) self.assertIn('os-flavor-access:is_public', flavor) self.assertTrue(flavor['is_public']) for key in flavor_kwargs.keys(): self.assertIn(key, flavor) for key, value in flavor_kwargs.items(): self.assertEqual(value, flavor[key]) def test_list_flavors(self): pub_flavor_name = self.new_item_name + '_public' priv_flavor_name = self.new_item_name + '_private' public_kwargs = dict( name=pub_flavor_name, ram=1024, vcpus=2, disk=10, is_public=True ) private_kwargs = dict( name=priv_flavor_name, ram=1024, vcpus=2, disk=10, is_public=False ) # Create a public and private flavor. We expect both to be listed # for an operator. self.operator_cloud.create_flavor(**public_kwargs) self.operator_cloud.create_flavor(**private_kwargs) flavors = self.operator_cloud.list_flavors(get_extra=False) # Flavor list will include the standard devstack flavors. We just want # to make sure both of the flavors we just created are present. 
found = [] for f in flavors: # extra_specs should be added within list_flavors() self.assertIn('extra_specs', f) if f['name'] in (pub_flavor_name, priv_flavor_name): found.append(f) self.assertEqual(2, len(found)) def test_flavor_access(self): priv_flavor_name = self.new_item_name + '_private' private_kwargs = dict( name=priv_flavor_name, ram=1024, vcpus=2, disk=10, is_public=False ) new_flavor = self.operator_cloud.create_flavor(**private_kwargs) # Validate the 'demo' user cannot see the new flavor flavors = self.user_cloud.search_flavors(priv_flavor_name) self.assertEqual(0, len(flavors)) # We need the tenant ID for the 'demo' user project = self.operator_cloud.get_project('demo') self.assertIsNotNone(project) # Now give 'demo' access self.operator_cloud.add_flavor_access(new_flavor['id'], project['id']) # Now see if the 'demo' user has access to it flavors = self.user_cloud.search_flavors(priv_flavor_name) self.assertEqual(1, len(flavors)) self.assertEqual(priv_flavor_name, flavors[0]['name']) # Now see if the 'demo' user has access to it without needing # the demo_cloud access. 
acls = self.operator_cloud.list_flavor_access(new_flavor['id']) self.assertEqual(1, len(acls)) self.assertEqual(project['id'], acls[0]['project_id']) # Now revoke the access and make sure we can't find it self.operator_cloud.remove_flavor_access(new_flavor['id'], project['id']) flavors = self.user_cloud.search_flavors(priv_flavor_name) self.assertEqual(0, len(flavors)) def test_set_unset_flavor_specs(self): """ Test setting and unsetting flavor extra specs """ flavor_name = self.new_item_name + '_spec_test' kwargs = dict( name=flavor_name, ram=1024, vcpus=2, disk=10 ) new_flavor = self.operator_cloud.create_flavor(**kwargs) # Expect no extra_specs self.assertEqual({}, new_flavor['extra_specs']) # Now set them extra_specs = {'foo': 'aaa', 'bar': 'bbb'} self.operator_cloud.set_flavor_specs(new_flavor['id'], extra_specs) mod_flavor = self.operator_cloud.get_flavor(new_flavor['id']) # Verify extra_specs were set self.assertIn('extra_specs', mod_flavor) self.assertEqual(extra_specs, mod_flavor['extra_specs']) # Unset the 'foo' value self.operator_cloud.unset_flavor_specs(mod_flavor['id'], ['foo']) mod_flavor = self.operator_cloud.get_flavor_by_id(new_flavor['id']) # Verify 'foo' is unset and 'bar' is still set self.assertEqual({'bar': 'bbb'}, mod_flavor['extra_specs']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_object.py0000666000175100017510000001601713236151340025576 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" test_object ---------------------------------- Functional tests for `shade` object methods. """ import random import string import tempfile from testtools import content from openstack.cloud import exc from openstack.tests.functional.cloud import base class TestObject(base.BaseFunctionalTestCase): def setUp(self): super(TestObject, self).setUp() if not self.user_cloud.has_service('object-store'): self.skipTest('Object service not supported by cloud') def test_create_object(self): '''Test uploading small and large files.''' container_name = self.getUniqueString('container') self.addDetail('container', content.text_content(container_name)) self.addCleanup(self.user_cloud.delete_container, container_name) self.user_cloud.create_container(container_name) self.assertEqual(container_name, self.user_cloud.list_containers()[0]['name']) sizes = ( (64 * 1024, 1), # 64K, one segment (64 * 1024, 5) # 64MB, 5 segments ) for size, nseg in sizes: segment_size = int(round(size / nseg)) with tempfile.NamedTemporaryFile() as fake_file: fake_content = ''.join(random.SystemRandom().choice( string.ascii_uppercase + string.digits) for _ in range(size)).encode('latin-1') fake_file.write(fake_content) fake_file.flush() name = 'test-%d' % size self.addCleanup( self.user_cloud.delete_object, container_name, name) self.user_cloud.create_object( container_name, name, fake_file.name, segment_size=segment_size, metadata={'foo': 'bar'}) self.assertFalse(self.user_cloud.is_object_stale( container_name, name, fake_file.name )) self.assertEqual( 'bar', self.user_cloud.get_object_metadata( container_name, name)['x-object-meta-foo'] ) self.user_cloud.update_object(container=container_name, name=name, metadata={'testk': 'testv'}) self.assertEqual( 'testv', self.user_cloud.get_object_metadata( container_name, name)['x-object-meta-testk'] ) try: self.assertIsNotNone( self.user_cloud.get_object(container_name, name)) except exc.OpenStackCloudException as e: self.addDetail( 'failed_response', 
                        content.text_content(str(e.response.headers)))
                    self.addDetail(
                        'failed_response',
                        content.text_content(e.response.text))
                    raise

                self.assertEqual(
                    name,
                    self.user_cloud.list_objects(container_name)[0]['name'])
                self.assertTrue(
                    self.user_cloud.delete_object(container_name, name))
        self.assertEqual([], self.user_cloud.list_objects(container_name))
        self.assertEqual(container_name,
                         self.user_cloud.list_containers()[0]['name'])
        self.user_cloud.delete_container(container_name)

    def test_download_object_to_file(self):
        '''Test downloading small and large files.'''
        container_name = self.getUniqueString('container')
        self.addDetail('container', content.text_content(container_name))
        self.addCleanup(self.user_cloud.delete_container, container_name)
        self.user_cloud.create_container(container_name)
        self.assertEqual(container_name,
                         self.user_cloud.list_containers()[0]['name'])
        sizes = (
            (64 * 1024, 1),  # 64K, one segment
            (64 * 1024, 5)   # 64K, five segments
        )
        for size, nseg in sizes:
            fake_content = ''
            segment_size = int(round(size / nseg))
            with tempfile.NamedTemporaryFile() as fake_file:
                fake_content = ''.join(random.SystemRandom().choice(
                    string.ascii_uppercase + string.digits)
                    for _ in range(size)).encode('latin-1')

                fake_file.write(fake_content)
                fake_file.flush()
                name = 'test-%d' % size
                self.addCleanup(
                    self.user_cloud.delete_object, container_name, name)
                self.user_cloud.create_object(
                    container_name, name, fake_file.name,
                    segment_size=segment_size,
                    metadata={'foo': 'bar'})
                self.assertFalse(self.user_cloud.is_object_stale(
                    container_name, name, fake_file.name
                ))
                self.assertEqual(
                    'bar', self.user_cloud.get_object_metadata(
                        container_name, name)['x-object-meta-foo']
                )
                self.user_cloud.update_object(container=container_name,
                                              name=name,
                                              metadata={'testk': 'testv'})
                self.assertEqual(
                    'testv', self.user_cloud.get_object_metadata(
                        container_name, name)['x-object-meta-testk']
                )
                try:
                    with tempfile.NamedTemporaryFile() as fake_file:
                        self.user_cloud.get_object(
                            container_name, name,
outfile=fake_file.name) downloaded_content = open(fake_file.name, 'rb').read() self.assertEqual(fake_content, downloaded_content) except exc.OpenStackCloudException as e: self.addDetail( 'failed_response', content.text_content(str(e.response.headers))) self.addDetail( 'failed_response', content.text_content(e.response.text)) raise self.assertEqual( name, self.user_cloud.list_objects(container_name)[0]['name']) self.assertTrue( self.user_cloud.delete_object(container_name, name)) self.assertEqual([], self.user_cloud.list_objects(container_name)) self.assertEqual(container_name, self.user_cloud.list_containers()[0]['name']) self.user_cloud.delete_container(container_name) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_limits.py0000666000175100017510000000255713236151340025635 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" test_limits ---------------------------------- Functional tests for `shade` limits method """ from openstack.tests.functional.cloud import base class TestUsage(base.BaseFunctionalTestCase): def test_get_our_limits(self): '''Test quotas functionality''' limits = self.user_cloud.get_compute_limits() self.assertIsNotNone(limits) self.assertTrue(hasattr(limits, 'max_server_meta')) # Test normalize limits self.assertFalse(hasattr(limits, 'maxImageMeta')) def test_get_other_limits(self): '''Test quotas functionality''' limits = self.operator_cloud.get_compute_limits('demo') self.assertIsNotNone(limits) self.assertTrue(hasattr(limits, 'max_server_meta')) # Test normalize limits self.assertFalse(hasattr(limits, 'maxImageMeta')) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_volume_type.py0000666000175100017510000001074313236151340026700 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_volume ---------------------------------- Functional tests for `shade` block storage methods. 
""" import testtools from openstack.cloud import exc from openstack.tests.functional.cloud import base class TestVolumeType(base.BaseFunctionalTestCase): def _assert_project(self, volume_name_or_id, project_id, allowed=True): acls = self.operator_cloud.get_volume_type_access(volume_name_or_id) allowed_projects = [x.get('project_id') for x in acls] self.assertEqual(allowed, project_id in allowed_projects) def setUp(self): super(TestVolumeType, self).setUp() if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') volume_type = { "name": 'test-volume-type', "description": None, "os-volume-type-access:is_public": False} self.operator_cloud._volume_client.post( '/types', json={'volume_type': volume_type}) def tearDown(self): ret = self.operator_cloud.get_volume_type('test-volume-type') if ret.get('id'): self.operator_cloud._volume_client.delete( '/types/{volume_type_id}'.format(volume_type_id=ret.id)) super(TestVolumeType, self).tearDown() def test_list_volume_types(self): volume_types = self.operator_cloud.list_volume_types() self.assertTrue(volume_types) self.assertTrue(any( x for x in volume_types if x.name == 'test-volume-type')) def test_add_remove_volume_type_access(self): volume_type = self.operator_cloud.get_volume_type('test-volume-type') self.assertEqual('test-volume-type', volume_type.name) self.operator_cloud.add_volume_type_access( 'test-volume-type', self.operator_cloud.current_project_id) self._assert_project( 'test-volume-type', self.operator_cloud.current_project_id, allowed=True) self.operator_cloud.remove_volume_type_access( 'test-volume-type', self.operator_cloud.current_project_id) self._assert_project( 'test-volume-type', self.operator_cloud.current_project_id, allowed=False) def test_add_volume_type_access_missing_project(self): # Project id is not valitaded and it may not exist. 
self.operator_cloud.add_volume_type_access( 'test-volume-type', '00000000000000000000000000000000') self.operator_cloud.remove_volume_type_access( 'test-volume-type', '00000000000000000000000000000000') def test_add_volume_type_access_missing_volume(self): with testtools.ExpectedException( exc.OpenStackCloudException, "VolumeType not found.*" ): self.operator_cloud.add_volume_type_access( 'MISSING_VOLUME_TYPE', self.operator_cloud.current_project_id) def test_remove_volume_type_access_missing_volume(self): with testtools.ExpectedException( exc.OpenStackCloudException, "VolumeType not found.*" ): self.operator_cloud.remove_volume_type_access( 'MISSING_VOLUME_TYPE', self.operator_cloud.current_project_id) def test_add_volume_type_access_bad_project(self): with testtools.ExpectedException( exc.OpenStackCloudBadRequest, "Unable to authorize.*" ): self.operator_cloud.add_volume_type_access( 'test-volume-type', 'BAD_PROJECT_ID') def test_remove_volume_type_access_missing_project(self): with testtools.ExpectedException( exc.OpenStackCloudURINotFound, "Unable to revoke.*" ): self.operator_cloud.remove_volume_type_access( 'test-volume-type', '00000000000000000000000000000000') openstacksdk-0.11.3/openstack/tests/functional/cloud/test_stack.py0000666000175100017510000001316313236151340025434 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_stack ---------------------------------- Functional tests for `shade` stack methods. 
""" import tempfile from openstack.cloud import exc from openstack.tests import fakes from openstack.tests.functional.cloud import base simple_template = '''heat_template_version: 2014-10-16 parameters: length: type: number default: 10 resources: my_rand: type: OS::Heat::RandomString properties: length: {get_param: length} outputs: rand: value: get_attr: [my_rand, value] ''' root_template = '''heat_template_version: 2014-10-16 parameters: length: type: number default: 10 count: type: number default: 5 resources: my_rands: type: OS::Heat::ResourceGroup properties: count: {get_param: count} resource_def: type: My::Simple::Template properties: length: {get_param: length} outputs: rands: value: get_attr: [my_rands, attributes, rand] ''' environment = ''' resource_registry: My::Simple::Template: %s ''' validate_template = '''heat_template_version: asdf-no-such-version ''' class TestStack(base.BaseFunctionalTestCase): def setUp(self): super(TestStack, self).setUp() if not self.user_cloud.has_service('orchestration'): self.skipTest('Orchestration service not supported by cloud') def _cleanup_stack(self): self.user_cloud.delete_stack(self.stack_name, wait=True) self.assertIsNone(self.user_cloud.get_stack(self.stack_name)) def test_stack_validation(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(validate_template.encode('utf-8')) test_template.close() stack_name = self.getUniqueString('validate_template') self.assertRaises(exc.OpenStackCloudException, self.user_cloud.create_stack, name=stack_name, template_file=test_template.name) def test_stack_simple(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.stack_name = self.getUniqueString('simple_stack') self.addCleanup(self._cleanup_stack) stack = self.user_cloud.create_stack( name=self.stack_name, template_file=test_template.name, wait=True) # assert expected values in stack 
self.assertEqual('CREATE_COMPLETE', stack['stack_status']) rand = stack['outputs'][0]['output_value'] self.assertEqual(10, len(rand)) # assert get_stack matches returned create_stack stack = self.user_cloud.get_stack(self.stack_name) self.assertEqual('CREATE_COMPLETE', stack['stack_status']) self.assertEqual(rand, stack['outputs'][0]['output_value']) # assert stack is in list_stacks stacks = self.user_cloud.list_stacks() stack_ids = [s['id'] for s in stacks] self.assertIn(stack['id'], stack_ids) # update with no changes stack = self.user_cloud.update_stack( self.stack_name, template_file=test_template.name, wait=True) # assert no change in updated stack self.assertEqual('UPDATE_COMPLETE', stack['stack_status']) rand = stack['outputs'][0]['output_value'] self.assertEqual(rand, stack['outputs'][0]['output_value']) # update with changes stack = self.user_cloud.update_stack( self.stack_name, template_file=test_template.name, wait=True, length=12) # assert changed output in updated stack stack = self.user_cloud.get_stack(self.stack_name) self.assertEqual('UPDATE_COMPLETE', stack['stack_status']) new_rand = stack['outputs'][0]['output_value'] self.assertNotEqual(rand, new_rand) self.assertEqual(12, len(new_rand)) def test_stack_nested(self): test_template = tempfile.NamedTemporaryFile( suffix='.yaml', delete=False) test_template.write(root_template.encode('utf-8')) test_template.close() simple_tmpl = tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) simple_tmpl.write(fakes.FAKE_TEMPLATE.encode('utf-8')) simple_tmpl.close() env = tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) expanded_env = environment % simple_tmpl.name env.write(expanded_env.encode('utf-8')) env.close() self.stack_name = self.getUniqueString('nested_stack') self.addCleanup(self._cleanup_stack) stack = self.user_cloud.create_stack( name=self.stack_name, template_file=test_template.name, environment_files=[env.name], wait=True) # assert expected values in stack 
        self.assertEqual('CREATE_COMPLETE', stack['stack_status'])
        rands = stack['outputs'][0]['output_value']
        self.assertEqual(['0', '1', '2', '3', '4'], sorted(rands.keys()))
        for rand in rands.values():
            self.assertEqual(10, len(rand))

openstacksdk-0.11.3/openstack/tests/functional/cloud/test_qos_minimum_bandwidth_rule.py

# Copyright 2017 OVH SAS
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_qos_minimum_bandwidth_rule
----------------------------------

Functional tests for `shade` QoS minimum bandwidth methods.
""" from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestQosMinimumBandwidthRule(base.BaseFunctionalTestCase): def setUp(self): super(TestQosMinimumBandwidthRule, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') policy_name = self.getUniqueString('qos_policy') self.policy = self.operator_cloud.create_qos_policy(name=policy_name) self.addCleanup(self._cleanup_qos_policy) def _cleanup_qos_policy(self): try: self.operator_cloud.delete_qos_policy(self.policy['id']) except Exception as e: raise OpenStackCloudException(e) def test_qos_minimum_bandwidth_rule_lifecycle(self): min_kbps = 1500 updated_min_kbps = 2000 # Create min bw rule rule = self.operator_cloud.create_qos_minimum_bandwidth_rule( self.policy['id'], min_kbps=min_kbps) self.assertIn('id', rule) self.assertEqual(min_kbps, rule['min_kbps']) # Now try to update rule updated_rule = self.operator_cloud.update_qos_minimum_bandwidth_rule( self.policy['id'], rule['id'], min_kbps=updated_min_kbps) self.assertIn('id', updated_rule) self.assertEqual(updated_min_kbps, updated_rule['min_kbps']) # List rules from policy policy_rules = self.operator_cloud.list_qos_minimum_bandwidth_rules( self.policy['id']) self.assertEqual([updated_rule], policy_rules) # Delete rule self.operator_cloud.delete_qos_minimum_bandwidth_rule( self.policy['id'], updated_rule['id']) # Check if there is no rules in policy policy_rules = self.operator_cloud.list_qos_minimum_bandwidth_rules( self.policy['id']) self.assertEqual([], policy_rules) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_range_search.py0000666000175100017510000001376713236151340026762 0ustar zuulzuul00000000000000# Copyright (c) 2016 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not 
use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

from openstack.cloud import exc
from openstack.tests.functional.cloud import base


class TestRangeSearch(base.BaseFunctionalTestCase):

    def _filter_m1_flavors(self, results):
        """The m1 flavors are the original devstack flavors"""
        new_results = []
        for flavor in results:
            if flavor['name'].startswith("m1."):
                new_results.append(flavor)
        return new_results

    def test_range_search_bad_range(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        self.assertRaises(
            exc.OpenStackCloudException,
            self.user_cloud.range_search, flavors, {"ram": "<1a0"})

    def test_range_search_exact(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": "4096"})
        self.assertIsInstance(result, list)
        # should only be 1 m1 flavor with 4096 ram
        result = self._filter_m1_flavors(result)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.medium", result[0]['name'])

    def test_range_search_min(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": "MIN"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        # older devstack does not have cirros256
        self.assertIn(result[0]['name'], ('cirros256', 'm1.tiny'))

    def test_range_search_max(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": "MAX"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.xlarge", result[0]['name'])

    def test_range_search_lt(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": "<1024"})
        self.assertIsInstance(result, list)
        # should only be 1 m1 flavor with <1024 ram
        result = self._filter_m1_flavors(result)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.tiny", result[0]['name'])

    def test_range_search_gt(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": ">4096"})
        self.assertIsInstance(result, list)
        # should only be 2 m1 flavors with >4096 ram
        result = self._filter_m1_flavors(result)
        self.assertEqual(2, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.large", flavor_names)
        self.assertIn("m1.xlarge", flavor_names)

    def test_range_search_le(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": "<=4096"})
        self.assertIsInstance(result, list)
        # should only be 3 m1 flavors with <=4096 ram
        result = self._filter_m1_flavors(result)
        self.assertEqual(3, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.tiny", flavor_names)
        self.assertIn("m1.small", flavor_names)
        self.assertIn("m1.medium", flavor_names)

    def test_range_search_ge(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(flavors, {"ram": ">=4096"})
        self.assertIsInstance(result, list)
        # should only be 3 m1 flavors with >=4096 ram
        result = self._filter_m1_flavors(result)
        self.assertEqual(3, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.medium", flavor_names)
        self.assertIn("m1.large", flavor_names)
        self.assertIn("m1.xlarge", flavor_names)

    def test_range_search_multi_1(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(
            flavors, {"ram": "MIN", "vcpus": "MIN"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        # older devstack does not have cirros256
        self.assertIn(result[0]['name'], ('cirros256', 'm1.tiny'))

    def test_range_search_multi_2(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(
            flavors, {"ram": "<1024", "vcpus": "MIN"})
        self.assertIsInstance(result, list)
        result = self._filter_m1_flavors(result)
        self.assertEqual(1, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.tiny", flavor_names)

    def test_range_search_multi_3(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(
            flavors, {"ram": ">=4096", "vcpus": "<6"})
        self.assertIsInstance(result, list)
        result = self._filter_m1_flavors(result)
        self.assertEqual(2, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.medium", flavor_names)
        self.assertIn("m1.large", flavor_names)

    def test_range_search_multi_4(self):
        flavors = self.user_cloud.list_flavors(get_extra=False)
        result = self.user_cloud.range_search(
            flavors, {"ram": ">=4096", "vcpus": "MAX"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        # This is the only result that should have max vcpu
        self.assertEqual("m1.xlarge", result[0]['name'])

openstacksdk-0.11.3/openstack/tests/functional/cloud/util.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" util -------------------------------- Util methods for functional tests """ import operator import os def pick_flavor(flavors): """Given a flavor list pick the smallest one.""" # Enable running functional tests against rax - which requires # performance flavors be used for boot from volume flavor_name = os.environ.get('OPENSTACKSDK_FLAVOR') if flavor_name: for flavor in flavors: if flavor.name == flavor_name: return flavor return None for flavor in sorted( flavors, key=operator.attrgetter('ram')): if 'performance' in flavor.name: return flavor for flavor in sorted( flavors, key=operator.attrgetter('ram')): return flavor openstacksdk-0.11.3/openstack/tests/functional/cloud/hooks/0000775000175100017510000000000013236151501024032 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/cloud/hooks/post_test_hook.sh0000777000175100017510000000316313236151340027443 0ustar zuulzuul00000000000000#!/bin/bash -x # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(shade) Rework for zuul v3 export OPENSTACKSDK_DIR="$BASE/new/shade" cd $OPENSTACKSDK_DIR sudo chown -R jenkins:stack $OPENSTACKSDK_DIR CLOUDS_YAML=/etc/openstack/clouds.yaml if [ ! -e ${CLOUDS_YAML} ] then # stable/liberty had clouds.yaml in the home/base directory sudo mkdir -p /etc/openstack sudo cp $BASE/new/.config/openstack/clouds.yaml ${CLOUDS_YAML} sudo chown -R jenkins:stack /etc/openstack fi # Devstack runs both keystone v2 and v3. 
An environment variable is set # within the shade keystone v2 job that tells us which version we should # test against. if [ ${OPENSTACKSDK_USE_KEYSTONE_V2:-0} -eq 1 ] then sudo sed -ie "s/identity_api_version: '3'/identity_api_version: '2.0'/g" $CLOUDS_YAML sudo sed -ie '/^.*domain_id.*$/d' $CLOUDS_YAML fi if [ "x$1" = "xtips" ] ; then tox_env=functional-tips else tox_env=functional fi echo "Running shade functional test suite" set +e sudo -E -H -u jenkins tox -e$tox_env EXIT_CODE=$? sudo stestr last --subunit > $WORKSPACE/tempest.subunit .tox/$tox_env/bin/pbr freeze set -e exit $EXIT_CODE openstacksdk-0.11.3/openstack/tests/functional/cloud/test_security_groups.py0000666000175100017510000000476213236151340027602 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_security_groups ---------------------------------- Functional tests for `shade` security_groups resource. 
""" from openstack.tests.functional.cloud import base class TestSecurityGroups(base.BaseFunctionalTestCase): def test_create_list_security_groups(self): sg1 = self.user_cloud.create_security_group( name="sg1", description="sg1") self.addCleanup(self.user_cloud.delete_security_group, sg1['id']) sg2 = self.operator_cloud.create_security_group( name="sg2", description="sg2") self.addCleanup(self.operator_cloud.delete_security_group, sg2['id']) if self.user_cloud.has_service('network'): # Neutron defaults to all_tenants=1 when admin sg_list = self.operator_cloud.list_security_groups() self.assertIn(sg1['id'], [sg['id'] for sg in sg_list]) # Filter by tenant_id (filtering by project_id won't work with # Keystone V2) sg_list = self.operator_cloud.list_security_groups( filters={'tenant_id': self.user_cloud.current_project_id}) self.assertIn(sg1['id'], [sg['id'] for sg in sg_list]) self.assertNotIn(sg2['id'], [sg['id'] for sg in sg_list]) else: # Nova does not list all tenants by default sg_list = self.operator_cloud.list_security_groups() self.assertIn(sg2['id'], [sg['id'] for sg in sg_list]) self.assertNotIn(sg1['id'], [sg['id'] for sg in sg_list]) sg_list = self.operator_cloud.list_security_groups( filters={'all_tenants': 1}) self.assertIn(sg1['id'], [sg['id'] for sg in sg_list]) def test_get_security_group_by_id(self): sg = self.user_cloud.create_security_group(name='sg', description='sg') self.addCleanup(self.user_cloud.delete_security_group, sg['id']) ret_sg = self.user_cloud.get_security_group_by_id(sg['id']) self.assertEqual(sg, ret_sg) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_qos_policy.py0000666000175100017510000000764713236151340026522 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_qos_policy ---------------------------------- Functional tests for `shade`QoS policies methods. """ from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestQosPolicy(base.BaseFunctionalTestCase): def setUp(self): super(TestQosPolicy, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') self.policy_name = self.getUniqueString('qos_policy') self.addCleanup(self._cleanup_policies) def _cleanup_policies(self): exception_list = list() for policy in self.operator_cloud.list_qos_policies(): if policy['name'].startswith(self.policy_name): try: self.operator_cloud.delete_qos_policy(policy['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_qos_policy_basic(self): policy = self.operator_cloud.create_qos_policy(name=self.policy_name) self.assertIn('id', policy) self.assertEqual(self.policy_name, policy['name']) self.assertFalse(policy['shared']) self.assertFalse(policy['is_default']) def test_create_qos_policy_shared(self): policy = self.operator_cloud.create_qos_policy( name=self.policy_name, shared=True) self.assertIn('id', policy) self.assertEqual(self.policy_name, policy['name']) self.assertTrue(policy['shared']) self.assertFalse(policy['is_default']) def test_create_qos_policy_default(self): if not 
self.operator_cloud._has_neutron_extension('qos-default'): self.skipTest("'qos-default' network extension not supported " "by cloud") policy = self.operator_cloud.create_qos_policy( name=self.policy_name, default=True) self.assertIn('id', policy) self.assertEqual(self.policy_name, policy['name']) self.assertFalse(policy['shared']) self.assertTrue(policy['is_default']) def test_update_qos_policy(self): policy = self.operator_cloud.create_qos_policy(name=self.policy_name) self.assertEqual(self.policy_name, policy['name']) self.assertFalse(policy['shared']) self.assertFalse(policy['is_default']) updated_policy = self.operator_cloud.update_qos_policy( policy['id'], shared=True, default=True) self.assertEqual(self.policy_name, updated_policy['name']) self.assertTrue(updated_policy['shared']) self.assertTrue(updated_policy['is_default']) def test_list_qos_policies_filtered(self): policy1 = self.operator_cloud.create_qos_policy(name=self.policy_name) self.assertIsNotNone(policy1) policy2 = self.operator_cloud.create_qos_policy( name=self.policy_name + 'other') self.assertIsNotNone(policy2) match = self.operator_cloud.list_qos_policies( filters=dict(name=self.policy_name)) self.assertEqual(1, len(match)) self.assertEqual(policy1['name'], match[0]['name']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_endpoints.py0000666000175100017510000001726513236151364026347 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
#
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_endpoint
----------------------------------

Functional tests for `shade` endpoint resource.
"""

import string
import random

from openstack.cloud.exc import OpenStackCloudException
from openstack.cloud.exc import OpenStackCloudUnavailableFeature
from openstack.tests.functional.cloud import base


class TestEndpoints(base.KeystoneBaseFunctionalTestCase):

    endpoint_attributes = ['id', 'region', 'publicurl', 'internalurl',
                           'service_id', 'adminurl']

    def setUp(self):
        super(TestEndpoints, self).setUp()

        # Generate a random name for services and regions in this test
        self.new_item_name = 'test_' + ''.join(
            random.choice(string.ascii_lowercase) for _ in range(5))

        self.addCleanup(self._cleanup_services)
        self.addCleanup(self._cleanup_endpoints)

    def _cleanup_endpoints(self):
        exception_list = list()
        for e in self.operator_cloud.list_endpoints():
            if e.get('region') is not None and \
                    e['region'].startswith(self.new_item_name):
                try:
                    self.operator_cloud.delete_endpoint(id=e['id'])
                except Exception as e:
                    # We were unable to delete an endpoint, let's try the next
                    exception_list.append(str(e))
                    continue
        if exception_list:
            # Raise an error: we must make users aware that something went
            # wrong
            raise OpenStackCloudException('\n'.join(exception_list))

    def _cleanup_services(self):
        exception_list = list()
        for s in self.operator_cloud.list_services():
            if s['name'] is not None and \
                    s['name'].startswith(self.new_item_name):
                try:
                    self.operator_cloud.delete_service(name_or_id=s['id'])
                except Exception as e:
                    # We were unable to delete a service, let's try the next
                    exception_list.append(str(e))
                    continue
        if exception_list:
            # Raise an error: we must make users aware that something went
            # wrong
            raise OpenStackCloudException('\n'.join(exception_list))

    def test_create_endpoint(self):
        service_name = self.new_item_name + '_create'
        service = self.operator_cloud.create_service(
            name=service_name, type='test_type',
            description='this is a test description')
        endpoints = self.operator_cloud.create_endpoint(
            service_name_or_id=service['id'],
            public_url='http://public.test/',
            internal_url='http://internal.test/',
            admin_url='http://admin.url/',
            region=service_name)
        self.assertNotEqual([], endpoints)
        self.assertIsNotNone(endpoints[0].get('id'))

        # Test None parameters
        endpoints = self.operator_cloud.create_endpoint(
            service_name_or_id=service['id'],
            public_url='http://public.test/',
            region=service_name)
        self.assertNotEqual([], endpoints)
        self.assertIsNotNone(endpoints[0].get('id'))

    def test_update_endpoint(self):
        ver = self.operator_cloud.cloud_config.get_api_version('identity')
        if ver.startswith('2'):
            # NOTE(SamYaple): Update endpoint only works with v3 api
            self.assertRaises(OpenStackCloudUnavailableFeature,
                              self.operator_cloud.update_endpoint,
                              'endpoint_id1')
        else:
            service = self.operator_cloud.create_service(
                name='service1', type='test_type')
            endpoint = self.operator_cloud.create_endpoint(
                service_name_or_id=service['id'],
                url='http://admin.url/',
                interface='admin',
                region='orig_region',
                enabled=False)[0]
            new_service = self.operator_cloud.create_service(
                name='service2', type='test_type')
            new_endpoint = self.operator_cloud.update_endpoint(
                endpoint.id,
                service_name_or_id=new_service.id,
                url='http://public.url/',
                interface='public',
                region='update_region',
                enabled=True)
            self.assertEqual(new_endpoint.url, 'http://public.url/')
            self.assertEqual(new_endpoint.interface, 'public')
            self.assertEqual(new_endpoint.region, 'update_region')
            self.assertEqual(new_endpoint.service_id, new_service.id)
            self.assertTrue(new_endpoint.enabled)

    def test_list_endpoints(self):
        service_name = self.new_item_name + '_list'
        service = self.operator_cloud.create_service(
            name=service_name, type='test_type',
            description='this is a test description')
        endpoints = self.operator_cloud.create_endpoint(
            service_name_or_id=service['id'],
            public_url='http://public.test/',
            internal_url='http://internal.test/',
            region=service_name)

        observed_endpoints = self.operator_cloud.list_endpoints()
        found = False
        for e in observed_endpoints:
            # Test all attributes are returned
            for endpoint in endpoints:
                if e['id'] == endpoint['id']:
                    found = True
                    self.assertEqual(service['id'], e['service_id'])
                    if 'interface' in e:
                        if e['interface'] == 'internal':
                            self.assertEqual('http://internal.test/', e['url'])
                        elif e['interface'] == 'public':
                            self.assertEqual('http://public.test/', e['url'])
                    else:
                        self.assertEqual('http://public.test/', e['publicurl'])
                        self.assertEqual('http://internal.test/',
                                         e['internalurl'])
                    self.assertEqual(service_name, e['region'])

        self.assertTrue(found, msg='new endpoint not found in endpoints list!')

    def test_delete_endpoint(self):
        service_name = self.new_item_name + '_delete'
        service = self.operator_cloud.create_service(
            name=service_name, type='test_type',
            description='this is a test description')
        endpoints = self.operator_cloud.create_endpoint(
            service_name_or_id=service['id'],
            public_url='http://public.test/',
            internal_url='http://internal.test/',
            region=service_name)
        self.assertNotEqual([], endpoints)
        for endpoint in endpoints:
            self.operator_cloud.delete_endpoint(endpoint['id'])

        observed_endpoints = self.operator_cloud.list_endpoints()
        found = False
        for e in observed_endpoints:
            for endpoint in endpoints:
                if e['id'] == endpoint['id']:
                    found = True
                    break
        self.assertFalse(found, 'new endpoint was not deleted!')

openstacksdk-0.11.3/openstack/tests/functional/cloud/test_qos_dscp_marking_rule.py

# Copyright 2017 OVH SAS
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_qos_dscp_marking_rule
----------------------------------

Functional tests for `shade` QoS DSCP marking rule methods.
"""

from openstack.cloud.exc import OpenStackCloudException
from openstack.tests.functional.cloud import base


class TestQosDscpMarkingRule(base.BaseFunctionalTestCase):

    def setUp(self):
        super(TestQosDscpMarkingRule, self).setUp()
        if not self.operator_cloud.has_service('network'):
            self.skipTest('Network service not supported by cloud')
        if not self.operator_cloud._has_neutron_extension('qos'):
            self.skipTest('QoS network extension not supported by cloud')
        policy_name = self.getUniqueString('qos_policy')
        self.policy = self.operator_cloud.create_qos_policy(name=policy_name)
        self.addCleanup(self._cleanup_qos_policy)

    def _cleanup_qos_policy(self):
        try:
            self.operator_cloud.delete_qos_policy(self.policy['id'])
        except Exception as e:
            raise OpenStackCloudException(e)

    def test_qos_dscp_marking_rule_lifecycle(self):
        dscp_mark = 16
        updated_dscp_mark = 32

        # Create DSCP marking rule
        rule = self.operator_cloud.create_qos_dscp_marking_rule(
            self.policy['id'],
            dscp_mark=dscp_mark)
        self.assertIn('id', rule)
        self.assertEqual(dscp_mark, rule['dscp_mark'])

        # Now try to update rule
        updated_rule = self.operator_cloud.update_qos_dscp_marking_rule(
            self.policy['id'],
            rule['id'],
            dscp_mark=updated_dscp_mark)
        self.assertIn('id', updated_rule)
        self.assertEqual(updated_dscp_mark, updated_rule['dscp_mark'])

        # List rules from policy
        policy_rules = self.operator_cloud.list_qos_dscp_marking_rules(
            self.policy['id'])
        self.assertEqual([updated_rule], policy_rules)

        # Delete rule
        self.operator_cloud.delete_qos_dscp_marking_rule(
            self.policy['id'], updated_rule['id'])

        # Check that there are no rules left in the policy
        policy_rules = self.operator_cloud.list_qos_dscp_marking_rules(
            self.policy['id'])
        self.assertEqual([], policy_rules)

openstacksdk-0.11.3/openstack/tests/functional/cloud/test_quotas.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_quotas
----------------------------------

Functional tests for `shade` quotas methods.
""" from openstack.tests.functional.cloud import base class TestComputeQuotas(base.BaseFunctionalTestCase): def test_quotas(self): '''Test quotas functionality''' quotas = self.operator_cloud.get_compute_quotas('demo') cores = quotas['cores'] self.operator_cloud.set_compute_quotas('demo', cores=cores + 1) self.assertEqual( cores + 1, self.operator_cloud.get_compute_quotas('demo')['cores']) self.operator_cloud.delete_compute_quotas('demo') self.assertEqual( cores, self.operator_cloud.get_compute_quotas('demo')['cores']) class TestVolumeQuotas(base.BaseFunctionalTestCase): def setUp(self): super(TestVolumeQuotas, self).setUp() if not self.operator_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') def test_quotas(self): '''Test quotas functionality''' quotas = self.operator_cloud.get_volume_quotas('demo') volumes = quotas['volumes'] self.operator_cloud.set_volume_quotas('demo', volumes=volumes + 1) self.assertEqual( volumes + 1, self.operator_cloud.get_volume_quotas('demo')['volumes']) self.operator_cloud.delete_volume_quotas('demo') self.assertEqual( volumes, self.operator_cloud.get_volume_quotas('demo')['volumes']) class TestNetworkQuotas(base.BaseFunctionalTestCase): def setUp(self): super(TestNetworkQuotas, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('network service not supported by cloud') def test_quotas(self): '''Test quotas functionality''' quotas = self.operator_cloud.get_network_quotas('demo') network = quotas['network'] self.operator_cloud.set_network_quotas('demo', network=network + 1) self.assertEqual( network + 1, self.operator_cloud.get_network_quotas('demo')['network']) self.operator_cloud.delete_network_quotas('demo') self.assertEqual( network, self.operator_cloud.get_network_quotas('demo')['network']) def test_get_quotas_details(self): expected_keys = ['limit', 'used', 'reserved'] '''Test getting details about quota usage''' quota_details = self.operator_cloud.get_network_quotas( 
'demo', details=True) for quota_values in quota_details.values(): for expected_key in expected_keys: self.assertTrue(expected_key in quota_values.keys()) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_qos_bandwidth_limit_rule.py0000666000175100017510000001003013236151340031370 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_qos_bandwidth_limit_rule ---------------------------------- Functional tests for `shade`QoS bandwidth limit methods. 
""" from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestQosBandwidthLimitRule(base.BaseFunctionalTestCase): def setUp(self): super(TestQosBandwidthLimitRule, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') policy_name = self.getUniqueString('qos_policy') self.policy = self.operator_cloud.create_qos_policy(name=policy_name) self.addCleanup(self._cleanup_qos_policy) def _cleanup_qos_policy(self): try: self.operator_cloud.delete_qos_policy(self.policy['id']) except Exception as e: raise OpenStackCloudException(e) def test_qos_bandwidth_limit_rule_lifecycle(self): max_kbps = 1500 max_burst_kbps = 500 updated_max_kbps = 2000 # Create bw limit rule rule = self.operator_cloud.create_qos_bandwidth_limit_rule( self.policy['id'], max_kbps=max_kbps, max_burst_kbps=max_burst_kbps) self.assertIn('id', rule) self.assertEqual(max_kbps, rule['max_kbps']) self.assertEqual(max_burst_kbps, rule['max_burst_kbps']) # Now try to update rule updated_rule = self.operator_cloud.update_qos_bandwidth_limit_rule( self.policy['id'], rule['id'], max_kbps=updated_max_kbps) self.assertIn('id', updated_rule) self.assertEqual(updated_max_kbps, updated_rule['max_kbps']) self.assertEqual(max_burst_kbps, updated_rule['max_burst_kbps']) # List rules from policy policy_rules = self.operator_cloud.list_qos_bandwidth_limit_rules( self.policy['id']) self.assertEqual([updated_rule], policy_rules) # Delete rule self.operator_cloud.delete_qos_bandwidth_limit_rule( self.policy['id'], updated_rule['id']) # Check if there is no rules in policy policy_rules = self.operator_cloud.list_qos_bandwidth_limit_rules( self.policy['id']) self.assertEqual([], policy_rules) def test_create_qos_bandwidth_limit_rule_direction(self): if not 
self.operator_cloud._has_neutron_extension( 'qos-bw-limit-direction'): self.skipTest("'qos-bw-limit-direction' network extension " "not supported by cloud") max_kbps = 1500 direction = "ingress" updated_direction = "egress" # Create bw limit rule rule = self.operator_cloud.create_qos_bandwidth_limit_rule( self.policy['id'], max_kbps=max_kbps, direction=direction) self.assertIn('id', rule) self.assertEqual(max_kbps, rule['max_kbps']) self.assertEqual(direction, rule['direction']) # Now try to update direction in rule updated_rule = self.operator_cloud.update_qos_bandwidth_limit_rule( self.policy['id'], rule['id'], direction=updated_direction) self.assertIn('id', updated_rule) self.assertEqual(max_kbps, updated_rule['max_kbps']) self.assertEqual(updated_direction, updated_rule['direction']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_services.py0000666000175100017510000001235013236151364026155 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_services ---------------------------------- Functional tests for `shade` service resource. 
""" import string import random from openstack.cloud.exc import OpenStackCloudException from openstack.cloud.exc import OpenStackCloudUnavailableFeature from openstack.tests.functional.cloud import base class TestServices(base.KeystoneBaseFunctionalTestCase): service_attributes = ['id', 'name', 'type', 'description'] def setUp(self): super(TestServices, self).setUp() # Generate a random name for services in this test self.new_service_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_services) def _cleanup_services(self): exception_list = list() for s in self.operator_cloud.list_services(): if s['name'] is not None and \ s['name'].startswith(self.new_service_name): try: self.operator_cloud.delete_service(name_or_id=s['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_service(self): service = self.operator_cloud.create_service( name=self.new_service_name + '_create', type='test_type', description='this is a test description') self.assertIsNotNone(service.get('id')) def test_update_service(self): ver = self.operator_cloud.cloud_config.get_api_version('identity') if ver.startswith('2'): # NOTE(SamYaple): Update service only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.operator_cloud.update_service, 'service_id', name='new name') else: service = self.operator_cloud.create_service( name=self.new_service_name + '_create', type='test_type', description='this is a test description', enabled=True) new_service = self.operator_cloud.update_service( service.id, name=self.new_service_name + '_update', description='this is an updated description', enabled=False ) self.assertEqual(new_service.name, self.new_service_name + '_update') 
self.assertEqual(new_service.description, 'this is an updated description') self.assertFalse(new_service.enabled) self.assertEqual(service.id, new_service.id) def test_list_services(self): service = self.operator_cloud.create_service( name=self.new_service_name + '_list', type='test_type') observed_services = self.operator_cloud.list_services() self.assertIsInstance(observed_services, list) found = False for s in observed_services: # Test all attributes are returned if s['id'] == service['id']: self.assertEqual(self.new_service_name + '_list', s.get('name')) self.assertEqual('test_type', s.get('type')) found = True self.assertTrue(found, msg='new service not found in service list!') def test_delete_service_by_name(self): # Test delete by name service = self.operator_cloud.create_service( name=self.new_service_name + '_delete_by_name', type='test_type') self.operator_cloud.delete_service(name_or_id=service['name']) observed_services = self.operator_cloud.list_services() found = False for s in observed_services: if s['id'] == service['id']: found = True break self.failUnlessEqual(False, found, message='service was not deleted!') def test_delete_service_by_id(self): # Test delete by id service = self.operator_cloud.create_service( name=self.new_service_name + '_delete_by_id', type='test_type') self.operator_cloud.delete_service(name_or_id=service['id']) observed_services = self.operator_cloud.list_services() found = False for s in observed_services: if s['id'] == service['id']: found = True self.failUnlessEqual(False, found, message='service was not deleted!') openstacksdk-0.11.3/openstack/tests/functional/cloud/test_compute.py0000666000175100017510000005313113236151340026002 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_compute ---------------------------------- Functional tests for `shade` compute methods. """ import datetime from fixtures import TimeoutException import six from openstack.cloud import exc from openstack.tests.functional.cloud import base from openstack.tests.functional.cloud.util import pick_flavor from openstack import utils class TestCompute(base.BaseFunctionalTestCase): def setUp(self): # OS_TEST_TIMEOUT is 60 sec by default # but on a bad day, test_attach_detach_volume can take more time. self.TIMEOUT_SCALING_FACTOR = 1.5 super(TestCompute, self).setUp() self.flavor = pick_flavor( self.user_cloud.list_flavors(get_extra=False)) if self.flavor is None: self.assertFalse('no sensible flavor available') self.image = self.pick_image() self.server_name = self.getUniqueString() def _cleanup_servers_and_volumes(self, server_name): """Delete the named server and any attached volumes. Adding separate cleanup calls for servers and volumes can be tricky since they need to be done in the proper order. And sometimes deleting a server can start the process of deleting a volume if it is booted from that volume. This encapsulates that logic. 
""" server = self.user_cloud.get_server(server_name) if not server: return volumes = self.user_cloud.get_volumes(server) try: self.user_cloud.delete_server(server.name, wait=True) for volume in volumes: if volume.status != 'deleting': self.user_cloud.delete_volume(volume.id, wait=True) except (exc.OpenStackCloudTimeout, TimeoutException): # Ups, some timeout occured during process of deletion server # or volumes, so now we will try to call delete each of them # once again and we will try to live with it self.user_cloud.delete_server(server.name) for volume in volumes: self.operator_cloud.delete_volume( volume.id, wait=False, force=True) def test_create_and_delete_server(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_create_and_delete_server_auto_ip_delete_ips(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, auto_ip=True, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server( self.server_name, wait=True, delete_ips=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_attach_detach_volume(self): self.skipTest('Volume functional tests temporarily disabled') server_name = self.getUniqueString() 
        self.addCleanup(self._cleanup_servers_and_volumes, server_name)
        server = self.user_cloud.create_server(
            name=server_name, image=self.image, flavor=self.flavor,
            wait=True)
        volume = self.user_cloud.create_volume(1)
        vol_attachment = self.user_cloud.attach_volume(server, volume)
        for key in ('device', 'serverId', 'volumeId'):
            self.assertIn(key, vol_attachment)
            self.assertTrue(vol_attachment[key])  # assert string is not empty
        self.assertIsNone(self.user_cloud.detach_volume(server, volume))

    def test_create_and_delete_server_with_config_drive(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            config_drive=True,
            wait=True)
        self.assertEqual(self.server_name, server['name'])
        self.assertEqual(self.image.id, server['image']['id'])
        self.assertEqual(self.flavor.id, server['flavor']['id'])
        self.assertTrue(server['has_config_drive'])
        self.assertIsNotNone(server['adminPass'])
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        self.assertIsNone(self.user_cloud.get_server(self.server_name))

    def test_create_and_delete_server_with_config_drive_none(self):
        # check that we're not sending invalid values for config_drive
        # if it's passed in explicitly as None - which nodepool does if it's
        # not set in the config
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            config_drive=None,
            wait=True)
        self.assertEqual(self.server_name, server['name'])
        self.assertEqual(self.image.id, server['image']['id'])
        self.assertEqual(self.flavor.id, server['flavor']['id'])
        self.assertFalse(server['has_config_drive'])
        self.assertIsNotNone(server['adminPass'])
        self.assertTrue(
            self.user_cloud.delete_server(
                self.server_name, wait=True))
        self.assertIsNone(self.user_cloud.get_server(self.server_name))

    def test_list_all_servers(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            wait=True)
        # We're going to get servers from other tests, but that's ok, as long
        # as we get the server we created with the demo user.
        found_server = False
        for s in self.operator_cloud.list_servers(all_projects=True):
            if s.name == server.name:
                found_server = True
        self.assertTrue(found_server)

    def test_list_all_servers_bad_permissions(self):
        # Normal users are not allowed to pass all_projects=True
        self.assertRaises(
            exc.OpenStackCloudException,
            self.user_cloud.list_servers,
            all_projects=True)

    def test_create_server_image_flavor_dict(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image={'id': self.image.id},
            flavor={'id': self.flavor.id},
            wait=True)
        self.assertEqual(self.server_name, server['name'])
        self.assertEqual(self.image.id, server['image']['id'])
        self.assertEqual(self.flavor.id, server['flavor']['id'])
        self.assertIsNotNone(server['adminPass'])
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        self.assertIsNone(self.user_cloud.get_server(self.server_name))

    def test_get_server_console(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            wait=True)
        # _get_server_console_output does not trap HTTP exceptions, so this
        # returning a string tests that the call is correct. Testing that
        # the cloud returns actual data in the output is out of scope.
        log = self.user_cloud._get_server_console_output(server_id=server.id)
        self.assertTrue(isinstance(log, six.string_types))

    def test_get_server_console_name_or_id(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            wait=True)
        log = self.user_cloud.get_server_console(server=self.server_name)
        self.assertTrue(isinstance(log, six.string_types))

    def test_list_availability_zone_names(self):
        self.assertEqual(
            ['nova'], self.user_cloud.list_availability_zone_names())

    def test_get_server_console_bad_server(self):
        self.assertRaises(
            exc.OpenStackCloudException,
            self.user_cloud.get_server_console,
            server=self.server_name)

    def test_create_and_delete_server_with_admin_pass(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            admin_pass='sheiqu9loegahSh',
            wait=True)
        self.assertEqual(self.server_name, server['name'])
        self.assertEqual(self.image.id, server['image']['id'])
        self.assertEqual(self.flavor.id, server['flavor']['id'])
        self.assertEqual(server['adminPass'], 'sheiqu9loegahSh')
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        self.assertIsNone(self.user_cloud.get_server(self.server_name))

    def test_get_image_id(self):
        self.assertEqual(
            self.image.id, self.user_cloud.get_image_id(self.image.id))
        self.assertEqual(
            self.image.id, self.user_cloud.get_image_id(self.image.name))

    def test_get_image_name(self):
        self.assertEqual(
            self.image.name, self.user_cloud.get_image_name(self.image.id))
        self.assertEqual(
            self.image.name, self.user_cloud.get_image_name(self.image.name))

    def _assert_volume_attach(self, server, volume_id=None, image=''):
        self.assertEqual(self.server_name, server['name'])
        self.assertEqual(image, server['image'])
        self.assertEqual(self.flavor.id, server['flavor']['id'])
        volumes = self.user_cloud.get_volumes(server)
        self.assertEqual(1, len(volumes))
        volume = volumes[0]
        if volume_id:
            self.assertEqual(volume_id, volume['id'])
        else:
            volume_id = volume['id']
        self.assertEqual(1, len(volume['attachments']))
        self.assertEqual(server['id'], volume['attachments'][0]['server_id'])
        return volume_id

    def test_create_boot_from_volume_image(self):
        self.skipTest('Volume functional tests temporarily disabled')
        if not self.user_cloud.has_service('volume'):
            self.skipTest('volume service not supported by cloud')
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            boot_from_volume=True,
            volume_size=1,
            wait=True)
        volume_id = self._assert_volume_attach(server)
        volume = self.user_cloud.get_volume(volume_id)
        self.assertIsNotNone(volume)
        self.assertEqual(volume['name'], volume['display_name'])
        self.assertTrue(volume['bootable'])
        self.assertEqual(server['id'], volume['attachments'][0]['server_id'])
        self.assertTrue(self.user_cloud.delete_server(server.id, wait=True))
        self._wait_for_detach(volume.id)
        self.assertTrue(self.user_cloud.delete_volume(volume.id, wait=True))
        self.assertIsNone(self.user_cloud.get_server(server.id))
        self.assertIsNone(self.user_cloud.get_volume(volume.id))

    def _wait_for_detach(self, volume_id):
        # Volumes do not show up as unattached for a bit immediately after
        # deleting a server that had had a volume attached. Yay for eventual
        # consistency!
        for count in utils.iterate_timeout(
                60,
                'Timeout waiting for volume {volume_id} to detach'.format(
                    volume_id=volume_id)):
            volume = self.user_cloud.get_volume(volume_id)
            if volume.status in (
                    'available', 'error',
                    'error_restoring', 'error_extending'):
                return

    def test_create_terminate_volume_image(self):
        self.skipTest('Volume functional tests temporarily disabled')
        if not self.user_cloud.has_service('volume'):
            self.skipTest('volume service not supported by cloud')
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            boot_from_volume=True,
            terminate_volume=True,
            volume_size=1,
            wait=True)
        volume_id = self._assert_volume_attach(server)
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        volume = self.user_cloud.get_volume(volume_id)
        # We can either get None (if the volume delete was quick), or a volume
        # that is in the process of being deleted.
        if volume:
            self.assertEqual('deleting', volume.status)
        self.assertIsNone(self.user_cloud.get_server(self.server_name))

    def test_create_boot_from_volume_preexisting(self):
        self.skipTest('Volume functional tests temporarily disabled')
        if not self.user_cloud.has_service('volume'):
            self.skipTest('volume service not supported by cloud')
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        volume = self.user_cloud.create_volume(
            size=1, name=self.server_name, image=self.image, wait=True)
        self.addCleanup(self.user_cloud.delete_volume, volume.id)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=None,
            flavor=self.flavor,
            boot_volume=volume,
            volume_size=1,
            wait=True)
        volume_id = self._assert_volume_attach(server, volume_id=volume['id'])
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        volume = self.user_cloud.get_volume(volume_id)
        self.assertIsNotNone(volume)
        self.assertEqual(volume['name'], volume['display_name'])
        self.assertTrue(volume['bootable'])
        self.assertEqual([], volume['attachments'])
        self._wait_for_detach(volume.id)
        self.assertTrue(self.user_cloud.delete_volume(volume_id))
        self.assertIsNone(self.user_cloud.get_server(self.server_name))
        self.assertIsNone(self.user_cloud.get_volume(volume_id))

    def test_create_boot_attach_volume(self):
        self.skipTest('Volume functional tests temporarily disabled')
        if not self.user_cloud.has_service('volume'):
            self.skipTest('volume service not supported by cloud')
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        volume = self.user_cloud.create_volume(
            size=1, name=self.server_name, image=self.image, wait=True)
        self.addCleanup(self.user_cloud.delete_volume, volume['id'])
        server = self.user_cloud.create_server(
            name=self.server_name,
            flavor=self.flavor,
            image=self.image,
            boot_from_volume=False,
            volumes=[volume],
            wait=True)
        volume_id = self._assert_volume_attach(
            server, volume_id=volume['id'], image={'id': self.image['id']})
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        volume = self.user_cloud.get_volume(volume_id)
        self.assertIsNotNone(volume)
        self.assertEqual(volume['name'], volume['display_name'])
        self.assertEqual([], volume['attachments'])
        self._wait_for_detach(volume.id)
        self.assertTrue(self.user_cloud.delete_volume(volume_id))
        self.assertIsNone(self.user_cloud.get_server(self.server_name))
        self.assertIsNone(self.user_cloud.get_volume(volume_id))

    def test_create_boot_from_volume_preexisting_terminate(self):
        self.skipTest('Volume functional tests temporarily disabled')
        if not self.user_cloud.has_service('volume'):
            self.skipTest('volume service not supported by cloud')
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        volume = self.user_cloud.create_volume(
            size=1, name=self.server_name, image=self.image, wait=True)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=None,
            flavor=self.flavor,
            boot_volume=volume,
            terminate_volume=True,
            volume_size=1,
            wait=True)
        volume_id = self._assert_volume_attach(server, volume_id=volume['id'])
        self.assertTrue(
            self.user_cloud.delete_server(self.server_name, wait=True))
        volume = self.user_cloud.get_volume(volume_id)
        # We can either get None (if the volume delete was quick), or a volume
        # that is in the process of being deleted.
        if volume:
            self.assertEqual('deleting', volume.status)
        self.assertIsNone(self.user_cloud.get_server(self.server_name))

    def test_create_image_snapshot_wait_active(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        server = self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            admin_pass='sheiqu9loegahSh',
            wait=True)
        image = self.user_cloud.create_image_snapshot(
            'test-snapshot', server, wait=True)
        self.addCleanup(self.user_cloud.delete_image, image['id'])
        self.assertEqual('active', image['status'])

    def test_set_and_delete_metadata(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            wait=True)
        self.user_cloud.set_server_metadata(self.server_name,
                                            {'key1': 'value1',
                                             'key2': 'value2'})
        updated_server = self.user_cloud.get_server(self.server_name)
        self.assertEqual(set(updated_server.metadata.items()),
                         set({'key1': 'value1', 'key2': 'value2'}.items()))

        self.user_cloud.set_server_metadata(self.server_name,
                                            {'key2': 'value3'})
        updated_server = self.user_cloud.get_server(self.server_name)
        self.assertEqual(set(updated_server.metadata.items()),
                         set({'key1': 'value1', 'key2': 'value3'}.items()))

        self.user_cloud.delete_server_metadata(self.server_name, ['key2'])
        updated_server = self.user_cloud.get_server(self.server_name)
        self.assertEqual(set(updated_server.metadata.items()),
                         set({'key1': 'value1'}.items()))

        self.user_cloud.delete_server_metadata(self.server_name, ['key1'])
        updated_server = self.user_cloud.get_server(self.server_name)
        self.assertEqual(set(updated_server.metadata.items()),
                         set([]))

        self.assertRaises(
            exc.OpenStackCloudURINotFound,
            self.user_cloud.delete_server_metadata,
            self.server_name,
            ['key1'])

    def test_update_server(self):
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            wait=True)
        server_updated = self.user_cloud.update_server(
            self.server_name,
            name='new_name'
        )
        self.assertEqual('new_name', server_updated['name'])

    def test_get_compute_usage(self):
        '''Test usage functionality'''
        # Add a server so that we can know we have usage
        self.addCleanup(self._cleanup_servers_and_volumes, self.server_name)
        self.user_cloud.create_server(
            name=self.server_name,
            image=self.image,
            flavor=self.flavor,
            wait=True)
        start = datetime.datetime.now() - datetime.timedelta(seconds=5)
        usage = self.operator_cloud.get_compute_usage('demo', start)
        self.add_info_on_exception('usage', usage)
        self.assertIsNotNone(usage)
        self.assertIn('total_hours', usage)
        self.assertIn('started_at', usage)
        self.assertEqual(start.isoformat(), usage['started_at'])
        self.assertIn('location', usage)

openstacksdk-0.11.3/openstack/tests/functional/cloud/test_magnum_services.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_magnum_services
--------------------

Functional tests for `shade` services method.
""" from openstack.tests.functional.cloud import base class TestMagnumServices(base.BaseFunctionalTestCase): def setUp(self): super(TestMagnumServices, self).setUp() if not self.operator_cloud.has_service('container-infra'): self.skipTest('Container service not supported by cloud') def test_magnum_services(self): '''Test magnum services functionality''' # Test that we can list services services = self.operator_cloud.list_magnum_services() self.assertEqual(1, len(services)) self.assertEqual(services[0]['id'], 1) self.assertEqual('up', services[0]['state']) self.assertEqual('magnum-conductor', services[0]['binary']) self.assertGreater(services[0]['report_count'], 0) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_server_group.py0000666000175100017510000000266313236151340027054 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_server_group ---------------------------------- Functional tests for `shade` server_group resource. 
""" from openstack.tests.functional.cloud import base class TestServerGroup(base.BaseFunctionalTestCase): def test_server_group(self): server_group_name = self.getUniqueString() self.addCleanup(self.cleanup, server_group_name) server_group = self.user_cloud.create_server_group( server_group_name, ['affinity']) server_group_ids = [v['id'] for v in self.user_cloud.list_server_groups()] self.assertIn(server_group['id'], server_group_ids) self.user_cloud.delete_server_group(server_group_name) def cleanup(self, server_group_name): server_group = self.user_cloud.get_server_group(server_group_name) if server_group: self.user_cloud.delete_server_group(server_group['id']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_router.py0000666000175100017510000003141113236151340025643 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_router ---------------------------------- Functional tests for `shade` router methods. 
""" import ipaddress from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base EXPECTED_TOPLEVEL_FIELDS = ( 'id', 'name', 'admin_state_up', 'external_gateway_info', 'tenant_id', 'routes', 'status' ) EXPECTED_GW_INFO_FIELDS = ('network_id', 'enable_snat', 'external_fixed_ips') class TestRouter(base.BaseFunctionalTestCase): def setUp(self): super(TestRouter, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') self.router_prefix = self.getUniqueString('router') self.network_prefix = self.getUniqueString('network') self.subnet_prefix = self.getUniqueString('subnet') # NOTE(Shrews): Order matters! self.addCleanup(self._cleanup_networks) self.addCleanup(self._cleanup_subnets) self.addCleanup(self._cleanup_routers) def _cleanup_routers(self): exception_list = list() for router in self.operator_cloud.list_routers(): if router['name'].startswith(self.router_prefix): try: self.operator_cloud.delete_router(router['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_networks(self): exception_list = list() for network in self.operator_cloud.list_networks(): if network['name'].startswith(self.network_prefix): try: self.operator_cloud.delete_network(network['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_subnets(self): exception_list = list() for subnet in self.operator_cloud.list_subnets(): if subnet['name'].startswith(self.subnet_prefix): try: self.operator_cloud.delete_subnet(subnet['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_router_basic(self): net1_name = self.network_prefix + '_net1' net1 = self.operator_cloud.create_network( 
            name=net1_name, external=True)

        router_name = self.router_prefix + '_create_basic'
        router = self.operator_cloud.create_router(
            name=router_name,
            admin_state_up=True,
            ext_gateway_net_id=net1['id'],
        )

        for field in EXPECTED_TOPLEVEL_FIELDS:
            self.assertIn(field, router)

        ext_gw_info = router['external_gateway_info']
        for field in EXPECTED_GW_INFO_FIELDS:
            self.assertIn(field, ext_gw_info)

        self.assertEqual(router_name, router['name'])
        self.assertEqual('ACTIVE', router['status'])
        self.assertEqual(net1['id'], ext_gw_info['network_id'])
        self.assertTrue(ext_gw_info['enable_snat'])

    def test_create_router_project(self):
        project = self.operator_cloud.get_project('demo')
        self.assertIsNotNone(project)
        proj_id = project['id']
        net1_name = self.network_prefix + '_net1'
        net1 = self.operator_cloud.create_network(
            name=net1_name, external=True, project_id=proj_id)

        router_name = self.router_prefix + '_create_project'
        router = self.operator_cloud.create_router(
            name=router_name,
            admin_state_up=True,
            ext_gateway_net_id=net1['id'],
            project_id=proj_id
        )

        for field in EXPECTED_TOPLEVEL_FIELDS:
            self.assertIn(field, router)

        ext_gw_info = router['external_gateway_info']
        for field in EXPECTED_GW_INFO_FIELDS:
            self.assertIn(field, ext_gw_info)

        self.assertEqual(router_name, router['name'])
        self.assertEqual('ACTIVE', router['status'])
        self.assertEqual(proj_id, router['tenant_id'])
        self.assertEqual(net1['id'], ext_gw_info['network_id'])
        self.assertTrue(ext_gw_info['enable_snat'])

    def _create_and_verify_advanced_router(self,
                                           external_cidr,
                                           external_gateway_ip=None):
        # external_cidr must be passed in as unicode (u'')
        # NOTE(Shrews): The arguments are needed because these tests
        # will run in parallel and we want to make sure that each test
        # is using different resources to prevent race conditions.
        net1_name = self.network_prefix + '_net1'
        sub1_name = self.subnet_prefix + '_sub1'
        net1 = self.operator_cloud.create_network(
            name=net1_name, external=True)
        sub1 = self.operator_cloud.create_subnet(
            net1['id'], external_cidr, subnet_name=sub1_name,
            gateway_ip=external_gateway_ip
        )

        ip_net = ipaddress.IPv4Network(external_cidr)
        last_ip = str(list(ip_net.hosts())[-1])

        router_name = self.router_prefix + '_create_advanced'
        router = self.operator_cloud.create_router(
            name=router_name,
            admin_state_up=False,
            ext_gateway_net_id=net1['id'],
            enable_snat=False,
            ext_fixed_ips=[
                {'subnet_id': sub1['id'], 'ip_address': last_ip}
            ]
        )

        for field in EXPECTED_TOPLEVEL_FIELDS:
            self.assertIn(field, router)

        ext_gw_info = router['external_gateway_info']
        for field in EXPECTED_GW_INFO_FIELDS:
            self.assertIn(field, ext_gw_info)

        self.assertEqual(router_name, router['name'])
        self.assertEqual('ACTIVE', router['status'])
        self.assertFalse(router['admin_state_up'])
        self.assertEqual(1, len(ext_gw_info['external_fixed_ips']))
        self.assertEqual(
            sub1['id'],
            ext_gw_info['external_fixed_ips'][0]['subnet_id']
        )
        self.assertEqual(
            last_ip,
            ext_gw_info['external_fixed_ips'][0]['ip_address']
        )

        return router

    def test_create_router_advanced(self):
        self._create_and_verify_advanced_router(external_cidr=u'10.2.2.0/24')

    def test_add_remove_router_interface(self):
        router = self._create_and_verify_advanced_router(
            external_cidr=u'10.3.3.0/24')
        net_name = self.network_prefix + '_intnet1'
        sub_name = self.subnet_prefix + '_intsub1'
        net = self.operator_cloud.create_network(name=net_name)
        sub = self.operator_cloud.create_subnet(
            net['id'], '10.4.4.0/24', subnet_name=sub_name,
            gateway_ip='10.4.4.1'
        )

        iface = self.operator_cloud.add_router_interface(
            router, subnet_id=sub['id'])
        self.assertIsNone(
            self.operator_cloud.remove_router_interface(
                router, subnet_id=sub['id'])
        )

        # Test return values *after* the interface is detached so the
        # resources we've created can be cleaned up if these asserts fail.
        self.assertIsNotNone(iface)
        for key in ('id', 'subnet_id', 'port_id', 'tenant_id'):
            self.assertIn(key, iface)
        self.assertEqual(router['id'], iface['id'])
        self.assertEqual(sub['id'], iface['subnet_id'])

    def test_list_router_interfaces(self):
        router = self._create_and_verify_advanced_router(
            external_cidr=u'10.5.5.0/24')
        net_name = self.network_prefix + '_intnet1'
        sub_name = self.subnet_prefix + '_intsub1'
        net = self.operator_cloud.create_network(name=net_name)
        sub = self.operator_cloud.create_subnet(
            net['id'], '10.6.6.0/24', subnet_name=sub_name,
            gateway_ip='10.6.6.1'
        )

        iface = self.operator_cloud.add_router_interface(
            router, subnet_id=sub['id'])
        all_ifaces = self.operator_cloud.list_router_interfaces(router)
        int_ifaces = self.operator_cloud.list_router_interfaces(
            router, interface_type='internal')
        ext_ifaces = self.operator_cloud.list_router_interfaces(
            router, interface_type='external')
        self.assertIsNone(
            self.operator_cloud.remove_router_interface(
                router, subnet_id=sub['id'])
        )

        # Test return values *after* the interface is detached so the
        # resources we've created can be cleaned up if these asserts fail.
        self.assertIsNotNone(iface)
        self.assertEqual(2, len(all_ifaces))
        self.assertEqual(1, len(int_ifaces))
        self.assertEqual(1, len(ext_ifaces))

        ext_fixed_ips = router['external_gateway_info']['external_fixed_ips']
        self.assertEqual(ext_fixed_ips[0]['subnet_id'],
                         ext_ifaces[0]['fixed_ips'][0]['subnet_id'])
        self.assertEqual(sub['id'], int_ifaces[0]['fixed_ips'][0]['subnet_id'])

    def test_update_router_name(self):
        router = self._create_and_verify_advanced_router(
            external_cidr=u'10.7.7.0/24')

        new_name = self.router_prefix + '_update_name'
        updated = self.operator_cloud.update_router(
            router['id'], name=new_name)
        self.assertIsNotNone(updated)

        for field in EXPECTED_TOPLEVEL_FIELDS:
            self.assertIn(field, updated)

        # Name is the only change we expect
        self.assertEqual(new_name, updated['name'])

        # Validate nothing else changed
        self.assertEqual(router['status'], updated['status'])
        self.assertEqual(router['admin_state_up'], updated['admin_state_up'])
        self.assertEqual(router['external_gateway_info'],
                         updated['external_gateway_info'])

    def test_update_router_admin_state(self):
        router = self._create_and_verify_advanced_router(
            external_cidr=u'10.8.8.0/24')

        updated = self.operator_cloud.update_router(
            router['id'], admin_state_up=True)
        self.assertIsNotNone(updated)

        for field in EXPECTED_TOPLEVEL_FIELDS:
            self.assertIn(field, updated)

        # admin_state_up is the only change we expect
        self.assertTrue(updated['admin_state_up'])
        self.assertNotEqual(router['admin_state_up'],
                            updated['admin_state_up'])

        # Validate nothing else changed
        self.assertEqual(router['status'], updated['status'])
        self.assertEqual(router['name'], updated['name'])
        self.assertEqual(router['external_gateway_info'],
                         updated['external_gateway_info'])

    def test_update_router_ext_gw_info(self):
        router = self._create_and_verify_advanced_router(
            external_cidr=u'10.9.9.0/24')

        # create a new subnet
        existing_net_id = router['external_gateway_info']['network_id']
        sub_name = self.subnet_prefix + '_update'
        sub = self.operator_cloud.create_subnet(
            existing_net_id, '10.10.10.0/24', subnet_name=sub_name,
            gateway_ip='10.10.10.1'
        )

        updated = self.operator_cloud.update_router(
            router['id'],
            ext_gateway_net_id=existing_net_id,
            ext_fixed_ips=[
                {'subnet_id': sub['id'], 'ip_address': '10.10.10.77'}
            ]
        )
        self.assertIsNotNone(updated)

        for field in EXPECTED_TOPLEVEL_FIELDS:
            self.assertIn(field, updated)

        # external_gateway_info is the only change we expect
        ext_gw_info = updated['external_gateway_info']
        self.assertEqual(1, len(ext_gw_info['external_fixed_ips']))
        self.assertEqual(
            sub['id'],
            ext_gw_info['external_fixed_ips'][0]['subnet_id']
        )
        self.assertEqual(
            '10.10.10.77',
            ext_gw_info['external_fixed_ips'][0]['ip_address']
        )

        # Validate nothing else changed
        self.assertEqual(router['status'], updated['status'])
        self.assertEqual(router['name'], updated['name'])
        self.assertEqual(router['admin_state_up'], updated['admin_state_up'])

openstacksdk-0.11.3/openstack/tests/functional/cloud/__init__.py

openstacksdk-0.11.3/openstack/tests/functional/cloud/test_keypairs.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" test_keypairs ---------------------------------- Functional tests for `shade` keypairs methods """ from openstack.tests import fakes from openstack.tests.functional.cloud import base class TestKeypairs(base.BaseFunctionalTestCase): def test_create_and_delete(self): '''Test creating and deleting keypairs functionality''' name = self.getUniqueString('keypair') self.addCleanup(self.user_cloud.delete_keypair, name) keypair = self.user_cloud.create_keypair(name=name) self.assertEqual(keypair['name'], name) self.assertIsNotNone(keypair['public_key']) self.assertIsNotNone(keypair['private_key']) self.assertIsNotNone(keypair['fingerprint']) self.assertEqual(keypair['type'], 'ssh') keypairs = self.user_cloud.list_keypairs() self.assertIn(name, [k['name'] for k in keypairs]) self.user_cloud.delete_keypair(name) keypairs = self.user_cloud.list_keypairs() self.assertNotIn(name, [k['name'] for k in keypairs]) def test_create_and_delete_with_key(self): '''Test creating and deleting keypairs functionality''' name = self.getUniqueString('keypair') self.addCleanup(self.user_cloud.delete_keypair, name) keypair = self.user_cloud.create_keypair( name=name, public_key=fakes.FAKE_PUBLIC_KEY) self.assertEqual(keypair['name'], name) self.assertIsNotNone(keypair['public_key']) self.assertIsNone(keypair['private_key']) self.assertIsNotNone(keypair['fingerprint']) self.assertEqual(keypair['type'], 'ssh') keypairs = self.user_cloud.list_keypairs() self.assertIn(name, [k['name'] for k in keypairs]) self.user_cloud.delete_keypair(name) keypairs = self.user_cloud.list_keypairs() self.assertNotIn(name, [k['name'] for k in keypairs]) openstacksdk-0.11.3/openstack/tests/functional/cloud/base.py0000666000175100017510000000644013236151364024210 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(shade) Merge this with openstack.tests.functional.base import os import openstack.config as occ import openstack.cloud from openstack.tests import base class BaseFunctionalTestCase(base.TestCase): def setUp(self): super(BaseFunctionalTestCase, self).setUp() self._demo_name = os.environ.get('OPENSTACKSDK_DEMO_CLOUD', 'devstack') self._op_name = os.environ.get( 'OPENSTACKSDK_OPERATOR_CLOUD', 'devstack-admin') self.config = occ.OpenStackConfig() self._set_user_cloud() self._set_operator_cloud() self.identity_version = \ self.operator_cloud.cloud_config.get_api_version('identity') def _set_user_cloud(self, **kwargs): user_config = self.config.get_one( cloud=self._demo_name, **kwargs) self.user_cloud = openstack.cloud.OpenStackCloud( cloud_config=user_config) def _set_operator_cloud(self, **kwargs): operator_config = self.config.get_one( cloud=self._op_name, **kwargs) self.operator_cloud = openstack.cloud.OpenStackCloud( cloud_config=operator_config) def pick_image(self): images = self.user_cloud.list_images() self.add_info_on_exception('images', images) image_name = os.environ.get('OPENSTACKSDK_IMAGE') if image_name: for image in images: if image.name == image_name: return image self.assertFalse( "Cloud does not have {image}".format(image=image_name)) for image in images: if image.name.startswith('cirros') and image.name.endswith('-uec'): return image for image in images: if (image.name.startswith('cirros') and image.disk_format == 'qcow2'): return image for image in images: if image.name.lower().startswith('ubuntu'): return image for image in images: if 
image.name.lower().startswith('centos'): return image self.assertFalse('no sensible image available') class KeystoneBaseFunctionalTestCase(BaseFunctionalTestCase): def setUp(self): super(KeystoneBaseFunctionalTestCase, self).setUp() use_keystone_v2 = os.environ.get('OPENSTACKSDK_USE_KEYSTONE_V2', False) if use_keystone_v2: # keystone v2 has special behavior for the admin # interface and some of the operations, so make a new cloud # object with interface set to admin. # We only do it for keystone tests on v2 because otherwise # the admin interface is not a thing that wants to actually # be used self._set_operator_cloud(interface='admin') openstacksdk-0.11.3/openstack/tests/functional/cloud/test_volume_backup.py0000666000175100017510000000633313236151340027164 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.tests.functional.cloud import base class TestVolume(base.BaseFunctionalTestCase): # Creating a volume backup is incredibly slow. 
TIMEOUT_SCALING_FACTOR = 1.5 def setUp(self): super(TestVolume, self).setUp() self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') if not self.user_cloud.has_service('object-store'): self.skipTest('volume backups require swift') def test_create_get_delete_volume_backup(self): volume = self.user_cloud.create_volume( display_name=self.getUniqueString(), size=1) self.addCleanup(self.user_cloud.delete_volume, volume['id']) backup_name_1 = self.getUniqueString() backup_desc_1 = self.getUniqueString() backup = self.user_cloud.create_volume_backup( volume_id=volume['id'], name=backup_name_1, description=backup_desc_1, wait=True) self.assertEqual(backup_name_1, backup['name']) backup = self.user_cloud.get_volume_backup(backup['id']) self.assertEqual("available", backup['status']) self.assertEqual(backup_desc_1, backup['description']) self.user_cloud.delete_volume_backup(backup['id'], wait=True) self.assertIsNone(self.user_cloud.get_volume_backup(backup['id'])) def test_list_volume_backups(self): vol1 = self.user_cloud.create_volume( display_name=self.getUniqueString(), size=1) self.addCleanup(self.user_cloud.delete_volume, vol1['id']) # We create 2 volumes to create 2 backups. We could have created 2 # backups from the same volume but taking 2 successive backups seems # to be race-condition prone. And I didn't want to use an ugly sleep() # here. 
vol2 = self.user_cloud.create_volume( display_name=self.getUniqueString(), size=1) self.addCleanup(self.user_cloud.delete_volume, vol2['id']) backup_name_1 = self.getUniqueString() backup = self.user_cloud.create_volume_backup( volume_id=vol1['id'], name=backup_name_1) self.addCleanup(self.user_cloud.delete_volume_backup, backup['id']) backup = self.user_cloud.create_volume_backup(volume_id=vol2['id']) self.addCleanup(self.user_cloud.delete_volume_backup, backup['id']) backups = self.user_cloud.list_volume_backups() self.assertEqual(2, len(backups)) backups = self.user_cloud.list_volume_backups( search_opts={"name": backup_name_1}) self.assertEqual(1, len(backups)) self.assertEqual(backup_name_1, backups[0]['name']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_domain.py0000666000175100017510000001200113236151364025572 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_domain ---------------------------------- Functional tests for `shade` keystone domain resource. 
""" import openstack.cloud from openstack.tests.functional.cloud import base class TestDomain(base.BaseFunctionalTestCase): def setUp(self): super(TestDomain, self).setUp() i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support domains') self.domain_prefix = self.getUniqueString('domain') self.addCleanup(self._cleanup_domains) def _cleanup_domains(self): exception_list = list() for domain in self.operator_cloud.list_domains(): if domain['name'].startswith(self.domain_prefix): try: self.operator_cloud.delete_domain(domain['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise openstack.cloud.OpenStackCloudException( '\n'.join(exception_list)) def test_search_domains(self): domain_name = self.domain_prefix + '_search' # Shouldn't find any domain with this name yet results = self.operator_cloud.search_domains( filters=dict(name=domain_name)) self.assertEqual(0, len(results)) # Now create a new domain domain = self.operator_cloud.create_domain(domain_name) self.assertEqual(domain_name, domain['name']) # Now we should find only the new domain results = self.operator_cloud.search_domains( filters=dict(name=domain_name)) self.assertEqual(1, len(results)) self.assertEqual(domain_name, results[0]['name']) # Now we search by name with name_or_id, should find only new domain results = self.operator_cloud.search_domains(name_or_id=domain_name) self.assertEqual(1, len(results)) self.assertEqual(domain_name, results[0]['name']) def test_update_domain(self): domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) updated = self.operator_cloud.update_domain( domain['id'], name='updated name', description='updated 
description', enabled=False) self.assertEqual('updated name', updated['name']) self.assertEqual('updated description', updated['description']) self.assertFalse(updated['enabled']) # Now we update domain by name with name_or_id updated = self.operator_cloud.update_domain( None, name_or_id='updated name', name='updated name 2', description='updated description 2', enabled=True) self.assertEqual('updated name 2', updated['name']) self.assertEqual('updated description 2', updated['description']) self.assertTrue(updated['enabled']) def test_delete_domain(self): domain = self.operator_cloud.create_domain(self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) deleted = self.operator_cloud.delete_domain(domain['id']) self.assertTrue(deleted) # Now we delete domain by name with name_or_id domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) deleted = self.operator_cloud.delete_domain(None, domain['name']) self.assertTrue(deleted) # Finally, we assert we get False from delete_domain if domain does # not exist domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) deleted = self.operator_cloud.delete_domain(None, 'bogus_domain') self.assertFalse(deleted) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_volume.py0000666000175100017510000001410713236151340025635 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_volume ---------------------------------- Functional tests for `shade` block storage methods. """ from fixtures import TimeoutException from testtools import content from openstack.cloud import exc from openstack.tests.functional.cloud import base from openstack import utils class TestVolume(base.BaseFunctionalTestCase): # Creating and deleting volumes is slow TIMEOUT_SCALING_FACTOR = 1.5 def setUp(self): super(TestVolume, self).setUp() self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') def test_volumes(self): '''Test volume and snapshot functionality''' volume_name = self.getUniqueString() snapshot_name = self.getUniqueString() self.addDetail('volume', content.text_content(volume_name)) self.addCleanup(self.cleanup, volume_name, snapshot_name=snapshot_name) volume = self.user_cloud.create_volume( display_name=volume_name, size=1) snapshot = self.user_cloud.create_volume_snapshot( volume['id'], display_name=snapshot_name ) ret_volume = self.user_cloud.get_volume_by_id(volume['id']) self.assertEqual(volume['id'], ret_volume['id']) volume_ids = [v['id'] for v in self.user_cloud.list_volumes()] self.assertIn(volume['id'], volume_ids) snapshot_list = self.user_cloud.list_volume_snapshots() snapshot_ids = [s['id'] for s in snapshot_list] self.assertIn(snapshot['id'], snapshot_ids) ret_snapshot = self.user_cloud.get_volume_snapshot_by_id( snapshot['id']) self.assertEqual(snapshot['id'], ret_snapshot['id']) 
self.user_cloud.delete_volume_snapshot(snapshot_name, wait=True) self.user_cloud.delete_volume(volume_name, wait=True) def test_volume_to_image(self): '''Test volume export to image functionality''' volume_name = self.getUniqueString() image_name = self.getUniqueString() self.addDetail('volume', content.text_content(volume_name)) self.addCleanup(self.cleanup, volume_name, image_name=image_name) volume = self.user_cloud.create_volume( display_name=volume_name, size=1) image = self.user_cloud.create_image( image_name, volume=volume, wait=True) volume_ids = [v['id'] for v in self.user_cloud.list_volumes()] self.assertIn(volume['id'], volume_ids) image_list = self.user_cloud.list_images() image_ids = [s['id'] for s in image_list] self.assertIn(image['id'], image_ids) self.user_cloud.delete_image(image_name, wait=True) self.user_cloud.delete_volume(volume_name, wait=True) def cleanup(self, volume, snapshot_name=None, image_name=None): # Need to delete snapshots before volumes if snapshot_name: snapshot = self.user_cloud.get_volume_snapshot(snapshot_name) if snapshot: self.user_cloud.delete_volume_snapshot( snapshot_name, wait=True) if image_name: image = self.user_cloud.get_image(image_name) if image: self.user_cloud.delete_image(image_name, wait=True) if not isinstance(volume, list): self.user_cloud.delete_volume(volume, wait=True) else: # We have more than one volume to clean up - submit all of the # deletes without wait, then poll until none of them are found # in the volume list anymore for v in volume: self.user_cloud.delete_volume(v, wait=False) try: for count in utils.iterate_timeout( 180, "Timeout waiting for volume cleanup"): found = False for existing in self.user_cloud.list_volumes(): for v in volume: if v['id'] == existing['id']: found = True break if found: break if not found: break except (exc.OpenStackCloudTimeout, TimeoutException): # NOTE(slaweq): oops, some volumes are still not removed, # so we should try to force delete them once again and move # forward
for existing in self.user_cloud.list_volumes(): for v in volume: if v['id'] == existing['id']: self.operator_cloud.delete_volume( v, wait=False, force=True) def test_list_volumes_pagination(self): '''Test pagination for list volumes functionality''' volumes = [] # the number of created volumes needs to be higher than # CONF.osapi_max_limit but not higher than volume quotas for # the test user in the tenant(default quotas is set to 10) num_volumes = 8 for i in range(num_volumes): name = self.getUniqueString() v = self.user_cloud.create_volume(display_name=name, size=1) volumes.append(v) self.addCleanup(self.cleanup, volumes) result = [] for i in self.user_cloud.list_volumes(): if i['name'] and i['name'].startswith(self.id()): result.append(i['id']) self.assertEqual( sorted([i['id'] for i in volumes]), sorted(result)) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_floating_ip_pool.py0000666000175100017510000000341213236151340027647 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip_pool ---------------------------------- Functional tests for floating IP pool resource (managed by nova) """ from openstack.tests.functional.cloud import base # When using nova-network, floating IP pools are created with nova-manage # command. # When using Neutron, floating IP pools in Nova are mapped from external # network names. 
This is only the case if the floating-ip-pools nova extension is # available. # For instance, for current implementation of hpcloud that's not true: # nova floating-ip-pool-list returns 404. class TestFloatingIPPool(base.BaseFunctionalTestCase): def setUp(self): super(TestFloatingIPPool, self).setUp() if not self.user_cloud._has_nova_extension('os-floating-ip-pools'): # Skip this test if the floating-ip-pools extension is not # available on the testing cloud self.skipTest( 'Floating IP pools extension is not available') def test_list_floating_ip_pools(self): pools = self.user_cloud.list_floating_ip_pools() if not pools: self.assertFalse('no floating-ip pool available') for pool in pools: self.assertIn('name', pool) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_image.py0000666000175100017510000001422713236151340025413 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_image ---------------------------------- Functional tests for `shade` image methods.
""" import filecmp import os import tempfile from openstack.tests.functional.cloud import base class TestImage(base.BaseFunctionalTestCase): def setUp(self): super(TestImage, self).setUp() self.image = self.pick_image() def test_create_image(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) finally: self.user_cloud.delete_image(image_name, wait=True) def test_download_image(self): test_image = tempfile.NamedTemporaryFile(delete=False) self.addCleanup(os.remove, test_image.name) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.addCleanup(self.user_cloud.delete_image, image_name, wait=True) output = os.path.join(tempfile.gettempdir(), self.getUniqueString()) self.user_cloud.download_image(image_name, output) self.addCleanup(os.remove, output) self.assertTrue(filecmp.cmp(test_image.name, output), "Downloaded contents don't match created image") def test_create_image_skip_duplicate(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: first_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) second_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.assertEqual(first_image.id, second_image.id) finally: self.user_cloud.delete_image(image_name, wait=True) def 
test_create_image_force_duplicate(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') first_image = None second_image = None try: first_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) second_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, allow_duplicates=True, wait=True) self.assertNotEqual(first_image.id, second_image.id) finally: if first_image: self.user_cloud.delete_image(first_image.id, wait=True) if second_image: self.user_cloud.delete_image(second_image.id, wait=True) def test_create_image_update_properties(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.user_cloud.update_image_properties( image=image, name=image_name, foo='bar') image = self.user_cloud.get_image(image_name) self.assertIn('foo', image.properties) self.assertEqual(image.properties['foo'], 'bar') finally: self.user_cloud.delete_image(image_name, wait=True) def test_get_image_by_id(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) image = self.user_cloud.get_image_by_id(image.id) self.assertEqual(image_name, image.name) self.assertEqual('raw', image.disk_format) finally: self.user_cloud.delete_image(image_name, 
wait=True) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_zone.py0000666000175100017510000000602113236151340025275 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_zone ---------------------------------- Functional tests for `shade` zone methods. """ from testtools import content from openstack.tests.functional.cloud import base class TestZone(base.BaseFunctionalTestCase): def setUp(self): super(TestZone, self).setUp() if not self.user_cloud.has_service('dns'): self.skipTest('dns service not supported by cloud') def test_zones(self): '''Test DNS zones functionality''' name = 'example.net.' 
zone_type = 'primary' email = 'test@example.net' description = 'Test zone' ttl = 3600 masters = None self.addDetail('zone', content.text_content(name)) self.addCleanup(self.cleanup, name) # Test we can create a zone and we get it returned zone = self.user_cloud.create_zone( name=name, zone_type=zone_type, email=email, description=description, ttl=ttl, masters=masters) self.assertEqual(zone['name'], name) self.assertEqual(zone['type'], zone_type.upper()) self.assertEqual(zone['email'], email) self.assertEqual(zone['description'], description) self.assertEqual(zone['ttl'], ttl) self.assertEqual(zone['masters'], []) # Test that we can list zones zones = self.user_cloud.list_zones() self.assertIsNotNone(zones) # Test we get the same zone with the get_zone method zone_get = self.user_cloud.get_zone(zone['id']) self.assertEqual(zone_get['id'], zone['id']) # Test the get method also works by name zone_get = self.user_cloud.get_zone(name) self.assertEqual(zone_get['name'], zone['name']) # Test we can update a field on the zone and only that field # is updated zone_update = self.user_cloud.update_zone(zone['id'], ttl=7200) self.assertEqual(zone_update['id'], zone['id']) self.assertEqual(zone_update['name'], zone['name']) self.assertEqual(zone_update['type'], zone['type']) self.assertEqual(zone_update['email'], zone['email']) self.assertEqual(zone_update['description'], zone['description']) self.assertEqual(zone_update['ttl'], 7200) self.assertEqual(zone_update['masters'], zone['masters']) # Test we can delete and get True returned zone_delete = self.user_cloud.delete_zone(zone['id']) self.assertTrue(zone_delete) def cleanup(self, name): self.user_cloud.delete_zone(name) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_aggregate.py0000666000175100017510000000375013236151340026256 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_aggregate ---------------------------------- Functional tests for `shade` aggregate resource. """ from openstack.tests.functional.cloud import base class TestAggregate(base.BaseFunctionalTestCase): def test_aggregates(self): aggregate_name = self.getUniqueString() availability_zone = self.getUniqueString() self.addCleanup(self.cleanup, aggregate_name) aggregate = self.operator_cloud.create_aggregate(aggregate_name) aggregate_ids = [v['id'] for v in self.operator_cloud.list_aggregates()] self.assertIn(aggregate['id'], aggregate_ids) aggregate = self.operator_cloud.update_aggregate( aggregate_name, availability_zone=availability_zone ) self.assertEqual(availability_zone, aggregate['availability_zone']) aggregate = self.operator_cloud.set_aggregate_metadata( aggregate_name, {'key': 'value'} ) self.assertIn('key', aggregate['metadata']) aggregate = self.operator_cloud.set_aggregate_metadata( aggregate_name, {'key': None} ) self.assertNotIn('key', aggregate['metadata']) self.operator_cloud.delete_aggregate(aggregate_name) def cleanup(self, aggregate_name): aggregate = self.operator_cloud.get_aggregate(aggregate_name) if aggregate: self.operator_cloud.delete_aggregate(aggregate_name) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_network.py0000666000175100017510000001036413236151340026020 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_network ---------------------------------- Functional tests for `shade` network methods. """ from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestNetwork(base.BaseFunctionalTestCase): def setUp(self): super(TestNetwork, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') self.network_name = self.getUniqueString('network') self.addCleanup(self._cleanup_networks) def _cleanup_networks(self): exception_list = list() for network in self.operator_cloud.list_networks(): if network['name'].startswith(self.network_name): try: self.operator_cloud.delete_network(network['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_network_basic(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertFalse(net1['shared']) self.assertFalse(net1['router:external']) self.assertTrue(net1['admin_state_up']) def test_get_network_by_id(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertFalse(net1['shared']) self.assertFalse(net1['router:external']) self.assertTrue(net1['admin_state_up']) ret_net1 = self.operator_cloud.get_network_by_id(net1.id) self.assertIn('id', ret_net1) self.assertEqual(self.network_name, ret_net1['name']) 
self.assertFalse(ret_net1['shared']) self.assertFalse(ret_net1['router:external']) self.assertTrue(ret_net1['admin_state_up']) def test_create_network_advanced(self): net1 = self.operator_cloud.create_network( name=self.network_name, shared=True, external=True, admin_state_up=False, ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertTrue(net1['router:external']) self.assertTrue(net1['shared']) self.assertFalse(net1['admin_state_up']) def test_create_network_provider_flat(self): existing_public = self.operator_cloud.search_networks( filters={'provider:network_type': 'flat'}) if existing_public: self.skipTest('Physical network already allocated') net1 = self.operator_cloud.create_network( name=self.network_name, shared=True, provider={ 'physical_network': 'public', 'network_type': 'flat', } ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertEqual('flat', net1['provider:network_type']) self.assertEqual('public', net1['provider:physical_network']) self.assertIsNone(net1['provider:segmentation_id']) def test_list_networks_filtered(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIsNotNone(net1) net2 = self.operator_cloud.create_network( name=self.network_name + 'other') self.assertIsNotNone(net2) match = self.operator_cloud.list_networks( filters=dict(name=self.network_name)) self.assertEqual(1, len(match)) self.assertEqual(net1['name'], match[0]['name']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_groups.py0000666000175100017510000000770513236151364025661 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_groups ---------------------------------- Functional tests for `shade` keystone group resource. """ import openstack.cloud from openstack.tests.functional.cloud import base class TestGroup(base.BaseFunctionalTestCase): def setUp(self): super(TestGroup, self).setUp() i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support groups') self.group_prefix = self.getUniqueString('group') self.addCleanup(self._cleanup_groups) def _cleanup_groups(self): exception_list = list() for group in self.operator_cloud.list_groups(): if group['name'].startswith(self.group_prefix): try: self.operator_cloud.delete_group(group['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise openstack.cloud.OpenStackCloudException( '\n'.join(exception_list)) def test_create_group(self): group_name = self.group_prefix + '_create' group = self.operator_cloud.create_group(group_name, 'test group') for key in ('id', 'name', 'description', 'domain_id'): self.assertIn(key, group) self.assertEqual(group_name, group['name']) self.assertEqual('test group', group['description']) def test_delete_group(self): group_name = self.group_prefix + '_delete' group = self.operator_cloud.create_group(group_name, 'test group') self.assertIsNotNone(group) self.assertTrue(self.operator_cloud.delete_group(group_name)) results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(0, 
len(results)) def test_delete_group_not_exists(self): self.assertFalse(self.operator_cloud.delete_group('xInvalidGroupx')) def test_search_groups(self): group_name = self.group_prefix + '_search' # Shouldn't find any group with this name yet results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(0, len(results)) # Now create a new group group = self.operator_cloud.create_group(group_name, 'test group') self.assertEqual(group_name, group['name']) # Now we should find only the new group results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(1, len(results)) self.assertEqual(group_name, results[0]['name']) def test_update_group(self): group_name = self.group_prefix + '_update' group_desc = 'test group' group = self.operator_cloud.create_group(group_name, group_desc) self.assertEqual(group_name, group['name']) self.assertEqual(group_desc, group['description']) updated_group_name = group_name + '_xyz' updated_group_desc = group_desc + ' updated' updated_group = self.operator_cloud.update_group( group_name, name=updated_group_name, description=updated_group_desc) self.assertEqual(updated_group_name, updated_group['name']) self.assertEqual(updated_group_desc, updated_group['description']) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_users.py0000666000175100017510000001500013236151364025466 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" test_users ---------------------------------- Functional tests for `shade` user methods. """ from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestUsers(base.KeystoneBaseFunctionalTestCase): def setUp(self): super(TestUsers, self).setUp() self.user_prefix = self.getUniqueString('user') self.addCleanup(self._cleanup_users) def _cleanup_users(self): exception_list = list() for user in self.operator_cloud.list_users(): if user['name'].startswith(self.user_prefix): try: self.operator_cloud.delete_user(user['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _create_user(self, **kwargs): domain_id = None i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver not in ('2', '2.0'): domain = self.operator_cloud.get_domain('default') domain_id = domain['id'] return self.operator_cloud.create_user(domain_id=domain_id, **kwargs) def test_list_users(self): users = self.operator_cloud.list_users() self.assertIsNotNone(users) self.assertNotEqual([], users) def test_get_user(self): user = self.operator_cloud.get_user('admin') self.assertIsNotNone(user) self.assertIn('id', user) self.assertIn('name', user) self.assertEqual('admin', user['name']) def test_search_users(self): users = self.operator_cloud.search_users(filters={'enabled': True}) self.assertIsNotNone(users) def test_search_users_jmespath(self): users = self.operator_cloud.search_users(filters="[?enabled]") self.assertIsNotNone(users) def test_create_user(self): user_name = self.user_prefix + '_create' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertEqual(user_name, user['name']) self.assertEqual(user_email, user['email']) self.assertTrue(user['enabled']) def test_delete_user(self): user_name = self.user_prefix + '_delete' user_email = 'nobody@nowhere.com' 
user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertTrue(self.operator_cloud.delete_user(user['id'])) def test_delete_user_not_found(self): self.assertFalse(self.operator_cloud.delete_user('does_not_exist')) def test_update_user(self): user_name = self.user_prefix + '_updatev3' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertTrue(user['enabled']) # Pass some keystone v3 params. This should work no matter which # version of keystone we are testing against. new_user = self.operator_cloud.update_user( user['id'], name=user_name + '2', email='somebody@nowhere.com', enabled=False, password='secret', description='') self.assertIsNotNone(new_user) self.assertEqual(user['id'], new_user['id']) self.assertEqual(user_name + '2', new_user['name']) self.assertEqual('somebody@nowhere.com', new_user['email']) self.assertFalse(new_user['enabled']) def test_update_user_password(self): user_name = self.user_prefix + '_password' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email, password='old_secret') self.assertIsNotNone(user) self.assertTrue(user['enabled']) # This should work for both v2 and v3 new_user = self.operator_cloud.update_user( user['id'], password='new_secret') self.assertIsNotNone(new_user) self.assertEqual(user['id'], new_user['id']) self.assertEqual(user_name, new_user['name']) self.assertEqual(user_email, new_user['email']) self.assertTrue(new_user['enabled']) self.assertTrue(self.operator_cloud.grant_role( 'Member', user=user['id'], project='demo', wait=True)) self.addCleanup( self.operator_cloud.revoke_role, 'Member', user=user['id'], project='demo', wait=True) new_cloud = self.operator_cloud.connect_as( user_id=user['id'], password='new_secret', project_name='demo') self.assertIsNotNone(new_cloud) location = new_cloud.current_location self.assertEqual(location['project']['name'], 'demo') 
self.assertIsNotNone(new_cloud.service_catalog) def test_users_and_groups(self): i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support groups') group_name = self.getUniqueString('group') self.addCleanup(self.operator_cloud.delete_group, group_name) # Create a group group = self.operator_cloud.create_group(group_name, 'test group') self.assertIsNotNone(group) # Create a user user_name = self.user_prefix + '_ug' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) # Add the user to the group self.operator_cloud.add_user_to_group(user_name, group_name) self.assertTrue( self.operator_cloud.is_user_in_group(user_name, group_name)) # Remove them from the group self.operator_cloud.remove_user_from_group(user_name, group_name) self.assertFalse( self.operator_cloud.is_user_in_group(user_name, group_name)) openstacksdk-0.11.3/openstack/tests/functional/cloud/test_port.py0000666000175100017510000001262613236151340025316 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_port ---------------------------------- Functional tests for `shade` port resource. 
""" import string import random from openstack.cloud.exc import OpenStackCloudException from openstack.tests.functional.cloud import base class TestPort(base.BaseFunctionalTestCase): def setUp(self): super(TestPort, self).setUp() # Skip Neutron tests if neutron is not present if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') # Generate a unique port name to allow concurrent tests self.new_port_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_ports) def _cleanup_ports(self): exception_list = list() for p in self.operator_cloud.list_ports(): if p['name'].startswith(self.new_port_name): try: self.operator_cloud.delete_port(name_or_id=p['id']) except Exception as e: # We were unable to delete this port, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_port(self): port_name = self.new_port_name + '_create' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) def test_get_port(self): port_name = self.new_port_name + '_get' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) # extra_dhcp_opts is added later by Neutron... 
if 'extra_dhcp_opts' in updated_port and 'extra_dhcp_opts' not in port: del updated_port['extra_dhcp_opts'] self.assertEqual(port, updated_port) def test_get_port_by_id(self): port_name = self.new_port_name + '_get_by_id' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port_by_id(port['id']) # extra_dhcp_opts is added later by Neutron... if 'extra_dhcp_opts' in updated_port and 'extra_dhcp_opts' not in port: del updated_port['extra_dhcp_opts'] self.assertEqual(port, updated_port) def test_update_port(self): port_name = self.new_port_name + '_update' new_port_name = port_name + '_new' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) port = self.operator_cloud.update_port( name_or_id=port_name, name=new_port_name) self.assertIsInstance(port, dict) self.assertEqual(port.get('name'), new_port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertEqual(port.get('name'), new_port_name) self.assertEqual(port, updated_port) def test_delete_port(self): port_name = self.new_port_name + '_delete' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertIsNotNone(updated_port) self.operator_cloud.delete_port(name_or_id=port_name) updated_port = 
self.operator_cloud.get_port(name_or_id=port['id']) self.assertIsNone(updated_port) openstacksdk-0.11.3/openstack/tests/functional/compute/0000775000175100017510000000000013236151501023255 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/compute/v2/0000775000175100017510000000000013236151501023604 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/functional/compute/v2/test_flavor.py0000666000175100017510000000374313236151340026520 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import six

from openstack import exceptions
from openstack.tests.functional import base


class TestFlavor(base.BaseFunctionalTest):

    def setUp(self):
        super(TestFlavor, self).setUp()
        self.one_flavor = list(self.conn.compute.flavors())[0]

    def test_flavors(self):
        flavors = list(self.conn.compute.flavors())
        self.assertGreater(len(flavors), 0)

        for flavor in flavors:
            self.assertIsInstance(flavor.id, six.string_types)
            self.assertIsInstance(flavor.name, six.string_types)
            self.assertIsInstance(flavor.disk, int)
            self.assertIsInstance(flavor.ram, int)
            self.assertIsInstance(flavor.vcpus, int)

    def test_find_flavors_by_id(self):
        rslt = self.conn.compute.find_flavor(self.one_flavor.id)
        self.assertEqual(rslt.id, self.one_flavor.id)

    def test_find_flavors_by_name(self):
        rslt = self.conn.compute.find_flavor(self.one_flavor.name)
        self.assertEqual(rslt.name, self.one_flavor.name)

    def test_find_flavors_no_match_ignore_true(self):
        rslt = self.conn.compute.find_flavor("not a flavor",
                                             ignore_missing=True)
        self.assertIsNone(rslt)

    def test_find_flavors_no_match_ignore_false(self):
        self.assertRaises(exceptions.ResourceNotFound,
                          self.conn.compute.find_flavor,
                          "not a flavor", ignore_missing=False)

openstacksdk-0.11.3/openstack/tests/functional/compute/v2/test_limits.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.tests.functional import base


class TestLimits(base.BaseFunctionalTest):

    def test_limits(self):
        sot = self.conn.compute.get_limits()
        # Each of these absolute limit keys should be present in the
        # response. (assertIsNotNone(key, sot.absolute) passed the limits
        # as the failure message and so never checked anything.)
        self.assertIn('maxTotalInstances', sot.absolute)
        self.assertIn('maxTotalRAMSize', sot.absolute)
        self.assertIn('maxTotalKeypairs', sot.absolute)
        self.assertIn('maxSecurityGroups', sot.absolute)
        self.assertIn('maxSecurityGroupRules', sot.absolute)

openstacksdk-0.11.3/openstack/tests/functional/compute/v2/test_keypair.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.compute.v2 import keypair
from openstack.tests.functional import base


class TestKeypair(base.BaseFunctionalTest):

    def setUp(self):
        super(TestKeypair, self).setUp()
        # Keypairs can't have .'s in the name. Because why?
        self.NAME = self.getUniqueString().split('.')[-1]

        sot = self.conn.compute.create_keypair(name=self.NAME)
        assert isinstance(sot, keypair.Keypair)
        self.assertEqual(self.NAME, sot.name)
        self._keypair = sot

    def tearDown(self):
        sot = self.conn.compute.delete_keypair(self._keypair)
        self.assertIsNone(sot)
        super(TestKeypair, self).tearDown()

    def test_find(self):
        sot = self.conn.compute.find_keypair(self.NAME)
        self.assertEqual(self.NAME, sot.name)
        self.assertEqual(self.NAME, sot.id)

    def test_get(self):
        sot = self.conn.compute.get_keypair(self.NAME)
        self.assertEqual(self.NAME, sot.name)
        self.assertEqual(self.NAME, sot.id)

    def test_list(self):
        names = [o.name for o in self.conn.compute.keypairs()]
        self.assertIn(self.NAME, names)

openstacksdk-0.11.3/openstack/tests/functional/compute/v2/test_extension.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six

from openstack.tests.functional import base


class TestExtension(base.BaseFunctionalTest):

    def test_list(self):
        extensions = list(self.conn.compute.extensions())
        self.assertGreater(len(extensions), 0)

        for ext in extensions:
            self.assertIsInstance(ext.name, six.string_types)
            self.assertIsInstance(ext.namespace, six.string_types)
            self.assertIsInstance(ext.alias, six.string_types)

openstacksdk-0.11.3/openstack/tests/functional/compute/v2/__init__.py

openstacksdk-0.11.3/openstack/tests/functional/compute/v2/test_image.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six

from openstack.tests.functional import base
from openstack.tests.functional.image.v2.test_image import TEST_IMAGE_NAME


class TestImage(base.BaseFunctionalTest):

    def test_images(self):
        images = list(self.conn.compute.images())
        self.assertGreater(len(images), 0)

        for image in images:
            self.assertIsInstance(image.id, six.string_types)

    def _get_non_test_image(self):
        images = self.conn.compute.images()
        image = next(images)

        if image.name == TEST_IMAGE_NAME:
            image = next(images)

        return image

    def test_find_image(self):
        image = self._get_non_test_image()
        self.assertIsNotNone(image)
        sot = self.conn.compute.find_image(image.id)
        self.assertEqual(image.id, sot.id)
        self.assertEqual(image.name, sot.name)

    def test_get_image(self):
        image = self._get_non_test_image()
        self.assertIsNotNone(image)
        sot = self.conn.compute.get_image(image.id)
        self.assertEqual(image.id, sot.id)
        self.assertEqual(image.name, sot.name)
        self.assertIsNotNone(image.links)
        self.assertIsNotNone(image.min_disk)
        self.assertIsNotNone(image.min_ram)
        self.assertIsNotNone(image.metadata)
        self.assertIsNotNone(image.progress)
        self.assertIsNotNone(image.status)

    def test_image_metadata(self):
        image = self._get_non_test_image()

        # delete pre-existing metadata
        self.conn.compute.delete_image_metadata(image, image.metadata.keys())
        image = self.conn.compute.get_image_metadata(image)
        self.assertFalse(image.metadata)

        # get metadata
        image = self.conn.compute.get_image_metadata(image)
        self.assertFalse(image.metadata)

        # set no metadata
        self.conn.compute.set_image_metadata(image)
        image = self.conn.compute.get_image_metadata(image)
        self.assertFalse(image.metadata)

        # set empty metadata
        self.conn.compute.set_image_metadata(image, k0='')
        image = self.conn.compute.get_image_metadata(image)
        self.assertIn('k0', image.metadata)
        self.assertEqual('', image.metadata['k0'])

        # set metadata
        self.conn.compute.set_image_metadata(image, k1='v1')
        image = self.conn.compute.get_image_metadata(image)
        self.assertTrue(image.metadata)
        self.assertEqual(2, len(image.metadata))
        self.assertIn('k1', image.metadata)
        self.assertEqual('v1', image.metadata['k1'])

        # set more metadata
        self.conn.compute.set_image_metadata(image, k2='v2')
        image = self.conn.compute.get_image_metadata(image)
        self.assertTrue(image.metadata)
        self.assertEqual(3, len(image.metadata))
        self.assertIn('k1', image.metadata)
        self.assertEqual('v1', image.metadata['k1'])
        self.assertIn('k2', image.metadata)
        self.assertEqual('v2', image.metadata['k2'])

        # update metadata
        self.conn.compute.set_image_metadata(image, k1='v1.1')
        image = self.conn.compute.get_image_metadata(image)
        self.assertTrue(image.metadata)
        self.assertEqual(3, len(image.metadata))
        self.assertIn('k1', image.metadata)
        self.assertEqual('v1.1', image.metadata['k1'])
        self.assertIn('k2', image.metadata)
        self.assertEqual('v2', image.metadata['k2'])

        # delete metadata
        self.conn.compute.delete_image_metadata(image, image.metadata.keys())
        image = self.conn.compute.get_image_metadata(image)
        self.assertFalse(image.metadata)

openstacksdk-0.11.3/openstack/tests/functional/compute/v2/test_server.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time

from openstack.compute.v2 import server
from openstack.tests.functional import base
from openstack.tests.functional.network.v2 import test_network


class TestServer(base.BaseFunctionalTest):

    def setUp(self):
        super(TestServer, self).setUp()
        self.NAME = self.getUniqueString()
        self.server = None
        self.network = None
        self.subnet = None
        self.cidr = '10.99.99.0/16'

        flavor = self.conn.compute.find_flavor(base.FLAVOR_NAME,
                                               ignore_missing=False)
        image = self.conn.compute.find_image(base.IMAGE_NAME,
                                             ignore_missing=False)
        self.network, self.subnet = test_network.create_network(
            self.conn, self.NAME, self.cidr)
        self.assertIsNotNone(self.network)

        sot = self.conn.compute.create_server(
            name=self.NAME, flavor_id=flavor.id, image_id=image.id,
            networks=[{"uuid": self.network.id}])
        self.conn.compute.wait_for_server(sot)
        assert isinstance(sot, server.Server)
        self.assertEqual(self.NAME, sot.name)
        self.server = sot

    def tearDown(self):
        sot = self.conn.compute.delete_server(self.server.id)
        self.assertIsNone(sot)
        # Need to wait for the stack to go away before network delete
        self.conn.compute.wait_for_delete(self.server)
        # TODO(shade) sleeping in tests is bad mmkay?
        time.sleep(40)
        test_network.delete_network(self.conn, self.network, self.subnet)
        super(TestServer, self).tearDown()

    def test_find(self):
        sot = self.conn.compute.find_server(self.NAME)
        self.assertEqual(self.server.id, sot.id)

    def test_get(self):
        sot = self.conn.compute.get_server(self.server.id)
        self.assertEqual(self.NAME, sot.name)
        self.assertEqual(self.server.id, sot.id)

    def test_list(self):
        names = [o.name for o in self.conn.compute.servers()]
        self.assertIn(self.NAME, names)

    def test_server_metadata(self):
        test_server = self.conn.compute.get_server(self.server.id)

        # get metadata
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertFalse(test_server.metadata)

        # set no metadata
        self.conn.compute.set_server_metadata(test_server)
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertFalse(test_server.metadata)

        # set empty metadata
        self.conn.compute.set_server_metadata(test_server, k0='')
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertTrue(test_server.metadata)

        # set metadata
        self.conn.compute.set_server_metadata(test_server, k1='v1')
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertTrue(test_server.metadata)
        self.assertEqual(2, len(test_server.metadata))
        self.assertIn('k0', test_server.metadata)
        self.assertEqual('', test_server.metadata['k0'])
        self.assertIn('k1', test_server.metadata)
        self.assertEqual('v1', test_server.metadata['k1'])

        # set more metadata
        self.conn.compute.set_server_metadata(test_server, k2='v2')
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertTrue(test_server.metadata)
        self.assertEqual(3, len(test_server.metadata))
        self.assertIn('k0', test_server.metadata)
        self.assertEqual('', test_server.metadata['k0'])
        self.assertIn('k1', test_server.metadata)
        self.assertEqual('v1', test_server.metadata['k1'])
        self.assertIn('k2', test_server.metadata)
        self.assertEqual('v2', test_server.metadata['k2'])

        # update metadata
        self.conn.compute.set_server_metadata(test_server, k1='v1.1')
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertTrue(test_server.metadata)
        self.assertEqual(3, len(test_server.metadata))
        self.assertIn('k0', test_server.metadata)
        self.assertEqual('', test_server.metadata['k0'])
        self.assertIn('k1', test_server.metadata)
        self.assertEqual('v1.1', test_server.metadata['k1'])
        self.assertIn('k2', test_server.metadata)
        self.assertEqual('v2', test_server.metadata['k2'])

        # delete metadata
        self.conn.compute.delete_server_metadata(
            test_server, test_server.metadata.keys())
        test_server = self.conn.compute.get_server_metadata(test_server)
        self.assertFalse(test_server.metadata)

openstacksdk-0.11.3/openstack/tests/functional/compute/__init__.py

openstacksdk-0.11.3/openstack/tests/functional/object_store/v1/test_obj.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.tests.functional import base


class TestObject(base.BaseFunctionalTest):

    DATA = b'abc'

    def setUp(self):
        super(TestObject, self).setUp()
        self.require_service('object-store')
        self.FOLDER = self.getUniqueString()
        self.FILE = self.getUniqueString()
        self.conn.object_store.create_container(name=self.FOLDER)
        self.addCleanup(self.conn.object_store.delete_container, self.FOLDER)
        self.sot = self.conn.object_store.upload_object(
            container=self.FOLDER, name=self.FILE, data=self.DATA)
        self.addEmptyCleanup(
            self.conn.object_store.delete_object, self.sot,
            ignore_missing=False)

    def test_list(self):
        names = [o.name
                 for o in self.conn.object_store.objects(
                     container=self.FOLDER)]
        self.assertIn(self.FILE, names)

    def test_download_object(self):
        result = self.conn.object_store.download_object(
            self.FILE, container=self.FOLDER)
        self.assertEqual(self.DATA, result)
        result = self.conn.object_store.download_object(self.sot)
        self.assertEqual(self.DATA, result)

    def test_system_metadata(self):
        # get system metadata
        obj = self.conn.object_store.get_object_metadata(
            self.FILE, container=self.FOLDER)
        # TODO(shade) obj.bytes is coming up None on python3 but not python2
        # self.assertGreaterEqual(0, obj.bytes)
        self.assertIsNotNone(obj.etag)

        # set system metadata
        obj = self.conn.object_store.get_object_metadata(
            self.FILE, container=self.FOLDER)
        self.assertIsNone(obj.content_disposition)
        self.assertIsNone(obj.content_encoding)
        self.conn.object_store.set_object_metadata(
            obj, content_disposition='attachment', content_encoding='gzip')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertEqual('attachment', obj.content_disposition)
        self.assertEqual('gzip', obj.content_encoding)

        # update system metadata
        self.conn.object_store.set_object_metadata(
            obj, content_encoding='deflate')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertEqual('attachment', obj.content_disposition)
        self.assertEqual('deflate', obj.content_encoding)

        # set custom metadata
        self.conn.object_store.set_object_metadata(obj, k0='v0')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertIn('k0', obj.metadata)
        self.assertEqual('v0', obj.metadata['k0'])
        self.assertEqual('attachment', obj.content_disposition)
        self.assertEqual('deflate', obj.content_encoding)

        # unset more system metadata
        self.conn.object_store.delete_object_metadata(
            obj, keys=['content_disposition'])
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertIn('k0', obj.metadata)
        self.assertEqual('v0', obj.metadata['k0'])
        self.assertIsNone(obj.content_disposition)
        self.assertEqual('deflate', obj.content_encoding)
        self.assertIsNone(obj.delete_at)

    def test_custom_metadata(self):
        # get custom metadata
        obj = self.conn.object_store.get_object_metadata(
            self.FILE, container=self.FOLDER)
        self.assertFalse(obj.metadata)

        # set no custom metadata
        self.conn.object_store.set_object_metadata(obj)
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertFalse(obj.metadata)

        # set empty custom metadata
        self.conn.object_store.set_object_metadata(obj, k0='')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertFalse(obj.metadata)

        # set custom metadata
        self.conn.object_store.set_object_metadata(obj, k1='v1')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertTrue(obj.metadata)
        self.assertEqual(1, len(obj.metadata))
        self.assertIn('k1', obj.metadata)
        self.assertEqual('v1', obj.metadata['k1'])

        # set more custom metadata by named object and container
        self.conn.object_store.set_object_metadata(self.FILE, self.FOLDER,
                                                   k2='v2')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertTrue(obj.metadata)
        self.assertEqual(2, len(obj.metadata))
        self.assertIn('k1', obj.metadata)
        self.assertEqual('v1', obj.metadata['k1'])
        self.assertIn('k2', obj.metadata)
        self.assertEqual('v2', obj.metadata['k2'])

        # update custom metadata
        self.conn.object_store.set_object_metadata(obj, k1='v1.1')
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertTrue(obj.metadata)
        self.assertEqual(2, len(obj.metadata))
        self.assertIn('k1', obj.metadata)
        self.assertEqual('v1.1', obj.metadata['k1'])
        self.assertIn('k2', obj.metadata)
        self.assertEqual('v2', obj.metadata['k2'])

        # unset custom metadata
        self.conn.object_store.delete_object_metadata(obj, keys=['k1'])
        obj = self.conn.object_store.get_object_metadata(obj)
        self.assertTrue(obj.metadata)
        self.assertEqual(1, len(obj.metadata))
        self.assertIn('k2', obj.metadata)
        self.assertEqual('v2', obj.metadata['k2'])

openstacksdk-0.11.3/openstack/tests/functional/object_store/v1/test_account.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.tests.functional import base


class TestAccount(base.BaseFunctionalTest):

    def setUp(self):
        super(TestAccount, self).setUp()
        self.require_service('object-store')

    def tearDown(self):
        account = self.conn.object_store.get_account_metadata()
        self.conn.object_store.delete_account_metadata(account.metadata.keys())
        super(TestAccount, self).tearDown()

    def test_system_metadata(self):
        account = self.conn.object_store.get_account_metadata()
        self.assertGreaterEqual(account.account_bytes_used, 0)
        self.assertGreaterEqual(account.account_container_count, 0)
        self.assertGreaterEqual(account.account_object_count, 0)

    def test_custom_metadata(self):
        # get custom metadata
        account = self.conn.object_store.get_account_metadata()
        self.assertFalse(account.metadata)

        # set no custom metadata
        self.conn.object_store.set_account_metadata()
        account = self.conn.object_store.get_account_metadata()
        self.assertFalse(account.metadata)

        # set empty custom metadata
        self.conn.object_store.set_account_metadata(k0='')
        account = self.conn.object_store.get_account_metadata()
        self.assertFalse(account.metadata)

        # set custom metadata
        self.conn.object_store.set_account_metadata(k1='v1')
        account = self.conn.object_store.get_account_metadata()
        self.assertTrue(account.metadata)
        self.assertEqual(1, len(account.metadata))
        self.assertIn('k1', account.metadata)
        self.assertEqual('v1', account.metadata['k1'])

        # set more custom metadata
        self.conn.object_store.set_account_metadata(k2='v2')
        account = self.conn.object_store.get_account_metadata()
        self.assertTrue(account.metadata)
        self.assertEqual(2, len(account.metadata))
        self.assertIn('k1', account.metadata)
        self.assertEqual('v1', account.metadata['k1'])
        self.assertIn('k2', account.metadata)
        self.assertEqual('v2', account.metadata['k2'])

        # update custom metadata
        self.conn.object_store.set_account_metadata(k1='v1.1')
        account = self.conn.object_store.get_account_metadata()
        self.assertTrue(account.metadata)
        self.assertEqual(2, len(account.metadata))
        self.assertIn('k1', account.metadata)
        self.assertEqual('v1.1', account.metadata['k1'])
        self.assertIn('k2', account.metadata)
        self.assertEqual('v2', account.metadata['k2'])

        # unset custom metadata
        self.conn.object_store.delete_account_metadata(['k1'])
        account = self.conn.object_store.get_account_metadata()
        self.assertTrue(account.metadata)
        self.assertEqual(1, len(account.metadata))
        self.assertIn('k2', account.metadata)
        self.assertEqual('v2', account.metadata['k2'])

openstacksdk-0.11.3/openstack/tests/functional/object_store/v1/test_container.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.object_store.v1 import container as _container
from openstack.tests.functional import base


class TestContainer(base.BaseFunctionalTest):

    def setUp(self):
        super(TestContainer, self).setUp()
        self.require_service('object-store')

        self.NAME = self.getUniqueString()
        container = self.conn.object_store.create_container(name=self.NAME)
        self.addEmptyCleanup(
            self.conn.object_store.delete_container,
            self.NAME, ignore_missing=False)
        assert isinstance(container, _container.Container)
        self.assertEqual(self.NAME, container.name)

    def test_list(self):
        names = [o.name for o in self.conn.object_store.containers()]
        self.assertIn(self.NAME, names)

    def test_system_metadata(self):
        # get system metadata
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertEqual(0, container.object_count)
        self.assertEqual(0, container.bytes_used)

        # set system metadata
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertIsNone(container.read_ACL)
        self.assertIsNone(container.write_ACL)
        self.conn.object_store.set_container_metadata(
            container, read_ACL='.r:*', write_ACL='demo:demo')
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertEqual('.r:*', container.read_ACL)
        self.assertEqual('demo:demo', container.write_ACL)

        # update system metadata
        self.conn.object_store.set_container_metadata(
            container, read_ACL='.r:demo')
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertEqual('.r:demo', container.read_ACL)
        self.assertEqual('demo:demo', container.write_ACL)

        # set system metadata and custom metadata
        self.conn.object_store.set_container_metadata(
            container, k0='v0', sync_key='1234')
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertTrue(container.metadata)
        self.assertIn('k0', container.metadata)
        self.assertEqual('v0', container.metadata['k0'])
        self.assertEqual('.r:demo', container.read_ACL)
        self.assertEqual('demo:demo', container.write_ACL)
        self.assertEqual('1234', container.sync_key)

        # unset system metadata
        self.conn.object_store.delete_container_metadata(
            container, ['sync_key'])
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertTrue(container.metadata)
        self.assertIn('k0', container.metadata)
        self.assertEqual('v0', container.metadata['k0'])
        self.assertEqual('.r:demo', container.read_ACL)
        self.assertEqual('demo:demo', container.write_ACL)
        self.assertIsNone(container.sync_key)

    def test_custom_metadata(self):
        # get custom metadata
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertFalse(container.metadata)

        # set no custom metadata
        self.conn.object_store.set_container_metadata(container)
        container = self.conn.object_store.get_container_metadata(container)
        self.assertFalse(container.metadata)

        # set empty custom metadata
        self.conn.object_store.set_container_metadata(container, k0='')
        container = self.conn.object_store.get_container_metadata(container)
        self.assertFalse(container.metadata)

        # set custom metadata
        self.conn.object_store.set_container_metadata(container, k1='v1')
        container = self.conn.object_store.get_container_metadata(container)
        self.assertTrue(container.metadata)
        self.assertEqual(1, len(container.metadata))
        self.assertIn('k1', container.metadata)
        self.assertEqual('v1', container.metadata['k1'])

        # set more custom metadata by named container
        self.conn.object_store.set_container_metadata(self.NAME, k2='v2')
        container = self.conn.object_store.get_container_metadata(container)
        self.assertTrue(container.metadata)
        self.assertEqual(2, len(container.metadata))
        self.assertIn('k1', container.metadata)
        self.assertEqual('v1', container.metadata['k1'])
        self.assertIn('k2', container.metadata)
        self.assertEqual('v2', container.metadata['k2'])

        # update metadata
        self.conn.object_store.set_container_metadata(container, k1='v1.1')
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertTrue(container.metadata)
        self.assertEqual(2, len(container.metadata))
        self.assertIn('k1', container.metadata)
        self.assertEqual('v1.1', container.metadata['k1'])
        self.assertIn('k2', container.metadata)
        self.assertEqual('v2', container.metadata['k2'])

        # delete metadata
        self.conn.object_store.delete_container_metadata(container, ['k1'])
        container = self.conn.object_store.get_container_metadata(self.NAME)
        self.assertTrue(container.metadata)
        self.assertEqual(1, len(container.metadata))
        self.assertIn('k2', container.metadata)
        self.assertEqual('v2', container.metadata['k2'])

# File: openstacksdk-0.11.3/openstack/tests/functional/object_store/v1/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/object_store/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/image/v2/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/image/v2/test_image.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack import connection
from openstack.tests.functional import base

TEST_IMAGE_NAME = 'Test Image'


class TestImage(base.BaseFunctionalTest):

    class ImageOpts(object):
        def __init__(self):
            self.image_api_version = '2'

    def setUp(self):
        super(TestImage, self).setUp()
        opts = self.ImageOpts()
        self.conn = connection.from_config(
            cloud_name=base.TEST_CLOUD_NAME, options=opts)
        self.img = self.conn.image.upload_image(
            name=TEST_IMAGE_NAME,
            disk_format='raw',
            container_format='bare',
            properties='{"description": "This is not an image"}',
            data=open('CONTRIBUTING.rst', 'r')
        )
        self.addCleanup(self.conn.image.delete_image, self.img)

    def test_get_image(self):
        img2 = self.conn.image.get_image(self.img)
        self.assertEqual(self.img, img2)

# File: openstacksdk-0.11.3/openstack/tests/functional/image/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/base.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import openstack.config
from keystoneauth1 import exceptions as _exceptions

from openstack import connection
from openstack.tests import base

#: Defines the OpenStack Client Config (OCC) cloud key in your OCC config
#: file, typically in $HOME/.config/openstack/clouds.yaml.
#: That configuration
#: will determine where the functional tests will be run and what resource
#: defaults will be used to run the functional tests.
TEST_CLOUD_NAME = os.getenv('OS_CLOUD', 'devstack-admin')
TEST_CLOUD_REGION = openstack.config.get_cloud_region(cloud=TEST_CLOUD_NAME)


def _get_resource_value(resource_key, default):
    try:
        return TEST_CLOUD_REGION.config['functional'][resource_key]
    except KeyError:
        return default


IMAGE_NAME = _get_resource_value('image_name', 'cirros-0.3.5-x86_64-disk')
FLAVOR_NAME = _get_resource_value('flavor_name', 'm1.small')


class BaseFunctionalTest(base.TestCase):

    def setUp(self):
        super(BaseFunctionalTest, self).setUp()
        self.conn = connection.Connection(config=TEST_CLOUD_REGION)

    def addEmptyCleanup(self, func, *args, **kwargs):
        def cleanup():
            result = func(*args, **kwargs)
            self.assertIsNone(result)
        self.addCleanup(cleanup)

    # TODO(shade) Replace this with call to conn.has_service when we've merged
    # the shade methods into Connection.
    def require_service(self, service_type, **kwargs):
        """Method to check whether a service exists

        Usage::

            class TestMeter(base.BaseFunctionalTest):
                ...
                def setUp(self):
                    super(TestMeter, self).setUp()
                    self.require_service('metering')

        Returns None if the service exists; otherwise skips the current
        test via ``skipTest``.
        """
        try:
            self.conn.session.get_endpoint(service_type=service_type,
                                           **kwargs)
        except _exceptions.EndpointNotFound:
            self.skipTest('Service {service_type} not found in cloud'.format(
                service_type=service_type))

# File: openstacksdk-0.11.3/openstack/tests/functional/orchestration/v1/test_stack.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
import unittest

from openstack import exceptions
from openstack.orchestration.v1 import stack
from openstack.tests.functional import base
from openstack.tests.functional.network.v2 import test_network


@unittest.skip("bug/1525005")
class TestStack(base.BaseFunctionalTest):

    NAME = 'test_stack'
    stack = None
    network = None
    subnet = None
    cidr = '10.99.99.0/16'

    def setUp(self):
        super(TestStack, self).setUp()
        self.require_service('orchestration')

        if self.conn.compute.find_keypair(self.NAME) is None:
            self.conn.compute.create_keypair(name=self.NAME)
        image = next(self.conn.image.images())
        tname = "openstack/tests/functional/orchestration/v1/hello_world.yaml"
        with open(tname) as f:
            template = f.read()
        self.network, self.subnet = test_network.create_network(
            self.conn, self.NAME, self.cidr)
        parameters = {
            'image': image.id,
            'key_name': self.NAME,
            'network': self.network.id,
        }
        sot = self.conn.orchestration.create_stack(
            name=self.NAME,
            parameters=parameters,
            template=template,
        )
        assert isinstance(sot, stack.Stack)
        self.assertEqual(True, (sot.id is not None))
        self.stack = sot
        self.assertEqual(self.NAME, sot.name)
        self.conn.orchestration.wait_for_status(
            sot, status='CREATE_COMPLETE', failures=['CREATE_FAILED'])

    def tearDown(self):
        self.conn.orchestration.delete_stack(self.stack, ignore_missing=False)
        self.conn.compute.delete_keypair(self.NAME)
        # Need to wait for the stack to go away before network delete
        try:
            self.conn.orchestration.wait_for_status(
                self.stack, 'DELETE_COMPLETE')
        except exceptions.NotFoundException:
            pass
        # TODO(shade) sleeping in tests is bad mmkay?
        time.sleep(40)
        test_network.delete_network(self.conn, self.network, self.subnet)
        super(TestStack, self).tearDown()

    def test_list(self):
        names = [o.name for o in self.conn.orchestration.stacks()]
        self.assertIn(self.NAME, names)

# File: openstacksdk-0.11.3/openstack/tests/functional/orchestration/v1/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/orchestration/v1/hello_world.yaml
#
# Minimal HOT template defining a single compute server.
#
heat_template_version: 2013-05-23

description: >
  Minimal HOT template for stack

parameters:
  key_name:
    type: string
    description: Name of an existing key pair to use for the server
    constraints:
      - custom_constraint: nova.keypair
  flavor:
    type: string
    description: Flavor for the server to be created
    default: m1.small
    constraints:
      - custom_constraint: nova.flavor
  image:
    type: string
    description: Image ID or image name to use for the server
    constraints:
      - custom_constraint: glance.image
  network:
    type: string
    description: Network used by the server

resources:
  server:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks: [{network: {get_param: network} }]

outputs:
  server_networks:
    description: The networks of the deployed server
    value: { get_attr: [server, networks] }

# File: openstacksdk-0.11.3/openstack/tests/functional/orchestration/__init__.py (empty)
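Both the orchestration test above and the load-balancer tests that follow lean on the same poll-until-status idiom (`wait_for_status` / `lb_wait_for_status`): re-fetch a resource on an interval until it reaches a desired state, a failure state, or a timeout. The following is a minimal standalone sketch of that pattern, not SDK code; the function name, the `fetch`/`clock` callables, and the simulated states are illustrative only.

```python
import time


def wait_for_status(fetch, status, failures=(), interval=1, wait=120,
                    clock=time.sleep):
    """Poll ``fetch()`` until it returns ``status``.

    ``fetch`` is any zero-argument callable returning the current status
    string; ``failures`` lists states that abort the wait immediately.
    """
    total = 0
    while total < wait:
        current = fetch()
        if current == status:
            return current
        if current in failures:
            raise RuntimeError('transitioned to failure state %s' % current)
        clock(interval)          # sleep between polls (injectable for tests)
        total += interval
    raise TimeoutError('timed out waiting for %s' % status)


# Simulate a resource that goes PENDING -> PENDING -> ACTIVE; a no-op
# clock keeps the example instant.
states = iter(['PENDING', 'PENDING', 'ACTIVE'])
result = wait_for_status(lambda: next(states), 'ACTIVE',
                         failures=['ERROR'], clock=lambda s: None)
```

Injecting the clock is what makes the loop unit-testable without real sleeps, which is the design the SDK's own helpers approximate with their `interval`/`wait` parameters.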
# File: openstacksdk-0.11.3/openstack/tests/functional/load_balancer/v2/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/load_balancer/v2/test_load_balancer.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.load_balancer.v2 import health_monitor
from openstack.load_balancer.v2 import l7_policy
from openstack.load_balancer.v2 import l7_rule
from openstack.load_balancer.v2 import listener
from openstack.load_balancer.v2 import load_balancer
from openstack.load_balancer.v2 import member
from openstack.load_balancer.v2 import pool
from openstack.tests.functional.load_balancer import base as lb_base


class TestLoadBalancer(lb_base.BaseLBFunctionalTest):

    HM_ID = None
    L7POLICY_ID = None
    LB_ID = None
    LISTENER_ID = None
    MEMBER_ID = None
    POOL_ID = None
    VIP_SUBNET_ID = None
    PROJECT_ID = None
    PROTOCOL = 'HTTP'
    PROTOCOL_PORT = 80
    LB_ALGORITHM = 'ROUND_ROBIN'
    MEMBER_ADDRESS = '192.0.2.16'
    WEIGHT = 10
    DELAY = 2
    TIMEOUT = 1
    MAX_RETRY = 3
    HM_TYPE = 'HTTP'
    ACTION = 'REDIRECT_TO_URL'
    REDIRECT_URL = 'http://www.example.com'
    COMPARE_TYPE = 'CONTAINS'
    L7RULE_TYPE = 'HOST_NAME'
    L7RULE_VALUE = 'example'

    # TODO(shade): Creating load balancers can be slow on some hosts due to
    #              nova instance boot times (up to ten minutes). This used to
    #              use setUpClass, but that's a whole other pile of bad, so
    #              we may need to engineer something pleasing here.
    def setUp(self):
        super(TestLoadBalancer, self).setUp()
        self.require_service('load-balancer')

        self.HM_NAME = self.getUniqueString()
        self.L7POLICY_NAME = self.getUniqueString()
        self.LB_NAME = self.getUniqueString()
        self.LISTENER_NAME = self.getUniqueString()
        self.MEMBER_NAME = self.getUniqueString()
        self.POOL_NAME = self.getUniqueString()
        self.UPDATE_NAME = self.getUniqueString()
        subnets = list(self.conn.network.subnets())
        self.VIP_SUBNET_ID = subnets[0].id
        self.PROJECT_ID = self.conn.session.get_project_id()

        test_lb = self.conn.load_balancer.create_load_balancer(
            name=self.LB_NAME, vip_subnet_id=self.VIP_SUBNET_ID,
            project_id=self.PROJECT_ID)
        assert isinstance(test_lb, load_balancer.LoadBalancer)
        self.assertEqual(self.LB_NAME, test_lb.name)
        # Wait for the LB to go ACTIVE. On non-virtualization enabled hosts
        # it can take nova up to ten minutes to boot a VM.
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'],
                                interval=1, wait=600)
        self.LB_ID = test_lb.id

        test_listener = self.conn.load_balancer.create_listener(
            name=self.LISTENER_NAME, protocol=self.PROTOCOL,
            protocol_port=self.PROTOCOL_PORT, loadbalancer_id=self.LB_ID)
        assert isinstance(test_listener, listener.Listener)
        self.assertEqual(self.LISTENER_NAME, test_listener.name)
        self.LISTENER_ID = test_listener.id
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        test_pool = self.conn.load_balancer.create_pool(
            name=self.POOL_NAME, protocol=self.PROTOCOL,
            lb_algorithm=self.LB_ALGORITHM, listener_id=self.LISTENER_ID)
        assert isinstance(test_pool, pool.Pool)
        self.assertEqual(self.POOL_NAME, test_pool.name)
        self.POOL_ID = test_pool.id
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        test_member = self.conn.load_balancer.create_member(
            pool=self.POOL_ID, name=self.MEMBER_NAME,
            address=self.MEMBER_ADDRESS,
            protocol_port=self.PROTOCOL_PORT, weight=self.WEIGHT)
        assert isinstance(test_member, member.Member)
        self.assertEqual(self.MEMBER_NAME, test_member.name)
        self.MEMBER_ID = test_member.id
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        test_hm = self.conn.load_balancer.create_health_monitor(
            pool_id=self.POOL_ID, name=self.HM_NAME, delay=self.DELAY,
            timeout=self.TIMEOUT, max_retries=self.MAX_RETRY,
            type=self.HM_TYPE)
        assert isinstance(test_hm, health_monitor.HealthMonitor)
        self.assertEqual(self.HM_NAME, test_hm.name)
        self.HM_ID = test_hm.id
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        test_l7policy = self.conn.load_balancer.create_l7_policy(
            listener_id=self.LISTENER_ID, name=self.L7POLICY_NAME,
            action=self.ACTION, redirect_url=self.REDIRECT_URL)
        assert isinstance(test_l7policy, l7_policy.L7Policy)
        self.assertEqual(self.L7POLICY_NAME, test_l7policy.name)
        self.L7POLICY_ID = test_l7policy.id
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        test_l7rule = self.conn.load_balancer.create_l7_rule(
            l7_policy=self.L7POLICY_ID, compare_type=self.COMPARE_TYPE,
            type=self.L7RULE_TYPE, value=self.L7RULE_VALUE)
        assert isinstance(test_l7rule, l7_rule.L7Rule)
        self.assertEqual(self.COMPARE_TYPE, test_l7rule.compare_type)
        self.L7RULE_ID = test_l7rule.id
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

    def tearDown(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_l7_rule(
            self.L7RULE_ID, l7_policy=self.L7POLICY_ID, ignore_missing=False)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_l7_policy(
            self.L7POLICY_ID, ignore_missing=False)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_health_monitor(
            self.HM_ID, ignore_missing=False)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_member(
            self.MEMBER_ID, self.POOL_ID, ignore_missing=False)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_pool(self.POOL_ID,
                                            ignore_missing=False)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_listener(self.LISTENER_ID,
                                                ignore_missing=False)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])

        self.conn.load_balancer.delete_load_balancer(
            self.LB_ID, ignore_missing=False)
        super(TestLoadBalancer, self).tearDown()

    def test_lb_find(self):
        test_lb = self.conn.load_balancer.find_load_balancer(self.LB_NAME)
        self.assertEqual(self.LB_ID, test_lb.id)

    def test_lb_get(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)
        self.assertEqual(self.LB_NAME, test_lb.name)
        self.assertEqual(self.LB_ID, test_lb.id)
        self.assertEqual(self.VIP_SUBNET_ID, test_lb.vip_subnet_id)

    def test_lb_list(self):
        names = [lb.name for lb in self.conn.load_balancer.load_balancers()]
        self.assertIn(self.LB_NAME, names)

    def test_lb_update(self):
        update_lb = self.conn.load_balancer.update_load_balancer(
            self.LB_ID, name=self.UPDATE_NAME)
        self.lb_wait_for_status(update_lb, status='ACTIVE',
                                failures=['ERROR'])
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)
        self.assertEqual(self.UPDATE_NAME, test_lb.name)

        update_lb = self.conn.load_balancer.update_load_balancer(
            self.LB_ID, name=self.LB_NAME)
        self.lb_wait_for_status(update_lb, status='ACTIVE',
                                failures=['ERROR'])
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)
        self.assertEqual(self.LB_NAME, test_lb.name)

    def test_listener_find(self):
        test_listener = self.conn.load_balancer.find_listener(
            self.LISTENER_NAME)
        self.assertEqual(self.LISTENER_ID, test_listener.id)

    def test_listener_get(self):
        test_listener = self.conn.load_balancer.get_listener(self.LISTENER_ID)
        self.assertEqual(self.LISTENER_NAME, test_listener.name)
        self.assertEqual(self.LISTENER_ID, test_listener.id)
        self.assertEqual(self.PROTOCOL, test_listener.protocol)
        self.assertEqual(self.PROTOCOL_PORT, test_listener.protocol_port)

    def test_listener_list(self):
        names = [ls.name for ls in self.conn.load_balancer.listeners()]
        self.assertIn(self.LISTENER_NAME, names)

    def test_listener_update(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)

        self.conn.load_balancer.update_listener(
            self.LISTENER_ID, name=self.UPDATE_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_listener = self.conn.load_balancer.get_listener(self.LISTENER_ID)
        self.assertEqual(self.UPDATE_NAME, test_listener.name)

        self.conn.load_balancer.update_listener(
            self.LISTENER_ID, name=self.LISTENER_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_listener = self.conn.load_balancer.get_listener(self.LISTENER_ID)
        self.assertEqual(self.LISTENER_NAME, test_listener.name)

    def test_pool_find(self):
        test_pool = self.conn.load_balancer.find_pool(self.POOL_NAME)
        self.assertEqual(self.POOL_ID, test_pool.id)

    def test_pool_get(self):
        test_pool = self.conn.load_balancer.get_pool(self.POOL_ID)
        self.assertEqual(self.POOL_NAME, test_pool.name)
        self.assertEqual(self.POOL_ID, test_pool.id)
        self.assertEqual(self.PROTOCOL, test_pool.protocol)

    def test_pool_list(self):
        names = [pool.name for pool in self.conn.load_balancer.pools()]
        self.assertIn(self.POOL_NAME, names)

    def test_pool_update(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)

        self.conn.load_balancer.update_pool(self.POOL_ID,
                                            name=self.UPDATE_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_pool = self.conn.load_balancer.get_pool(self.POOL_ID)
        self.assertEqual(self.UPDATE_NAME, test_pool.name)

        self.conn.load_balancer.update_pool(self.POOL_ID,
                                            name=self.POOL_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_pool = self.conn.load_balancer.get_pool(self.POOL_ID)
        self.assertEqual(self.POOL_NAME, test_pool.name)

    def test_member_find(self):
        test_member = self.conn.load_balancer.find_member(self.MEMBER_NAME,
                                                          self.POOL_ID)
        self.assertEqual(self.MEMBER_ID, test_member.id)

    def test_member_get(self):
        test_member = self.conn.load_balancer.get_member(self.MEMBER_ID,
                                                         self.POOL_ID)
        self.assertEqual(self.MEMBER_NAME, test_member.name)
        self.assertEqual(self.MEMBER_ID, test_member.id)
        self.assertEqual(self.MEMBER_ADDRESS, test_member.address)
        self.assertEqual(self.PROTOCOL_PORT, test_member.protocol_port)
        self.assertEqual(self.WEIGHT, test_member.weight)

    def test_member_list(self):
        names = [mb.name for mb in self.conn.load_balancer.members(
            self.POOL_ID)]
        self.assertIn(self.MEMBER_NAME, names)

    def test_member_update(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)

        self.conn.load_balancer.update_member(self.MEMBER_ID, self.POOL_ID,
                                              name=self.UPDATE_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_member = self.conn.load_balancer.get_member(self.MEMBER_ID,
                                                         self.POOL_ID)
        self.assertEqual(self.UPDATE_NAME, test_member.name)

        self.conn.load_balancer.update_member(self.MEMBER_ID, self.POOL_ID,
                                              name=self.MEMBER_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_member = self.conn.load_balancer.get_member(self.MEMBER_ID,
                                                         self.POOL_ID)
        self.assertEqual(self.MEMBER_NAME, test_member.name)

    def test_health_monitor_find(self):
        test_hm = self.conn.load_balancer.find_health_monitor(self.HM_NAME)
        self.assertEqual(self.HM_ID, test_hm.id)

    def test_health_monitor_get(self):
        test_hm = self.conn.load_balancer.get_health_monitor(self.HM_ID)
        self.assertEqual(self.HM_NAME, test_hm.name)
        self.assertEqual(self.HM_ID, test_hm.id)
        self.assertEqual(self.DELAY, test_hm.delay)
        self.assertEqual(self.TIMEOUT, test_hm.timeout)
        self.assertEqual(self.MAX_RETRY, test_hm.max_retries)
        self.assertEqual(self.HM_TYPE, test_hm.type)

    def test_health_monitor_list(self):
        names = [hm.name for hm in self.conn.load_balancer.health_monitors()]
        self.assertIn(self.HM_NAME, names)

    def test_health_monitor_update(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)

        self.conn.load_balancer.update_health_monitor(self.HM_ID,
                                                      name=self.UPDATE_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_hm = self.conn.load_balancer.get_health_monitor(self.HM_ID)
        self.assertEqual(self.UPDATE_NAME, test_hm.name)

        self.conn.load_balancer.update_health_monitor(self.HM_ID,
                                                      name=self.HM_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_hm = self.conn.load_balancer.get_health_monitor(self.HM_ID)
        self.assertEqual(self.HM_NAME, test_hm.name)

    def test_l7_policy_find(self):
        test_l7_policy = self.conn.load_balancer.find_l7_policy(
            self.L7POLICY_NAME)
        self.assertEqual(self.L7POLICY_ID, test_l7_policy.id)

    def test_l7_policy_get(self):
        test_l7_policy = self.conn.load_balancer.get_l7_policy(
            self.L7POLICY_ID)
        self.assertEqual(self.L7POLICY_NAME, test_l7_policy.name)
        self.assertEqual(self.L7POLICY_ID, test_l7_policy.id)
        self.assertEqual(self.ACTION, test_l7_policy.action)

    def test_l7_policy_list(self):
        names = [l7.name for l7 in self.conn.load_balancer.l7_policies()]
        self.assertIn(self.L7POLICY_NAME, names)

    def test_l7_policy_update(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)

        self.conn.load_balancer.update_l7_policy(
            self.L7POLICY_ID, name=self.UPDATE_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_l7_policy = self.conn.load_balancer.get_l7_policy(
            self.L7POLICY_ID)
        self.assertEqual(self.UPDATE_NAME, test_l7_policy.name)

        self.conn.load_balancer.update_l7_policy(self.L7POLICY_ID,
                                                 name=self.L7POLICY_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_l7_policy = self.conn.load_balancer.get_l7_policy(
            self.L7POLICY_ID)
        self.assertEqual(self.L7POLICY_NAME, test_l7_policy.name)

    def test_l7_rule_find(self):
        test_l7_rule = self.conn.load_balancer.find_l7_rule(
            self.L7RULE_ID, self.L7POLICY_ID)
        self.assertEqual(self.L7RULE_ID, test_l7_rule.id)
        self.assertEqual(self.L7RULE_TYPE, test_l7_rule.type)

    def test_l7_rule_get(self):
        test_l7_rule = self.conn.load_balancer.get_l7_rule(
            self.L7RULE_ID, l7_policy=self.L7POLICY_ID)
        self.assertEqual(self.L7RULE_ID, test_l7_rule.id)
        self.assertEqual(self.COMPARE_TYPE, test_l7_rule.compare_type)
        self.assertEqual(self.L7RULE_TYPE, test_l7_rule.type)
        self.assertEqual(self.L7RULE_VALUE, test_l7_rule.rule_value)

    def test_l7_rule_list(self):
        ids = [l7.id for l7 in self.conn.load_balancer.l7_rules(
            l7_policy=self.L7POLICY_ID)]
        self.assertIn(self.L7RULE_ID, ids)

    def test_l7_rule_update(self):
        test_lb = self.conn.load_balancer.get_load_balancer(self.LB_ID)

        self.conn.load_balancer.update_l7_rule(self.L7RULE_ID,
                                               l7_policy=self.L7POLICY_ID,
                                               rule_value=self.UPDATE_NAME)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_l7_rule = self.conn.load_balancer.get_l7_rule(
            self.L7RULE_ID, l7_policy=self.L7POLICY_ID)
        self.assertEqual(self.UPDATE_NAME, test_l7_rule.rule_value)

        self.conn.load_balancer.update_l7_rule(self.L7RULE_ID,
                                               l7_policy=self.L7POLICY_ID,
                                               rule_value=self.L7RULE_VALUE)
        self.lb_wait_for_status(test_lb, status='ACTIVE', failures=['ERROR'])
        test_l7_rule = self.conn.load_balancer.get_l7_rule(
            self.L7RULE_ID, l7_policy=self.L7POLICY_ID)
        self.assertEqual(self.L7RULE_VALUE, test_l7_rule.rule_value)

# File: openstacksdk-0.11.3/openstack/tests/functional/load_balancer/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/functional/load_balancer/base.py
# Copyright 2017 Rackspace, US Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from openstack import exceptions
from openstack.tests.functional import base


class BaseLBFunctionalTest(base.BaseFunctionalTest):

    def lb_wait_for_status(self, lb, status, failures, interval=1, wait=120):
        """Wait for load balancer to be in a particular provisioning status.

        :param lb: The load balancer to wait on to reach the status.
        :type lb: :class:`~openstack.load_balancer.v2.load_balancer`
        :param status: Desired status of the resource.
        :param list failures: Statuses that would indicate the transition
            failed such as 'ERROR'.
        :param interval: Number of seconds to wait between checks.
        :param wait: Maximum number of seconds to wait for transition.
            Note, most actions should easily finish in 120 seconds, but for
            load balancer create slow hosts can take up to ten minutes for
            nova to fully boot a VM.
        :return: None
        :raises: :class:`~openstack.exceptions.ResourceTimeout` if the
            transition to status failed to occur in wait seconds.
        :raises: :class:`~openstack.exceptions.ResourceFailure` if the
            resource transitioned to one of the failure states.
        """
        total_sleep = 0
        if failures is None:
            failures = []

        while total_sleep < wait:
            lb = self.conn.load_balancer.get_load_balancer(lb.id)
            if lb.provisioning_status == status:
                return None
            if lb.provisioning_status in failures:
                msg = ("Load Balancer %s transitioned to failure state %s" %
                       (lb.id, lb.provisioning_status))
                raise exceptions.ResourceFailure(msg)
            time.sleep(interval)
            total_sleep += interval

        msg = "Timeout waiting for Load Balancer %s to transition to %s" % (
            lb.id, status)
        raise exceptions.ResourceTimeout(msg)

# File: openstacksdk-0.11.3/openstack/tests/__init__.py (empty)
# File: openstacksdk-0.11.3/openstack/tests/base.py
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os

import fixtures
import logging
import munch
import pprint
from six import StringIO
import testtools
import testtools.content

_TRUE_VALUES = ('true', '1', 'yes')


class TestCase(testtools.TestCase):

    """Test case base class for all tests."""

    # A way to adjust slow test classes
    TIMEOUT_SCALING_FACTOR = 1.0

    def setUp(self):
        """Run before each test method to initialize test environment."""
        super(TestCase, self).setUp()
        test_timeout = int(os.environ.get('OS_TEST_TIMEOUT', 0))
        try:
            test_timeout = int(test_timeout * self.TIMEOUT_SCALING_FACTOR)
        except ValueError:
            # If timeout value is invalid do not set a timeout.
            test_timeout = 0
        if test_timeout > 0:
            self.useFixture(fixtures.Timeout(test_timeout, gentle=True))

        self.useFixture(fixtures.NestedTempfile())
        self.useFixture(fixtures.TempHomeDir())

        if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES:
            stdout = self.useFixture(fixtures.StringStream('stdout')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
        if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES:
            stderr = self.useFixture(fixtures.StringStream('stderr')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))

        self._log_stream = StringIO()
        if os.environ.get('OS_ALWAYS_LOG') in _TRUE_VALUES:
            self.addCleanup(self.printLogs)
        else:
            self.addOnException(self.attachLogs)

        handler = logging.StreamHandler(self._log_stream)
        formatter = logging.Formatter('%(asctime)s %(name)-32s %(message)s')
        handler.setFormatter(formatter)

        logger = logging.getLogger('openstack')
        logger.setLevel(logging.DEBUG)
        logger.addHandler(handler)

        # Enable HTTP level tracing
        logger = logging.getLogger('keystoneauth')
        logger.setLevel(logging.DEBUG)
        logger.addHandler(handler)
        logger.propagate = False

    def assertEqual(self, first, second, *args, **kwargs):
        '''Munch aware wrapper'''
        if isinstance(first, munch.Munch):
            first = first.toDict()
        if isinstance(second, munch.Munch):
            second = second.toDict()
        return super(TestCase, self).assertEqual(
            first, second, *args,
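The `OS_TEST_TIMEOUT` handling in `setUp` scales a per-class timeout and treats invalid or non-positive values as "no timeout". Sketched in isolation (the function name is mine, and unlike the original this version also guards the initial string-to-int parse inside the `try`):

```python
import os


def scaled_test_timeout(scaling_factor=1.0, environ=os.environ):
    # Read OS_TEST_TIMEOUT, scale it by the class factor, and fall back
    # to 0 ("no timeout") on unparseable or non-positive values.
    try:
        timeout = int(int(environ.get('OS_TEST_TIMEOUT', 0)) * scaling_factor)
    except (TypeError, ValueError):
        timeout = 0
    return timeout if timeout > 0 else 0
```

A slow test class would then set `TIMEOUT_SCALING_FACTOR = 2.0` and get double the configured budget without changing the environment variable.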
            **kwargs)

    def printLogs(self, *args):
        self._log_stream.seek(0)
        print(self._log_stream.read())

    def attachLogs(self, *args):
        def reader():
            self._log_stream.seek(0)
            while True:
                x = self._log_stream.read(4096)
                if not x:
                    break
                yield x.encode('utf8')
        content = testtools.content.content_from_reader(
            reader,
            testtools.content_type.UTF8_TEXT,
            False)
        self.addDetail('logging', content)

    def add_info_on_exception(self, name, text):
        def add_content(unused):
            self.addDetail(name, testtools.content.text_content(
                pprint.pformat(text)))
        self.addOnException(add_content)

openstacksdk-0.11.3/openstack/tests/ansible/
openstacksdk-0.11.3/openstack/tests/ansible/roles/
openstacksdk-0.11.3/openstack/tests/ansible/roles/network/
openstacksdk-0.11.3/openstack/tests/ansible/roles/network/vars/
openstacksdk-0.11.3/openstack/tests/ansible/roles/network/vars/main.yml
network_name: shade_network
network_shared: false
network_external: false
openstacksdk-0.11.3/openstack/tests/ansible/roles/network/tasks/
openstacksdk-0.11.3/openstack/tests/ansible/roles/network/tasks/main.yml
---
- name: Create network
  os_network:
    cloud: "{{ cloud }}"
    name: "{{ network_name }}"
    state: present
    shared: "{{ network_shared }}"
    external: "{{ network_external }}"

- name: Delete network
  os_network:
    cloud: "{{ cloud }}"
    name: "{{ network_name }}"
    state: absent
openstacksdk-0.11.3/openstack/tests/ansible/roles/user/
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/user/tasks/0000775000175100017510000000000013236151501024303 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/user/tasks/main.yml0000666000175100017510000000107013236151340025753 0ustar zuulzuul00000000000000--- - name: Create user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: ansible.user@nowhere.net domain: default default_project: demo register: user - debug: var=user - name: Update user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: updated.ansible.user@nowhere.net register: updateduser - debug: var=updateduser - name: Delete user os_user: cloud: "{{ cloud }}" state: absent name: ansible_user openstacksdk-0.11.3/openstack/tests/ansible/roles/volume/0000775000175100017510000000000013236151501023507 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/volume/tasks/0000775000175100017510000000000013236151501024634 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/volume/tasks/main.yml0000666000175100017510000000047713236151340026316 0ustar zuulzuul00000000000000--- - name: Create volume os_volume: cloud: "{{ cloud }}" state: present size: 1 display_name: ansible_volume display_description: Test volume register: vol - debug: var=vol - name: Delete volume os_volume: cloud: "{{ cloud }}" state: absent display_name: ansible_volume openstacksdk-0.11.3/openstack/tests/ansible/roles/keypair/0000775000175100017510000000000013236151501023644 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keypair/vars/0000775000175100017510000000000013236151501024617 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keypair/vars/main.yml0000666000175100017510000000003413236151340026266 0ustar zuulzuul00000000000000keypair_name: shade_keypair 
openstacksdk-0.11.3/openstack/tests/ansible/roles/keypair/tasks/0000775000175100017510000000000013236151501024771 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keypair/tasks/main.yml0000666000175100017510000000240613236151340026445 0ustar zuulzuul00000000000000--- - name: Create keypair (non-existing) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present - name: Delete keypair (non-existing) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Generate test key file user: name: "{{ ansible_env.USER }}" generate_ssh_key: yes ssh_key_file: .ssh/shade_id_rsa - name: Create keypair (file) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present public_key_file: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa.pub" - name: Delete keypair (file) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Create keypair (key) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present public_key: "{{ lookup('file', '~/.ssh/shade_id_rsa.pub') }}" - name: Delete keypair (key) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Delete test key pub file file: name: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa.pub" state: absent - name: Delete test key pvt file file: name: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa" state: absent openstacksdk-0.11.3/openstack/tests/ansible/roles/user_group/0000775000175100017510000000000013236151501024372 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/user_group/tasks/0000775000175100017510000000000013236151501025517 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/user_group/tasks/main.yml0000666000175100017510000000116513236151340027174 0ustar zuulzuul00000000000000--- - name: Create user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: ansible.user@nowhere.net domain: default 
default_project: demo register: user - name: Assign user to nonadmins group os_user_group: cloud: "{{ cloud }}" state: present user: ansible_user group: nonadmins - name: Remove user from nonadmins group os_user_group: cloud: "{{ cloud }}" state: absent user: ansible_user group: nonadmins - name: Delete user os_user: cloud: "{{ cloud }}" state: absent name: ansible_user openstacksdk-0.11.3/openstack/tests/ansible/roles/nova_flavor/0000775000175100017510000000000013236151501024514 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/nova_flavor/tasks/0000775000175100017510000000000013236151501025641 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/nova_flavor/tasks/main.yml0000666000175100017510000000204013236151340027307 0ustar zuulzuul00000000000000--- - name: Create public flavor os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_public_flavor is_public: True ram: 1024 vcpus: 1 disk: 10 ephemeral: 10 swap: 1 flavorid: 12345 - name: Delete public flavor os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_public_flavor - name: Create private flavor os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_private_flavor is_public: False ram: 1024 vcpus: 1 disk: 10 ephemeral: 10 swap: 1 flavorid: 12345 - name: Delete private flavor os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_private_flavor - name: Create flavor (defaults) os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_defaults_flavor ram: 1024 vcpus: 1 disk: 10 - name: Delete flavor (defaults) os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_defaults_flavor openstacksdk-0.11.3/openstack/tests/ansible/roles/group/0000775000175100017510000000000013236151501023334 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/group/vars/0000775000175100017510000000000013236151501024307 5ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/group/vars/main.yml0000666000175100017510000000003213236151340025754 0ustar zuulzuul00000000000000group_name: ansible_group openstacksdk-0.11.3/openstack/tests/ansible/roles/group/tasks/0000775000175100017510000000000013236151501024461 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/group/tasks/main.yml0000666000175100017510000000056413236151340026140 0ustar zuulzuul00000000000000--- - name: Create group os_group: cloud: "{{ cloud }}" state: present name: "{{ group_name }}" - name: Update group os_group: cloud: "{{ cloud }}" state: present name: "{{ group_name }}" description: "updated description" - name: Delete group os_group: cloud: "{{ cloud }}" state: absent name: "{{ group_name }}" openstacksdk-0.11.3/openstack/tests/ansible/roles/subnet/0000775000175100017510000000000013236151501023500 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/subnet/vars/0000775000175100017510000000000013236151501024453 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/subnet/vars/main.yml0000666000175100017510000000003213236151340026120 0ustar zuulzuul00000000000000subnet_name: shade_subnet openstacksdk-0.11.3/openstack/tests/ansible/roles/subnet/tasks/0000775000175100017510000000000013236151501024625 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/subnet/tasks/main.yml0000666000175100017510000000176013236151340026303 0ustar zuulzuul00000000000000--- - name: Create network {{ network_name }} os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: present - name: Create subnet {{ subnet_name }} on network {{ network_name }} os_subnet: cloud: "{{ cloud }}" network_name: "{{ network_name }}" name: "{{ subnet_name }}" state: present enable_dhcp: false dns_nameservers: - 8.8.8.7 - 8.8.8.8 cidr: 192.168.0.0/24 gateway_ip: 192.168.0.1 allocation_pool_start: 192.168.0.2 allocation_pool_end: 
192.168.0.254 - name: Update subnet os_subnet: cloud: "{{ cloud }}" network_name: "{{ network_name }}" name: "{{ subnet_name }}" state: present dns_nameservers: - 8.8.8.7 cidr: 192.168.0.0/24 - name: Delete subnet {{ subnet_name }} os_subnet: cloud: "{{ cloud }}" name: "{{ subnet_name }}" state: absent - name: Delete network {{ network_name }} os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: absent openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_domain/0000775000175100017510000000000013236151501025370 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_domain/vars/0000775000175100017510000000000013236151501026343 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_domain/vars/main.yml0000666000175100017510000000003413236151340030012 0ustar zuulzuul00000000000000domain_name: ansible_domain openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_domain/tasks/0000775000175100017510000000000013236151501026515 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_domain/tasks/main.yml0000666000175100017510000000070413236151340030170 0ustar zuulzuul00000000000000--- - name: Create keystone domain os_keystone_domain: cloud: "{{ cloud }}" state: present name: "{{ domain_name }}" description: "test description" - name: Update keystone domain os_keystone_domain: cloud: "{{ cloud }}" name: "{{ domain_name }}" description: "updated description" - name: Delete keystone domain os_keystone_domain: cloud: "{{ cloud }}" state: absent name: "{{ domain_name }}" openstacksdk-0.11.3/openstack/tests/ansible/roles/image/0000775000175100017510000000000013236151501023262 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/image/vars/0000775000175100017510000000000013236151501024235 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/image/vars/main.yml0000666000175100017510000000003213236151340025702 
0ustar zuulzuul00000000000000image_name: ansible_image openstacksdk-0.11.3/openstack/tests/ansible/roles/image/tasks/0000775000175100017510000000000013236151501024407 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/image/tasks/main.yml0000666000175100017510000000215113236151340026060 0ustar zuulzuul00000000000000--- - name: Create a test image file shell: mktemp register: tmp_file - name: Fill test image file to 1MB shell: truncate -s 1048576 {{ tmp_file.stdout }} - name: Create raw image (defaults) os_image: cloud: "{{ cloud }}" state: present name: "{{ image_name }}" filename: "{{ tmp_file.stdout }}" disk_format: raw register: image - debug: var=image - name: Delete raw image (defaults) os_image: cloud: "{{ cloud }}" state: absent name: "{{ image_name }}" - name: Create raw image (complex) os_image: cloud: "{{ cloud }}" state: present name: "{{ image_name }}" filename: "{{ tmp_file.stdout }}" disk_format: raw is_public: True min_disk: 10 min_ram: 1024 kernel: cirros-vmlinuz ramdisk: cirros-initrd properties: cpu_arch: x86_64 distro: ubuntu register: image - debug: var=image - name: Delete raw image (complex) os_image: cloud: "{{ cloud }}" state: absent name: "{{ image_name }}" - name: Delete test image file file: name: "{{ tmp_file.stdout }}" state: absent openstacksdk-0.11.3/openstack/tests/ansible/roles/object/0000775000175100017510000000000013236151501023446 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/object/tasks/0000775000175100017510000000000013236151501024573 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/object/tasks/main.yml0000666000175100017510000000136513236151340026252 0ustar zuulzuul00000000000000--- - name: Create a test object file shell: mktemp register: tmp_file - name: Create container os_object: cloud: "{{ cloud }}" state: present container: ansible_container container_access: private - name: Put object os_object: cloud: "{{ cloud }}" state: 
present name: ansible_object filename: "{{ tmp_file.stdout }}" container: ansible_container - name: Delete object os_object: cloud: "{{ cloud }}" state: absent name: ansible_object container: ansible_container - name: Delete container os_object: cloud: "{{ cloud }}" state: absent container: ansible_container - name: Delete test object file file: name: "{{ tmp_file.stdout }}" state: absent openstacksdk-0.11.3/openstack/tests/ansible/roles/port/0000775000175100017510000000000013236151501023164 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/port/vars/0000775000175100017510000000000013236151501024137 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/port/vars/main.yml0000666000175100017510000000020113236151340025602 0ustar zuulzuul00000000000000network_name: ansible_port_network subnet_name: ansible_port_subnet port_name: ansible_port secgroup_name: ansible_port_secgroup openstacksdk-0.11.3/openstack/tests/ansible/roles/port/tasks/0000775000175100017510000000000013236151501024311 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/port/tasks/main.yml0000666000175100017510000000426013236151340025765 0ustar zuulzuul00000000000000--- - name: Create network os_network: cloud: "{{ cloud }}" state: present name: "{{ network_name }}" external: True - name: Create subnet os_subnet: cloud: "{{ cloud }}" state: present name: "{{ subnet_name }}" network_name: "{{ network_name }}" cidr: 10.5.5.0/24 - name: Create port (no security group) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" no_security_groups: True fixed_ips: - ip_address: 10.5.5.69 register: port - debug: var=port - name: Delete port (no security group) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Create security group os_security_group: cloud: "{{ cloud }}" state: present name: "{{ secgroup_name }}" description: Test group - name: Create port (with 
security group) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" fixed_ips: - ip_address: 10.5.5.69 security_groups: - "{{ secgroup_name }}" register: port - debug: var=port - name: Delete port (with security group) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Create port (with allowed_address_pairs and extra_dhcp_opts) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" no_security_groups: True allowed_address_pairs: - ip_address: 10.6.7.0/24 extra_dhcp_opts: - opt_name: "bootfile-name" opt_value: "testfile.1" register: port - debug: var=port - name: Delete port (with allowed_address_pairs and extra_dhcp_opts) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Delete security group os_security_group: cloud: "{{ cloud }}" state: absent name: "{{ secgroup_name }}" - name: Delete subnet os_subnet: cloud: "{{ cloud }}" state: absent name: "{{ subnet_name }}" - name: Delete network os_network: cloud: "{{ cloud }}" state: absent name: "{{ network_name }}" openstacksdk-0.11.3/openstack/tests/ansible/roles/router/0000775000175100017510000000000013236151501023520 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/router/vars/0000775000175100017510000000000013236151501024473 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/router/vars/main.yml0000666000175100017510000000011013236151340026135 0ustar zuulzuul00000000000000external_network_name: ansible_external_net router_name: ansible_router openstacksdk-0.11.3/openstack/tests/ansible/roles/router/tasks/0000775000175100017510000000000013236151501024645 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/router/tasks/main.yml0000666000175100017510000000307113236151340026320 0ustar zuulzuul00000000000000--- - name: Create external network os_network: cloud: "{{ cloud }}" state: present name: 
"{{ external_network_name }}" external: true - name: Create internal network os_network: cloud: "{{ cloud }}" state: present name: "{{ network_name }}" external: false - name: Create subnet1 os_subnet: cloud: "{{ cloud }}" state: present network_name: "{{ external_network_name }}" name: shade_subnet1 cidr: 10.6.6.0/24 - name: Create subnet2 os_subnet: cloud: "{{ cloud }}" state: present network_name: "{{ network_name }}" name: shade_subnet2 cidr: 10.7.7.0/24 - name: Create router os_router: cloud: "{{ cloud }}" state: present name: "{{ router_name }}" network: "{{ external_network_name }}" - name: Update router os_router: cloud: "{{ cloud }}" state: present name: "{{ router_name }}" network: "{{ external_network_name }}" interfaces: - shade_subnet2 - name: Delete router os_router: cloud: "{{ cloud }}" state: absent name: "{{ router_name }}" - name: Delete subnet1 os_subnet: cloud: "{{ cloud }}" state: absent name: shade_subnet1 - name: Delete subnet2 os_subnet: cloud: "{{ cloud }}" state: absent name: shade_subnet2 - name: Delete internal network os_network: cloud: "{{ cloud }}" state: absent name: "{{ network_name }}" - name: Delete external network os_network: cloud: "{{ cloud }}" state: absent name: "{{ external_network_name }}" openstacksdk-0.11.3/openstack/tests/ansible/roles/security_group/0000775000175100017510000000000013236151501025263 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/security_group/vars/0000775000175100017510000000000013236151501026236 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/security_group/vars/main.yml0000666000175100017510000000003613236151340027707 0ustar zuulzuul00000000000000secgroup_name: shade_secgroup openstacksdk-0.11.3/openstack/tests/ansible/roles/security_group/tasks/0000775000175100017510000000000013236151501026410 5ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/security_group/tasks/main.yml0000666000175100017510000000570613236151340030072 0ustar zuulzuul00000000000000--- - name: Create security group os_security_group: cloud: "{{ cloud }}" name: "{{ secgroup_name }}" state: present description: Created from Ansible playbook - name: Create empty ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: icmp remote_ip_prefix: 0.0.0.0/0 - name: Create -1 ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: icmp port_range_min: -1 port_range_max: -1 remote_ip_prefix: 0.0.0.0/0 - name: Create empty TCP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp remote_ip_prefix: 0.0.0.0/0 - name: Create empty UDP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: udp remote_ip_prefix: 0.0.0.0/0 - name: Create HTTP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp port_range_min: 80 port_range_max: 80 remote_ip_prefix: 0.0.0.0/0 - name: Create egress rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp port_range_min: 30000 port_range_max: 30001 remote_ip_prefix: 0.0.0.0/0 direction: egress - name: Delete empty ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: icmp remote_ip_prefix: 0.0.0.0/0 - name: Delete -1 ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: icmp port_range_min: -1 port_range_max: -1 remote_ip_prefix: 0.0.0.0/0 - name: Delete empty TCP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent 
protocol: tcp remote_ip_prefix: 0.0.0.0/0 - name: Delete empty UDP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: udp remote_ip_prefix: 0.0.0.0/0 - name: Delete HTTP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp port_range_min: 80 port_range_max: 80 remote_ip_prefix: 0.0.0.0/0 - name: Delete egress rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp port_range_min: 30000 port_range_max: 30001 remote_ip_prefix: 0.0.0.0/0 direction: egress - name: Delete security group os_security_group: cloud: "{{ cloud }}" name: "{{ secgroup_name }}" state: absent openstacksdk-0.11.3/openstack/tests/ansible/roles/auth/0000775000175100017510000000000013236151501023141 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/auth/tasks/0000775000175100017510000000000013236151501024266 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/auth/tasks/main.yml0000666000175100017510000000014513236151340025740 0ustar zuulzuul00000000000000--- - name: Authenticate to the cloud os_auth: cloud={{ cloud }} - debug: var=service_catalog openstacksdk-0.11.3/openstack/tests/ansible/roles/server/0000775000175100017510000000000013236151501023506 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/server/vars/0000775000175100017510000000000013236151501024461 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/server/vars/main.yaml0000666000175100017510000000010413236151340026267 0ustar zuulzuul00000000000000server_network: private server_name: ansible_server flavor: m1.tiny openstacksdk-0.11.3/openstack/tests/ansible/roles/server/tasks/0000775000175100017510000000000013236151501024633 5ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/server/tasks/main.yml0000666000175100017510000000370413236151340026311 0ustar zuulzuul00000000000000--- - name: Create server with meta as CSV os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" auto_floating_ip: false meta: "key1=value1,key2=value2" wait: true register: server - debug: var=server - name: Delete server with meta as CSV os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server with meta as dict os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" auto_floating_ip: false network: "{{ server_network }}" meta: key1: value1 key2: value2 wait: true register: server - debug: var=server - name: Delete server with meta as dict os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server (FIP from pool/network) os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" floating_ip_pools: - public wait: true register: server - debug: var=server - name: Delete server (FIP from pool/network) os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server from volume os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" auto_floating_ip: false boot_from_volume: true volume_size: 5 terminate_volume: true wait: true register: server - debug: var=server - name: Delete server with volume os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_role/0000775000175100017510000000000013236151501025062 5ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_role/vars/0000775000175100017510000000000013236151501026035 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_role/vars/main.yml0000666000175100017510000000004113236151340027502 0ustar zuulzuul00000000000000role_name: ansible_keystone_role openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_role/tasks/0000775000175100017510000000000013236151501026207 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/keystone_role/tasks/main.yml0000666000175100017510000000037413236151340027665 0ustar zuulzuul00000000000000--- - name: Create keystone role os_keystone_role: cloud: "{{ cloud }}" state: present name: "{{ role_name }}" - name: Delete keystone role os_keystone_role: cloud: "{{ cloud }}" state: absent name: "{{ role_name }}" openstacksdk-0.11.3/openstack/tests/ansible/roles/client_config/0000775000175100017510000000000013236151501025003 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/client_config/tasks/0000775000175100017510000000000013236151501026130 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/roles/client_config/tasks/main.yml0000666000175100017510000000023313236151340027600 0ustar zuulzuul00000000000000--- - name: List all profiles os_client_config: register: list # WARNING: This will output sensitive authentication information!!!! - debug: var=list openstacksdk-0.11.3/openstack/tests/ansible/hooks/0000775000175100017510000000000013236151501022177 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/ansible/hooks/post_test_hook.sh0000777000175100017510000000210313236151340025601 0ustar zuulzuul00000000000000#!/bin/sh # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(shade) Rework for Zuul v3 export OPENSTACKSDK_DIR="$BASE/new/python-openstacksdk" cd $OPENSTACKSDK_DIR sudo chown -R jenkins:stack $OPENSTACKSDK_DIR echo "Running shade Ansible test suite" if [ ${OPENSTACKSDK_ANSIBLE_DEV:-0} -eq 1 ] then # Use the upstream development version of Ansible set +e sudo -E -H -u jenkins tox -eansible -- -d EXIT_CODE=$? set -e else # Use the release version of Ansible set +e sudo -E -H -u jenkins tox -eansible EXIT_CODE=$? set -e fi exit $EXIT_CODE openstacksdk-0.11.3/openstack/tests/ansible/run.yml0000666000175100017510000000162313236151340022410 0ustar zuulzuul00000000000000--- - hosts: localhost connection: local gather_facts: true roles: - { role: auth, tags: auth } - { role: client_config, tags: client_config } - { role: group, tags: group } # TODO(mordred) Reenable this once the fixed os_image winds up in an # upstream ansible release. 
    # - { role: image, tags: image }
    - { role: keypair, tags: keypair }
    - { role: keystone_domain, tags: keystone_domain }
    - { role: keystone_role, tags: keystone_role }
    - { role: network, tags: network }
    - { role: nova_flavor, tags: nova_flavor }
    - { role: object, tags: object }
    - { role: port, tags: port }
    - { role: router, tags: router }
    - { role: security_group, tags: security_group }
    - { role: server, tags: server }
    - { role: subnet, tags: subnet }
    - { role: user, tags: user }
    - { role: user_group, tags: user_group }
    - { role: volume, tags: volume }

openstacksdk-0.11.3/openstack/tests/ansible/README.txt
This directory contains a testing infrastructure for the Ansible
OpenStack modules. You will need a clouds.yaml file in order to run
the tests. You must provide a value for the `cloud` variable for each
run (using the -e option), as a default is not currently provided.

If you want to run these tests against devstack, it is easiest to use
the tox target. This assumes you have a devstack-admin cloud defined
in your clouds.yaml file that points to devstack. Some examples of
using tox:

    tox -e ansible

    tox -e ansible keypair security_group

If you want to run these tests directly, or against different clouds,
then you'll need to use the ansible-playbook command that comes with
the Ansible distribution and feed it the run.yml playbook.
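For reference, a minimal clouds.yaml of the kind the README assumes might look like the following. The cloud name, endpoint, and all credential values here are placeholders, not values shipped with the project:

```yaml
clouds:
  devstack-admin:
    auth:
      auth_url: http://192.0.2.10/identity   # placeholder endpoint
      username: admin
      password: secret                       # placeholder credentials
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

The top-level key under `clouds:` is the name you pass as the `cloud` variable, e.g. `-e "cloud=devstack-admin"`.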
Some examples:

    # Run all module tests against a provider
    ansible-playbook run.yml -e "cloud=hp"

    # Run only the keypair and security_group tests
    ansible-playbook run.yml -e "cloud=hp" --tags "keypair,security_group"

    # Run all tests except security_group
    ansible-playbook run.yml -e "cloud=hp" --skip-tags "security_group"

openstacksdk-0.11.3/openstack/tests/unit/block_store/v2/test_stats.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
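The unit test modules that follow exercise the SDK's declarative resource layer, in which `Body`, `Header`, and `URI` descriptors map client-side attribute names to server-side field names and apply optional type conversion on access. As a rough, self-contained sketch of that descriptor pattern — the class names mirror the SDK's, but the implementation below is a simplification for illustration, not the SDK's actual code:

```python
# Minimal sketch of the component-descriptor pattern the tests exercise.
# Body/Header descriptors store values in separate per-instance dicts,
# keyed by the server-side field name. Illustration only.

class _Component(object):
    key = None  # subclasses set this to "_body" or "_header"

    def __init__(self, name, type=None, default=None):
        self.name = name        # server-side field name
        self.type = type        # optional conversion applied on access
        self.default = default

    def __get__(self, instance, owner):
        if instance is None:
            return None
        attrs = getattr(instance, self.key)
        if self.name not in attrs:
            return self.default
        value = attrs[self.name]
        if value is not None and self.type is not None:
            value = self.type(value)
        return value

    def __set__(self, instance, value):
        getattr(instance, self.key)[self.name] = value


class Body(_Component):
    key = "_body"


class Header(_Component):
    key = "_header"


class Resource(object):
    def __init__(self, **attrs):
        self._body = {}
        self._header = {}
        for client_name, value in attrs.items():
            setattr(self, client_name, value)


class Server(Resource):
    name = Body("name")
    size = Body("size", type=int)
    etag = Header("etag")


s = Server(name="web-1", size="42", etag="abc")
print(s.name)    # value stored under the server-side name in s._body
print(s.size)    # raw "42" is run through int() on access
```

The raw value stays untyped in `s._body`; conversion happens only when the attribute is read, which is the same short-circuiting behavior `TestComponent` below checks for `None` values and defaults.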
import testtools

from openstack.block_storage.v2 import stats

POOLS = {"name": "pool1",
         "capabilities": {
             "updated": "2014-10-28T00:00:00-00:00",
             "total_capacity": 1024,
             "free_capacity": 100,
             "volume_backend_name": "pool1",
             "reserved_percentage": "0",
             "driver_version": "1.0.0",
             "storage_protocol": "iSCSI",
             "QoS_support": "false"
         }
         }


class TestBackendPools(testtools.TestCase):

    def setUp(self):
        super(TestBackendPools, self).setUp()

    def test_basic(self):
        sot = stats.Pools(POOLS)
        self.assertEqual("pool", sot.resource_key)
        self.assertEqual("pools", sot.resources_key)
        self.assertEqual("/scheduler-stats/get_pools?detail=True",
                         sot.base_path)
        self.assertEqual("volume", sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertFalse(sot.allow_update)

openstacksdk-0.11.3/openstack/tests/unit/test_resource.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools from keystoneauth1 import adapter import mock import requests import six from openstack import exceptions from openstack import format from openstack import resource from openstack.tests.unit import base class FakeResponse(object): def __init__(self, response, status_code=200, headers=None): self.body = response self.status_code = status_code headers = headers if headers else {'content-type': 'application/json'} self.headers = requests.structures.CaseInsensitiveDict(headers) def json(self): return self.body class TestComponent(base.TestCase): class ExampleComponent(resource._BaseComponent): key = "_example" # Since we're testing ExampleComponent, which is as isolated as we # can test _BaseComponent due to it's needing to be a data member # of a class that has an attribute on the parent class named `key`, # each test has to implement a class with a name that is the same # as ExampleComponent.key, which should be a dict containing the # keys and values to test against. def test_implementations(self): self.assertEqual("_body", resource.Body.key) self.assertEqual("_header", resource.Header.key) self.assertEqual("_uri", resource.URI.key) def test_creation(self): sot = resource._BaseComponent( "name", type=int, default=1, alternate_id=True) self.assertEqual("name", sot.name) self.assertEqual(int, sot.type) self.assertEqual(1, sot.default) self.assertTrue(sot.alternate_id) def test_get_no_instance(self): sot = resource._BaseComponent("test") # Test that we short-circuit everything when given no instance. result = sot.__get__(None, None) self.assertIsNone(result) # NOTE: Some tests will use a default=1 setting when testing result # values that should be None because the default-for-default is also None. def test_get_name_None(self): name = "name" class Parent(object): _example = {name: None} instance = Parent() sot = TestComponent.ExampleComponent(name, default=1) # Test that we short-circuit any typing of a None value. 
result = sot.__get__(instance, None) self.assertIsNone(result) def test_get_default(self): expected_result = 123 class Parent(object): _example = {} instance = Parent() # NOTE: type=dict but the default value is an int. If we didn't # short-circuit the typing part of __get__ it would fail. sot = TestComponent.ExampleComponent("name", type=dict, default=expected_result) # Test that we directly return any default value. result = sot.__get__(instance, None) self.assertEqual(expected_result, result) def test_get_name_untyped(self): name = "name" expected_result = 123 class Parent(object): _example = {name: expected_result} instance = Parent() sot = TestComponent.ExampleComponent("name") # Test that we return any the value as it is set. result = sot.__get__(instance, None) self.assertEqual(expected_result, result) # The code path for typing after a raw value has been found is the same. def test_get_name_typed(self): name = "name" value = "123" class Parent(object): _example = {name: value} instance = Parent() sot = TestComponent.ExampleComponent("name", type=int) # Test that we run the underlying value through type conversion. result = sot.__get__(instance, None) self.assertEqual(int(value), result) def test_get_name_formatter(self): name = "name" value = "123" expected_result = "one hundred twenty three" class Parent(object): _example = {name: value} class FakeFormatter(format.Formatter): @classmethod def deserialize(cls, value): return expected_result instance = Parent() sot = TestComponent.ExampleComponent("name", type=FakeFormatter) # Mock out issubclass rather than having an actual format.Formatter # This can't be mocked via decorator, isolate it to wrapping the call. 
result = sot.__get__(instance, None) self.assertEqual(expected_result, result) def test_set_name_untyped(self): name = "name" expected_value = "123" class Parent(object): _example = {} instance = Parent() sot = TestComponent.ExampleComponent("name") # Test that we don't run the value through type conversion. sot.__set__(instance, expected_value) self.assertEqual(expected_value, instance._example[name]) def test_set_name_typed(self): expected_value = "123" class Parent(object): _example = {} instance = Parent() # The type we give to ExampleComponent has to be an actual type, # not an instance, so we can't get the niceties of a mock.Mock # instance that would allow us to call `assert_called_once_with` to # ensure that we're sending the value through the type. # Instead, we use this tiny version of a similar thing. class FakeType(object): calls = [] def __init__(self, arg): FakeType.calls.append(arg) sot = TestComponent.ExampleComponent("name", type=FakeType) # Test that we run the value through type conversion. sot.__set__(instance, expected_value) self.assertEqual([expected_value], FakeType.calls) def test_set_name_formatter(self): expected_value = "123" class Parent(object): _example = {} instance = Parent() # As with test_set_name_typed, create a pseudo-Mock to track what # gets called on the type. class FakeFormatter(format.Formatter): calls = [] @classmethod def serialize(cls, arg): FakeFormatter.calls.append(arg) @classmethod def deserialize(cls, arg): FakeFormatter.calls.append(arg) sot = TestComponent.ExampleComponent("name", type=FakeFormatter) # Test that we run the value through type conversion. 
sot.__set__(instance, expected_value) self.assertEqual([expected_value], FakeFormatter.calls) def test_delete_name(self): name = "name" expected_value = "123" class Parent(object): _example = {name: expected_value} instance = Parent() sot = TestComponent.ExampleComponent("name") sot.__delete__(instance) self.assertNotIn(name, instance._example) def test_delete_name_doesnt_exist(self): name = "name" expected_value = "123" class Parent(object): _example = {"what": expected_value} instance = Parent() sot = TestComponent.ExampleComponent(name) sot.__delete__(instance) self.assertNotIn(name, instance._example) class TestComponentManager(base.TestCase): def test_create_basic(self): sot = resource._ComponentManager() self.assertEqual(dict(), sot.attributes) self.assertEqual(set(), sot._dirty) def test_create_unsynced(self): attrs = {"hey": 1, "hi": 2, "hello": 3} sync = False sot = resource._ComponentManager(attributes=attrs, synchronized=sync) self.assertEqual(attrs, sot.attributes) self.assertEqual(set(attrs.keys()), sot._dirty) def test_create_synced(self): attrs = {"hey": 1, "hi": 2, "hello": 3} sync = True sot = resource._ComponentManager(attributes=attrs, synchronized=sync) self.assertEqual(attrs, sot.attributes) self.assertEqual(set(), sot._dirty) def test_getitem(self): key = "key" value = "value" attrs = {key: value} sot = resource._ComponentManager(attributes=attrs) self.assertEqual(value, sot.__getitem__(key)) def test_setitem_new(self): key = "key" value = "value" sot = resource._ComponentManager() sot.__setitem__(key, value) self.assertIn(key, sot.attributes) self.assertIn(key, sot.dirty) def test_setitem_unchanged(self): key = "key" value = "value" attrs = {key: value} sot = resource._ComponentManager(attributes=attrs, synchronized=True) # This shouldn't end up in the dirty list since we're just re-setting. 
sot.__setitem__(key, value) self.assertEqual(value, sot.attributes[key]) self.assertNotIn(key, sot.dirty) def test_delitem(self): key = "key" value = "value" attrs = {key: value} sot = resource._ComponentManager(attributes=attrs, synchronized=True) sot.__delitem__(key) self.assertIsNone(sot.dirty[key]) def test_iter(self): attrs = {"key": "value"} sot = resource._ComponentManager(attributes=attrs) self.assertItemsEqual(iter(attrs), sot.__iter__()) def test_len(self): attrs = {"key": "value"} sot = resource._ComponentManager(attributes=attrs) self.assertEqual(len(attrs), sot.__len__()) def test_dirty(self): key = "key" key2 = "key2" value = "value" attrs = {key: value} sot = resource._ComponentManager(attributes=attrs, synchronized=False) self.assertEqual({key: value}, sot.dirty) sot.__setitem__(key2, value) self.assertEqual({key: value, key2: value}, sot.dirty) def test_clean(self): key = "key" value = "value" attrs = {key: value} sot = resource._ComponentManager(attributes=attrs, synchronized=False) self.assertEqual(attrs, sot.dirty) sot.clean() self.assertEqual(dict(), sot.dirty) class Test_Request(base.TestCase): def test_create(self): uri = 1 body = 2 headers = 3 sot = resource._Request(uri, body, headers) self.assertEqual(uri, sot.url) self.assertEqual(body, sot.body) self.assertEqual(headers, sot.headers) class TestQueryParameters(base.TestCase): def test_create(self): location = "location" mapping = {"first_name": "first-name"} sot = resource.QueryParameters(location, **mapping) self.assertEqual({"location": "location", "first_name": "first-name", "limit": "limit", "marker": "marker"}, sot._mapping) def test_transpose_unmapped(self): location = "location" mapping = {"first_name": "first-name"} sot = resource.QueryParameters(location, **mapping) result = sot._transpose({"location": "Brooklyn", "first_name": "Brian", "last_name": "Curtin"}) # last_name isn't mapped and shouldn't be included self.assertEqual({"location": "Brooklyn", "first-name": "Brian"}, 
result) def test_transpose_not_in_query(self): location = "location" mapping = {"first_name": "first-name"} sot = resource.QueryParameters(location, **mapping) result = sot._transpose({"location": "Brooklyn"}) # first_name not being in the query shouldn't affect results self.assertEqual({"location": "Brooklyn"}, result) class TestResource(base.TestCase): def test_initialize_basic(self): body = {"body": 1} header = {"header": 2, "Location": "somewhere"} uri = {"uri": 3} everything = dict(itertools.chain(body.items(), header.items(), uri.items())) mock_collect = mock.Mock() mock_collect.return_value = body, header, uri with mock.patch.object(resource.Resource, "_collect_attrs", mock_collect): sot = resource.Resource(_synchronized=False, **everything) mock_collect.assert_called_once_with(everything) self.assertEqual("somewhere", sot.location) self.assertIsInstance(sot._body, resource._ComponentManager) self.assertEqual(body, sot._body.dirty) self.assertIsInstance(sot._header, resource._ComponentManager) self.assertEqual(header, sot._header.dirty) self.assertIsInstance(sot._uri, resource._ComponentManager) self.assertEqual(uri, sot._uri.dirty) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertFalse(sot.allow_list) self.assertFalse(sot.allow_head) self.assertEqual('PUT', sot.update_method) self.assertEqual('POST', sot.create_method) def test_repr(self): a = {"a": 1} b = {"b": 2} c = {"c": 3} class Test(resource.Resource): def __init__(self): self._body = mock.Mock() self._body.attributes.items = mock.Mock( return_value=a.items()) self._header = mock.Mock() self._header.attributes.items = mock.Mock( return_value=b.items()) self._uri = mock.Mock() self._uri.attributes.items = mock.Mock( return_value=c.items()) the_repr = repr(Test()) # Don't test the arguments all together since the dictionary order # they're rendered in can't be depended on, nor does it matter. 
self.assertIn("openstack.tests.unit.test_resource.Test", the_repr) self.assertIn("a=1", the_repr) self.assertIn("b=2", the_repr) self.assertIn("c=3", the_repr) def test_equality(self): class Example(resource.Resource): x = resource.Body("x") y = resource.Header("y") z = resource.URI("z") e1 = Example(x=1, y=2, z=3) e2 = Example(x=1, y=2, z=3) e3 = Example(x=0, y=0, z=0) self.assertEqual(e1, e2) self.assertNotEqual(e1, e3) def test__update(self): sot = resource.Resource() body = "body" header = "header" uri = "uri" sot._collect_attrs = mock.Mock(return_value=(body, header, uri)) sot._body.update = mock.Mock() sot._header.update = mock.Mock() sot._uri.update = mock.Mock() args = {"arg": 1} sot._update(**args) sot._collect_attrs.assert_called_once_with(args) sot._body.update.assert_called_once_with(body) sot._header.update.assert_called_once_with(header) sot._uri.update.assert_called_once_with(uri) def test__collect_attrs(self): sot = resource.Resource() expected_attrs = ["body", "header", "uri"] sot._consume_attrs = mock.Mock() sot._consume_attrs.side_effect = expected_attrs # It'll get passed an empty dict at the least. actual_attrs = sot._collect_attrs(dict()) self.assertItemsEqual(expected_attrs, actual_attrs) def test__consume_attrs(self): serverside_key1 = "someKey1" clientside_key1 = "some_key1" serverside_key2 = "someKey2" clientside_key2 = "some_key2" value1 = "value1" value2 = "value2" mapping = {serverside_key1: clientside_key1, serverside_key2: clientside_key2} other_key = "otherKey" other_value = "other" attrs = {clientside_key1: value1, serverside_key2: value2, other_key: other_value} sot = resource.Resource() result = sot._consume_attrs(mapping, attrs) # Make sure that the expected key was consumed and we're only # left with the other stuff. self.assertDictEqual({other_key: other_value}, attrs) # Make sure that after we've popped our relevant client-side # key off that we are returning it keyed off of its server-side # name. 
self.assertDictEqual({serverside_key1: value1, serverside_key2: value2}, result) def test__mapping_defaults(self): # Check that even on an empty class, we get the expected # built-in attributes. self.assertIn("location", resource.Resource._header_mapping()) self.assertIn("name", resource.Resource._body_mapping()) self.assertIn("id", resource.Resource._body_mapping()) def test__mapping_overrides(self): # Iterating through the MRO used to wipe out overrides of mappings # found in base classes. new_name = "MyName" new_id = "MyID" class Test(resource.Resource): name = resource.Body(new_name) id = resource.Body(new_id) mapping = Test._body_mapping() self.assertEqual("name", mapping["MyName"]) self.assertEqual("id", mapping["MyID"]) def test__body_mapping(self): class Test(resource.Resource): x = resource.Body("x") y = resource.Body("y") z = resource.Body("z") self.assertIn("x", Test._body_mapping()) self.assertIn("y", Test._body_mapping()) self.assertIn("z", Test._body_mapping()) def test__header_mapping(self): class Test(resource.Resource): x = resource.Header("x") y = resource.Header("y") z = resource.Header("z") self.assertIn("x", Test._header_mapping()) self.assertIn("y", Test._header_mapping()) self.assertIn("z", Test._header_mapping()) def test__uri_mapping(self): class Test(resource.Resource): x = resource.URI("x") y = resource.URI("y") z = resource.URI("z") self.assertIn("x", Test._uri_mapping()) self.assertIn("y", Test._uri_mapping()) self.assertIn("z", Test._uri_mapping()) def test__getattribute__id_in_body(self): id = "lol" sot = resource.Resource(id=id) result = getattr(sot, "id") self.assertEqual(result, id) def test__getattribute__id_with_alternate(self): id = "lol" class Test(resource.Resource): blah = resource.Body("blah", alternate_id=True) sot = Test(blah=id) result = getattr(sot, "id") self.assertEqual(result, id) def test__getattribute__id_without_alternate(self): class Test(resource.Resource): id = None sot = Test() self.assertIsNone(sot.id) def 
test__alternate_id_None(self): self.assertEqual("", resource.Resource._alternate_id()) def test__alternate_id(self): class Test(resource.Resource): alt = resource.Body("the_alt", alternate_id=True) self.assertTrue("the_alt", Test._alternate_id()) value1 = "lol" sot = Test(alt=value1) self.assertEqual(sot.alt, value1) self.assertEqual(sot.id, value1) value2 = "rofl" sot = Test(the_alt=value2) self.assertEqual(sot.alt, value2) self.assertEqual(sot.id, value2) def test__get_id_instance(self): class Test(resource.Resource): id = resource.Body("id") value = "id" sot = Test(id=value) self.assertEqual(value, sot._get_id(sot)) def test__get_id_instance_alternate(self): class Test(resource.Resource): attr = resource.Body("attr", alternate_id=True) value = "id" sot = Test(attr=value) self.assertEqual(value, sot._get_id(sot)) def test__get_id_value(self): value = "id" self.assertEqual(value, resource.Resource._get_id(value)) def test_to_dict(self): class Test(resource.Resource): foo = resource.Header('foo') bar = resource.Body('bar') res = Test(id='FAKE_ID') expected = { 'id': 'FAKE_ID', 'name': None, 'location': None, 'foo': None, 'bar': None } self.assertEqual(expected, res.to_dict()) def test_to_dict_no_body(self): class Test(resource.Resource): foo = resource.Header('foo') bar = resource.Body('bar') res = Test(id='FAKE_ID') expected = { 'location': None, 'foo': None, } self.assertEqual(expected, res.to_dict(body=False)) def test_to_dict_no_header(self): class Test(resource.Resource): foo = resource.Header('foo') bar = resource.Body('bar') res = Test(id='FAKE_ID') expected = { 'id': 'FAKE_ID', 'name': None, 'bar': None } self.assertEqual(expected, res.to_dict(headers=False)) def test_to_dict_ignore_none(self): class Test(resource.Resource): foo = resource.Header('foo') bar = resource.Body('bar') res = Test(id='FAKE_ID', bar='BAR') expected = { 'id': 'FAKE_ID', 'bar': 'BAR', } self.assertEqual(expected, res.to_dict(ignore_none=True)) def test_to_dict_with_mro(self): class 
Parent(resource.Resource): foo = resource.Header('foo') bar = resource.Body('bar') class Child(Parent): foo_new = resource.Header('foo_baz_server') bar_new = resource.Body('bar_baz_server') res = Child(id='FAKE_ID') expected = { 'foo': None, 'bar': None, 'foo_new': None, 'bar_new': None, 'id': 'FAKE_ID', 'location': None, 'name': None } self.assertEqual(expected, res.to_dict()) def test_to_dict_value_error(self): class Test(resource.Resource): foo = resource.Header('foo') bar = resource.Body('bar') res = Test(id='FAKE_ID') err = self.assertRaises(ValueError, res.to_dict, body=False, headers=False) self.assertEqual('At least one of `body` or `headers` must be True', six.text_type(err)) def test_to_dict_with_mro_no_override(self): class Parent(resource.Resource): header = resource.Header('HEADER') body = resource.Body('BODY') class Child(Parent): # The following two properties are not supposed to be overridden # by the parent class property values. header = resource.Header('ANOTHER_HEADER') body = resource.Body('ANOTHER_BODY') res = Child(id='FAKE_ID', body='BODY_VALUE', header='HEADER_VALUE') expected = { 'body': 'BODY_VALUE', 'header': 'HEADER_VALUE', 'id': 'FAKE_ID', 'location': None, 'name': None } self.assertEqual(expected, res.to_dict()) def test_new(self): class Test(resource.Resource): attr = resource.Body("attr") value = "value" sot = Test.new(attr=value) self.assertIn("attr", sot._body.dirty) self.assertEqual(value, sot.attr) def test_existing(self): class Test(resource.Resource): attr = resource.Body("attr") value = "value" sot = Test.existing(attr=value) self.assertNotIn("attr", sot._body.dirty) self.assertEqual(value, sot.attr) def test__prepare_request_with_id(self): class Test(resource.Resource): base_path = "/something" body_attr = resource.Body("x") header_attr = resource.Header("y") the_id = "id" body_value = "body" header_value = "header" sot = Test(id=the_id, body_attr=body_value, header_attr=header_value, _synchronized=False) result = 
sot._prepare_request(requires_id=True) self.assertEqual("something/id", result.url) self.assertEqual({"x": body_value, "id": the_id}, result.body) self.assertEqual({"y": header_value}, result.headers) def test__prepare_request_missing_id(self): sot = resource.Resource(id=None) self.assertRaises(exceptions.InvalidRequest, sot._prepare_request, requires_id=True) def test__prepare_request_with_key(self): key = "key" class Test(resource.Resource): base_path = "/something" resource_key = key body_attr = resource.Body("x") header_attr = resource.Header("y") body_value = "body" header_value = "header" sot = Test(body_attr=body_value, header_attr=header_value, _synchronized=False) result = sot._prepare_request(requires_id=False, prepend_key=True) self.assertEqual("/something", result.url) self.assertEqual({key: {"x": body_value}}, result.body) self.assertEqual({"y": header_value}, result.headers) def test__translate_response_no_body(self): class Test(resource.Resource): attr = resource.Header("attr") response = FakeResponse({}, headers={"attr": "value"}) sot = Test() sot._translate_response(response, has_body=False) self.assertEqual(dict(), sot._header.dirty) self.assertEqual("value", sot.attr) def test__translate_response_with_body_no_resource_key(self): class Test(resource.Resource): attr = resource.Body("attr") body = {"attr": "value"} response = FakeResponse(body) sot = Test() sot._filter_component = mock.Mock(side_effect=[body, dict()]) sot._translate_response(response, has_body=True) self.assertEqual("value", sot.attr) self.assertEqual(dict(), sot._body.dirty) self.assertEqual(dict(), sot._header.dirty) def test__translate_response_with_body_with_resource_key(self): key = "key" class Test(resource.Resource): resource_key = key attr = resource.Body("attr") body = {"attr": "value"} response = FakeResponse({key: body}) sot = Test() sot._filter_component = mock.Mock(side_effect=[body, dict()]) sot._translate_response(response, has_body=True) self.assertEqual("value", 
sot.attr) self.assertEqual(dict(), sot._body.dirty) self.assertEqual(dict(), sot._header.dirty) def test_cant_do_anything(self): class Test(resource.Resource): allow_create = False allow_get = False allow_update = False allow_delete = False allow_head = False allow_list = False sot = Test() # The first argument to all of these operations is the session, # but we raise before we get to it so just pass anything in. self.assertRaises(exceptions.MethodNotSupported, sot.create, "") self.assertRaises(exceptions.MethodNotSupported, sot.get, "") self.assertRaises(exceptions.MethodNotSupported, sot.delete, "") self.assertRaises(exceptions.MethodNotSupported, sot.head, "") # list is a generator so you need to begin consuming # it in order to exercise the failure. the_list = sot.list("") self.assertRaises(exceptions.MethodNotSupported, next, the_list) # Update checks the dirty list first before even trying to see # if the call can be made, so fake a dirty list. sot._body = mock.Mock() sot._body.dirty = mock.Mock(return_value={"x": "y"}) self.assertRaises(exceptions.MethodNotSupported, sot.update, "") class TestResourceActions(base.TestCase): def setUp(self): super(TestResourceActions, self).setUp() self.service_name = "service" self.base_path = "base_path" class Test(resource.Resource): service = self.service_name base_path = self.base_path resources_key = 'resources' allow_create = True allow_get = True allow_head = True allow_update = True allow_delete = True allow_list = True self.test_class = Test self.request = mock.Mock(spec=resource._Request) self.request.url = "uri" self.request.body = "body" self.request.headers = "headers" self.response = FakeResponse({}) self.sot = Test(id="id") self.sot._prepare_request = mock.Mock(return_value=self.request) self.sot._translate_response = mock.Mock() self.session = mock.Mock(spec=adapter.Adapter) self.session.create = mock.Mock(return_value=self.response) self.session.get = mock.Mock(return_value=self.response) self.session.put = 
mock.Mock(return_value=self.response) self.session.patch = mock.Mock(return_value=self.response) self.session.post = mock.Mock(return_value=self.response) self.session.delete = mock.Mock(return_value=self.response) self.session.head = mock.Mock(return_value=self.response) def _test_create(self, cls, requires_id=False, prepend_key=False): id = "id" if requires_id else None sot = cls(id=id) sot._prepare_request = mock.Mock(return_value=self.request) sot._translate_response = mock.Mock() result = sot.create(self.session, prepend_key=prepend_key) sot._prepare_request.assert_called_once_with( requires_id=requires_id, prepend_key=prepend_key) if requires_id: self.session.put.assert_called_once_with( self.request.url, json=self.request.body, headers=self.request.headers) else: self.session.post.assert_called_once_with( self.request.url, json=self.request.body, headers=self.request.headers) sot._translate_response.assert_called_once_with(self.response) self.assertEqual(result, sot) def test_put_create(self): class Test(resource.Resource): service = self.service_name base_path = self.base_path allow_create = True create_method = 'PUT' self._test_create(Test, requires_id=True, prepend_key=True) def test_post_create(self): class Test(resource.Resource): service = self.service_name base_path = self.base_path allow_create = True create_method = 'POST' self._test_create(Test, requires_id=False, prepend_key=True) def test_get(self): result = self.sot.get(self.session) self.sot._prepare_request.assert_called_once_with(requires_id=True) self.session.get.assert_called_once_with( self.request.url,) self.sot._translate_response.assert_called_once_with(self.response) self.assertEqual(result, self.sot) def test_get_not_requires_id(self): result = self.sot.get(self.session, False) self.sot._prepare_request.assert_called_once_with(requires_id=False) self.session.get.assert_called_once_with( self.request.url,) self.sot._translate_response.assert_called_once_with(self.response) 
self.assertEqual(result, self.sot) def test_head(self): result = self.sot.head(self.session) self.sot._prepare_request.assert_called_once_with() self.session.head.assert_called_once_with( self.request.url, headers={"Accept": ""}) self.sot._translate_response.assert_called_once_with( self.response, has_body=False) self.assertEqual(result, self.sot) def _test_update(self, update_method='PUT', prepend_key=True, has_body=True): self.sot.update_method = update_method # Need to make sot look dirty so we can attempt an update self.sot._body = mock.Mock() self.sot._body.dirty = mock.Mock(return_value={"x": "y"}) self.sot.update(self.session, prepend_key=prepend_key, has_body=has_body) self.sot._prepare_request.assert_called_once_with( prepend_key=prepend_key) if update_method == 'PATCH': self.session.patch.assert_called_once_with( self.request.url, json=self.request.body, headers=self.request.headers) elif update_method == 'POST': self.session.post.assert_called_once_with( self.request.url, json=self.request.body, headers=self.request.headers) elif update_method == 'PUT': self.session.put.assert_called_once_with( self.request.url, json=self.request.body, headers=self.request.headers) self.sot._translate_response.assert_called_once_with( self.response, has_body=has_body) def test_update_put(self): self._test_update(update_method='PUT', prepend_key=True, has_body=True) def test_update_patch(self): self._test_update( update_method='PATCH', prepend_key=False, has_body=False) def test_update_not_dirty(self): self.sot._body = mock.Mock() self.sot._body.dirty = dict() self.sot._header = mock.Mock() self.sot._header.dirty = dict() self.sot.update(self.session) self.session.put.assert_not_called() def test_delete(self): result = self.sot.delete(self.session) self.sot._prepare_request.assert_called_once_with() self.session.delete.assert_called_once_with( self.request.url, headers={"Accept": ""}) self.sot._translate_response.assert_called_once_with( self.response, has_body=False) 
self.assertEqual(result, self.sot) # NOTE: As list returns a generator, testing it requires consuming # the generator. Wrap calls to self.sot.list in a `list` # and then test the results as a list of responses. def test_list_empty_response(self): mock_response = mock.Mock() mock_response.status_code = 200 mock_response.json.return_value = {"resources": []} self.session.get.return_value = mock_response result = list(self.sot.list(self.session)) self.session.get.assert_called_once_with( self.base_path, headers={"Accept": "application/json"}, params={}) self.assertEqual([], result) def test_list_one_page_response_paginated(self): id_value = 1 mock_response = mock.Mock() mock_response.status_code = 200 mock_response.links = {} mock_response.json.return_value = {"resources": [{"id": id_value}]} self.session.get.return_value = mock_response # Ensure that we break out of the loop on a paginated call # that still only results in one page of data. results = list(self.sot.list(self.session, paginated=True)) self.assertEqual(1, len(results)) self.assertEqual(1, len(self.session.get.call_args_list)) self.assertEqual(id_value, results[0].id) self.assertIsInstance(results[0], self.test_class) def test_list_one_page_response_not_paginated(self): id_value = 1 mock_response = mock.Mock() mock_response.status_code = 200 mock_response.json.return_value = {"resources": [{"id": id_value}]} self.session.get.return_value = mock_response results = list(self.sot.list(self.session, paginated=False)) self.session.get.assert_called_once_with( self.base_path, headers={"Accept": "application/json"}, params={}) self.assertEqual(1, len(results)) self.assertEqual(id_value, results[0].id) self.assertIsInstance(results[0], self.test_class) def test_list_one_page_response_resources_key(self): key = "resources" class Test(self.test_class): resources_key = key id_value = 1 mock_response = mock.Mock() mock_response.status_code = 200 mock_response.json.return_value = {key: [{"id": id_value}]} 
self.session.get.return_value = mock_response sot = Test() results = list(sot.list(self.session)) self.session.get.assert_called_once_with( self.base_path, headers={"Accept": "application/json"}, params={}) self.assertEqual(1, len(results)) self.assertEqual(id_value, results[0].id) self.assertIsInstance(results[0], self.test_class) def test_list_response_paginated_without_links(self): ids = [1, 2] mock_response = mock.Mock() mock_response.status_code = 200 mock_response.links = {} mock_response.json.return_value = { "resources": [{"id": ids[0]}], "resources_links": [{ "href": "https://example.com/next-url", "rel": "next", }] } mock_response2 = mock.Mock() mock_response2.status_code = 200 mock_response2.links = {} mock_response2.json.return_value = { "resources": [{"id": ids[1]}], } self.session.get.side_effect = [mock_response, mock_response2] results = list(self.sot.list(self.session, paginated=True)) self.assertEqual(2, len(results)) self.assertEqual(ids[0], results[0].id) self.assertEqual(ids[1], results[1].id) self.assertEqual( mock.call('base_path', headers={'Accept': 'application/json'}, params={}), self.session.get.mock_calls[0]) self.assertEqual( mock.call('https://example.com/next-url', headers={'Accept': 'application/json'}, params={}), self.session.get.mock_calls[1]) self.assertEqual(2, len(self.session.get.call_args_list)) self.assertIsInstance(results[0], self.test_class) def test_list_response_paginated_with_links(self): ids = [1, 2] mock_response = mock.Mock() mock_response.status_code = 200 mock_response.links = {} mock_response.json.side_effect = [ { "resources": [{"id": ids[0]}], "resources_links": [{ "href": "https://example.com/next-url", "rel": "next", }] }, { "resources": [{"id": ids[1]}], }] self.session.get.return_value = mock_response results = list(self.sot.list(self.session, paginated=True)) self.assertEqual(2, len(results)) self.assertEqual(ids[0], results[0].id) self.assertEqual(ids[1], results[1].id) self.assertEqual( 
mock.call('base_path', headers={'Accept': 'application/json'}, params={}), self.session.get.mock_calls[0]) self.assertEqual( mock.call('https://example.com/next-url', headers={'Accept': 'application/json'}, params={}), self.session.get.mock_calls[2]) self.assertEqual(2, len(self.session.get.call_args_list)) self.assertIsInstance(results[0], self.test_class) def test_list_multi_page_response_not_paginated(self): ids = [1, 2] mock_response = mock.Mock() mock_response.status_code = 200 mock_response.json.side_effect = [ {"resources": [{"id": ids[0]}]}, {"resources": [{"id": ids[1]}]}, ] self.session.get.return_value = mock_response results = list(self.sot.list(self.session, paginated=False)) self.assertEqual(1, len(results)) self.assertEqual(ids[0], results[0].id) self.assertIsInstance(results[0], self.test_class) def test_list_query_params(self): id = 1 qp = "query param!" qp_name = "query-param" uri_param = "uri param!" mock_response = mock.Mock() mock_response.status_code = 200 mock_response.links = {} mock_response.json.return_value = {"resources": [{"id": id}]} mock_empty = mock.Mock() mock_empty.status_code = 200 mock_empty.links = {} mock_empty.json.return_value = {"resources": []} self.session.get.side_effect = [mock_response, mock_empty] class Test(self.test_class): _query_mapping = resource.QueryParameters(query_param=qp_name) base_path = "/%(something)s/blah" something = resource.URI("something") results = list(Test.list(self.session, paginated=True, query_param=qp, something=uri_param)) self.assertEqual(1, len(results)) # Look at the `params` argument to each of the get calls that # were made. self.assertEqual( self.session.get.call_args_list[0][1]["params"], {qp_name: qp}) self.assertEqual(self.session.get.call_args_list[0][0][0], Test.base_path % {"something": uri_param}) def test_invalid_list_params(self): id = 1 qp = "query param!" qp_name = "query-param" uri_param = "uri param!" 
        mock_response = mock.Mock()
        mock_response.json.side_effect = [[{"id": id}], []]

        self.session.get.return_value = mock_response

        class Test(self.test_class):
            _query_mapping = resource.QueryParameters(query_param=qp_name)
            base_path = "/%(something)s/blah"
            something = resource.URI("something")

        try:
            list(Test.list(self.session, paginated=True, query_param=qp,
                           something=uri_param, something_wrong=True))
            self.fail('The above line should fail')
        except exceptions.InvalidResourceQuery as err:
            self.assertEqual(str(err),
                             'Invalid query params: something_wrong')

    def test_values_as_list_params(self):
        id = 1
        qp = "query param!"
        qp_name = "query-param"
        uri_param = "uri param!"
mock_response = mock.Mock() mock_response.status_code = 200 mock_response.links = {} mock_response.json.return_value = {"resources": [{"id": id}]} mock_empty = mock.Mock() mock_empty.status_code = 200 mock_empty.links = {} mock_empty.json.return_value = {"resources": []} self.session.get.side_effect = [mock_response, mock_empty] class Test(self.test_class): _query_mapping = resource.QueryParameters(query_param=qp_name) base_path = "/%(something)s/blah" something = resource.URI("something") results = list(Test.list(self.session, paginated=True, query_param=qp2, something=uri_param, **{qp_name: qp})) self.assertEqual(1, len(results)) # Look at the `params` argument to each of the get calls that # were made. self.assertEqual( self.session.get.call_args_list[0][1]["params"], {qp_name: qp2}) self.assertEqual(self.session.get.call_args_list[0][0][0], Test.base_path % {"something": uri_param}) def test_list_multi_page_response_paginated(self): ids = [1, 2] resp1 = mock.Mock() resp1.status_code = 200 resp1.links = {} resp1.json.return_value = { "resources": [{"id": ids[0]}], "resources_links": [{ "href": "https://example.com/next-url", "rel": "next", }], } resp2 = mock.Mock() resp2.status_code = 200 resp2.links = {} resp2.json.return_value = { "resources": [{"id": ids[1]}], "resources_links": [{ "href": "https://example.com/next-url", "rel": "next", }], } resp3 = mock.Mock() resp3.status_code = 200 resp3.links = {} resp3.json.return_value = { "resources": [] } self.session.get.side_effect = [resp1, resp2, resp3] results = self.sot.list(self.session, paginated=True) result0 = next(results) self.assertEqual(result0.id, ids[0]) self.session.get.assert_called_with( self.base_path, headers={"Accept": "application/json"}, params={}) result1 = next(results) self.assertEqual(result1.id, ids[1]) self.session.get.assert_called_with( 'https://example.com/next-url', headers={"Accept": "application/json"}, params={}) self.assertRaises(StopIteration, next, results) 
        self.session.get.assert_called_with(
            'https://example.com/next-url',
            headers={"Accept": "application/json"},
            params={})

    def test_list_multi_page_no_early_termination(self):
        # This test verifies that multi-page listing is not terminated
        # early. APIs can set max_limit to cap the number of items returned
        # in each query. If that max_limit is smaller than the limit given
        # by the user, a page will contain fewer items than the limit, but
        # that does not mean there are no more records, so we should keep
        # trying to get more results.
        ids = [1, 2, 3, 4]
        resp1 = mock.Mock()
        resp1.status_code = 200
        resp1.links = {}
        resp1.json.return_value = {
            # API's max_limit is set to 2.
            "resources": [{"id": ids[0]}, {"id": ids[1]}],
        }
        resp2 = mock.Mock()
        resp2.status_code = 200
        resp2.links = {}
        resp2.json.return_value = {
            # API's max_limit is set to 2.
            "resources": [{"id": ids[2]}, {"id": ids[3]}],
        }
        resp3 = mock.Mock()
        resp3.status_code = 200
        resp3.json.return_value = {
            "resources": [],
        }

        self.session.get.side_effect = [resp1, resp2, resp3]
        results = self.sot.list(self.session, limit=3, paginated=True)

        # First page contains only two items, fewer than the limit given
        result0 = next(results)
        self.assertEqual(result0.id, ids[0])
        result1 = next(results)
        self.assertEqual(result1.id, ids[1])
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={"limit": 3})

        # Second page contains another two items
        result2 = next(results)
        self.assertEqual(result2.id, ids[2])
        result3 = next(results)
        self.assertEqual(result3.id, ids[3])
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={"limit": 3, "marker": 2})

        # Ensure we're done after those four items
        self.assertRaises(StopIteration, next, results)

        # Ensure we gave it one last try to get more results
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={"limit": 3, "marker": 4})

        # Ensure we made three calls to get this done
        self.assertEqual(3, len(self.session.get.call_args_list))

    def test_list_multi_page_inferred_additional(self):
        # If we explicitly request a limit and we receive EXACTLY that
        # amount of results and there is no next link, we make one
        # additional call to check to see if there are more records and
        # the service is just sad.
        # NOTE(mordred) In a perfect world we would not do this. But it's
        # 2018 and I don't think anyone has any illusions that we live in
        # a perfect world anymore.
        ids = [1, 2, 3]
        resp1 = mock.Mock()
        resp1.status_code = 200
        resp1.links = {}
        resp1.json.return_value = {
            "resources": [{"id": ids[0]}, {"id": ids[1]}],
        }
        resp2 = mock.Mock()
        resp2.status_code = 200
        resp2.links = {}
        resp2.json.return_value = {"resources": [{"id": ids[2]}]}

        self.session.get.side_effect = [resp1, resp2]

        results = self.sot.list(self.session, limit=2, paginated=True)

        # Get the first page's two items
        result0 = next(results)
        self.assertEqual(result0.id, ids[0])
        result1 = next(results)
        self.assertEqual(result1.id, ids[1])
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={"limit": 2})

        result2 = next(results)
        self.assertEqual(result2.id, ids[2])
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={'limit': 2, 'marker': 2})

        # Ensure we're done after those three items
        self.assertRaises(StopIteration, next, results)

        # Ensure we made three calls to get this done
        self.assertEqual(3, len(self.session.get.call_args_list))

    def test_list_multi_page_header_count(self):
        class Test(self.test_class):
            resources_key = None
            pagination_key = 'X-Container-Object-Count'

        self.sot = Test()

        # Swift returns the total number of objects in a header, and we
        # compare that against the number returned so far to know whether
        # we need to fetch more objects.
        ids = [1, 2, 3]
        resp1 = mock.Mock()
        resp1.status_code = 200
        resp1.links = {}
        resp1.headers = {'X-Container-Object-Count': 3}
        resp1.json.return_value = [{"id": ids[0]}, {"id": ids[1]}]
        resp2 = mock.Mock()
        resp2.status_code = 200
        resp2.links = {}
        resp2.headers = {'X-Container-Object-Count': 3}
        resp2.json.return_value = [{"id": ids[2]}]

        self.session.get.side_effect = [resp1, resp2]

        results = self.sot.list(self.session, paginated=True)

        # Get the first page's two items
        result0 = next(results)
        self.assertEqual(result0.id, ids[0])
        result1 = next(results)
        self.assertEqual(result1.id, ids[1])
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={})

        result2 = next(results)
        self.assertEqual(result2.id, ids[2])
        self.session.get.assert_called_with(
            self.base_path,
            headers={"Accept": "application/json"},
            params={'marker': 2})

        # Ensure we're done after those three items
        self.assertRaises(StopIteration, next, results)

        # Ensure we only made two calls to get this done
        self.assertEqual(2, len(self.session.get.call_args_list))

    def test_list_multi_page_link_header(self):
        # Some services return an RFC 5988 Link header pointing at the
        # next page; we follow that link until a response no longer
        # provides one.
ids = [1, 2, 3] resp1 = mock.Mock() resp1.status_code = 200 resp1.links = { 'next': {'uri': 'https://example.com/next-url', 'rel': 'next'}} resp1.headers = {} resp1.json.return_value = { "resources": [{"id": ids[0]}, {"id": ids[1]}], } resp2 = mock.Mock() resp2.status_code = 200 resp2.links = {} resp2.headers = {} resp2.json.return_value = {"resources": [{"id": ids[2]}]} self.session.get.side_effect = [resp1, resp2] results = self.sot.list(self.session, paginated=True) # Get the first page's two items result0 = next(results) self.assertEqual(result0.id, ids[0]) result1 = next(results) self.assertEqual(result1.id, ids[1]) self.session.get.assert_called_with( self.base_path, headers={"Accept": "application/json"}, params={}) result2 = next(results) self.assertEqual(result2.id, ids[2]) self.session.get.assert_called_with( 'https://example.com/next-url', headers={"Accept": "application/json"}, params={}) # Ensure we're done after those three items self.assertRaises(StopIteration, next, results) # Ensure we only made two calls to get this done self.assertEqual(2, len(self.session.get.call_args_list)) class TestResourceFind(base.TestCase): def setUp(self): super(TestResourceFind, self).setUp() self.result = 1 class Base(resource.Resource): @classmethod def existing(cls, **kwargs): response = mock.Mock() response.status_code = 404 raise exceptions.NotFoundException( 'Not Found', response=response) @classmethod def list(cls, session): return None class OneResult(Base): @classmethod def _get_one_match(cls, *args): return self.result class NoResults(Base): @classmethod def _get_one_match(cls, *args): return None self.no_results = NoResults self.one_result = OneResult def test_find_short_circuit(self): value = 1 class Test(resource.Resource): @classmethod def existing(cls, **kwargs): mock_match = mock.Mock() mock_match.get.return_value = value return mock_match result = Test.find("session", "name") self.assertEqual(result, value) def test_no_match_raise(self): 
        self.assertRaises(exceptions.ResourceNotFound,
                          self.no_results.find, "session", "name",
                          ignore_missing=False)

    def test_no_match_return(self):
        self.assertIsNone(
            self.no_results.find("session", "name", ignore_missing=True))

    def test_find_result(self):
        self.assertEqual(self.result,
                         self.one_result.find("session", "name"))

    def test_match_empty_results(self):
        self.assertIsNone(resource.Resource._get_one_match("name", []))

    def test_no_match_by_name(self):
        the_name = "Brian"

        match = mock.Mock(spec=resource.Resource)
        match.name = the_name

        result = resource.Resource._get_one_match("Richard", [match])

        self.assertIsNone(result)

    def test_single_match_by_name(self):
        the_name = "Brian"

        match = mock.Mock(spec=resource.Resource)
        match.name = the_name

        result = resource.Resource._get_one_match(the_name, [match])

        self.assertIs(result, match)

    def test_single_match_by_id(self):
        the_id = "Brian"

        match = mock.Mock(spec=resource.Resource)
        match.id = the_id

        result = resource.Resource._get_one_match(the_id, [match])

        self.assertIs(result, match)

    def test_single_match_by_alternate_id(self):
        the_id = "Richard"

        class Test(resource.Resource):
            other_id = resource.Body("other_id", alternate_id=True)

        match = Test(other_id=the_id)
        result = Test._get_one_match(the_id, [match])

        self.assertIs(result, match)

    def test_multiple_matches(self):
        the_id = "Brian"

        match = mock.Mock(spec=resource.Resource)
        match.id = the_id

        self.assertRaises(
            exceptions.DuplicateResource,
            resource.Resource._get_one_match, the_id, [match, match])


class TestWaitForStatus(base.TestCase):

    def test_immediate_status(self):
        status = "loling"
        res = mock.Mock()
        res.status = status

        result = resource.wait_for_status(
            "session", res, status, "failures", "interval", "wait")

        self.assertEqual(result, res)

    def _resources_from_statuses(self, *statuses):
        resources = []
        for status in statuses:
            res = mock.Mock()
            res.status = status
            resources.append(res)
        for index, res in enumerate(resources[:-1]):
            res.get.return_value = resources[index + 1]
return resources def test_status_match(self): status = "loling" # other gets past the first check, two anothers gets through # the sleep loop, and the third matches resources = self._resources_from_statuses( "first", "other", "another", "another", status) result = resource.wait_for_status( mock.Mock(), resources[0], status, None, 1, 5) self.assertEqual(result, resources[-1]) def test_status_fails(self): failure = "crying" resources = self._resources_from_statuses("success", "other", failure) self.assertRaises( exceptions.ResourceFailure, resource.wait_for_status, mock.Mock(), resources[0], "loling", [failure], 1, 5) def test_timeout(self): status = "loling" res = mock.Mock() # The first "other" gets past the first check, and then three # pairs of "other" statuses run through the sleep counter loop, # after which time should be up. This is because we have a # one second interval and three second waiting period. statuses = ["other"] * 7 type(res).status = mock.PropertyMock(side_effect=statuses) self.assertRaises(exceptions.ResourceTimeout, resource.wait_for_status, "session", res, status, None, 1, 3) def test_no_sleep(self): res = mock.Mock() statuses = ["other"] type(res).status = mock.PropertyMock(side_effect=statuses) self.assertRaises(exceptions.ResourceTimeout, resource.wait_for_status, "session", res, "status", None, 0, -1) class TestWaitForDelete(base.TestCase): def test_success(self): response = mock.Mock() response.headers = {} response.status_code = 404 res = mock.Mock() res.get.side_effect = [ None, None, exceptions.NotFoundException('Not Found', response)] result = resource.wait_for_delete("session", res, 1, 3) self.assertEqual(result, res) def test_timeout(self): res = mock.Mock() res.status = 'ACTIVE' res.get.return_value = res self.assertRaises( exceptions.ResourceTimeout, resource.wait_for_delete, "session", res, 0.1, 0.3) openstacksdk-0.11.3/openstack/tests/unit/network/0000775000175100017510000000000013236151501022107 5ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/network/test_version.py0000666000175100017510000000265213236151340025215 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network import version IDENTIFIER = 'v2.0' EXAMPLE = { 'id': IDENTIFIER, 'links': '2', 'status': '3', } class TestVersion(testtools.TestCase): def test_basic(self): sot = version.Version() self.assertEqual('version', sot.resource_key) self.assertEqual('versions', sot.resources_key) self.assertEqual('/', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = version.Version(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['links'], sot.links) self.assertEqual(EXAMPLE['status'], sot.status) openstacksdk-0.11.3/openstack/tests/unit/network/test_network_service.py0000666000175100017510000000210713236151340026734 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network import network_service class TestNetworkService(testtools.TestCase): def test_service(self): sot = network_service.NetworkService() self.assertEqual('network', sot.service_type) self.assertEqual('public', sot.interface) self.assertIsNone(sot.region) self.assertIsNone(sot.service_name) self.assertEqual(1, len(sot.valid_versions)) self.assertEqual('v2', sot.valid_versions[0].module) self.assertEqual('v2.0', sot.valid_versions[0].path) openstacksdk-0.11.3/openstack/tests/unit/network/v2/0000775000175100017510000000000013236151501022436 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_vpn_service.py0000666000175100017510000000407613236151340026404 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from openstack.network.v2 import vpn_service IDENTIFIER = 'IDENTIFIER' EXAMPLE = { "admin_state_up": True, "description": "1", "external_v4_ip": "2", "external_v6_ip": "3", "id": IDENTIFIER, "name": "4", "router_id": "5", "status": "6", "subnet_id": "7", "tenant_id": "8", } class TestVPNService(testtools.TestCase): def test_basic(self): sot = vpn_service.VPNService() self.assertEqual('vpnservice', sot.resource_key) self.assertEqual('vpnservices', sot.resources_key) self.assertEqual('/vpn/vpnservices', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = vpn_service.VPNService(**EXAMPLE) self.assertTrue(sot.is_admin_state_up) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['external_v4_ip'], sot.external_v4_ip) self.assertEqual(EXAMPLE['external_v6_ip'], sot.external_v6_ip) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['name'], sot.name) self.assertEqual(EXAMPLE['router_id'], sot.router_id) self.assertEqual(EXAMPLE['status'], sot.status) self.assertEqual(EXAMPLE['subnet_id'], sot.subnet_id) self.assertEqual(EXAMPLE['tenant_id'], sot.project_id) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_floating_ip.py0000666000175100017510000000731613236151340026354 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import adapter import mock import testtools from openstack.network.v2 import floating_ip IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'created_at': '0', 'fixed_ip_address': '1', 'floating_ip_address': '127.0.0.1', 'floating_network_id': '3', 'id': IDENTIFIER, 'port_id': '5', 'qos_policy_id': '51', 'tenant_id': '6', 'router_id': '7', 'description': '8', 'status': 'ACTIVE', 'revision_number': 12, 'updated_at': '13', 'subnet_id': '14' } class TestFloatingIP(testtools.TestCase): def test_basic(self): sot = floating_ip.FloatingIP() self.assertEqual('floatingip', sot.resource_key) self.assertEqual('floatingips', sot.resources_key) self.assertEqual('/floatingips', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = floating_ip.FloatingIP(**EXAMPLE) self.assertEqual(EXAMPLE['created_at'], sot.created_at) self.assertEqual(EXAMPLE['fixed_ip_address'], sot.fixed_ip_address) self.assertEqual(EXAMPLE['floating_ip_address'], sot.floating_ip_address) self.assertEqual(EXAMPLE['floating_network_id'], sot.floating_network_id) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['port_id'], sot.port_id) self.assertEqual(EXAMPLE['tenant_id'], sot.project_id) self.assertEqual(EXAMPLE['router_id'], sot.router_id) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['status'], sot.status) self.assertEqual(EXAMPLE['revision_number'], sot.revision_number) self.assertEqual(EXAMPLE['updated_at'], sot.updated_at) self.assertEqual(EXAMPLE['subnet_id'], sot.subnet_id) def test_find_available(self): mock_session = mock.Mock(spec=adapter.Adapter) mock_session.get_filter = mock.Mock(return_value={}) data = {'id': 'one', 
'floating_ip_address': '10.0.0.1'} fake_response = mock.Mock() body = {floating_ip.FloatingIP.resources_key: [data]} fake_response.json = mock.Mock(return_value=body) fake_response.status_code = 200 mock_session.get = mock.Mock(return_value=fake_response) result = floating_ip.FloatingIP.find_available(mock_session) self.assertEqual('one', result.id) mock_session.get.assert_called_with( floating_ip.FloatingIP.base_path, headers={'Accept': 'application/json'}, params={'port_id': ''}) def test_find_available_nada(self): mock_session = mock.Mock(spec=adapter.Adapter) fake_response = mock.Mock() body = {floating_ip.FloatingIP.resources_key: []} fake_response.json = mock.Mock(return_value=body) fake_response.status_code = 200 mock_session.get = mock.Mock(return_value=fake_response) self.assertIsNone(floating_ip.FloatingIP.find_available(mock_session)) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_flavor.py0000666000175100017510000000660713236151340025354 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock
import testtools

from openstack.network.v2 import flavor

IDENTIFIER = 'IDENTIFIER'

EXAMPLE_WITH_OPTIONAL = {
    'id': IDENTIFIER,
    'name': 'test-flavor',
    'service_type': 'VPN',
    'description': 'VPN flavor',
    'enabled': True,
    'service_profiles': ['1', '2'],
}

EXAMPLE = {
    'id': IDENTIFIER,
    'name': 'test-flavor',
    'service_type': 'VPN',
}


class TestFlavor(testtools.TestCase):

    def test_basic(self):
        flavors = flavor.Flavor()
        self.assertEqual('flavor', flavors.resource_key)
        self.assertEqual('flavors', flavors.resources_key)
        self.assertEqual('/flavors', flavors.base_path)
        self.assertEqual('network', flavors.service.service_type)
        self.assertTrue(flavors.allow_create)
        self.assertTrue(flavors.allow_get)
        self.assertTrue(flavors.allow_update)
        self.assertTrue(flavors.allow_delete)
        self.assertTrue(flavors.allow_list)

    def test_make_it(self):
        flavors = flavor.Flavor(**EXAMPLE)
        self.assertEqual(EXAMPLE['name'], flavors.name)
        self.assertEqual(EXAMPLE['service_type'], flavors.service_type)

    def test_make_it_with_optional(self):
        flavors = flavor.Flavor(**EXAMPLE_WITH_OPTIONAL)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['name'], flavors.name)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['service_type'],
                         flavors.service_type)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['description'],
                         flavors.description)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['enabled'],
                         flavors.is_enabled)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['service_profiles'],
                         flavors.service_profile_ids)

    def test_associate_flavor_with_service_profile(self):
        flav = flavor.Flavor(**EXAMPLE)
        response = mock.Mock()
        response.body = {
            'service_profile': {'id': '1'},
        }
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)
        flav.id = 'IDENTIFIER'
        self.assertEqual(
            response.body,
            flav.associate_flavor_with_service_profile(sess, '1'))
        url = 'flavors/IDENTIFIER/service_profiles'
        sess.post.assert_called_with(url, json=response.body)

    def test_disassociate_flavor_from_service_profile(self):
        flav = flavor.Flavor(**EXAMPLE)
        response = mock.Mock()
        response.json = mock.Mock(return_value=None)
        sess = mock.Mock()
        sess.delete = mock.Mock(return_value=response)
        flav.id = 'IDENTIFIER'
        self.assertIsNone(
            flav.disassociate_flavor_from_service_profile(sess, '1'))
        url = 'flavors/IDENTIFIER/service_profiles/1'
        sess.delete.assert_called_with(url)
openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_qos_minimum_bandwidth_rule.py0000666000175100017510000000341613236151340031466 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools import uuid from openstack.network.v2 import qos_minimum_bandwidth_rule EXAMPLE = { 'id': 'IDENTIFIER', 'qos_policy_id': 'qos-policy-' + uuid.uuid4().hex, 'min_kbps': 1500, 'direction': 'egress', } class TestQoSMinimumBandwidthRule(testtools.TestCase): def test_basic(self): sot = qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule() self.assertEqual('minimum_bandwidth_rule', sot.resource_key) self.assertEqual('minimum_bandwidth_rules', sot.resources_key) self.assertEqual( '/qos/policies/%(qos_policy_id)s/minimum_bandwidth_rules', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['qos_policy_id'], sot.qos_policy_id) self.assertEqual(EXAMPLE['min_kbps'], sot.min_kbps) self.assertEqual(EXAMPLE['direction'], sot.direction) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_service_profile.py0000666000175100017510000000460713236151340027241 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.network.v2 import service_profile

IDENTIFIER = 'IDENTIFIER'
EXAMPLE_WITH_OPTIONAL = {
    'description': 'test flavor profile',
    'driver': 'neutron_lbaas.drivers.octavia.driver.OctaviaDriver',
    'enabled': True,
    'metainfo': {'foo': 'bar'},
    'tenant_id': '5',
}

EXAMPLE = {
    'driver': 'neutron_lbaas.drivers.octavia.driver.OctaviaDriver',
}


class TestServiceProfile(testtools.TestCase):

    def test_basic(self):
        service_profiles = service_profile.ServiceProfile()
        self.assertEqual('service_profile', service_profiles.resource_key)
        self.assertEqual('service_profiles', service_profiles.resources_key)
        self.assertEqual('/service_profiles', service_profiles.base_path)
        self.assertTrue(service_profiles.allow_create)
        self.assertTrue(service_profiles.allow_get)
        self.assertTrue(service_profiles.allow_update)
        self.assertTrue(service_profiles.allow_delete)
        self.assertTrue(service_profiles.allow_list)

    def test_make_it(self):
        service_profiles = service_profile.ServiceProfile(**EXAMPLE)
        self.assertEqual(EXAMPLE['driver'], service_profiles.driver)

    def test_make_it_with_optional(self):
        service_profiles = service_profile.ServiceProfile(
            **EXAMPLE_WITH_OPTIONAL)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['description'],
                         service_profiles.description)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['driver'],
                         service_profiles.driver)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['enabled'],
                         service_profiles.is_enabled)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['metainfo'],
                         service_profiles.meta_info)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['tenant_id'],
                         service_profiles.project_id)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_pool.py

import testtools

from openstack.network.v2 import pool

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'description': '2',
    'health_monitors': ['3'],
    'health_monitor_status': ['4'],
    'id': IDENTIFIER,
    'lb_algorithm': '5',
    'listeners': [{'id': '6'}],
    'listener_id': '6',
    'members': [{'id': '7'}],
    'name': '8',
    'tenant_id': '9',
    'protocol': '10',
    'provider': '11',
    'session_persistence': '12',
    'status': '13',
    'status_description': '14',
    'subnet_id': '15',
    'loadbalancers': [{'id': '16'}],
    'loadbalancer_id': '16',
    'vip_id': '17',
}


class TestPool(testtools.TestCase):

    def test_basic(self):
        sot = pool.Pool()
        self.assertEqual('pool', sot.resource_key)
        self.assertEqual('pools', sot.resources_key)
        self.assertEqual('/lbaas/pools', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = pool.Pool(**EXAMPLE)
        self.assertTrue(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['health_monitors'], sot.health_monitor_ids)
        self.assertEqual(EXAMPLE['health_monitor_status'],
                         sot.health_monitor_status)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['lb_algorithm'], sot.lb_algorithm)
        self.assertEqual(EXAMPLE['listeners'], sot.listener_ids)
        self.assertEqual(EXAMPLE['listener_id'], sot.listener_id)
        self.assertEqual(EXAMPLE['members'], sot.member_ids)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['protocol'], sot.protocol)
        self.assertEqual(EXAMPLE['provider'], sot.provider)
        self.assertEqual(EXAMPLE['session_persistence'],
                         sot.session_persistence)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['status_description'],
                         sot.status_description)
        self.assertEqual(EXAMPLE['subnet_id'], sot.subnet_id)
        self.assertEqual(EXAMPLE['loadbalancers'], sot.load_balancer_ids)
        self.assertEqual(EXAMPLE['loadbalancer_id'], sot.load_balancer_id)
        self.assertEqual(EXAMPLE['vip_id'], sot.virtual_ip_id)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_qos_policy.py
import testtools
import uuid

from openstack.network.v2 import qos_policy

EXAMPLE = {
    'id': 'IDENTIFIER',
    'description': 'QoS policy description',
    'name': 'qos-policy-name',
    'shared': True,
    'tenant_id': '2',
    'rules': [uuid.uuid4().hex],
    'is_default': False,
}


class TestQoSPolicy(testtools.TestCase):

    def test_basic(self):
        sot = qos_policy.QoSPolicy()
        self.assertEqual('policy', sot.resource_key)
        self.assertEqual('policies', sot.resources_key)
        self.assertEqual('/qos/policies', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = qos_policy.QoSPolicy(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertTrue(sot.is_shared)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['rules'], sot.rules)
        self.assertEqual(EXAMPLE['is_default'], sot.is_default)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_extension.py
import testtools

from openstack.network.v2 import extension

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'alias': '1',
    'description': '2',
    'links': '3',
    'name': '4',
    'updated': '2016-03-09T12:14:57.233772',
}


class TestExtension(testtools.TestCase):

    def test_basic(self):
        sot = extension.Extension()
        self.assertEqual('extension', sot.resource_key)
        self.assertEqual('extensions', sot.resources_key)
        self.assertEqual('/extensions', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = extension.Extension(**EXAMPLE)
        self.assertEqual(EXAMPLE['alias'], sot.id)
        self.assertEqual(EXAMPLE['alias'], sot.alias)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['updated'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_metering_label.py
import testtools

from openstack.network.v2 import metering_label

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'description': '1',
    'id': IDENTIFIER,
    'name': '3',
    'tenant_id': '4',
    'shared': False,
}


class TestMeteringLabel(testtools.TestCase):

    def test_basic(self):
        sot = metering_label.MeteringLabel()
        self.assertEqual('metering_label', sot.resource_key)
        self.assertEqual('metering_labels', sot.resources_key)
        self.assertEqual('/metering/metering-labels', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = metering_label.MeteringLabel(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['shared'], sot.is_shared)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_metering_label_rule.py
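The `test_make_it` cases throughout these files repeatedly check that wire-level keys such as `tenant_id` and `shared` surface under SDK-side names (`project_id`, `is_shared`). A minimal sketch of that renaming, using a hypothetical mapping table rather than the SDK's real attribute machinery:

```python
# Hypothetical wire-name -> SDK-name table; the real SDK declares these
# aliases per resource class, but the effect is the same renaming.
WIRE_TO_SDK = {
    'tenant_id': 'project_id',
    'shared': 'is_shared',
    'admin_state_up': 'is_admin_state_up',
}


def to_sdk_attrs(body):
    # Rename the known keys and pass everything else through unchanged.
    return {WIRE_TO_SDK.get(key, key): value for key, value in body.items()}


attrs = to_sdk_attrs({'tenant_id': '4', 'shared': False, 'name': '3'})
```

The tests then simply assert that each server-side value is visible under its SDK-side name, which is what guards the alias table against regressions.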
import testtools

from openstack.network.v2 import metering_label_rule

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'direction': '1',
    'excluded': False,
    'id': IDENTIFIER,
    'metering_label_id': '4',
    'tenant_id': '5',
    'remote_ip_prefix': '6',
}


class TestMeteringLabelRule(testtools.TestCase):

    def test_basic(self):
        sot = metering_label_rule.MeteringLabelRule()
        self.assertEqual('metering_label_rule', sot.resource_key)
        self.assertEqual('metering_label_rules', sot.resources_key)
        self.assertEqual('/metering/metering-label-rules', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = metering_label_rule.MeteringLabelRule(**EXAMPLE)
        self.assertEqual(EXAMPLE['direction'], sot.direction)
        self.assertFalse(sot.is_excluded)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['metering_label_id'], sot.metering_label_id)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['remote_ip_prefix'], sot.remote_ip_prefix)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_qos_dscp_marking_rule.py
import testtools
import uuid

from openstack.network.v2 import qos_dscp_marking_rule

EXAMPLE = {
    'id': 'IDENTIFIER',
    'qos_policy_id': 'qos-policy-' + uuid.uuid4().hex,
    'dscp_mark': 40,
}


class TestQoSDSCPMarkingRule(testtools.TestCase):

    def test_basic(self):
        sot = qos_dscp_marking_rule.QoSDSCPMarkingRule()
        self.assertEqual('dscp_marking_rule', sot.resource_key)
        self.assertEqual('dscp_marking_rules', sot.resources_key)
        self.assertEqual('/qos/policies/%(qos_policy_id)s/dscp_marking_rules',
                         sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = qos_dscp_marking_rule.QoSDSCPMarkingRule(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['qos_policy_id'], sot.qos_policy_id)
        self.assertEqual(EXAMPLE['dscp_mark'], sot.dscp_mark)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_network_ip_availability.py
import testtools

from openstack.network.v2 import network_ip_availability

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'network_id': IDENTIFIER,
    'network_name': 'private',
    'subnet_ip_availability': [],
    'tenant_id': '5',
    'total_ips': 6,
    'used_ips': 10,
}

EXAMPLE_WITH_OPTIONAL = {
    'network_id': IDENTIFIER,
    'network_name': 'private',
    'subnet_ip_availability': [
        {"used_ips": 3,
         "subnet_id": "2e4db1d6-ab2d-4bb1-93bb-a003fdbc9b39",
         "subnet_name": "private-subnet",
         "ip_version": 6,
         "cidr": "fd91:c3ba:e818::/64",
         "total_ips": 18446744073709551614}],
    'tenant_id': '2',
    'total_ips': 1844,
    'used_ips': 6,
}


class TestNetworkIPAvailability(testtools.TestCase):

    def test_basic(self):
        sot = network_ip_availability.NetworkIPAvailability()
        self.assertEqual('network_ip_availability', sot.resource_key)
        self.assertEqual('network_ip_availabilities', sot.resources_key)
        self.assertEqual('/network-ip-availabilities', sot.base_path)
        self.assertEqual('network_name', sot.name_attribute)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = network_ip_availability.NetworkIPAvailability(**EXAMPLE)
        self.assertEqual(EXAMPLE['network_id'], sot.network_id)
        self.assertEqual(EXAMPLE['network_name'], sot.network_name)
        self.assertEqual(EXAMPLE['subnet_ip_availability'],
                         sot.subnet_ip_availability)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['total_ips'], sot.total_ips)
        self.assertEqual(EXAMPLE['used_ips'], sot.used_ips)

    def test_make_it_with_optional(self):
        sot = network_ip_availability.NetworkIPAvailability(
            **EXAMPLE_WITH_OPTIONAL)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['network_id'], sot.network_id)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['network_name'],
                         sot.network_name)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['subnet_ip_availability'],
                         sot.subnet_ip_availability)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['total_ips'], sot.total_ips)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['used_ips'], sot.used_ips)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_qos_bandwidth_limit_rule.py

import testtools
import uuid

from openstack.network.v2 import qos_bandwidth_limit_rule

EXAMPLE = {
    'id': 'IDENTIFIER',
    'qos_policy_id': 'qos-policy-' + uuid.uuid4().hex,
    'max_kbps': 1500,
    'max_burst_kbps': 1200,
    'direction': 'egress',
}


class TestQoSBandwidthLimitRule(testtools.TestCase):

    def test_basic(self):
        sot = qos_bandwidth_limit_rule.QoSBandwidthLimitRule()
        self.assertEqual('bandwidth_limit_rule', sot.resource_key)
        self.assertEqual('bandwidth_limit_rules', sot.resources_key)
        self.assertEqual(
            '/qos/policies/%(qos_policy_id)s/bandwidth_limit_rules',
            sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = qos_bandwidth_limit_rule.QoSBandwidthLimitRule(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['qos_policy_id'], sot.qos_policy_id)
        self.assertEqual(EXAMPLE['max_kbps'], sot.max_kbps)
        self.assertEqual(EXAMPLE['max_burst_kbps'], sot.max_burst_kbps)
        self.assertEqual(EXAMPLE['direction'], sot.direction)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_rbac_policy.py

import testtools

from openstack.network.v2 import rbac_policy

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'action': 'access_as_shared',
    'object_id': IDENTIFIER,
    'object_type': 'network',
    'target_tenant': '10',
    'tenant_id': '5',
}


class TestRBACPolicy(testtools.TestCase):

    def test_basic(self):
        sot = rbac_policy.RBACPolicy()
        self.assertEqual('rbac_policy', sot.resource_key)
        self.assertEqual('rbac_policies', sot.resources_key)
        self.assertEqual('/rbac-policies', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = rbac_policy.RBACPolicy(**EXAMPLE)
        self.assertEqual(EXAMPLE['action'], sot.action)
        self.assertEqual(EXAMPLE['object_id'], sot.object_id)
        self.assertEqual(EXAMPLE['object_type'], sot.object_type)
        self.assertEqual(EXAMPLE['target_tenant'], sot.target_project_id)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_quota.py
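The action-style tests in this directory (for flavors, agents, and routers) all follow one pattern: hand the resource a `unittest.mock` session, then assert on the URL and JSON body it was called with. A condensed, self-contained version of that pattern, with a hypothetical `send_create` helper standing in for the SDK's action methods:

```python
from unittest import mock


def send_create(session, url, body):
    # Post the payload and decode the JSON response, as the SDK's
    # action helpers do against a real authenticated session.
    response = session.post(url, json=body)
    return response.json()


# A mock session records every call it receives, so no cloud is needed.
sess = mock.Mock()
sess.post.return_value.json.return_value = {'id': '1'}

result = send_create(sess, '/qos/policies', {'policy': {'name': 'p1'}})

# The test then verifies both the endpoint and the exact payload.
sess.post.assert_called_with('/qos/policies', json={'policy': {'name': 'p1'}})
```

This is why the tests can assert on strings like `'agents/IDENTIFIER/l3-routers'`: the mock captures the arguments verbatim, and `assert_called_with` raises if they differ.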
import testtools

from openstack.network.v2 import quota
from openstack import resource

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'floatingip': 1,
    'network': 2,
    'port': 3,
    'tenant_id': '4',
    'router': 5,
    'subnet': 6,
    'subnetpool': 7,
    'security_group_rule': 8,
    'security_group': 9,
    'rbac_policy': -1,
    'healthmonitor': 11,
    'listener': 12,
    'loadbalancer': 13,
    'l7policy': 14,
    'pool': 15,
}


class TestQuota(testtools.TestCase):

    def test_basic(self):
        sot = quota.Quota()
        self.assertEqual('quota', sot.resource_key)
        self.assertEqual('quotas', sot.resources_key)
        self.assertEqual('/quotas', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = quota.Quota(**EXAMPLE)
        self.assertEqual(EXAMPLE['floatingip'], sot.floating_ips)
        self.assertEqual(EXAMPLE['network'], sot.networks)
        self.assertEqual(EXAMPLE['port'], sot.ports)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['router'], sot.routers)
        self.assertEqual(EXAMPLE['subnet'], sot.subnets)
        self.assertEqual(EXAMPLE['subnetpool'], sot.subnet_pools)
        self.assertEqual(EXAMPLE['security_group_rule'],
                         sot.security_group_rules)
        self.assertEqual(EXAMPLE['security_group'], sot.security_groups)
        self.assertEqual(EXAMPLE['rbac_policy'], sot.rbac_policies)
        self.assertEqual(EXAMPLE['healthmonitor'], sot.health_monitors)
        self.assertEqual(EXAMPLE['listener'], sot.listeners)
        self.assertEqual(EXAMPLE['loadbalancer'], sot.load_balancers)
        self.assertEqual(EXAMPLE['l7policy'], sot.l7_policies)
        self.assertEqual(EXAMPLE['pool'], sot.pools)

    def test_prepare_request(self):
        body = {'id': 'ABCDEFGH', 'network': '12345'}
        quota_obj = quota.Quota(**body)
        response = quota_obj._prepare_request()
        self.assertNotIn('id', response)

    def test_alternate_id(self):
        my_tenant_id = 'my-tenant-id'
        body = {'tenant_id': my_tenant_id, 'network': 12345}
        quota_obj = quota.Quota(**body)
        self.assertEqual(my_tenant_id,
                         resource.Resource._get_id(quota_obj))


class TestQuotaDefault(testtools.TestCase):

    def test_basic(self):
        sot = quota.QuotaDefault()
        self.assertEqual('quota', sot.resource_key)
        self.assertEqual('quotas', sot.resources_key)
        self.assertEqual('/quotas/%(project)s/default', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

    def test_make_it(self):
        sot = quota.QuotaDefault(project='FAKE_PROJECT', **EXAMPLE)
        self.assertEqual(EXAMPLE['floatingip'], sot.floating_ips)
        self.assertEqual(EXAMPLE['network'], sot.networks)
        self.assertEqual(EXAMPLE['port'], sot.ports)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['router'], sot.routers)
        self.assertEqual(EXAMPLE['subnet'], sot.subnets)
        self.assertEqual(EXAMPLE['subnetpool'], sot.subnet_pools)
        self.assertEqual(EXAMPLE['security_group_rule'],
                         sot.security_group_rules)
        self.assertEqual(EXAMPLE['security_group'], sot.security_groups)
        self.assertEqual(EXAMPLE['rbac_policy'], sot.rbac_policies)
        self.assertEqual(EXAMPLE['healthmonitor'], sot.health_monitors)
        self.assertEqual(EXAMPLE['listener'], sot.listeners)
        self.assertEqual(EXAMPLE['loadbalancer'], sot.load_balancers)
        self.assertEqual(EXAMPLE['l7policy'], sot.l7_policies)
        self.assertEqual(EXAMPLE['pool'], sot.pools)
        self.assertEqual('FAKE_PROJECT', sot.project)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_agent.py

import mock
import testtools

from openstack.network.v2 import agent

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'agent_type': 'Test Agent',
    'alive': True,
    'availability_zone': 'az1',
    'binary': 'test-binary',
    'configurations': {'attr1': 'value1', 'attr2': 'value2'},
    'created_at': '2016-03-09T12:14:57.233772',
    'description': 'test description',
    'heartbeat_timestamp': '2016-08-09T12:14:57.233772',
    'host': 'test-host',
    'id': IDENTIFIER,
    'started_at': '2016-07-09T12:14:57.233772',
    'topic': 'test-topic',
    'ha_state': 'active',
}


class TestAgent(testtools.TestCase):

    def test_basic(self):
        sot = agent.Agent()
        self.assertEqual('agent', sot.resource_key)
        self.assertEqual('agents', sot.resources_key)
        self.assertEqual('/agents', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = agent.Agent(**EXAMPLE)
        self.assertTrue(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE['agent_type'], sot.agent_type)
        self.assertTrue(sot.is_alive)
        self.assertEqual(EXAMPLE['availability_zone'], sot.availability_zone)
        self.assertEqual(EXAMPLE['binary'], sot.binary)
        self.assertEqual(EXAMPLE['configurations'], sot.configuration)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['heartbeat_timestamp'],
                         sot.last_heartbeat_at)
        self.assertEqual(EXAMPLE['host'], sot.host)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['started_at'], sot.started_at)
        self.assertEqual(EXAMPLE['topic'], sot.topic)
        self.assertEqual(EXAMPLE['ha_state'], sot.ha_state)

    def test_add_agent_to_network(self):
        # Add an agent to a network.
        net = agent.Agent(**EXAMPLE)
        response = mock.Mock()
        response.body = {'network_id': '1'}
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)
        body = {'network_id': '1'}

        self.assertEqual(response.body,
                         net.add_agent_to_network(sess, **body))

        url = 'agents/IDENTIFIER/dhcp-networks'
        sess.post.assert_called_with(url, json=body)

    def test_remove_agent_from_network(self):
        # Remove an agent from a network.
        net = agent.Agent(**EXAMPLE)
        sess = mock.Mock()
        network_id = {}

        self.assertIsNone(net.remove_agent_from_network(sess, network_id))

        body = {'network_id': {}}
        sess.delete.assert_called_with('agents/IDENTIFIER/dhcp-networks/',
                                       json=body)

    def test_add_router_to_agent(self):
        # Add a router to an agent.
        sot = agent.Agent(**EXAMPLE)
        response = mock.Mock()
        response.body = {'router_id': '1'}
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)
        router_id = '1'

        self.assertEqual(response.body,
                         sot.add_router_to_agent(sess, router_id))

        body = {'router_id': router_id}
        url = 'agents/IDENTIFIER/l3-routers'
        sess.post.assert_called_with(url, json=body)

    def test_remove_router_from_agent(self):
        # Remove a router from an agent.
        sot = agent.Agent(**EXAMPLE)
        sess = mock.Mock()
        router_id = {}

        self.assertIsNone(sot.remove_router_from_agent(sess, router_id))

        body = {'router_id': {}}
        sess.delete.assert_called_with('agents/IDENTIFIER/l3-routers/',
                                       json=body)


class TestNetworkHostingDHCPAgent(testtools.TestCase):

    def test_basic(self):
        net = agent.NetworkHostingDHCPAgent()
        self.assertEqual('agent', net.resource_key)
        self.assertEqual('agents', net.resources_key)
        self.assertEqual('/networks/%(network_id)s/dhcp-agents',
                         net.base_path)
        self.assertEqual('dhcp-agent', net.resource_name)
        self.assertEqual('network', net.service.service_type)
        self.assertFalse(net.allow_create)
        self.assertTrue(net.allow_get)
        self.assertFalse(net.allow_update)
        self.assertFalse(net.allow_delete)
        self.assertTrue(net.allow_list)


class TestRouterL3Agent(testtools.TestCase):

    def test_basic(self):
        sot = agent.RouterL3Agent()
        self.assertEqual('agent', sot.resource_key)
        self.assertEqual('agents', sot.resources_key)
        self.assertEqual('/routers/%(router_id)s/l3-agents', sot.base_path)
        self.assertEqual('l3-agent', sot.resource_name)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_retrieve)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_router.py
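`TestQuota.test_prepare_request` and `test_alternate_id` above rely on a subtlety worth spelling out: for quotas the tenant id doubles as the resource id, and the synthetic `id` field must not leak into request bodies. A sketch of that behaviour with hypothetical helper names (the SDK's real `_prepare_request` and `_get_id` are more involved):

```python
def prepare_request(attrs, alternate_id='tenant_id'):
    # Drop the synthetic 'id' so it never appears in the request body,
    # and read the resource id from the alternate-id attribute instead.
    body = {key: value for key, value in attrs.items() if key != 'id'}
    resource_id = attrs.get(alternate_id)
    return resource_id, body


rid, body = prepare_request(
    {'id': 'ABCDEFGH', 'tenant_id': 'my-tenant', 'network': 12345})
```

Here `rid` comes from `tenant_id`, and `body` keeps every attribute except `id`, mirroring the two assertions in the quota tests.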
import mock
import testtools

from openstack.network.v2 import router

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'availability_zone_hints': ['1'],
    'availability_zones': ['2'],
    'created_at': 'timestamp1',
    'description': '3',
    'distributed': False,
    'external_gateway_info': {'4': 4},
    'flavor_id': '5',
    'ha': False,
    'id': IDENTIFIER,
    'name': '6',
    'revision': 7,
    'routes': ['8'],
    'status': '9',
    'tenant_id': '10',
    'updated_at': 'timestamp2',
}

EXAMPLE_WITH_OPTIONAL = {
    'admin_state_up': False,
    'availability_zone_hints': ['zone-1', 'zone-2'],
    'availability_zones': ['zone-2'],
    'description': 'description',
    'distributed': True,
    'external_gateway_info': {
        'network_id': '1',
        'enable_snat': True,
        'external_fixed_ips': [],
    },
    'ha': True,
    'id': IDENTIFIER,
    'name': 'router1',
    'routes': [{
        'nexthop': '172.24.4.20',
        'destination': '10.0.3.1/24',
    }],
    'status': 'ACTIVE',
    'tenant_id': '2',
}


class TestRouter(testtools.TestCase):

    def test_basic(self):
        sot = router.Router()
        self.assertEqual('router', sot.resource_key)
        self.assertEqual('routers', sot.resources_key)
        self.assertEqual('/routers', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = router.Router(**EXAMPLE)
        self.assertTrue(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE['availability_zone_hints'],
                         sot.availability_zone_hints)
        self.assertEqual(EXAMPLE['availability_zones'],
                         sot.availability_zones)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertFalse(sot.is_distributed)
        self.assertEqual(EXAMPLE['external_gateway_info'],
                         sot.external_gateway_info)
        self.assertEqual(EXAMPLE['flavor_id'], sot.flavor_id)
        self.assertFalse(sot.is_ha)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['revision'], sot.revision_number)
        self.assertEqual(EXAMPLE['routes'], sot.routes)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)

    def test_make_it_with_optional(self):
        sot = router.Router(**EXAMPLE_WITH_OPTIONAL)
        self.assertFalse(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['availability_zone_hints'],
                         sot.availability_zone_hints)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['availability_zones'],
                         sot.availability_zones)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['description'],
                         sot.description)
        self.assertTrue(sot.is_distributed)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['external_gateway_info'],
                         sot.external_gateway_info)
        self.assertTrue(sot.is_ha)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['id'], sot.id)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['name'], sot.name)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['routes'], sot.routes)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['status'], sot.status)
        self.assertEqual(EXAMPLE_WITH_OPTIONAL['tenant_id'], sot.project_id)

    def test_add_interface_subnet(self):
        # Add subnet to a router
        sot = router.Router(**EXAMPLE)
        response = mock.Mock()
        response.body = {"subnet_id": "3", "port_id": "2"}
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.put = mock.Mock(return_value=response)
        body = {"subnet_id": "3"}

        self.assertEqual(response.body, sot.add_interface(sess, **body))

        url = 'routers/IDENTIFIER/add_router_interface'
        sess.put.assert_called_with(url, json=body)

    def test_add_interface_port(self):
        # Add port to a router
        sot = router.Router(**EXAMPLE)
        response = mock.Mock()
        response.body = {"subnet_id": "3", "port_id": "3"}
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.put = mock.Mock(return_value=response)
        body = {"port_id": "3"}

        self.assertEqual(response.body, sot.add_interface(sess, **body))

        url = 'routers/IDENTIFIER/add_router_interface'
sess.put.assert_called_with(url, json=body) def test_remove_interface_subnet(self): # Remove subnet from a router sot = router.Router(**EXAMPLE) response = mock.Mock() response.body = {"subnet_id": "3", "port_id": "2"} response.json = mock.Mock(return_value=response.body) sess = mock.Mock() sess.put = mock.Mock(return_value=response) body = {"subnet_id": "3"} self.assertEqual(response.body, sot.remove_interface(sess, **body)) url = 'routers/IDENTIFIER/remove_router_interface' sess.put.assert_called_with(url, json=body) def test_remove_interface_port(self): # Remove port from a router sot = router.Router(**EXAMPLE) response = mock.Mock() response.body = {"subnet_id": "3", "port_id": "3"} response.json = mock.Mock(return_value=response.body) sess = mock.Mock() sess.put = mock.Mock(return_value=response) body = {"network_id": 3, "enable_snat": True} self.assertEqual(response.body, sot.remove_interface(sess, **body)) url = 'routers/IDENTIFIER/remove_router_interface' sess.put.assert_called_with(url, json=body) def test_add_router_gateway(self): # Add gateway to a router sot = router.Router(**EXAMPLE_WITH_OPTIONAL) response = mock.Mock() response.body = {"network_id": "3", "enable_snat": True} response.json = mock.Mock(return_value=response.body) sess = mock.Mock() sess.put = mock.Mock(return_value=response) body = {"network_id": 3, "enable_snat": True} self.assertEqual(response.body, sot.add_gateway(sess, **body)) url = 'routers/IDENTIFIER/add_gateway_router' sess.put.assert_called_with(url, json=body) def test_remove_router_gateway(self): # Remove gateway to a router sot = router.Router(**EXAMPLE_WITH_OPTIONAL) response = mock.Mock() response.body = {"network_id": "3", "enable_snat": True} response.json = mock.Mock(return_value=response.body) sess = mock.Mock() sess.put = mock.Mock(return_value=response) body = {"network_id": 3, "enable_snat": True} self.assertEqual(response.body, sot.remove_gateway(sess, **body)) url = 'routers/IDENTIFIER/remove_gateway_router' 
sess.put.assert_called_with(url, json=body) class TestL3AgentRouters(testtools.TestCase): def test_basic(self): sot = router.L3AgentRouter() self.assertEqual('router', sot.resource_key) self.assertEqual('routers', sot.resources_key) self.assertEqual('/agents/%(agent_id)s/l3-routers', sot.base_path) self.assertEqual('l3-router', sot.resource_name) self.assertEqual('network', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertTrue(sot.allow_retrieve) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) openstacksdk-0.11.3/openstack/tests/unit/network/v2/__init__.py0000666000175100017510000000000013236151340024540 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_security_group.py0000666000175100017510000000600313236151340027134 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.network.v2 import security_group

IDENTIFIER = 'IDENTIFIER'
RULES = [
    {
        "remote_group_id": None,
        "direction": "egress",
        "remote_ip_prefix": None,
        "protocol": None,
        "ethertype": "IPv6",
        "tenant_id": "4",
        "port_range_max": None,
        "port_range_min": None,
        "id": "5",
        "security_group_id": IDENTIFIER,
        "created_at": "2016-10-04T12:14:57.233772",
        "updated_at": "2016-10-12T12:15:34.233222",
        "revision_number": 6,
    },
    {
        "remote_group_id": "9",
        "direction": "ingress",
        "remote_ip_prefix": None,
        "protocol": None,
        "ethertype": "IPv6",
        "tenant_id": "4",
        "port_range_max": None,
        "port_range_min": None,
        "id": "6",
        "security_group_id": IDENTIFIER,
        "created_at": "2016-10-04T12:14:57.233772",
        "updated_at": "2016-10-12T12:15:34.233222",
        "revision_number": 7,
    },
]

EXAMPLE = {
    'created_at': '2016-10-04T12:14:57.233772',
    'description': '1',
    'id': IDENTIFIER,
    'name': '2',
    'revision_number': 3,
    'security_group_rules': RULES,
    'tenant_id': '4',
    'updated_at': '2016-10-14T12:16:57.233772',
}


class TestSecurityGroup(testtools.TestCase):

    def test_basic(self):
        sot = security_group.SecurityGroup()
        self.assertEqual('security_group', sot.resource_key)
        self.assertEqual('security_groups', sot.resources_key)
        self.assertEqual('/security-groups', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = security_group.SecurityGroup(**EXAMPLE)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['revision_number'], sot.revision_number)
        self.assertEqual(EXAMPLE['security_group_rules'],
                         sot.security_group_rules)
        self.assertEqual(dict, type(sot.security_group_rules[0]))
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_qos_rule_type.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.network.v2 import qos_rule_type

EXAMPLE = {
    'type': 'bandwidth_limit',
    'drivers': [{
        'name': 'openvswitch',
        'supported_parameters': [{
            'parameter_values': {'start': 0, 'end': 2147483647},
            'parameter_type': 'range',
            'parameter_name': 'max_kbps'
        }, {
            'parameter_values': ['ingress', 'egress'],
            'parameter_type': 'choices',
            'parameter_name': 'direction'
        }, {
            'parameter_values': {'start': 0, 'end': 2147483647},
            'parameter_type': 'range',
            'parameter_name': 'max_burst_kbps'
        }]
    }]
}


class TestQoSRuleType(testtools.TestCase):

    def test_basic(self):
        sot = qos_rule_type.QoSRuleType()
        self.assertEqual('rule_type', sot.resource_key)
        self.assertEqual('rule_types', sot.resources_key)
        self.assertEqual('/qos/rule-types', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = qos_rule_type.QoSRuleType(**EXAMPLE)
        self.assertEqual(EXAMPLE['type'], sot.type)
        self.assertEqual(EXAMPLE['drivers'], sot.drivers)
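A pattern that recurs throughout these tests is attribute aliasing: an assertion like `assertEqual(EXAMPLE['tenant_id'], sot.project_id)` checks that a resource attribute is backed by a differently named key in the wire-format body. As a rough illustration of how such a mapping can work (the `Field` and `MiniResource` names below are hypothetical, not SDK API):

```python
class Field:
    """Descriptor mapping a Python attribute to a wire-format body key."""

    def __init__(self, body_key):
        self.body_key = body_key

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return obj._body.get(self.body_key)


class MiniResource:
    """Tiny stand-in for an SDK resource: stores the raw response body
    and exposes renamed attributes through Field descriptors."""

    # The modern 'project_id' attribute reads the legacy 'tenant_id' key,
    # and 'revision_number' reads the server's 'revision' key.
    project_id = Field('tenant_id')
    revision_number = Field('revision')

    def __init__(self, **body):
        self._body = body


r = MiniResource(tenant_id='10', revision=7)
print(r.project_id)       # -> 10
print(r.revision_number)  # -> 7
```

This is why the tests construct resources from `EXAMPLE` dicts that use the server-side key names and then assert against the renamed Python attributes.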
openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_load_balancer.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.network.v2 import load_balancer

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'description': '2',
    'id': IDENTIFIER,
    'listeners': [{'id': '4'}],
    'name': '5',
    'operating_status': '6',
    'provisioning_status': '7',
    'tenant_id': '8',
    'vip_address': '9',
    'vip_subnet_id': '10',
    'vip_port_id': '11',
    'provider': '12',
    'pools': [{'id': '13'}],
}


class TestLoadBalancer(testtools.TestCase):

    def test_basic(self):
        sot = load_balancer.LoadBalancer()
        self.assertEqual('loadbalancer', sot.resource_key)
        self.assertEqual('loadbalancers', sot.resources_key)
        self.assertEqual('/lbaas/loadbalancers', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = load_balancer.LoadBalancer(**EXAMPLE)
        self.assertTrue(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['listeners'], sot.listener_ids)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['operating_status'], sot.operating_status)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         sot.provisioning_status)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['vip_address'], sot.vip_address)
        self.assertEqual(EXAMPLE['vip_subnet_id'], sot.vip_subnet_id)
        self.assertEqual(EXAMPLE['vip_port_id'], sot.vip_port_id)
        self.assertEqual(EXAMPLE['provider'], sot.provider)
        self.assertEqual(EXAMPLE['pools'], sot.pool_ids)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_security_group_rule.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.network.v2 import security_group_rule

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'created_at': '0',
    'description': '1',
    'direction': '2',
    'ethertype': '3',
    'id': IDENTIFIER,
    'port_range_max': 4,
    'port_range_min': 5,
    'protocol': '6',
    'remote_group_id': '7',
    'remote_ip_prefix': '8',
    'revision_number': 9,
    'security_group_id': '10',
    'tenant_id': '11',
    'updated_at': '12'
}


class TestSecurityGroupRule(testtools.TestCase):

    def test_basic(self):
        sot = security_group_rule.SecurityGroupRule()
        self.assertEqual('security_group_rule', sot.resource_key)
        self.assertEqual('security_group_rules', sot.resources_key)
        self.assertEqual('/security-group-rules', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = security_group_rule.SecurityGroupRule(**EXAMPLE)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['direction'], sot.direction)
        self.assertEqual(EXAMPLE['ethertype'], sot.ether_type)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['port_range_max'], sot.port_range_max)
        self.assertEqual(EXAMPLE['port_range_min'], sot.port_range_min)
        self.assertEqual(EXAMPLE['protocol'], sot.protocol)
        self.assertEqual(EXAMPLE['remote_group_id'], sot.remote_group_id)
        self.assertEqual(EXAMPLE['remote_ip_prefix'], sot.remote_ip_prefix)
        self.assertEqual(EXAMPLE['revision_number'], sot.revision_number)
        self.assertEqual(EXAMPLE['security_group_id'],
                         sot.security_group_id)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_subnet_pool.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.network.v2 import subnet_pool

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'address_scope_id': '1',
    'created_at': '2',
    'default_prefixlen': 3,
    'default_quota': 4,
    'description': '5',
    'id': IDENTIFIER,
    'ip_version': 6,
    'is_default': True,
    'max_prefixlen': 7,
    'min_prefixlen': 8,
    'name': '9',
    'prefixes': ['10', '11'],
    'revision_number': 12,
    'shared': True,
    'tenant_id': '13',
    'updated_at': '14',
}


class TestSubnetpool(testtools.TestCase):

    def test_basic(self):
        sot = subnet_pool.SubnetPool()
        self.assertEqual('subnetpool', sot.resource_key)
        self.assertEqual('subnetpools', sot.resources_key)
        self.assertEqual('/subnetpools', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = subnet_pool.SubnetPool(**EXAMPLE)
        self.assertEqual(EXAMPLE['address_scope_id'], sot.address_scope_id)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['default_prefixlen'],
                         sot.default_prefix_length)
        self.assertEqual(EXAMPLE['default_quota'], sot.default_quota)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['ip_version'], sot.ip_version)
        self.assertTrue(sot.is_default)
        self.assertEqual(EXAMPLE['max_prefixlen'],
                         sot.maximum_prefix_length)
        self.assertEqual(EXAMPLE['min_prefixlen'],
                         sot.minimum_prefix_length)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['prefixes'], sot.prefixes)
        self.assertEqual(EXAMPLE['revision_number'], sot.revision_number)
        self.assertTrue(sot.is_shared)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_pool_member.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools from openstack.network.v2 import pool_member IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'address': '1', 'admin_state_up': True, 'id': IDENTIFIER, 'tenant_id': '4', 'protocol_port': 5, 'subnet_id': '6', 'weight': 7, 'name': '8', 'pool_id': 'FAKE_POOL', } class TestPoolMember(testtools.TestCase): def test_basic(self): sot = pool_member.PoolMember() self.assertEqual('member', sot.resource_key) self.assertEqual('members', sot.resources_key) self.assertEqual('/lbaas/pools/%(pool_id)s/members', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = pool_member.PoolMember(**EXAMPLE) self.assertEqual(EXAMPLE['address'], sot.address) self.assertTrue(sot.is_admin_state_up) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['tenant_id'], sot.project_id) self.assertEqual(EXAMPLE['protocol_port'], sot.protocol_port) self.assertEqual(EXAMPLE['subnet_id'], sot.subnet_id) self.assertEqual(EXAMPLE['weight'], sot.weight) self.assertEqual(EXAMPLE['name'], sot.name) self.assertEqual(EXAMPLE['pool_id'], sot.pool_id) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_subnet.py0000666000175100017510000000605013236151340025353 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from openstack.network.v2 import subnet IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'allocation_pools': [{'1': 1}], 'cidr': '2', 'created_at': '3', 'description': '4', 'dns_nameservers': ['5'], 'enable_dhcp': True, 'gateway_ip': '6', 'host_routes': ['7'], 'id': IDENTIFIER, 'ip_version': 8, 'ipv6_address_mode': '9', 'ipv6_ra_mode': '10', 'name': '11', 'network_id': '12', 'revision_number': 13, 'segment_id': '14', 'service_types': ['15'], 'subnetpool_id': '16', 'tenant_id': '17', 'updated_at': '18', 'use_default_subnetpool': True, } class TestSubnet(testtools.TestCase): def test_basic(self): sot = subnet.Subnet() self.assertEqual('subnet', sot.resource_key) self.assertEqual('subnets', sot.resources_key) self.assertEqual('/subnets', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = subnet.Subnet(**EXAMPLE) self.assertEqual(EXAMPLE['allocation_pools'], sot.allocation_pools) self.assertEqual(EXAMPLE['cidr'], sot.cidr) self.assertEqual(EXAMPLE['created_at'], sot.created_at) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['dns_nameservers'], sot.dns_nameservers) self.assertTrue(sot.is_dhcp_enabled) self.assertEqual(EXAMPLE['gateway_ip'], sot.gateway_ip) self.assertEqual(EXAMPLE['host_routes'], sot.host_routes) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['ip_version'], sot.ip_version) self.assertEqual(EXAMPLE['ipv6_address_mode'], sot.ipv6_address_mode) self.assertEqual(EXAMPLE['ipv6_ra_mode'], sot.ipv6_ra_mode) self.assertEqual(EXAMPLE['name'], sot.name) self.assertEqual(EXAMPLE['network_id'], sot.network_id) self.assertEqual(EXAMPLE['revision_number'], sot.revision_number) self.assertEqual(EXAMPLE['segment_id'], sot.segment_id) self.assertEqual(EXAMPLE['service_types'], sot.service_types) 
self.assertEqual(EXAMPLE['subnetpool_id'], sot.subnet_pool_id) self.assertEqual(EXAMPLE['tenant_id'], sot.project_id) self.assertEqual(EXAMPLE['updated_at'], sot.updated_at) self.assertTrue(sot.use_default_subnet_pool) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_availability_zone.py0000666000175100017510000000313513236151340027561 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network.v2 import availability_zone IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'id': IDENTIFIER, 'name': '1', 'resource': '2', 'state': '3', } class TestAvailabilityZone(testtools.TestCase): def test_basic(self): sot = availability_zone.AvailabilityZone() self.assertEqual('availability_zone', sot.resource_key) self.assertEqual('availability_zones', sot.resources_key) self.assertEqual('/availability_zones', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = availability_zone.AvailabilityZone(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['name'], sot.name) self.assertEqual(EXAMPLE['resource'], sot.resource) self.assertEqual(EXAMPLE['state'], sot.state) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_tag.py0000666000175100017510000000423013236151340024624 0ustar zuulzuul00000000000000# 
Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import inspect import mock import testtools from openstack.network.v2 import network import openstack.network.v2 as network_resources from openstack.network.v2.tag import TagMixin ID = 'IDENTIFIER' class TestTag(testtools.TestCase): @staticmethod def _create_network_resource(tags=None): tags = tags or [] return network.Network(id=ID, name='test-net', tags=tags) def test_tags_attribute(self): net = self._create_network_resource() self.assertTrue(hasattr(net, 'tags')) self.assertIsInstance(net.tags, list) def test_set_tags(self): net = self._create_network_resource() sess = mock.Mock() result = net.set_tags(sess, ['blue', 'green']) # Check tags attribute is updated self.assertEqual(['blue', 'green'], net.tags) # Check the passed resource is returned self.assertEqual(net, result) url = 'networks/' + ID + '/tags' sess.put.assert_called_once_with(url, json={'tags': ['blue', 'green']}) def test_tagged_resource_always_created_with_empty_tag_list(self): for _, module in inspect.getmembers(network_resources, inspect.ismodule): for _, resource in inspect.getmembers(module, inspect.isclass): if issubclass(resource, TagMixin) and resource != TagMixin: x_resource = resource.new( id="%s_ID" % resource.resource_key.upper()) self.assertIsNotNone(x_resource.tags) self.assertEqual(x_resource.tags, list()) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_health_monitor.py0000666000175100017510000000446113236151340027073 0ustar 
zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network.v2 import health_monitor IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'admin_state_up': True, 'delay': '2', 'expected_codes': '3', 'http_method': '4', 'id': IDENTIFIER, 'max_retries': '6', 'pools': [{'id': '7'}], 'pool_id': '7', 'tenant_id': '8', 'timeout': '9', 'type': '10', 'url_path': '11', 'name': '12', } class TestHealthMonitor(testtools.TestCase): def test_basic(self): sot = health_monitor.HealthMonitor() self.assertEqual('healthmonitor', sot.resource_key) self.assertEqual('healthmonitors', sot.resources_key) self.assertEqual('/lbaas/healthmonitors', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = health_monitor.HealthMonitor(**EXAMPLE) self.assertTrue(sot.is_admin_state_up) self.assertEqual(EXAMPLE['delay'], sot.delay) self.assertEqual(EXAMPLE['expected_codes'], sot.expected_codes) self.assertEqual(EXAMPLE['http_method'], sot.http_method) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['max_retries'], sot.max_retries) self.assertEqual(EXAMPLE['pools'], sot.pool_ids) self.assertEqual(EXAMPLE['pool_id'], sot.pool_id) self.assertEqual(EXAMPLE['tenant_id'], sot.project_id) self.assertEqual(EXAMPLE['timeout'], sot.timeout) 
self.assertEqual(EXAMPLE['type'], sot.type) self.assertEqual(EXAMPLE['url_path'], sot.url_path) self.assertEqual(EXAMPLE['name'], sot.name) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_auto_allocated_topology.py0000666000175100017510000000247513236151340030776 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network.v2 import auto_allocated_topology EXAMPLE = { 'tenant_id': '1', 'dry_run': False, } class TestAutoAllocatedTopology(testtools.TestCase): def test_basic(self): topo = auto_allocated_topology.AutoAllocatedTopology self.assertEqual('auto_allocated_topology', topo.resource_key) self.assertEqual('/auto-allocated-topology', topo.base_path) self.assertFalse(topo.allow_create) self.assertTrue(topo.allow_get) self.assertFalse(topo.allow_update) self.assertTrue(topo.allow_delete) self.assertFalse(topo.allow_list) def test_make_it(self): topo = auto_allocated_topology.AutoAllocatedTopology(**EXAMPLE) self.assertEqual(EXAMPLE['tenant_id'], topo.project_id) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_segment.py0000666000175100017510000000350513236151340025517 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network.v2 import segment IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'description': '1', 'id': IDENTIFIER, 'name': '2', 'network_id': '3', 'network_type': '4', 'physical_network': '5', 'segmentation_id': 6, } class TestSegment(testtools.TestCase): def test_basic(self): sot = segment.Segment() self.assertEqual('segment', sot.resource_key) self.assertEqual('segments', sot.resources_key) self.assertEqual('/segments', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = segment.Segment(**EXAMPLE) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['name'], sot.name) self.assertEqual(EXAMPLE['network_id'], sot.network_id) self.assertEqual(EXAMPLE['network_type'], sot.network_type) self.assertEqual(EXAMPLE['physical_network'], sot.physical_network) self.assertEqual(EXAMPLE['segmentation_id'], sot.segmentation_id) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_address_scope.py0000666000175100017510000000320113236151340026664 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network.v2 import address_scope IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'id': IDENTIFIER, 'ip_version': 4, 'name': '1', 'shared': True, 'tenant_id': '2', } class TestAddressScope(testtools.TestCase): def test_basic(self): sot = address_scope.AddressScope() self.assertEqual('address_scope', sot.resource_key) self.assertEqual('address_scopes', sot.resources_key) self.assertEqual('/address-scopes', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = address_scope.AddressScope(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['ip_version'], sot.ip_version) self.assertEqual(EXAMPLE['name'], sot.name) self.assertTrue(sot.is_shared) self.assertEqual(EXAMPLE['tenant_id'], sot.project_id) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_network.py0000666000175100017510000001261713236151340025552 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.network.v2 import network IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'admin_state_up': True, 'availability_zone_hints': ['1', '2'], 'availability_zones': ['3'], 'created_at': '2016-03-09T12:14:57.233772', 'description': '4', 'dns_domain': '5', 'id': IDENTIFIER, 'ipv4_address_scope': '6', 'ipv6_address_scope': '7', 'is_default': False, 'mtu': 8, 'name': '9', 'port_security_enabled': True, 'project_id': '10', 'provider:network_type': '11', 'provider:physical_network': '12', 'provider:segmentation_id': '13', 'qos_policy_id': '14', 'revision_number': 15, 'router:external': True, 'segments': '16', 'shared': True, 'status': '17', 'subnets': ['18', '19'], 'updated_at': '2016-07-09T12:14:57.233772', 'vlan_transparent': False, } class TestNetwork(testtools.TestCase): def test_basic(self): sot = network.Network() self.assertEqual('network', sot.resource_key) self.assertEqual('networks', sot.resources_key) self.assertEqual('/networks', sot.base_path) self.assertEqual('network', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = network.Network(**EXAMPLE) self.assertTrue(sot.is_admin_state_up) self.assertEqual(EXAMPLE['availability_zone_hints'], sot.availability_zone_hints) self.assertEqual(EXAMPLE['availability_zones'], sot.availability_zones) self.assertEqual(EXAMPLE['created_at'], sot.created_at) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['dns_domain'], sot.dns_domain) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['ipv4_address_scope'], sot.ipv4_address_scope_id) self.assertEqual(EXAMPLE['ipv6_address_scope'], sot.ipv6_address_scope_id) self.assertFalse(sot.is_default) self.assertEqual(EXAMPLE['mtu'], sot.mtu) 
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertTrue(sot.is_port_security_enabled)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['provider:network_type'],
                         sot.provider_network_type)
        self.assertEqual(EXAMPLE['provider:physical_network'],
                         sot.provider_physical_network)
        self.assertEqual(EXAMPLE['provider:segmentation_id'],
                         sot.provider_segmentation_id)
        self.assertEqual(EXAMPLE['qos_policy_id'], sot.qos_policy_id)
        self.assertEqual(EXAMPLE['revision_number'], sot.revision_number)
        self.assertTrue(sot.is_router_external)
        self.assertEqual(EXAMPLE['segments'], sot.segments)
        self.assertTrue(sot.is_shared)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['subnets'], sot.subnet_ids)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)
        self.assertEqual(EXAMPLE['vlan_transparent'], sot.is_vlan_transparent)

        self.assertDictEqual(
            {'limit': 'limit',
             'marker': 'marker',
             'description': 'description',
             'name': 'name',
             'project_id': 'tenant_id',
             'status': 'status',
             'ipv4_address_scope_id': 'ipv4_address_scope',
             'ipv6_address_scope_id': 'ipv6_address_scope',
             'is_admin_state_up': 'admin_state_up',
             'is_port_security_enabled': 'port_security_enabled',
             'is_router_external': 'router:external',
             'is_shared': 'shared',
             'provider_network_type': 'provider:network_type',
             'provider_physical_network': 'provider:physical_network',
             'provider_segmentation_id': 'provider:segmentation_id',
             'tags': 'tags',
             'any_tags': 'tags-any',
             'not_tags': 'not-tags',
             'not_any_tags': 'not-tags-any',
             },
            sot._query_mapping._mapping)


class TestDHCPAgentHostingNetwork(testtools.TestCase):

    def test_basic(self):
        net = network.DHCPAgentHostingNetwork()
        self.assertEqual('network', net.resource_key)
        self.assertEqual('networks', net.resources_key)
        self.assertEqual('/agents/%(agent_id)s/dhcp-networks', net.base_path)
        self.assertEqual('dhcp-network', net.resource_name)
        self.assertEqual('network', net.service.service_type)
        self.assertFalse(net.allow_create)
        self.assertTrue(net.allow_get)
        self.assertFalse(net.allow_update)
        self.assertFalse(net.allow_delete)
        self.assertTrue(net.allow_list)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_service_provider.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.network.v2 import service_provider

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'service_type': 'L3_ROUTER_NAT',
    'name': '4',
    'default': False,
}


class TestServiceProvider(testtools.TestCase):

    def test_basic(self):
        sot = service_provider.ServiceProvider()
        self.assertEqual('service_providers', sot.resources_key)
        self.assertEqual('/service-providers', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = service_provider.ServiceProvider(**EXAMPLE)
        self.assertEqual(EXAMPLE['service_type'], sot.service_type)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['default'], sot.is_default)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_listener.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.network.v2 import listener

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'connection_limit': '2',
    'default_pool_id': '3',
    'description': '4',
    'id': IDENTIFIER,
    'loadbalancers': [{'id': '6'}],
    'loadbalancer_id': '6',
    'name': '7',
    'project_id': '8',
    'protocol': '9',
    'protocol_port': '10',
    'default_tls_container_ref': '11',
    'sni_container_refs': [],
}


class TestListener(testtools.TestCase):

    def test_basic(self):
        sot = listener.Listener()
        self.assertEqual('listener', sot.resource_key)
        self.assertEqual('listeners', sot.resources_key)
        self.assertEqual('/lbaas/listeners', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = listener.Listener(**EXAMPLE)
        self.assertTrue(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE['connection_limit'], sot.connection_limit)
        self.assertEqual(EXAMPLE['default_pool_id'], sot.default_pool_id)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['loadbalancers'], sot.load_balancer_ids)
        self.assertEqual(EXAMPLE['loadbalancer_id'], sot.load_balancer_id)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['protocol'], sot.protocol)
        self.assertEqual(EXAMPLE['protocol_port'], sot.protocol_port)
        self.assertEqual(EXAMPLE['default_tls_container_ref'],
                         sot.default_tls_container_ref)
        self.assertEqual(EXAMPLE['sni_container_refs'],
                         sot.sni_container_refs)

openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import deprecation
import mock
import uuid

from openstack import exceptions
from openstack.network.v2 import _proxy
from openstack.network.v2 import address_scope
from openstack.network.v2 import agent
from openstack.network.v2 import auto_allocated_topology
from openstack.network.v2 import availability_zone
from openstack.network.v2 import extension
from openstack.network.v2 import flavor
from openstack.network.v2 import floating_ip
from openstack.network.v2 import health_monitor
from openstack.network.v2 import listener
from openstack.network.v2 import load_balancer
from openstack.network.v2 import metering_label
from openstack.network.v2 import metering_label_rule
from openstack.network.v2 import network
from openstack.network.v2 import network_ip_availability
from openstack.network.v2 import pool
from openstack.network.v2 import pool_member
from openstack.network.v2 import port
from openstack.network.v2 import qos_bandwidth_limit_rule
from openstack.network.v2 import qos_dscp_marking_rule
from openstack.network.v2 import qos_minimum_bandwidth_rule
from openstack.network.v2 import qos_policy
from openstack.network.v2 import qos_rule_type
from openstack.network.v2 import quota
from openstack.network.v2 import rbac_policy
from openstack.network.v2 import router
from openstack.network.v2 import security_group
from openstack.network.v2 import security_group_rule
from openstack.network.v2 import segment
from openstack.network.v2 import service_profile
from openstack.network.v2 import service_provider
from openstack.network.v2 import subnet
from openstack.network.v2 import subnet_pool
from openstack.network.v2 import vpn_service
from openstack import proxy as proxy_base
from openstack.tests.unit import test_proxy_base

QOS_POLICY_ID = 'qos-policy-id-' + uuid.uuid4().hex
QOS_RULE_ID = 'qos-rule-id-' + uuid.uuid4().hex
NETWORK_ID = 'network-id-' + uuid.uuid4().hex
AGENT_ID = 'agent-id-' + uuid.uuid4().hex
ROUTER_ID = 'router-id-' + uuid.uuid4().hex


class TestNetworkProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestNetworkProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_address_scope_create_attrs(self):
        self.verify_create(self.proxy.create_address_scope,
                           address_scope.AddressScope)

    def test_address_scope_delete(self):
        self.verify_delete(self.proxy.delete_address_scope,
                           address_scope.AddressScope, False)

    def test_address_scope_delete_ignore(self):
        self.verify_delete(self.proxy.delete_address_scope,
                           address_scope.AddressScope, True)

    def test_address_scope_find(self):
        self.verify_find(self.proxy.find_address_scope,
                         address_scope.AddressScope)

    def test_address_scope_get(self):
        self.verify_get(self.proxy.get_address_scope,
                        address_scope.AddressScope)

    def test_address_scopes(self):
        self.verify_list(self.proxy.address_scopes,
                         address_scope.AddressScope,
                         paginated=False)

    def test_address_scope_update(self):
        self.verify_update(self.proxy.update_address_scope,
                           address_scope.AddressScope)

    def test_agent_delete(self):
        self.verify_delete(self.proxy.delete_agent, agent.Agent, True)

    def test_agent_get(self):
        self.verify_get(self.proxy.get_agent, agent.Agent)

    def test_agents(self):
        self.verify_list(self.proxy.agents, agent.Agent,
                         paginated=False)

    def test_agent_update(self):
        self.verify_update(self.proxy.update_agent, agent.Agent)

    def test_availability_zones(self):
        self.verify_list_no_kwargs(self.proxy.availability_zones,
                                   availability_zone.AvailabilityZone,
                                   paginated=False)

    def test_dhcp_agent_hosting_networks(self):
        self.verify_list(
            self.proxy.dhcp_agent_hosting_networks,
            network.DHCPAgentHostingNetwork,
            paginated=False,
            method_kwargs={'agent': AGENT_ID},
            expected_kwargs={'agent_id': AGENT_ID}
        )

    def test_network_hosting_dhcp_agents(self):
        self.verify_list(
            self.proxy.network_hosting_dhcp_agents,
            agent.NetworkHostingDHCPAgent,
            paginated=False,
            method_kwargs={'network': NETWORK_ID},
            expected_kwargs={'network_id': NETWORK_ID}
        )

    def test_extension_find(self):
        self.verify_find(self.proxy.find_extension,
                         extension.Extension)

    def test_extensions(self):
        self.verify_list(self.proxy.extensions, extension.Extension,
                         paginated=False)

    def test_floating_ip_create_attrs(self):
        self.verify_create(self.proxy.create_ip, floating_ip.FloatingIP)

    def test_floating_ip_delete(self):
        self.verify_delete(self.proxy.delete_ip,
                           floating_ip.FloatingIP, False)

    def test_floating_ip_delete_ignore(self):
        self.verify_delete(self.proxy.delete_ip,
                           floating_ip.FloatingIP, True)

    def test_floating_ip_find(self):
        self.verify_find(self.proxy.find_ip, floating_ip.FloatingIP)

    def test_floating_ip_get(self):
        self.verify_get(self.proxy.get_ip, floating_ip.FloatingIP)

    def test_ips(self):
        self.verify_list(self.proxy.ips, floating_ip.FloatingIP,
                         paginated=False)

    def test_floating_ip_update(self):
        self.verify_update(self.proxy.update_ip, floating_ip.FloatingIP)

    def test_health_monitor_create_attrs(self):
        self.verify_create(self.proxy.create_health_monitor,
                           health_monitor.HealthMonitor)

    def test_health_monitor_delete(self):
        self.verify_delete(self.proxy.delete_health_monitor,
                           health_monitor.HealthMonitor, False)

    def test_health_monitor_delete_ignore(self):
        self.verify_delete(self.proxy.delete_health_monitor,
                           health_monitor.HealthMonitor, True)

    def test_health_monitor_find(self):
        self.verify_find(self.proxy.find_health_monitor,
                         health_monitor.HealthMonitor)

    def test_health_monitor_get(self):
        self.verify_get(self.proxy.get_health_monitor,
                        health_monitor.HealthMonitor)

    def test_health_monitors(self):
        self.verify_list(self.proxy.health_monitors,
                         health_monitor.HealthMonitor,
                         paginated=False)

    def test_health_monitor_update(self):
        self.verify_update(self.proxy.update_health_monitor,
                           health_monitor.HealthMonitor)

    def test_listener_create_attrs(self):
        self.verify_create(self.proxy.create_listener, listener.Listener)

    def test_listener_delete(self):
        self.verify_delete(self.proxy.delete_listener,
                           listener.Listener, False)

    def test_listener_delete_ignore(self):
        self.verify_delete(self.proxy.delete_listener,
                           listener.Listener, True)

    def test_listener_find(self):
        self.verify_find(self.proxy.find_listener, listener.Listener)

    def test_listener_get(self):
        self.verify_get(self.proxy.get_listener, listener.Listener)

    def test_listeners(self):
        self.verify_list(self.proxy.listeners, listener.Listener,
                         paginated=False)

    def test_listener_update(self):
        self.verify_update(self.proxy.update_listener, listener.Listener)

    def test_load_balancer_create_attrs(self):
        self.verify_create(self.proxy.create_load_balancer,
                           load_balancer.LoadBalancer)

    def test_load_balancer_delete(self):
        self.verify_delete(self.proxy.delete_load_balancer,
                           load_balancer.LoadBalancer, False)

    def test_load_balancer_delete_ignore(self):
        self.verify_delete(self.proxy.delete_load_balancer,
                           load_balancer.LoadBalancer, True)

    def test_load_balancer_find(self):
        self.verify_find(self.proxy.find_load_balancer,
                         load_balancer.LoadBalancer)

    def test_load_balancer_get(self):
        self.verify_get(self.proxy.get_load_balancer,
                        load_balancer.LoadBalancer)

    def test_load_balancers(self):
        self.verify_list(self.proxy.load_balancers,
                         load_balancer.LoadBalancer,
                         paginated=False)

    def test_load_balancer_update(self):
        self.verify_update(self.proxy.update_load_balancer,
                           load_balancer.LoadBalancer)

    def test_metering_label_create_attrs(self):
        self.verify_create(self.proxy.create_metering_label,
                           metering_label.MeteringLabel)

    def test_metering_label_delete(self):
        self.verify_delete(self.proxy.delete_metering_label,
                           metering_label.MeteringLabel, False)

    def test_metering_label_delete_ignore(self):
        self.verify_delete(self.proxy.delete_metering_label,
                           metering_label.MeteringLabel, True)

    def test_metering_label_find(self):
        self.verify_find(self.proxy.find_metering_label,
                         metering_label.MeteringLabel)

    def test_metering_label_get(self):
        self.verify_get(self.proxy.get_metering_label,
                        metering_label.MeteringLabel)

    def test_metering_labels(self):
        self.verify_list(self.proxy.metering_labels,
                         metering_label.MeteringLabel,
                         paginated=False)

    def test_metering_label_update(self):
        self.verify_update(self.proxy.update_metering_label,
                           metering_label.MeteringLabel)

    def test_metering_label_rule_create_attrs(self):
        self.verify_create(self.proxy.create_metering_label_rule,
                           metering_label_rule.MeteringLabelRule)

    def test_metering_label_rule_delete(self):
        self.verify_delete(self.proxy.delete_metering_label_rule,
                           metering_label_rule.MeteringLabelRule, False)

    def test_metering_label_rule_delete_ignore(self):
        self.verify_delete(self.proxy.delete_metering_label_rule,
                           metering_label_rule.MeteringLabelRule, True)

    def test_metering_label_rule_find(self):
        self.verify_find(self.proxy.find_metering_label_rule,
                         metering_label_rule.MeteringLabelRule)

    def test_metering_label_rule_get(self):
        self.verify_get(self.proxy.get_metering_label_rule,
                        metering_label_rule.MeteringLabelRule)

    def test_metering_label_rules(self):
        self.verify_list(self.proxy.metering_label_rules,
                         metering_label_rule.MeteringLabelRule,
                         paginated=False)

    def test_metering_label_rule_update(self):
        self.verify_update(self.proxy.update_metering_label_rule,
                           metering_label_rule.MeteringLabelRule)

    def test_network_create_attrs(self):
        self.verify_create(self.proxy.create_network, network.Network)

    def test_network_delete(self):
        self.verify_delete(self.proxy.delete_network,
                           network.Network, False)

    def test_network_delete_ignore(self):
        self.verify_delete(self.proxy.delete_network,
                           network.Network, True)

    def test_network_find(self):
        self.verify_find(self.proxy.find_network, network.Network)

    def test_network_find_with_filter(self):
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_network,
                      method_args=["net1"],
                      method_kwargs={"project_id": "1"},
                      expected_args=[network.Network, "net1"],
                      expected_kwargs={"project_id": "1",
                                       "ignore_missing": True})

    def test_network_get(self):
        self.verify_get(self.proxy.get_network, network.Network)

    def test_networks(self):
        self.verify_list(self.proxy.networks, network.Network,
                         paginated=False)

    def test_network_update(self):
        self.verify_update(self.proxy.update_network, network.Network)

    def test_flavor_create_attrs(self):
        self.verify_create(self.proxy.create_flavor, flavor.Flavor)

    def test_flavor_delete(self):
        self.verify_delete(self.proxy.delete_flavor, flavor.Flavor, True)

    def test_flavor_find(self):
        self.verify_find(self.proxy.find_flavor, flavor.Flavor)

    def test_flavor_get(self):
        self.verify_get(self.proxy.get_flavor, flavor.Flavor)

    def test_flavor_update(self):
        self.verify_update(self.proxy.update_flavor, flavor.Flavor)

    def test_flavors(self):
        self.verify_list(self.proxy.flavors, flavor.Flavor,
                         paginated=True)

    def test_service_profile_create_attrs(self):
        self.verify_create(self.proxy.create_service_profile,
                           service_profile.ServiceProfile)

    def test_service_profile_delete(self):
        self.verify_delete(self.proxy.delete_service_profile,
                           service_profile.ServiceProfile, True)

    def test_service_profile_find(self):
        self.verify_find(self.proxy.find_service_profile,
                         service_profile.ServiceProfile)

    def test_service_profile_get(self):
        self.verify_get(self.proxy.get_service_profile,
                        service_profile.ServiceProfile)

    def test_service_profiles(self):
        self.verify_list(self.proxy.service_profiles,
                         service_profile.ServiceProfile,
                         paginated=True)

    def test_service_profile_update(self):
        self.verify_update(self.proxy.update_service_profile,
                           service_profile.ServiceProfile)

    def test_network_ip_availability_find(self):
        self.verify_find(self.proxy.find_network_ip_availability,
                         network_ip_availability.NetworkIPAvailability)

    def test_network_ip_availability_get(self):
        self.verify_get(self.proxy.get_network_ip_availability,
                        network_ip_availability.NetworkIPAvailability)

    def test_network_ip_availabilities(self):
        self.verify_list(self.proxy.network_ip_availabilities,
                         network_ip_availability.NetworkIPAvailability)

    def test_pool_member_create_attrs(self):
        self.verify_create(self.proxy.create_pool_member,
                           pool_member.PoolMember,
                           method_kwargs={"pool": "test_id"},
                           expected_kwargs={"pool_id": "test_id"})

    def test_pool_member_delete(self):
        self.verify_delete(self.proxy.delete_pool_member,
                           pool_member.PoolMember, False,
                           {"pool": "test_id"}, {"pool_id": "test_id"})

    def test_pool_member_delete_ignore(self):
        self.verify_delete(self.proxy.delete_pool_member,
                           pool_member.PoolMember, True,
                           {"pool": "test_id"}, {"pool_id": "test_id"})

    def test_pool_member_find(self):
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_pool_member,
                      method_args=["MEMBER", "POOL"],
                      expected_args=[pool_member.PoolMember, "MEMBER"],
                      expected_kwargs={"pool_id": "POOL",
                                       "ignore_missing": True})

    def test_pool_member_get(self):
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_pool_member,
                      method_args=["MEMBER", "POOL"],
                      expected_args=[pool_member.PoolMember, "MEMBER"],
                      expected_kwargs={"pool_id": "POOL"})

    def test_pool_members(self):
        self.verify_list(self.proxy.pool_members, pool_member.PoolMember,
                         paginated=False,
                         method_args=["test_id"],
                         expected_kwargs={"pool_id": "test_id"})

    def test_pool_member_update(self):
        self._verify2("openstack.proxy.BaseProxy._update",
                      self.proxy.update_pool_member,
                      method_args=["MEMBER", "POOL"],
                      expected_args=[pool_member.PoolMember, "MEMBER"],
                      expected_kwargs={"pool_id": "POOL"})

    def test_pool_create_attrs(self):
        self.verify_create(self.proxy.create_pool, pool.Pool)

    def test_pool_delete(self):
        self.verify_delete(self.proxy.delete_pool, pool.Pool, False)

    def test_pool_delete_ignore(self):
        self.verify_delete(self.proxy.delete_pool, pool.Pool, True)

    def test_pool_find(self):
        self.verify_find(self.proxy.find_pool, pool.Pool)

    def test_pool_get(self):
        self.verify_get(self.proxy.get_pool, pool.Pool)

    def test_pools(self):
        self.verify_list(self.proxy.pools, pool.Pool,
                         paginated=False)

    def test_pool_update(self):
        self.verify_update(self.proxy.update_pool, pool.Pool)

    def test_port_create_attrs(self):
        self.verify_create(self.proxy.create_port, port.Port)

    def test_port_delete(self):
        self.verify_delete(self.proxy.delete_port, port.Port, False)

    def test_port_delete_ignore(self):
        self.verify_delete(self.proxy.delete_port, port.Port, True)

    def test_port_find(self):
        self.verify_find(self.proxy.find_port, port.Port)

    def test_port_get(self):
        self.verify_get(self.proxy.get_port, port.Port)

    def test_ports(self):
        self.verify_list(self.proxy.ports, port.Port, paginated=False)

    def test_port_update(self):
        self.verify_update(self.proxy.update_port, port.Port)

    def test_qos_bandwidth_limit_rule_create_attrs(self):
        self.verify_create(
            self.proxy.create_qos_bandwidth_limit_rule,
            qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_bandwidth_limit_rule_delete(self):
        self.verify_delete(
            self.proxy.delete_qos_bandwidth_limit_rule,
            qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
            False,
            input_path_args=["resource_or_id", QOS_POLICY_ID],
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_bandwidth_limit_rule_delete_ignore(self):
        self.verify_delete(
            self.proxy.delete_qos_bandwidth_limit_rule,
            qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
            True,
            input_path_args=["resource_or_id", QOS_POLICY_ID],
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_bandwidth_limit_rule_find(self):
        policy = qos_policy.QoSPolicy.new(id=QOS_POLICY_ID)
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_qos_bandwidth_limit_rule,
                      method_args=['rule_id', policy],
                      expected_args=[
                          qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
                          'rule_id'],
                      expected_kwargs={'ignore_missing': True,
                                       'qos_policy_id': QOS_POLICY_ID})

    def test_qos_bandwidth_limit_rule_get(self):
        self.verify_get(
            self.proxy.get_qos_bandwidth_limit_rule,
            qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_bandwidth_limit_rules(self):
        self.verify_list(
            self.proxy.qos_bandwidth_limit_rules,
            qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
            paginated=False,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_bandwidth_limit_rule_update(self):
        policy = qos_policy.QoSPolicy.new(id=QOS_POLICY_ID)
        self._verify2('openstack.proxy.BaseProxy._update',
                      self.proxy.update_qos_bandwidth_limit_rule,
                      method_args=['rule_id', policy],
                      method_kwargs={'foo': 'bar'},
                      expected_args=[
                          qos_bandwidth_limit_rule.QoSBandwidthLimitRule,
                          'rule_id'],
                      expected_kwargs={'qos_policy_id': QOS_POLICY_ID,
                                       'foo': 'bar'})

    def test_qos_dscp_marking_rule_create_attrs(self):
        self.verify_create(
            self.proxy.create_qos_dscp_marking_rule,
            qos_dscp_marking_rule.QoSDSCPMarkingRule,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_dscp_marking_rule_delete(self):
        self.verify_delete(
            self.proxy.delete_qos_dscp_marking_rule,
            qos_dscp_marking_rule.QoSDSCPMarkingRule,
            False,
            input_path_args=["resource_or_id", QOS_POLICY_ID],
            expected_path_args={'qos_policy_id': QOS_POLICY_ID},)

    def test_qos_dscp_marking_rule_delete_ignore(self):
        self.verify_delete(
            self.proxy.delete_qos_dscp_marking_rule,
            qos_dscp_marking_rule.QoSDSCPMarkingRule,
            True,
            input_path_args=["resource_or_id", QOS_POLICY_ID],
            expected_path_args={'qos_policy_id': QOS_POLICY_ID},
        )

    def test_qos_dscp_marking_rule_find(self):
        policy = qos_policy.QoSPolicy.new(id=QOS_POLICY_ID)
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_qos_dscp_marking_rule,
                      method_args=['rule_id', policy],
                      expected_args=[qos_dscp_marking_rule.QoSDSCPMarkingRule,
                                     'rule_id'],
                      expected_kwargs={'ignore_missing': True,
                                       'qos_policy_id': QOS_POLICY_ID})

    def test_qos_dscp_marking_rule_get(self):
        self.verify_get(
            self.proxy.get_qos_dscp_marking_rule,
            qos_dscp_marking_rule.QoSDSCPMarkingRule,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_dscp_marking_rules(self):
        self.verify_list(
            self.proxy.qos_dscp_marking_rules,
            qos_dscp_marking_rule.QoSDSCPMarkingRule,
            paginated=False,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_dscp_marking_rule_update(self):
        policy = qos_policy.QoSPolicy.new(id=QOS_POLICY_ID)
        self._verify2('openstack.proxy.BaseProxy._update',
                      self.proxy.update_qos_dscp_marking_rule,
                      method_args=['rule_id', policy],
                      method_kwargs={'foo': 'bar'},
                      expected_args=[
                          qos_dscp_marking_rule.QoSDSCPMarkingRule,
                          'rule_id'],
                      expected_kwargs={'qos_policy_id': QOS_POLICY_ID,
                                       'foo': 'bar'})

    def test_qos_minimum_bandwidth_rule_create_attrs(self):
        self.verify_create(
            self.proxy.create_qos_minimum_bandwidth_rule,
            qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_minimum_bandwidth_rule_delete(self):
        self.verify_delete(
            self.proxy.delete_qos_minimum_bandwidth_rule,
            qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
            False,
            input_path_args=["resource_or_id", QOS_POLICY_ID],
            expected_path_args={'qos_policy_id': QOS_POLICY_ID},)

    def test_qos_minimum_bandwidth_rule_delete_ignore(self):
        self.verify_delete(
            self.proxy.delete_qos_minimum_bandwidth_rule,
            qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
            True,
            input_path_args=["resource_or_id", QOS_POLICY_ID],
            expected_path_args={'qos_policy_id': QOS_POLICY_ID},
        )

    def test_qos_minimum_bandwidth_rule_find(self):
        policy = qos_policy.QoSPolicy.new(id=QOS_POLICY_ID)
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_qos_minimum_bandwidth_rule,
                      method_args=['rule_id', policy],
                      expected_args=[
                          qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
                          'rule_id'],
                      expected_kwargs={'ignore_missing': True,
                                       'qos_policy_id': QOS_POLICY_ID})

    def test_qos_minimum_bandwidth_rule_get(self):
        self.verify_get(
            self.proxy.get_qos_minimum_bandwidth_rule,
            qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_minimum_bandwidth_rules(self):
        self.verify_list(
            self.proxy.qos_minimum_bandwidth_rules,
            qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
            paginated=False,
            method_kwargs={'qos_policy': QOS_POLICY_ID},
            expected_kwargs={'qos_policy_id': QOS_POLICY_ID})

    def test_qos_minimum_bandwidth_rule_update(self):
        policy = qos_policy.QoSPolicy.new(id=QOS_POLICY_ID)
        self._verify2('openstack.proxy.BaseProxy._update',
                      self.proxy.update_qos_minimum_bandwidth_rule,
                      method_args=['rule_id', policy],
                      method_kwargs={'foo': 'bar'},
                      expected_args=[
                          qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule,
                          'rule_id'],
                      expected_kwargs={'qos_policy_id': QOS_POLICY_ID,
                                       'foo': 'bar'})

    def test_qos_policy_create_attrs(self):
        self.verify_create(self.proxy.create_qos_policy,
                           qos_policy.QoSPolicy)

    def test_qos_policy_delete(self):
        self.verify_delete(self.proxy.delete_qos_policy,
                           qos_policy.QoSPolicy, False)

    def test_qos_policy_delete_ignore(self):
        self.verify_delete(self.proxy.delete_qos_policy,
                           qos_policy.QoSPolicy, True)

    def test_qos_policy_find(self):
        self.verify_find(self.proxy.find_qos_policy, qos_policy.QoSPolicy)

    def test_qos_policy_get(self):
        self.verify_get(self.proxy.get_qos_policy, qos_policy.QoSPolicy)

    def test_qos_policies(self):
        self.verify_list(self.proxy.qos_policies, qos_policy.QoSPolicy,
                         paginated=False)

    def test_qos_policy_update(self):
        self.verify_update(self.proxy.update_qos_policy,
                           qos_policy.QoSPolicy)

    def test_qos_rule_type_find(self):
        self.verify_find(self.proxy.find_qos_rule_type,
                         qos_rule_type.QoSRuleType)

    def test_qos_rule_type_get(self):
        self.verify_get(self.proxy.get_qos_rule_type,
                        qos_rule_type.QoSRuleType)

    def test_qos_rule_types(self):
        self.verify_list(self.proxy.qos_rule_types,
                         qos_rule_type.QoSRuleType, paginated=False)

    def test_quota_delete(self):
        self.verify_delete(self.proxy.delete_quota, quota.Quota, False)

    def test_quota_delete_ignore(self):
        self.verify_delete(self.proxy.delete_quota, quota.Quota, True)

    def test_quota_get(self):
        self.verify_get(self.proxy.get_quota, quota.Quota)

    @mock.patch.object(proxy_base.BaseProxy, "_get_resource")
    def test_quota_get_details(self, mock_get):
        fake_quota = mock.Mock(project_id='PROJECT')
        mock_get.return_value = fake_quota
        self._verify2("openstack.proxy.BaseProxy._get",
                      self.proxy.get_quota,
                      method_args=['QUOTA_ID'],
                      method_kwargs={'details': True},
                      expected_args=[quota.QuotaDetails],
                      expected_kwargs={'project': fake_quota.id,
                                       'requires_id': False})
        mock_get.assert_called_once_with(quota.Quota, 'QUOTA_ID')

    @mock.patch.object(proxy_base.BaseProxy, "_get_resource")
    def test_quota_default_get(self, mock_get):
        fake_quota = mock.Mock(project_id='PROJECT')
        mock_get.return_value = fake_quota
        self._verify2("openstack.proxy.BaseProxy._get",
                      self.proxy.get_quota_default,
                      method_args=['QUOTA_ID'],
                      expected_args=[quota.QuotaDefault],
                      expected_kwargs={'project': fake_quota.id,
                                       'requires_id': False})
        mock_get.assert_called_once_with(quota.Quota, 'QUOTA_ID')

    def test_quotas(self):
        self.verify_list(self.proxy.quotas, quota.Quota,
                         paginated=False)

    def test_quota_update(self):
        self.verify_update(self.proxy.update_quota, quota.Quota)

    def test_rbac_policy_create_attrs(self):
        self.verify_create(self.proxy.create_rbac_policy,
                           rbac_policy.RBACPolicy)

    def test_rbac_policy_delete(self):
        self.verify_delete(self.proxy.delete_rbac_policy,
                           rbac_policy.RBACPolicy, False)

    def test_rbac_policy_delete_ignore(self):
        self.verify_delete(self.proxy.delete_rbac_policy,
                           rbac_policy.RBACPolicy, True)

    def test_rbac_policy_find(self):
        self.verify_find(self.proxy.find_rbac_policy,
                         rbac_policy.RBACPolicy)

    def test_rbac_policy_get(self):
        self.verify_get(self.proxy.get_rbac_policy,
                        rbac_policy.RBACPolicy)

    def test_rbac_policies(self):
        self.verify_list(self.proxy.rbac_policies,
                         rbac_policy.RBACPolicy, paginated=False)

    def test_rbac_policy_update(self):
        self.verify_update(self.proxy.update_rbac_policy,
                           rbac_policy.RBACPolicy)

    def test_router_create_attrs(self):
        self.verify_create(self.proxy.create_router, router.Router)

    def test_router_delete(self):
        self.verify_delete(self.proxy.delete_router, router.Router, False)

    def test_router_delete_ignore(self):
        self.verify_delete(self.proxy.delete_router, router.Router, True)

    def test_router_find(self):
        self.verify_find(self.proxy.find_router, router.Router)

    def test_router_get(self):
        self.verify_get(self.proxy.get_router, router.Router)

    def test_routers(self):
        self.verify_list(self.proxy.routers, router.Router,
                         paginated=False)

    def test_router_update(self):
        self.verify_update(self.proxy.update_router, router.Router)

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    @mock.patch.object(router.Router, 'add_interface')
    def test_add_interface_to_router_with_port(self, mock_add_interface,
                                               mock_get):
        x_router = router.Router.new(id="ROUTER_ID")
        mock_get.return_value = x_router
        self._verify("openstack.network.v2.router.Router.add_interface",
                     self.proxy.add_interface_to_router,
                     method_args=["FAKE_ROUTER"],
                     method_kwargs={"port_id": "PORT"},
                     expected_kwargs={"port_id": "PORT"})
        mock_get.assert_called_once_with(router.Router, "FAKE_ROUTER")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    @mock.patch.object(router.Router, 'add_interface')
    def test_add_interface_to_router_with_subnet(self, mock_add_interface,
                                                 mock_get):
        x_router = router.Router.new(id="ROUTER_ID")
        mock_get.return_value = x_router
        self._verify("openstack.network.v2.router.Router.add_interface",
                     self.proxy.add_interface_to_router,
                     method_args=["FAKE_ROUTER"],
                     method_kwargs={"subnet_id": "SUBNET"},
                     expected_kwargs={"subnet_id": "SUBNET"})
        mock_get.assert_called_once_with(router.Router, "FAKE_ROUTER")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    @mock.patch.object(router.Router, 'remove_interface')
    def test_remove_interface_from_router_with_port(self, mock_remove,
                                                    mock_get):
        x_router = router.Router.new(id="ROUTER_ID")
        mock_get.return_value = x_router
        self._verify("openstack.network.v2.router.Router.remove_interface",
                     self.proxy.remove_interface_from_router,
                     method_args=["FAKE_ROUTER"],
                     method_kwargs={"port_id": "PORT"},
                     expected_kwargs={"port_id": "PORT"})
        mock_get.assert_called_once_with(router.Router, "FAKE_ROUTER")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    @mock.patch.object(router.Router, 'remove_interface')
    def test_remove_interface_from_router_with_subnet(self, mock_remove,
                                                      mock_get):
        x_router = router.Router.new(id="ROUTER_ID")
        mock_get.return_value = x_router
        self._verify("openstack.network.v2.router.Router.remove_interface",
                     self.proxy.remove_interface_from_router,
                     method_args=["FAKE_ROUTER"],
                     method_kwargs={"subnet_id": "SUBNET"},
                     expected_kwargs={"subnet_id": "SUBNET"})
        mock_get.assert_called_once_with(router.Router, "FAKE_ROUTER")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    @mock.patch.object(router.Router, 'add_gateway')
    def test_add_gateway_to_router(self, mock_add, mock_get):
        x_router = router.Router.new(id="ROUTER_ID")
        mock_get.return_value = x_router
        self._verify("openstack.network.v2.router.Router.add_gateway",
                     self.proxy.add_gateway_to_router,
                     method_args=["FAKE_ROUTER"],
                     method_kwargs={"foo": "bar"},
                     expected_kwargs={"foo": "bar"})
        mock_get.assert_called_once_with(router.Router, "FAKE_ROUTER")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    @mock.patch.object(router.Router, 'remove_gateway')
    def test_remove_gateway_from_router(self, mock_remove, mock_get):
        x_router = router.Router.new(id="ROUTER_ID")
        mock_get.return_value = x_router
        self._verify("openstack.network.v2.router.Router.remove_gateway",
                     self.proxy.remove_gateway_from_router,
                     method_args=["FAKE_ROUTER"],
                     method_kwargs={"foo": "bar"},
                     expected_kwargs={"foo": "bar"})
        mock_get.assert_called_once_with(router.Router, "FAKE_ROUTER")

    def test_router_hosting_l3_agents_list(self):
        self.verify_list(
            self.proxy.routers_hosting_l3_agents,
            agent.RouterL3Agent,
            paginated=False,
            method_kwargs={'router': ROUTER_ID},
            expected_kwargs={'router_id': ROUTER_ID},
        )

    def test_agent_hosted_routers_list(self):
        self.verify_list(
            self.proxy.agent_hosted_routers,
            router.L3AgentRouter,
            paginated=False,
            method_kwargs={'agent': AGENT_ID},
            expected_kwargs={'agent_id': AGENT_ID},
        )

    def test_security_group_create_attrs(self):
        self.verify_create(self.proxy.create_security_group,
                           security_group.SecurityGroup)

    def test_security_group_delete(self):
        self.verify_delete(self.proxy.delete_security_group,
                           security_group.SecurityGroup, False)

    def test_security_group_delete_ignore(self):
        self.verify_delete(self.proxy.delete_security_group,
                           security_group.SecurityGroup, True)

    def test_security_group_find(self):
        self.verify_find(self.proxy.find_security_group,
                         security_group.SecurityGroup)

    def test_security_group_get(self):
        self.verify_get(self.proxy.get_security_group,
                        security_group.SecurityGroup)

    def test_security_groups(self):
        self.verify_list(self.proxy.security_groups,
                         security_group.SecurityGroup,
                         paginated=False)

    def test_security_group_update(self):
        self.verify_update(self.proxy.update_security_group,
                           security_group.SecurityGroup)

    @deprecation.fail_if_not_removed
    def test_security_group_open_port(self):
        mock_class = 'openstack.network.v2._proxy.Proxy'
mock_method = mock_class + '.create_security_group_rule' expected_result = 'result' sgid = 1 port = 2 with mock.patch(mock_method) as mocked: mocked.return_value = expected_result actual = self.proxy.security_group_open_port(sgid, port) self.assertEqual(expected_result, actual) expected_args = { 'direction': 'ingress', 'protocol': 'tcp', 'remote_ip_prefix': '0.0.0.0/0', 'port_range_max': port, 'security_group_id': sgid, 'port_range_min': port, 'ethertype': 'IPv4', } mocked.assert_called_with(**expected_args) @deprecation.fail_if_not_removed def test_security_group_allow_ping(self): mock_class = 'openstack.network.v2._proxy.Proxy' mock_method = mock_class + '.create_security_group_rule' expected_result = 'result' sgid = 1 with mock.patch(mock_method) as mocked: mocked.return_value = expected_result actual = self.proxy.security_group_allow_ping(sgid) self.assertEqual(expected_result, actual) expected_args = { 'direction': 'ingress', 'protocol': 'icmp', 'remote_ip_prefix': '0.0.0.0/0', 'port_range_max': None, 'security_group_id': sgid, 'port_range_min': None, 'ethertype': 'IPv4', } mocked.assert_called_with(**expected_args) def test_security_group_rule_create_attrs(self): self.verify_create(self.proxy.create_security_group_rule, security_group_rule.SecurityGroupRule) def test_security_group_rule_delete(self): self.verify_delete(self.proxy.delete_security_group_rule, security_group_rule.SecurityGroupRule, False) def test_security_group_rule_delete_ignore(self): self.verify_delete(self.proxy.delete_security_group_rule, security_group_rule.SecurityGroupRule, True) def test_security_group_rule_find(self): self.verify_find(self.proxy.find_security_group_rule, security_group_rule.SecurityGroupRule) def test_security_group_rule_get(self): self.verify_get(self.proxy.get_security_group_rule, security_group_rule.SecurityGroupRule) def test_security_group_rules(self): self.verify_list(self.proxy.security_group_rules, security_group_rule.SecurityGroupRule, paginated=False) def 
test_segment_create_attrs(self): self.verify_create(self.proxy.create_segment, segment.Segment) def test_segment_delete(self): self.verify_delete(self.proxy.delete_segment, segment.Segment, False) def test_segment_delete_ignore(self): self.verify_delete(self.proxy.delete_segment, segment.Segment, True) def test_segment_find(self): self.verify_find(self.proxy.find_segment, segment.Segment) def test_segment_get(self): self.verify_get(self.proxy.get_segment, segment.Segment) def test_segments(self): self.verify_list(self.proxy.segments, segment.Segment, paginated=False) def test_segment_update(self): self.verify_update(self.proxy.update_segment, segment.Segment) def test_subnet_create_attrs(self): self.verify_create(self.proxy.create_subnet, subnet.Subnet) def test_subnet_delete(self): self.verify_delete(self.proxy.delete_subnet, subnet.Subnet, False) def test_subnet_delete_ignore(self): self.verify_delete(self.proxy.delete_subnet, subnet.Subnet, True) def test_subnet_find(self): self.verify_find(self.proxy.find_subnet, subnet.Subnet) def test_subnet_get(self): self.verify_get(self.proxy.get_subnet, subnet.Subnet) def test_subnets(self): self.verify_list(self.proxy.subnets, subnet.Subnet, paginated=False) def test_subnet_update(self): self.verify_update(self.proxy.update_subnet, subnet.Subnet) def test_subnet_pool_create_attrs(self): self.verify_create(self.proxy.create_subnet_pool, subnet_pool.SubnetPool) def test_subnet_pool_delete(self): self.verify_delete(self.proxy.delete_subnet_pool, subnet_pool.SubnetPool, False) def test_subnet_pool_delete_ignore(self): self.verify_delete(self.proxy.delete_subnet_pool, subnet_pool.SubnetPool, True) def test_subnet_pool_find(self): self.verify_find(self.proxy.find_subnet_pool, subnet_pool.SubnetPool) def test_subnet_pool_get(self): self.verify_get(self.proxy.get_subnet_pool, subnet_pool.SubnetPool) def test_subnet_pools(self): self.verify_list(self.proxy.subnet_pools, subnet_pool.SubnetPool, paginated=False) def 
test_subnet_pool_update(self): self.verify_update(self.proxy.update_subnet_pool, subnet_pool.SubnetPool) def test_vpn_service_create_attrs(self): self.verify_create(self.proxy.create_vpn_service, vpn_service.VPNService) def test_vpn_service_delete(self): self.verify_delete(self.proxy.delete_vpn_service, vpn_service.VPNService, False) def test_vpn_service_delete_ignore(self): self.verify_delete(self.proxy.delete_vpn_service, vpn_service.VPNService, True) def test_vpn_service_find(self): self.verify_find(self.proxy.find_vpn_service, vpn_service.VPNService) def test_vpn_service_get(self): self.verify_get(self.proxy.get_vpn_service, vpn_service.VPNService) def test_vpn_services(self): self.verify_list(self.proxy.vpn_services, vpn_service.VPNService, paginated=False) def test_vpn_service_update(self): self.verify_update(self.proxy.update_vpn_service, vpn_service.VPNService) def test_service_provider(self): self.verify_list(self.proxy.service_providers, service_provider.ServiceProvider, paginated=False) def test_auto_allocated_topology_get(self): self.verify_get(self.proxy.get_auto_allocated_topology, auto_allocated_topology.AutoAllocatedTopology) def test_auto_allocated_topology_delete(self): self.verify_delete(self.proxy.delete_auto_allocated_topology, auto_allocated_topology.AutoAllocatedTopology, False) def test_auto_allocated_topology_delete_ignore(self): self.verify_delete(self.proxy.delete_auto_allocated_topology, auto_allocated_topology.AutoAllocatedTopology, True) def test_validate_topology(self): self.verify_get(self.proxy.validate_auto_allocated_topology, auto_allocated_topology.ValidateTopology, value=[mock.sentinel.project_id], expected_args=[ auto_allocated_topology.ValidateTopology], expected_kwargs={"project": mock.sentinel.project_id, "requires_id": False}) def test_set_tags(self): x_network = network.Network.new(id='NETWORK_ID') self._verify('openstack.network.v2.network.Network.set_tags', self.proxy.set_tags, method_args=[x_network, ['TAG1', 'TAG2']], 
expected_args=[['TAG1', 'TAG2']], expected_result=mock.sentinel.result_set_tags) @mock.patch('openstack.network.v2.network.Network.set_tags') def test_set_tags_resource_without_tag_suport(self, mock_set_tags): no_tag_resource = object() self.assertRaises(exceptions.InvalidRequest, self.proxy.set_tags, no_tag_resource, ['TAG1', 'TAG2']) self.assertEqual(0, mock_set_tags.call_count) openstacksdk-0.11.3/openstack/tests/unit/network/v2/test_port.py0000666000175100017510000001307413236151340025043 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.network.v2 import port

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'allowed_address_pairs': [{'2': 2}],
    'binding:host_id': '3',
    'binding:profile': {'4': 4},
    'binding:vif_details': {'5': 5},
    'binding:vif_type': '6',
    'binding:vnic_type': '7',
    'created_at': '2016-03-09T12:14:57.233772',
    'data_plane_status': '32',
    'description': '8',
    'device_id': '9',
    'device_owner': '10',
    'dns_assignment': [{'11': 11}],
    'dns_name': '12',
    'extra_dhcp_opts': [{'13': 13}],
    'fixed_ips': [{'14': '14'}],
    'id': IDENTIFIER,
    'ip_address': '15',
    'mac_address': '16',
    'name': '17',
    'network_id': '18',
    'opt_name': '19',
    'opt_value': '20',
    'port_security_enabled': True,
    'qos_policy_id': '21',
    'revision_number': 22,
    'security_groups': ['23'],
    'subnet_id': '24',
    'status': '25',
    'tenant_id': '26',
    'trunk_details': {
        'trunk_id': '27',
        'sub_ports': [{
            'port_id': '28',
            'segmentation_id': 29,
            'segmentation_type': '30',
            'mac_address': '31'}]},
    'updated_at': '2016-07-09T12:14:57.233772',
}


class TestPort(testtools.TestCase):

    def test_basic(self):
        sot = port.Port()
        self.assertEqual('port', sot.resource_key)
        self.assertEqual('ports', sot.resources_key)
        self.assertEqual('/ports', sot.base_path)
        self.assertEqual('network', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"description": "description",
                              "device_id": "device_id",
                              "device_owner": "device_owner",
                              "fixed_ips": "fixed_ips",
                              "ip_address": "ip_address",
                              "mac_address": "mac_address",
                              "name": "name",
                              "network_id": "network_id",
                              "status": "status",
                              "subnet_id": "subnet_id",
                              "is_admin_state_up": "admin_state_up",
                              "is_port_security_enabled":
                                  "port_security_enabled",
                              "project_id": "tenant_id",
                              "limit": "limit",
                              "marker": "marker",
                              "any_tags": "tags-any",
                              "not_any_tags": "not-tags-any",
                              "not_tags": "not-tags",
                              "tags": "tags"},
                             sot._query_mapping._mapping)

    def test_make_it(self):
        sot = port.Port(**EXAMPLE)
        self.assertTrue(sot.is_admin_state_up)
        self.assertEqual(EXAMPLE['allowed_address_pairs'],
                         sot.allowed_address_pairs)
        self.assertEqual(EXAMPLE['binding:host_id'], sot.binding_host_id)
        self.assertEqual(EXAMPLE['binding:profile'], sot.binding_profile)
        self.assertEqual(EXAMPLE['binding:vif_details'],
                         sot.binding_vif_details)
        self.assertEqual(EXAMPLE['binding:vif_type'], sot.binding_vif_type)
        self.assertEqual(EXAMPLE['binding:vnic_type'], sot.binding_vnic_type)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['data_plane_status'], sot.data_plane_status)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['device_id'], sot.device_id)
        self.assertEqual(EXAMPLE['device_owner'], sot.device_owner)
        self.assertEqual(EXAMPLE['dns_assignment'], sot.dns_assignment)
        self.assertEqual(EXAMPLE['dns_name'], sot.dns_name)
        self.assertEqual(EXAMPLE['extra_dhcp_opts'], sot.extra_dhcp_opts)
        self.assertEqual(EXAMPLE['fixed_ips'], sot.fixed_ips)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['ip_address'], sot.ip_address)
        self.assertEqual(EXAMPLE['mac_address'], sot.mac_address)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['network_id'], sot.network_id)
        self.assertEqual(EXAMPLE['opt_name'], sot.option_name)
        self.assertEqual(EXAMPLE['opt_value'], sot.option_value)
        self.assertTrue(sot.is_port_security_enabled)
        self.assertEqual(EXAMPLE['qos_policy_id'], sot.qos_policy_id)
        self.assertEqual(EXAMPLE['revision_number'], sot.revision_number)
        self.assertEqual(EXAMPLE['security_groups'], sot.security_group_ids)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['subnet_id'], sot.subnet_id)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['trunk_details'], sot.trunk_details)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)
openstacksdk-0.11.3/openstack/tests/unit/network/__init__.py0000666000175100017510000000000013236151340024211 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/test_utils.py0000666000175100017510000000676113236151340023204 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import mock
import sys

import testtools
import fixtures

from openstack import utils


class Test_enable_logging(testtools.TestCase):

    def setUp(self):
        super(Test_enable_logging, self).setUp()
        self.openstack_logger = mock.Mock()
        self.openstack_logger.handlers = []
        self.ksa_logger_1 = mock.Mock()
        self.ksa_logger_1.handlers = []
        self.ksa_logger_2 = mock.Mock()
        self.ksa_logger_2.handlers = []
        self.ksa_logger_3 = mock.Mock()
        self.ksa_logger_3.handlers = []
        self.fake_get_logger = mock.Mock()
        self.fake_get_logger.side_effect = [
            self.openstack_logger,
            self.ksa_logger_1,
            self.ksa_logger_2,
            self.ksa_logger_3
        ]
        self.useFixture(
            fixtures.MonkeyPatch('logging.getLogger', self.fake_get_logger))

    def _console_tests(self, level, debug, stream):
        utils.enable_logging(debug=debug, stream=stream)

        self.assertEqual(self.openstack_logger.addHandler.call_count, 1)
        self.openstack_logger.setLevel.assert_called_with(level)

    def _file_tests(self, level, debug):
        file_handler = mock.Mock()
        self.useFixture(
            fixtures.MonkeyPatch('logging.FileHandler', file_handler))
        fake_path = "fake/path.log"

        utils.enable_logging(debug=debug, path=fake_path)

        file_handler.assert_called_with(fake_path)
        self.assertEqual(self.openstack_logger.addHandler.call_count, 1)
        self.openstack_logger.setLevel.assert_called_with(level)

    def test_none(self):
        utils.enable_logging(debug=True)

        self.fake_get_logger.assert_has_calls([])
        self.openstack_logger.setLevel.assert_called_with(logging.DEBUG)
        self.assertEqual(self.openstack_logger.addHandler.call_count, 1)
        self.assertIsInstance(
            self.openstack_logger.addHandler.call_args_list[0][0][0],
            logging.StreamHandler)

    def test_debug_console_stderr(self):
        self._console_tests(logging.DEBUG, True, sys.stderr)

    def test_warning_console_stderr(self):
        self._console_tests(logging.INFO, False, sys.stderr)

    def test_debug_console_stdout(self):
        self._console_tests(logging.DEBUG, True, sys.stdout)

    def test_warning_console_stdout(self):
        self._console_tests(logging.INFO, False, sys.stdout)

    def test_debug_file(self):
        self._file_tests(logging.DEBUG, True)

    def test_warning_file(self):
        self._file_tests(logging.INFO, False)


class Test_urljoin(testtools.TestCase):

    def test_strings(self):
        root = "http://www.example.com"
        leaves = "foo", "bar"

        result = utils.urljoin(root, *leaves)

        self.assertEqual(result, "http://www.example.com/foo/bar")

    def test_with_none(self):
        root = "http://www.example.com"
        leaves = "foo", None

        result = utils.urljoin(root, *leaves)

        self.assertEqual(result, "http://www.example.com/foo/")

openstacksdk-0.11.3/openstack/tests/unit/block_storage/0000775000175100017510000000000013236151501023234 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/block_storage/v2/0000775000175100017510000000000013236151501023563 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/block_storage/v2/test_snapshot.py0000666000175100017510000000621513236151340027042 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.block_storage.v2 import snapshot

FAKE_ID = "ffa9bc5e-1172-4021-acaf-cdcd78a9584d"

SNAPSHOT = {
    "status": "creating",
    "description": "Daily backup",
    "created_at": "2015-03-09T12:14:57.233772",
    "metadata": {},
    "volume_id": "5aa119a8-d25b-45a7-8d1b-88e127885635",
    "size": 1,
    "id": FAKE_ID,
    "name": "snap-001",
    "force": "true",
}

DETAILS = {
    "os-extended-snapshot-attributes:progress": "100%",
    "os-extended-snapshot-attributes:project_id":
        "0c2eba2c5af04d3f9e9d0d410b371fde"
}

DETAILED_SNAPSHOT = SNAPSHOT.copy()
DETAILED_SNAPSHOT.update(**DETAILS)


class TestSnapshot(testtools.TestCase):

    def test_basic(self):
        sot = snapshot.Snapshot(SNAPSHOT)
        self.assertEqual("snapshot", sot.resource_key)
        self.assertEqual("snapshots", sot.resources_key)
        self.assertEqual("/snapshots", sot.base_path)
        self.assertEqual("volume", sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"name": "name",
                              "status": "status",
                              "all_tenants": "all_tenants",
                              "volume_id": "volume_id",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_create_basic(self):
        sot = snapshot.Snapshot(**SNAPSHOT)
        self.assertEqual(SNAPSHOT["id"], sot.id)
        self.assertEqual(SNAPSHOT["status"], sot.status)
        self.assertEqual(SNAPSHOT["created_at"], sot.created_at)
        self.assertEqual(SNAPSHOT["metadata"], sot.metadata)
        self.assertEqual(SNAPSHOT["volume_id"], sot.volume_id)
        self.assertEqual(SNAPSHOT["size"], sot.size)
        self.assertEqual(SNAPSHOT["name"], sot.name)
        self.assertTrue(sot.is_forced)


class TestSnapshotDetail(testtools.TestCase):

    def test_basic(self):
        sot = snapshot.SnapshotDetail(DETAILED_SNAPSHOT)
        self.assertIsInstance(sot, snapshot.Snapshot)
        self.assertEqual("/snapshots/detail", sot.base_path)

    def test_create_detailed(self):
        sot = snapshot.SnapshotDetail(**DETAILED_SNAPSHOT)
        self.assertEqual(
            DETAILED_SNAPSHOT["os-extended-snapshot-attributes:progress"],
            sot.progress)
        self.assertEqual(
            DETAILED_SNAPSHOT["os-extended-snapshot-attributes:project_id"],
            sot.project_id)

openstacksdk-0.11.3/openstack/tests/unit/block_storage/v2/__init__.py0000666000175100017510000000000013236151340025665 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/block_storage/v2/test_volume.py0000666000175100017510000001202613236151340026507 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy

import testtools

from openstack.block_storage.v2 import volume

FAKE_ID = "6685584b-1eac-4da6-b5c3-555430cf68ff"

IMAGE_METADATA = {
    'container_format': 'bare',
    'min_ram': '64',
    'disk_format': u'qcow2',
    'image_name': 'TestVM',
    'image_id': '625d4f2c-cf67-4af3-afb6-c7220f766947',
    'checksum': '64d7c1cd2b6f60c92c14662941cb7913',
    'min_disk': '0',
    u'size': '13167616'
}

VOLUME = {
    "status": "creating",
    "name": "my_volume",
    "attachments": [],
    "availability_zone": "nova",
    "bootable": "false",
    "created_at": "2015-03-09T12:14:57.233772",
    "description": "something",
    "volume_type": "some_type",
    "snapshot_id": "93c2e2aa-7744-4fd6-a31a-80c4726b08d7",
    "source_volid": None,
    "imageRef": "some_image",
    "metadata": {},
    "volume_image_metadata": IMAGE_METADATA,
    "id": FAKE_ID,
    "size": 10
}

DETAILS = {
    "os-vol-host-attr:host": "127.0.0.1",
    "os-vol-tenant-attr:tenant_id": "some tenant",
    "os-vol-mig-status-attr:migstat": "done",
    "os-vol-mig-status-attr:name_id": "93c2e2aa-7744-4fd6-a31a-80c4726b08d7",
    "replication_status": "nah",
    "os-volume-replication:extended_status": "really nah",
    "consistencygroup_id": "123asf-asdf123",
    "os-volume-replication:driver_data": "ahasadfasdfasdfasdfsdf",
    "snapshot_id": "93c2e2aa-7744-4fd6-a31a-80c4726b08d7",
    "encrypted": "false",
}

VOLUME_DETAIL = copy.copy(VOLUME)
VOLUME_DETAIL.update(DETAILS)


class TestVolume(testtools.TestCase):

    def test_basic(self):
        sot = volume.Volume(VOLUME)
        self.assertEqual("volume", sot.resource_key)
        self.assertEqual("volumes", sot.resources_key)
        self.assertEqual("/volumes", sot.base_path)
        self.assertEqual("volume", sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"name": "name",
                              "status": "status",
                              "all_tenants": "all_tenants",
                              "project_id": "project_id",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_create(self):
        sot = volume.Volume(**VOLUME)
        self.assertEqual(VOLUME["id"], sot.id)
        self.assertEqual(VOLUME["status"], sot.status)
        self.assertEqual(VOLUME["attachments"], sot.attachments)
        self.assertEqual(VOLUME["availability_zone"], sot.availability_zone)
        self.assertFalse(sot.is_bootable)
        self.assertEqual(VOLUME["created_at"], sot.created_at)
        self.assertEqual(VOLUME["description"], sot.description)
        self.assertEqual(VOLUME["volume_type"], sot.volume_type)
        self.assertEqual(VOLUME["snapshot_id"], sot.snapshot_id)
        self.assertEqual(VOLUME["source_volid"], sot.source_volume_id)
        self.assertEqual(VOLUME["metadata"], sot.metadata)
        self.assertEqual(VOLUME["volume_image_metadata"],
                         sot.volume_image_metadata)
        self.assertEqual(VOLUME["size"], sot.size)
        self.assertEqual(VOLUME["imageRef"], sot.image_id)


class TestVolumeDetail(testtools.TestCase):

    def test_basic(self):
        sot = volume.VolumeDetail(VOLUME_DETAIL)
        self.assertIsInstance(sot, volume.Volume)
        self.assertEqual("/volumes/detail", sot.base_path)

    def test_create(self):
        sot = volume.VolumeDetail(**VOLUME_DETAIL)
        self.assertEqual(VOLUME_DETAIL["os-vol-host-attr:host"], sot.host)
        self.assertEqual(VOLUME_DETAIL["os-vol-tenant-attr:tenant_id"],
                         sot.project_id)
        self.assertEqual(VOLUME_DETAIL["os-vol-mig-status-attr:migstat"],
                         sot.migration_status)
        self.assertEqual(VOLUME_DETAIL["os-vol-mig-status-attr:name_id"],
                         sot.migration_id)
        self.assertEqual(VOLUME_DETAIL["replication_status"],
                         sot.replication_status)
        self.assertEqual(
            VOLUME_DETAIL["os-volume-replication:extended_status"],
            sot.extended_replication_status)
        self.assertEqual(VOLUME_DETAIL["consistencygroup_id"],
                         sot.consistency_group_id)
        self.assertEqual(VOLUME_DETAIL["os-volume-replication:driver_data"],
                         sot.replication_driver_data)
        self.assertFalse(sot.is_encrypted)

openstacksdk-0.11.3/openstack/tests/unit/block_storage/v2/test_type.py0000666000175100017510000000312413236151340026160 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.block_storage.v2 import type

FAKE_ID = "6685584b-1eac-4da6-b5c3-555430cf68ff"

TYPE = {
    "extra_specs": {
        "capabilities": "gpu"
    },
    "id": FAKE_ID,
    "name": "SSD"
}


class TestType(testtools.TestCase):

    def test_basic(self):
        sot = type.Type(**TYPE)
        self.assertEqual("volume_type", sot.resource_key)
        self.assertEqual("volume_types", sot.resources_key)
        self.assertEqual("/types", sot.base_path)
        self.assertEqual("volume", sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertFalse(sot.allow_update)

    def test_new(self):
        sot = type.Type.new(id=FAKE_ID)
        self.assertEqual(FAKE_ID, sot.id)

    def test_create(self):
        sot = type.Type(**TYPE)
        self.assertEqual(TYPE["id"], sot.id)
        self.assertEqual(TYPE["extra_specs"], sot.extra_specs)
        self.assertEqual(TYPE["name"], sot.name)

openstacksdk-0.11.3/openstack/tests/unit/block_storage/v2/test_proxy.py0000666000175100017510000000712413236151340026364 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.block_storage.v2 import _proxy
from openstack.block_storage.v2 import snapshot
from openstack.block_storage.v2 import stats
from openstack.block_storage.v2 import type
from openstack.block_storage.v2 import volume
from openstack.tests.unit import test_proxy_base


class TestVolumeProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestVolumeProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_snapshot_get(self):
        self.verify_get(self.proxy.get_snapshot, snapshot.Snapshot)

    def test_snapshots_detailed(self):
        self.verify_list(self.proxy.snapshots, snapshot.SnapshotDetail,
                         paginated=True,
                         method_kwargs={"details": True, "query": 1},
                         expected_kwargs={"query": 1})

    def test_snapshots_not_detailed(self):
        self.verify_list(self.proxy.snapshots, snapshot.Snapshot,
                         paginated=True,
                         method_kwargs={"details": False, "query": 1},
                         expected_kwargs={"query": 1})

    def test_snapshot_create_attrs(self):
        self.verify_create(self.proxy.create_snapshot, snapshot.Snapshot)

    def test_snapshot_delete(self):
        self.verify_delete(self.proxy.delete_snapshot,
                           snapshot.Snapshot, False)

    def test_snapshot_delete_ignore(self):
        self.verify_delete(self.proxy.delete_snapshot,
                           snapshot.Snapshot, True)

    def test_type_get(self):
        self.verify_get(self.proxy.get_type, type.Type)

    def test_types(self):
        self.verify_list(self.proxy.types, type.Type, paginated=False)

    def test_type_create_attrs(self):
        self.verify_create(self.proxy.create_type, type.Type)

    def test_type_delete(self):
        self.verify_delete(self.proxy.delete_type, type.Type, False)

    def test_type_delete_ignore(self):
        self.verify_delete(self.proxy.delete_type, type.Type, True)

    def test_volume_get(self):
        self.verify_get(self.proxy.get_volume, volume.Volume)

    def test_volumes_detailed(self):
        self.verify_list(self.proxy.volumes, volume.VolumeDetail,
                         paginated=True,
                         method_kwargs={"details": True, "query": 1},
                         expected_kwargs={"query": 1})

    def test_volumes_not_detailed(self):
        self.verify_list(self.proxy.volumes, volume.Volume,
                         paginated=True,
                         method_kwargs={"details": False, "query": 1},
                         expected_kwargs={"query": 1})

    def test_volume_create_attrs(self):
        self.verify_create(self.proxy.create_volume, volume.Volume)

    def test_volume_delete(self):
        self.verify_delete(self.proxy.delete_volume, volume.Volume, False)

    def test_volume_delete_ignore(self):
        self.verify_delete(self.proxy.delete_volume, volume.Volume, True)

    def test_backend_pools(self):
        self.verify_list(self.proxy.backend_pools, stats.Pools,
                         paginated=False)

openstacksdk-0.11.3/openstack/tests/unit/block_storage/__init__.py0000666000175100017510000000000013236151340025336 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/block_storage/test_block_storage_service.py0000666000175100017510000000214013236151340031203 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools from openstack.block_storage import block_storage_service class TestBlockStorageService(testtools.TestCase): def test_service(self): sot = block_storage_service.BlockStorageService() self.assertEqual("volume", sot.service_type) self.assertEqual("public", sot.interface) self.assertIsNone(sot.region) self.assertIsNone(sot.service_name) self.assertEqual(1, len(sot.valid_versions)) self.assertEqual("v2", sot.valid_versions[0].module) self.assertEqual("v2", sot.valid_versions[0].path) openstacksdk-0.11.3/openstack/tests/unit/cloud/0000775000175100017510000000000013236151501021524 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/cloud/test_floating_ip_common.py0000666000175100017510000001773713236151340027022 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip_common ---------------------------------- Tests floating IP resource methods for Neutron and Nova-network. 
""" from mock import patch from openstack.cloud import meta from openstack.cloud import OpenStackCloud from openstack.tests import fakes from openstack.tests.unit import base class TestFloatingIP(base.TestCase): @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_attach_ip_to_server') @patch.object(OpenStackCloud, 'available_floating_ip') def test_add_auto_ip( self, mock_available_floating_ip, mock_attach_ip_to_server, mock_get_floating_ip): server_dict = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={} ) floating_ip_dict = { "id": "this-is-a-floating-ip-id", "fixed_ip_address": None, "internal_network": None, "floating_ip_address": "203.0.113.29", "network": "this-is-a-net-or-pool-id", "attached": False, "status": "ACTIVE" } mock_available_floating_ip.return_value = floating_ip_dict self.cloud.add_auto_ip(server=server_dict) mock_attach_ip_to_server.assert_called_with( timeout=60, wait=False, server=server_dict, floating_ip=floating_ip_dict, skip_attach=False) @patch.object(OpenStackCloud, '_add_ip_from_pool') def test_add_ips_to_server_pool(self, mock_add_ip_from_pool): server_dict = fakes.make_fake_server( server_id='romeo', name='test-server', status="ACTIVE", addresses={}) pool = 'nova' self.cloud.add_ips_to_server(server_dict, ip_pool=pool) mock_add_ip_from_pool.assert_called_with( server_dict, pool, reuse=True, wait=False, timeout=60, fixed_address=None, nat_destination=None) @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_ipv6_only( self, mock_add_auto_ip, mock_get_floating_ip, mock_has_service): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True mock_has_service.return_value = False server = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={ 'private': [{ 'addr': 
"10.223.160.141", 'version': 4 }], 'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42', u'OS-EXT-IPS:type': u'fixed', 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } ) server_dict = meta.add_server_interfaces(self.cloud, server) new_server = self.cloud.add_ips_to_server(server=server_dict) mock_get_floating_ip.assert_not_called() mock_add_auto_ip.assert_not_called() self.assertEqual( new_server['interface_ip'], '2001:4800:7819:103:be76:4eff:fe05:8525') self.assertEqual(new_server['private_v4'], '10.223.160.141') self.assertEqual(new_server['public_v4'], '') self.assertEqual( new_server['public_v6'], '2001:4800:7819:103:be76:4eff:fe05:8525') @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_rackspace( self, mock_add_auto_ip, mock_get_floating_ip, mock_has_service): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True mock_has_service.return_value = False server = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } ) server_dict = meta.add_server_interfaces(self.cloud, server) new_server = self.cloud.add_ips_to_server(server=server_dict) mock_get_floating_ip.assert_not_called() mock_add_auto_ip.assert_not_called() self.assertEqual( new_server['interface_ip'], '2001:4800:7819:103:be76:4eff:fe05:8525') @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_rackspace_local_ipv4( self, mock_add_auto_ip, mock_get_floating_ip, mock_has_service): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = False 
        mock_has_service.return_value = False
        server = fakes.make_fake_server(
            server_id='server-id', name='test-server', status="ACTIVE",
            addresses={
                'private': [{
                    'addr': "10.223.160.141", 'version': 4
                }],
                'public': [{
                    'addr': "104.130.246.91", 'version': 4
                }, {
                    'addr': "2001:4800:7819:103:be76:4eff:fe05:8525",
                    'version': 6
                }]
            }
        )
        server_dict = meta.add_server_interfaces(self.cloud, server)

        new_server = self.cloud.add_ips_to_server(server=server_dict)
        mock_get_floating_ip.assert_not_called()
        mock_add_auto_ip.assert_not_called()
        self.assertEqual(new_server['interface_ip'], '104.130.246.91')

    @patch.object(OpenStackCloud, 'add_ip_list')
    def test_add_ips_to_server_ip_list(self, mock_add_ip_list):
        server_dict = fakes.make_fake_server(
            server_id='server-id', name='test-server', status="ACTIVE",
            addresses={})
        ips = ['203.0.113.29', '172.24.4.229']

        self.cloud.add_ips_to_server(server_dict, ips=ips)

        mock_add_ip_list.assert_called_with(
            server_dict, ips, wait=False, timeout=60, fixed_address=None)

    @patch.object(OpenStackCloud, '_needs_floating_ip')
    @patch.object(OpenStackCloud, '_add_auto_ip')
    def test_add_ips_to_server_auto_ip(
            self, mock_add_auto_ip, mock_needs_floating_ip):
        server_dict = fakes.make_fake_server(
            server_id='server-id', name='test-server', status="ACTIVE",
            addresses={})
        # TODO(mordred) REMOVE THIS MOCK WHEN THE NEXT PATCH LANDS.
        # SERIOUSLY THIS TIME. THE NEXT PATCH SHOULD ADD MOCKS FOR
        # list_ports AND list_networks AND list_subnets - BUT THAT WOULD
        # NOT ACTUALLY BE RELATED TO THIS PATCH, SO DO IT NEXT PATCH.
        mock_needs_floating_ip.return_value = True

        self.cloud.add_ips_to_server(server_dict)

        mock_add_auto_ip.assert_called_with(
            server_dict, wait=False, timeout=60, reuse=True)

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_cluster_templates.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import munch

import openstack.cloud
import testtools

from openstack.tests.unit import base


cluster_template_obj = munch.Munch(
    apiserver_port=12345,
    cluster_distro='fake-distro',
    coe='fake-coe',
    created_at='fake-date',
    dns_nameserver='8.8.8.8',
    docker_volume_size=1,
    external_network_id='public',
    fixed_network=None,
    flavor_id='fake-flavor',
    https_proxy=None,
    human_id=None,
    image_id='fake-image',
    insecure_registry='https://192.168.0.10',
    keypair_id='fake-key',
    labels={},
    links={},
    master_flavor_id=None,
    name='fake-cluster-template',
    network_driver='fake-driver',
    no_proxy=None,
    public=False,
    registry_enabled=False,
    server_type='vm',
    tls_disabled=False,
    updated_at=None,
    uuid='fake-uuid',
    volume_driver=None,
)


class TestClusterTemplates(base.RequestsMockTestCase):

    def test_list_cluster_templates_without_detail(self):
        self.register_uris([dict(
            method='GET',
            uri='https://container-infra.example.com/v1/baymodels/detail',
            json=dict(baymodels=[cluster_template_obj.toDict()]))])
        cluster_templates_list = self.cloud.list_cluster_templates()
        self.assertEqual(
            cluster_templates_list[0],
            self.cloud._normalize_cluster_template(cluster_template_obj))
        self.assert_calls()

    def test_list_cluster_templates_with_detail(self):
        self.register_uris([dict(
            method='GET',
            uri='https://container-infra.example.com/v1/baymodels/detail',
            json=dict(baymodels=[cluster_template_obj.toDict()]))])
        cluster_templates_list = self.cloud.list_cluster_templates(detail=True)
        self.assertEqual(
            cluster_templates_list[0],
            self.cloud._normalize_cluster_template(cluster_template_obj))
        self.assert_calls()

    def
test_search_cluster_templates_by_name(self):
        self.register_uris([dict(
            method='GET',
            uri='https://container-infra.example.com/v1/baymodels/detail',
            json=dict(baymodels=[cluster_template_obj.toDict()]))])
        cluster_templates = self.cloud.search_cluster_templates(
            name_or_id='fake-cluster-template')
        self.assertEqual(1, len(cluster_templates))
        self.assertEqual('fake-uuid', cluster_templates[0]['uuid'])
        self.assert_calls()

    def test_search_cluster_templates_not_found(self):
        self.register_uris([dict(
            method='GET',
            uri='https://container-infra.example.com/v1/baymodels/detail',
            json=dict(baymodels=[cluster_template_obj.toDict()]))])
        cluster_templates = self.cloud.search_cluster_templates(
            name_or_id='non-existent')
        self.assertEqual(0, len(cluster_templates))
        self.assert_calls()

    def test_get_cluster_template(self):
        self.register_uris([dict(
            method='GET',
            uri='https://container-infra.example.com/v1/baymodels/detail',
            json=dict(baymodels=[cluster_template_obj.toDict()]))])
        r = self.cloud.get_cluster_template('fake-cluster-template')
        self.assertIsNotNone(r)
        self.assertDictEqual(
            r, self.cloud._normalize_cluster_template(cluster_template_obj))
        self.assert_calls()

    def test_get_cluster_template_not_found(self):
        self.register_uris([dict(
            method='GET',
            uri='https://container-infra.example.com/v1/baymodels/detail',
            json=dict(baymodels=[]))])
        r = self.cloud.get_cluster_template('doesNotExist')
        self.assertIsNone(r)
        self.assert_calls()

    def test_create_cluster_template(self):
        self.register_uris([dict(
            method='POST',
            uri='https://container-infra.example.com/v1/baymodels',
            json=dict(baymodels=[cluster_template_obj.toDict()]),
            validate=dict(json={
                'coe': 'fake-coe',
                'image_id': 'fake-image',
                'keypair_id': 'fake-key',
                'name': 'fake-cluster-template'}),
        )])
        self.cloud.create_cluster_template(
            name=cluster_template_obj.name,
            image_id=cluster_template_obj.image_id,
            keypair_id=cluster_template_obj.keypair_id,
            coe=cluster_template_obj.coe)
        self.assert_calls()

    def
test_create_cluster_template_exception(self):
        self.register_uris([dict(
            method='POST',
            uri='https://container-infra.example.com/v1/baymodels',
            status_code=403)])
        # TODO(mordred) requests here doesn't give us a great story
        # for matching the old error message text. Investigate plumbing
        # an error message in to the adapter call so that we can give a
        # more informative error. Also, the test was originally catching
        # OpenStackCloudException - but for some reason testtools will not
        # match the more specific HTTPError, even though it's a subclass
        # of OpenStackCloudException.
        with testtools.ExpectedException(
                openstack.cloud.OpenStackCloudHTTPError):
            self.cloud.create_cluster_template('fake-cluster-template')
        self.assert_calls()

    def test_delete_cluster_template(self):
        uri = 'https://container-infra.example.com/v1/baymodels/fake-uuid'
        self.register_uris([
            dict(
                method='GET',
                uri='https://container-infra.example.com/v1/baymodels/detail',
                json=dict(baymodels=[cluster_template_obj.toDict()])),
            dict(
                method='DELETE',
                uri=uri),
        ])
        self.cloud.delete_cluster_template('fake-uuid')
        self.assert_calls()

    def test_update_cluster_template(self):
        uri = 'https://container-infra.example.com/v1/baymodels/fake-uuid'
        self.register_uris([
            dict(
                method='GET',
                uri='https://container-infra.example.com/v1/baymodels/detail',
                json=dict(baymodels=[cluster_template_obj.toDict()])),
            dict(
                method='PATCH',
                uri=uri,
                status_code=200,
                validate=dict(
                    json=[{
                        u'op': u'replace',
                        u'path': u'/name',
                        u'value': u'new-cluster-template'
                    }]
                )),
            dict(
                method='GET',
                uri='https://container-infra.example.com/v1/baymodels/detail',
                # This json value is not meaningful to the test - it just has
                # to be valid.
                json=dict(baymodels=[cluster_template_obj.toDict()])),
        ])
        new_name = 'new-cluster-template'
        self.cloud.update_cluster_template(
            'fake-uuid', 'replace', name=new_name)
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_inventory.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from openstack.cloud import exc
from openstack.cloud import inventory
import openstack.config
from openstack.config import exceptions as occ_exc
from openstack.tests import fakes
from openstack.tests.unit import base


class TestInventory(base.TestCase):

    def setUp(self):
        super(TestInventory, self).setUp()

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test__init(self, mock_cloud, mock_config):
        mock_config.return_value.get_all.return_value = [{}]

        inv = inventory.OpenStackInventory()

        mock_config.assert_called_once_with(
            config_files=openstack.config.loader.CONFIG_FILES
        )
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        self.assertTrue(mock_config.return_value.get_all.called)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test__init_one_cloud(self, mock_cloud, mock_config):
        mock_config.return_value.get_one.return_value = [{}]

        inv = inventory.OpenStackInventory(cloud='supercloud')

        mock_config.assert_called_once_with(
            config_files=openstack.config.loader.CONFIG_FILES
        )
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        self.assertFalse(mock_config.return_value.get_all.called)
        mock_config.return_value.get_one.assert_called_once_with(
            'supercloud')

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test__raise_exception_on_no_cloud(self, mock_cloud, mock_config):
        """
        Test that when os-client-config can't find a named cloud, a shade
        exception is emitted.
        """
        mock_config.return_value.get_one.side_effect = (
            occ_exc.OpenStackConfigException()
        )
        self.assertRaises(exc.OpenStackCloudException,
                          inventory.OpenStackInventory,
                          cloud='supercloud')
        mock_config.return_value.get_one.assert_called_once_with(
            'supercloud')

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test_list_hosts(self, mock_cloud, mock_config):
        mock_config.return_value.get_all.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = dict(id='server_id', name='server_name')
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        inv.clouds[0].list_servers.return_value = [server]
        inv.clouds[0].get_openstack_vars.return_value = server

        ret = inv.list_hosts()

        inv.clouds[0].list_servers.assert_called_once_with(detailed=True)
        self.assertFalse(inv.clouds[0].get_openstack_vars.called)
        self.assertEqual([server], ret)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test_list_hosts_no_detail(self, mock_cloud, mock_config):
        mock_config.return_value.get_all.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = self.cloud._normalize_server(
            fakes.make_fake_server(
                '1234', 'test', 'ACTIVE', addresses={}))
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        inv.clouds[0].list_servers.return_value = [server]

        inv.list_hosts(expand=False)
        inv.clouds[0].list_servers.assert_called_once_with(detailed=False)
        self.assertFalse(inv.clouds[0].get_openstack_vars.called)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test_search_hosts(self, mock_cloud, mock_config):
        mock_config.return_value.get_all.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = dict(id='server_id', name='server_name')
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        inv.clouds[0].list_servers.return_value = [server]
        inv.clouds[0].get_openstack_vars.return_value = server

        ret = inv.search_hosts('server_id')
        self.assertEqual([server], ret)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("openstack.cloud.OpenStackCloud")
    def test_get_host(self, mock_cloud, mock_config):
        mock_config.return_value.get_all.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = dict(id='server_id', name='server_name')
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        inv.clouds[0].list_servers.return_value = [server]
        inv.clouds[0].get_openstack_vars.return_value = server

        ret = inv.get_host('server_id')
        self.assertEqual(server, ret)

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_recordset.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
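The inventory tests above patch `OpenStackConfig` so that no real `clouds.yaml` files are read. A minimal standalone sketch of the same `mock.patch.object` pattern follows; `CloudConfigLoader` and `count_clouds` are hypothetical stand-ins, not part of the SDK.

```python
from unittest import mock


class CloudConfigLoader:
    """Hypothetical stand-in for the config loader the tests above patch."""

    def get_all(self):
        # In the real loader this would parse clouds.yaml files.
        raise RuntimeError("would hit real configuration files")


def count_clouds(loader):
    return len(loader.get_all())


# Patch the method on the class so no real configuration is read,
# mirroring how TestInventory patches OpenStackConfig.get_all.
with mock.patch.object(CloudConfigLoader, "get_all", return_value=[{}]):
    loader = CloudConfigLoader()
    assert count_clouds(loader) == 1
    assert loader.get_all.called  # the mock records the call
```

Outside the `with` block the original method is restored, which is why the decorator form used in the tests above keeps each test isolated.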
import copy

import testtools

import openstack.cloud
from openstack.tests.unit import base


zone = {
    'id': '1',
    'name': 'example.net.',
    'type': 'PRIMARY',
    'email': 'test@example.net',
    'description': 'Example zone',
    'ttl': 3600,
}

recordset = {
    'name': 'www.example.net.',
    'type': 'A',
    'description': 'Example zone',
    'ttl': 3600,
    'records': ['192.168.1.1']
}
recordset_zone = '1'

new_recordset = copy.copy(recordset)
new_recordset['id'] = '1'
new_recordset['zone'] = recordset_zone


class TestRecordset(base.RequestsMockTestCase):

    def setUp(self):
        super(TestRecordset, self).setUp()
        self.use_designate()

    def test_create_recordset(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [zone],
                     "links": {},
                     "metadata": {'total_count': 1}}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', zone['id'], 'recordsets']),
                 json=new_recordset,
                 validate=dict(json=recordset)),
        ])
        rs = self.cloud.create_recordset(
            zone=recordset_zone,
            name=recordset['name'],
            recordset_type=recordset['type'],
            records=recordset['records'],
            description=recordset['description'],
            ttl=recordset['ttl'])
        self.assertEqual(new_recordset, rs)
        self.assert_calls()

    def test_create_recordset_exception(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [zone],
                     "links": {},
                     "metadata": {'total_count': 1}}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', zone['id'], 'recordsets']),
                 status_code=500,
                 validate=dict(json={
                     'name': 'www2.example.net.',
                     'records': ['192.168.1.2'],
                     'type': 'A'})),
        ])
        with testtools.ExpectedException(
                openstack.cloud.exc.OpenStackCloudHTTPError,
                "Error creating recordset www2.example.net."
        ):
            self.cloud.create_recordset('1', 'www2.example.net.',
                                        'a', ['192.168.1.2'])
        self.assert_calls()

    def test_update_recordset(self):
        new_ttl = 7200
        expected_recordset = {
            'name': recordset['name'],
            'records': recordset['records'],
            'type': recordset['type']
        }
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [zone],
                     "links": {},
                     "metadata": {'total_count': 1}}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', zone['id'], 'recordsets',
                             new_recordset['id']]),
                 json=new_recordset),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', zone['id'], 'recordsets',
                             new_recordset['id']]),
                 json=expected_recordset,
                 validate=dict(json={'ttl': new_ttl}))
        ])
        updated_rs = self.cloud.update_recordset('1', '1', ttl=new_ttl)
        self.assertEqual(expected_recordset, updated_rs)
        self.assert_calls()

    def test_delete_recordset(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [zone],
                     "links": {},
                     "metadata": {'total_count': 1}}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', zone['id'], 'recordsets',
                             new_recordset['id']]),
                 json=new_recordset),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', zone['id'], 'recordsets',
                             new_recordset['id']]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_recordset('1', '1'))
        self.assert_calls()

    def test_get_recordset_by_id(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', '1', 'recordsets', '1']),
                 json=new_recordset),
        ])
        recordset = self.cloud.get_recordset('1', '1')
        self.assertEqual(recordset['id'], '1')
        self.assert_calls()

    def test_get_recordset_by_name(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', '1', 'recordsets',
                             new_recordset['name']]),
                 json=new_recordset),
        ])
        recordset = self.cloud.get_recordset('1', new_recordset['name'])
        self.assertEqual(new_recordset['name'], recordset['name'])
        self.assert_calls()

    def test_get_recordset_not_found_returns_false(self):
        recordset_name = "www.nonexistingrecord.net."
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public',
                     append=['v2', 'zones', '1', 'recordsets',
                             recordset_name]),
                 json=[])
        ])
        recordset = self.cloud.get_recordset('1', recordset_name)
        self.assertFalse(recordset)
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_project.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
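The recordset tests above drive the cloud layer through a `register_uris` fixture: each expected HTTP call is registered up front with an optional `validate` body, and unexpected calls fail the test. A toy sketch of that registry idea is below; `FakeSession` is a hypothetical illustration, not the SDK's actual `RequestsMockTestCase` machinery (which is built on the requests-mock library).

```python
class FakeSession:
    """Hypothetical toy version of the register_uris() pattern used above."""

    def __init__(self):
        self._uris = []
        self.calls = []  # history, analogous to what assert_calls() checks

    def register_uri(self, method, uri, json_body=None,
                     validate=None, status_code=200):
        self._uris.append(dict(method=method, uri=uri, json=json_body,
                               validate=validate, status_code=status_code))

    def request(self, method, uri, json_body=None):
        for entry in self._uris:
            if entry["method"] == method and entry["uri"] == uri:
                expected = (entry.get("validate") or {}).get("json")
                if expected is not None and expected != json_body:
                    raise AssertionError("request body did not match")
                self.calls.append((method, uri))
                return entry["status_code"], entry["json"]
        raise AssertionError("unexpected call: %s %s" % (method, uri))


session = FakeSession()
session.register_uri("GET", "/v2/zones", json_body={"zones": []})
status, body = session.request("GET", "/v2/zones")
assert status == 200 and body == {"zones": []}
```

The `validate` hook is what lets the real tests assert not only that a URI was hit, but that the client sent exactly the JSON payload the API contract requires.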
import testtools
from testtools import matchers

import openstack.cloud
import openstack.cloud._utils
from openstack.tests.unit import base


class TestProject(base.RequestsMockTestCase):

    def get_mock_url(self, service_type='identity', interface='admin',
                     resource=None, append=None, base_url_append=None,
                     v3=True):
        if v3 and resource is None:
            resource = 'projects'
        elif not v3 and resource is None:
            resource = 'tenants'
        if base_url_append is None and v3:
            base_url_append = 'v3'
        return super(TestProject, self).get_mock_url(
            service_type=service_type, interface=interface,
            resource=resource, append=append,
            base_url_append=base_url_append)

    def test_create_project_v2(self):
        self.use_keystone_v2()
        project_data = self._get_project_data(v3=False)
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(v3=False),
                 status_code=200,
                 json=project_data.json_response,
                 validate=dict(json=project_data.json_request))
        ])
        project = self.cloud.create_project(
            name=project_data.project_name,
            description=project_data.description)
        self.assertThat(project.id, matchers.Equals(project_data.project_id))
        self.assertThat(
            project.name, matchers.Equals(project_data.project_name))
        self.assert_calls()

    def test_create_project_v3(self):
        project_data = self._get_project_data(
            description=self.getUniqueString('projectDesc'))
        reference_req = project_data.json_request.copy()
        reference_req['project']['enabled'] = True
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json=project_data.json_response,
                 validate=dict(json=reference_req))
        ])
        project = self.cloud.create_project(
            name=project_data.project_name,
            description=project_data.description,
            domain_id=project_data.domain_id)
        self.assertThat(project.id, matchers.Equals(project_data.project_id))
        self.assertThat(
            project.name, matchers.Equals(project_data.project_name))
        self.assertThat(
            project.description, matchers.Equals(project_data.description))
        self.assertThat(
            project.domain_id, matchers.Equals(project_data.domain_id))
        self.assert_calls()

    def test_create_project_v3_no_domain(self):
        with testtools.ExpectedException(
                openstack.cloud.OpenStackCloudException,
                "User or project creation requires an explicit"
                " domain_id argument."
        ):
            self.cloud.create_project(name='foo', description='bar')

    def test_delete_project_v2(self):
        self.use_keystone_v2()
        project_data = self._get_project_data(v3=False)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(v3=False),
                 status_code=200,
                 json={'tenants': [project_data.json_response['tenant']]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     v3=False, append=[project_data.project_id]),
                 status_code=204)
        ])
        self.cloud.delete_project(project_data.project_id)
        self.assert_calls()

    def test_delete_project_v3(self):
        project_data = self._get_project_data(v3=False)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'projects': [project_data.json_response['tenant']]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(append=[project_data.project_id]),
                 status_code=204)
        ])
        self.cloud.delete_project(project_data.project_id)
        self.assert_calls()

    def test_update_project_not_found(self):
        project_data = self._get_project_data()
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'projects': []})
        ])
        # NOTE(notmorgan): This test (and shade) does not represent a case
        # where the project is in the project list but a 404 is raised when
        # the PATCH is issued. This is a bug in shade and should be fixed;
        # shade will raise an AttributeError instead of the proper
        # project-not-found exception.
        with testtools.ExpectedException(
                openstack.cloud.OpenStackCloudException,
                "Project %s not found." % project_data.project_id
        ):
            self.cloud.update_project(project_data.project_id)
        self.assert_calls()

    def test_update_project_v2(self):
        self.use_keystone_v2()
        project_data = self._get_project_data(
            v3=False,
            description=self.getUniqueString('projectDesc'))
        # remove elements that are not updated in this test.
        project_data.json_request['tenant'].pop('name')
        project_data.json_request['tenant'].pop('enabled')
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(v3=False),
                 status_code=200,
                 json={'tenants': [project_data.json_response['tenant']]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     v3=False, append=[project_data.project_id]),
                 status_code=200,
                 json=project_data.json_response,
                 validate=dict(json=project_data.json_request))
        ])
        project = self.cloud.update_project(
            project_data.project_id,
            description=project_data.description)
        self.assertThat(project.id, matchers.Equals(project_data.project_id))
        self.assertThat(
            project.name, matchers.Equals(project_data.project_name))
        self.assertThat(
            project.description, matchers.Equals(project_data.description))
        self.assert_calls()

    def test_update_project_v3(self):
        project_data = self._get_project_data(
            description=self.getUniqueString('projectDesc'))
        reference_req = project_data.json_request.copy()
        # Remove elements not actually sent in the update
        reference_req['project'].pop('domain_id')
        reference_req['project'].pop('name')
        reference_req['project'].pop('enabled')
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource=('projects?domain_id=%s'
                               % project_data.domain_id)),
                 status_code=200,
                 json={'projects': [project_data.json_response['project']]}),
            dict(method='PATCH',
                 uri=self.get_mock_url(append=[project_data.project_id]),
                 status_code=200,
                 json=project_data.json_response,
                 validate=dict(json=reference_req))
        ])
        project = self.cloud.update_project(
            project_data.project_id,
            description=project_data.description,
            domain_id=project_data.domain_id)
        self.assertThat(project.id, matchers.Equals(project_data.project_id))
        self.assertThat(
            project.name, matchers.Equals(project_data.project_name))
        self.assertThat(
            project.description, matchers.Equals(project_data.description))
        self.assert_calls()

    def test_list_projects_v3(self):
        project_data = self._get_project_data(
            description=self.getUniqueString('projectDesc'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource=('projects?domain_id=%s'
                               % project_data.domain_id)),
                 status_code=200,
                 json={'projects': [project_data.json_response['project']]})
        ])
        projects = self.cloud.list_projects(project_data.domain_id)
        self.assertThat(len(projects), matchers.Equals(1))
        self.assertThat(
            projects[0].id, matchers.Equals(project_data.project_id))
        self.assert_calls()

    def test_list_projects_v3_kwarg(self):
        project_data = self._get_project_data(
            description=self.getUniqueString('projectDesc'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource=('projects?domain_id=%s'
                               % project_data.domain_id)),
                 status_code=200,
                 json={'projects': [project_data.json_response['project']]})
        ])
        projects = self.cloud.list_projects(
            domain_id=project_data.domain_id)
        self.assertThat(len(projects), matchers.Equals(1))
        self.assertThat(
            projects[0].id, matchers.Equals(project_data.project_id))
        self.assert_calls()

    def test_list_projects_search_compat(self):
        project_data = self._get_project_data(
            description=self.getUniqueString('projectDesc'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'projects': [project_data.json_response['project']]})
        ])
        projects = self.cloud.search_projects(project_data.project_id)
        self.assertThat(len(projects), matchers.Equals(1))
        self.assertThat(
            projects[0].id, matchers.Equals(project_data.project_id))
        self.assert_calls()

    def test_list_projects_search_compat_v3(self):
        project_data = self._get_project_data(
            description=self.getUniqueString('projectDesc'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource=('projects?domain_id=%s'
                               % project_data.domain_id)),
                 status_code=200,
                 json={'projects': [project_data.json_response['project']]})
        ])
        projects = self.cloud.search_projects(
            domain_id=project_data.domain_id)
        self.assertThat(len(projects), matchers.Equals(1))
        self.assertThat(
            projects[0].id, matchers.Equals(project_data.project_id))
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_task_manager.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import concurrent.futures

import mock

from openstack import task_manager
from openstack.tests.unit import base


class TestException(Exception):
    pass


class TaskTest(task_manager.Task):
    def main(self):
        raise TestException("This is a test exception")


class TaskTestGenerator(task_manager.Task):
    def main(self):
        yield 1


class TaskTestInt(task_manager.Task):
    def main(self):
        return int(1)


class TaskTestFloat(task_manager.Task):
    def main(self):
        return float(2.0)


class TaskTestStr(task_manager.Task):
    def main(self):
        return "test"


class TaskTestBool(task_manager.Task):
    def main(self):
        return True


class TaskTestSet(task_manager.Task):
    def main(self):
        return set([1, 2])


class TaskTestAsync(task_manager.Task):
    def __init__(self):
        super(TaskTestAsync, self).__init__(run_async=True)

    def main(self):
        pass


class TestTaskManager(base.TestCase):

    def setUp(self):
        super(TestTaskManager, self).setUp()
        self.manager = task_manager.TaskManager(name='test')

    def test_wait_re_raise(self):
        """Test that an exception thrown in a Task is re-raised correctly.

        This test is aimed at six.reraise(), called in Task::wait().
        Specifically, we test whether we get the same behaviour with all
        the configured interpreters (e.g. py27, py34, pypy, ...)
""" self.assertRaises(TestException, self.manager.submit_task, TaskTest()) def test_dont_munchify_int(self): ret = self.manager.submit_task(TaskTestInt()) self.assertIsInstance(ret, int) def test_dont_munchify_float(self): ret = self.manager.submit_task(TaskTestFloat()) self.assertIsInstance(ret, float) def test_dont_munchify_str(self): ret = self.manager.submit_task(TaskTestStr()) self.assertIsInstance(ret, str) def test_dont_munchify_bool(self): ret = self.manager.submit_task(TaskTestBool()) self.assertIsInstance(ret, bool) def test_dont_munchify_set(self): ret = self.manager.submit_task(TaskTestSet()) self.assertIsInstance(ret, set) @mock.patch.object(concurrent.futures.ThreadPoolExecutor, 'submit') def test_async(self, mock_submit): self.manager.submit_task(TaskTestAsync()) self.assertTrue(mock_submit.called) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_object.py0000666000175100017510000010722213236151340024412 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import tempfile

import testtools

import openstack.cloud
import openstack.cloud.openstackcloud as oc_oc
from openstack.cloud import exc
from openstack.tests.unit import base


class BaseTestObject(base.RequestsMockTestCase):

    def setUp(self):
        super(BaseTestObject, self).setUp()

        self.container = self.getUniqueString()
        self.object = self.getUniqueString()
        self.endpoint = self.cloud._object_store_client.get_endpoint()
        self.container_endpoint = '{endpoint}/{container}'.format(
            endpoint=self.endpoint, container=self.container)
        self.object_endpoint = '{endpoint}/{object}'.format(
            endpoint=self.container_endpoint, object=self.object)


class TestObject(BaseTestObject):

    def test_create_container(self):
        """Test creating a (private) container"""
        self.register_uris([
            dict(method='HEAD',
                 uri=self.container_endpoint,
                 status_code=404),
            dict(method='PUT',
                 uri=self.container_endpoint,
                 status_code=201,
                 headers={
                     'Date': 'Fri, 16 Dec 2016 18:21:20 GMT',
                     'Content-Length': '0',
                     'Content-Type': 'text/html; charset=UTF-8',
                 }),
            dict(method='HEAD',
                 uri=self.container_endpoint,
                 headers={
                     'Content-Length': '0',
                     'X-Container-Object-Count': '0',
                     'Accept-Ranges': 'bytes',
                     'X-Storage-Policy': 'Policy-0',
                     'Date': 'Fri, 16 Dec 2016 18:29:05 GMT',
                     'X-Timestamp': '1481912480.41664',
                     'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1',
                     'X-Container-Bytes-Used': '0',
                     'Content-Type': 'text/plain; charset=utf-8'})
        ])

        self.cloud.create_container(self.container)
        self.assert_calls()

    def test_create_container_public(self):
        """Test creating a public container"""
        self.register_uris([
            dict(method='HEAD',
                 uri=self.container_endpoint,
                 status_code=404),
            dict(method='PUT',
                 uri=self.container_endpoint,
                 status_code=201,
                 headers={
                     'Date': 'Fri, 16 Dec 2016 18:21:20 GMT',
                     'Content-Length': '0',
                     'Content-Type': 'text/html; charset=UTF-8',
                 }),
            dict(method='POST',
                 uri=self.container_endpoint,
                 status_code=201,
                 validate=dict(
                     headers={
                         'x-container-read':
                             oc_oc.OBJECT_CONTAINER_ACLS['public']})),
            dict(method='HEAD',
uri=self.container_endpoint, headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}) ]) self.cloud.create_container(self.container, public=True) self.assert_calls() def test_create_container_exists(self): """Test creating a container that exists.""" self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}) ]) container = self.cloud.create_container(self.container) self.assert_calls() self.assertIsNotNone(container) def test_delete_container(self): self.register_uris([ dict(method='DELETE', uri=self.container_endpoint)]) self.assertTrue(self.cloud.delete_container(self.container)) self.assert_calls() def test_delete_container_404(self): """No exception when deleting a container that does not exist""" self.register_uris([ dict(method='DELETE', uri=self.container_endpoint, status_code=404)]) self.assertFalse(self.cloud.delete_container(self.container)) self.assert_calls() def test_delete_container_error(self): """Non-404 swift error re-raised as OSCE""" # 409 happens if the container is not empty self.register_uris([ dict(method='DELETE', uri=self.container_endpoint, status_code=409)]) self.assertRaises( openstack.cloud.OpenStackCloudException, self.cloud.delete_container, self.container) self.assert_calls() def test_update_container(self): headers = { 'x-container-read': oc_oc.OBJECT_CONTAINER_ACLS['public']} self.register_uris([ dict(method='POST', 
uri=self.container_endpoint, status_code=204, validate=dict(headers=headers))]) self.cloud.update_container(self.container, headers) self.assert_calls() def test_update_container_error(self): """Swift error re-raised as OSCE""" # This test is of questionable value - the swift API docs do not # declare error codes (other than 404 for the container) for this # method, and I cannot make a synthetic failure to validate a real # error code. So we're really just testing the shade adapter error # raising logic here, rather than anything specific to swift. self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=409)]) self.assertRaises( openstack.cloud.OpenStackCloudException, self.cloud.update_container, self.container, dict(foo='bar')) self.assert_calls() def test_set_container_access_public(self): self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=204, validate=dict( headers={ 'x-container-read': oc_oc.OBJECT_CONTAINER_ACLS[ 'public']}))]) self.cloud.set_container_access(self.container, 'public') self.assert_calls() def test_set_container_access_private(self): self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=204, validate=dict( headers={ 'x-container-read': oc_oc.OBJECT_CONTAINER_ACLS[ 'private']}))]) self.cloud.set_container_access(self.container, 'private') self.assert_calls() def test_set_container_access_invalid(self): self.assertRaises( openstack.cloud.OpenStackCloudException, self.cloud.set_container_access, self.container, 'invalid') def test_get_container_access(self): self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, headers={ 'x-container-read': str(oc_oc.OBJECT_CONTAINER_ACLS[ 'public'])})]) access = self.cloud.get_container_access(self.container) self.assertEqual('public', access) def test_get_container_invalid(self): self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, headers={'x-container-read': 'invalid'})]) with 
testtools.ExpectedException( exc.OpenStackCloudException, "Could not determine container access for ACL: invalid" ): self.cloud.get_container_access(self.container) def test_get_container_access_not_found(self): self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, status_code=404)]) with testtools.ExpectedException( exc.OpenStackCloudException, "Container not found: %s" % self.container ): self.cloud.get_container_access(self.container) def test_list_containers(self): endpoint = '{endpoint}/?format=json'.format( endpoint=self.endpoint) containers = [ {u'count': 0, u'bytes': 0, u'name': self.container}] self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, json=containers)]) ret = self.cloud.list_containers() self.assert_calls() self.assertEqual(containers, ret) def test_list_containers_exception(self): endpoint = '{endpoint}/?format=json'.format( endpoint=self.endpoint) self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, status_code=416)]) self.assertRaises( exc.OpenStackCloudException, self.cloud.list_containers) self.assert_calls() def test_list_objects(self): endpoint = '{endpoint}?format=json'.format( endpoint=self.container_endpoint) objects = [{ u'bytes': 20304400896, u'last_modified': u'2016-12-15T13:34:13.650090', u'hash': u'daaf9ed2106d09bba96cf193d866445e', u'name': self.object, u'content_type': u'application/octet-stream'}] self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, json=objects)]) ret = self.cloud.list_objects(self.container) self.assert_calls() self.assertEqual(objects, ret) def test_list_objects_exception(self): endpoint = '{endpoint}?format=json'.format( endpoint=self.container_endpoint) self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, status_code=416)]) self.assertRaises( exc.OpenStackCloudException, self.cloud.list_objects, self.container) self.assert_calls() def test_delete_object(self): self.register_uris([ dict(method='HEAD', 
uri=self.object_endpoint, headers={'X-Object-Meta': 'foo'}), dict(method='DELETE', uri=self.object_endpoint, status_code=204)]) self.assertTrue(self.cloud.delete_object(self.container, self.object)) self.assert_calls() def test_delete_object_not_found(self): self.register_uris([dict(method='HEAD', uri=self.object_endpoint, status_code=404)]) self.assertFalse(self.cloud.delete_object(self.container, self.object)) self.assert_calls() def test_get_object(self): headers = { 'Content-Length': '20304400896', 'Content-Type': 'application/octet-stream', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Thu, 15 Dec 2016 13:34:14 GMT', 'Etag': '"b5c454b44fbd5344793e3fb7e3850768"', 'X-Timestamp': '1481808853.65009', 'X-Trans-Id': 'tx68c2a2278f0c469bb6de1-005857ed80dfw1', 'Date': 'Mon, 19 Dec 2016 14:24:00 GMT', 'X-Static-Large-Object': 'True', 'X-Object-Meta-Mtime': '1481513709.168512', } response_headers = {k.lower(): v for k, v in headers.items()} text = 'test body' self.register_uris([ dict(method='GET', uri=self.object_endpoint, headers=headers, text=text)]) resp = self.cloud.get_object(self.container, self.object) self.assert_calls() self.assertEqual((response_headers, text), resp) def test_get_object_not_found(self): self.register_uris([dict(method='GET', uri=self.object_endpoint, status_code=404)]) self.assertIsNone(self.cloud.get_object(self.container, self.object)) self.assert_calls() def test_get_object_exception(self): self.register_uris([dict(method='GET', uri=self.object_endpoint, status_code=416)]) self.assertRaises( openstack.cloud.OpenStackCloudException,
self.cloud.get_object, self.container, self.object) self.assert_calls() def test_get_object_segment_size_below_min(self): # Register directly because we make multiple calls. The number # of calls we make isn't interesting - what we do with the return # values is. Don't run assert_calls for the same reason. self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': 1000}, slo={'min_segment_size': 500}), headers={'Content-Type': 'application/json'})]) self.assertEqual(500, self.cloud.get_object_segment_size(400)) self.assertEqual(900, self.cloud.get_object_segment_size(900)) self.assertEqual(1000, self.cloud.get_object_segment_size(1000)) self.assertEqual(1000, self.cloud.get_object_segment_size(1100)) def test_get_object_segment_size_http_404(self): self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', status_code=404, reason='Not Found')]) self.assertEqual(oc_oc.DEFAULT_OBJECT_SEGMENT_SIZE, self.cloud.get_object_segment_size(None)) self.assert_calls() def test_get_object_segment_size_http_412(self): self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', status_code=412, reason='Precondition failed')]) self.assertEqual( oc_oc.DEFAULT_OBJECT_SEGMENT_SIZE, self.cloud.get_object_segment_size(None)) self.assert_calls() class TestObjectUploads(BaseTestObject): def setUp(self): super(TestObjectUploads, self).setUp() self.content = self.getUniqueString().encode('latin-1') self.object_file = tempfile.NamedTemporaryFile(delete=False) self.object_file.write(self.content) self.object_file.close() (self.md5, self.sha256) = self.cloud._get_file_hashes( self.object_file.name) self.endpoint = self.cloud._object_store_client.get_endpoint() def test_create_object(self): self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': 1000}, slo={'min_segment_size': 500})), dict(method='HEAD',
uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( headers={ 'x-object-meta-x-sdk-md5': self.md5, 'x-object-meta-x-sdk-sha256': self.sha256, })) ]) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name) self.assert_calls() def test_create_dynamic_large_object(self): max_file_size = 2 min_file_size = 1 uris_to_mock = [ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', 
uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404) ] uris_to_mock.extend( [dict(method='PUT', uri='{endpoint}/{container}/{object}/{index:0>6}'.format( endpoint=self.endpoint, container=self.container, object=self.object, index=index), status_code=201) for index, offset in enumerate( range(0, len(self.content), max_file_size))] ) uris_to_mock.append( dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( headers={ 'x-object-manifest': '{container}/{object}'.format( container=self.container, object=self.object), 'x-object-meta-x-sdk-md5': self.md5, 'x-object-meta-x-sdk-sha256': self.sha256, }))) self.register_uris(uris_to_mock) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name, use_slo=False) # After call 6, order becomes indeterminate because of thread pool self.assert_calls(stop_after=6) for key, value in self.calls[-1]['headers'].items(): self.assertEqual( value, self.adapter.request_history[-1].headers[key], 'header mismatch in manifest call') def test_create_static_large_object(self): max_file_size = 25 min_file_size = 1 uris_to_mock = [ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint,
container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404) ] uris_to_mock.extend([ dict(method='PUT', uri='{endpoint}/{container}/{object}/{index:0>6}'.format( endpoint=self.endpoint, container=self.container, object=self.object, index=index), status_code=201, headers=dict(Etag='etag{index}'.format(index=index))) for index, offset in enumerate( range(0, len(self.content), max_file_size)) ]) uris_to_mock.append( dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( params={ 'multipart-manifest', 'put' }, headers={ 'x-object-meta-x-sdk-md5': self.md5, 'x-object-meta-x-sdk-sha256': self.sha256, }))) self.register_uris(uris_to_mock) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name, use_slo=True) # After call 6, order becomes indeterminate because of thread pool self.assert_calls(stop_after=6) for key, value in self.calls[-1]['headers'].items(): self.assertEqual( value, self.adapter.request_history[-1].headers[key], 'header mismatch in manifest call') base_object =
'/{container}/{object}'.format( container=self.container, object=self.object) self.assertEqual([ { 'path': "{base_object}/000000".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag0', }, { 'path': "{base_object}/000001".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag1', }, { 'path': "{base_object}/000002".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag2', }, { 'path': "{base_object}/000003".format( base_object=base_object), 'size_bytes': len(self.content) - 75, 'etag': 'etag3', }, ], self.adapter.request_history[-1].json()) def test_object_segment_retry_failure(self): max_file_size = 25 min_file_size = 1 self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}/000000'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201),
dict(method='PUT', uri='{endpoint}/{container}/{object}/000001'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000002'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000003'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=501), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_object, container=self.container, name=self.object, filename=self.object_file.name, use_slo=True) # After call 6, order becomes indeterminate because of thread pool self.assert_calls(stop_after=6) def test_object_segment_retries(self): max_file_size = 25 min_file_size = 1 self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD',
uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}/000000'.format( endpoint=self.endpoint, container=self.container, object=self.object), headers={'etag': 'etag0'}, status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000001'.format( endpoint=self.endpoint, container=self.container, object=self.object), headers={'etag': 'etag1'}, status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000002'.format( endpoint=self.endpoint, container=self.container, object=self.object), headers={'etag': 'etag2'}, status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000003'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=501), dict(method='PUT', uri='{endpoint}/{container}/{object}/000003'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, headers={'etag': 'etag3'}), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( params={ 'multipart-manifest', 'put' }, headers={ 'x-object-meta-x-sdk-md5': self.md5, 'x-object-meta-x-sdk-sha256': self.sha256, })) ]) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name, use_slo=True) # After call 6, order becomes indeterminate because of thread pool self.assert_calls(stop_after=6) for key, value in self.calls[-1]['headers'].items(): self.assertEqual( value, self.adapter.request_history[-1].headers[key], 'header mismatch in manifest call') base_object = '/{container}/{object}'.format( container=self.container, object=self.object) self.assertEqual([ { 'path':
"{base_object}/000000".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag0', }, { 'path': "{base_object}/000001".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag1', }, { 'path': "{base_object}/000002".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag2', }, { 'path': "{base_object}/000003".format( base_object=base_object), 'size_bytes': len(self.content) - 75, 'etag': 'etag3', }, ], self.adapter.request_history[-1].json()) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_limits.py0000666000175100017510000000754513236151340024452 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
from openstack.tests.unit import base class TestLimits(base.RequestsMockTestCase): def test_get_compute_limits(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['limits']), json={ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxSecurityGroups": 10, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalSecurityGroupsUsed": 0, "totalFloatingIpsUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } }), ]) self.cloud.get_compute_limits() self.assert_calls() def test_other_get_compute_limits(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['limits'], qs_elements=[ 'tenant_id={id}'.format(id=project.project_id) ]), json={ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxSecurityGroups": 10, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalSecurityGroupsUsed": 0, "totalFloatingIpsUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } }), ]) self.cloud.get_compute_limits(project.project_id) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_delete_server.py0000666000175100017510000002372113236151340025775 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_delete_server ---------------------------------- Tests for the `delete_server` command. """ import uuid from openstack.cloud import exc as shade_exc from openstack.tests import fakes from openstack.tests.unit import base class TestDeleteServer(base.RequestsMockTestCase): def test_delete_server(self): """ Test that server delete is called when wait=False """ server = fakes.make_fake_server('1234', 'daffy', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), ]) self.assertTrue(self.cloud.delete_server('daffy', wait=False)) self.assert_calls() def test_delete_server_already_gone(self): """ Test that we return immediately when server is already gone """ self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertFalse(self.cloud.delete_server('tweety', wait=False)) self.assert_calls() def test_delete_server_already_gone_wait(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertFalse(self.cloud.delete_server('speedy', wait=True)) self.assert_calls() def test_delete_server_wait_for_deleted(self): """ Test that delete_server waits for the server to be gone """ server = fakes.make_fake_server('9999', 'wily', 'ACTIVE') self.register_uris([ dict(method='GET', 
uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '9999'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server('wily', wait=True)) self.assert_calls() def test_delete_server_fails(self): """ Test that delete_server raises non-404 exceptions """ server = fakes.make_fake_server('1212', 'speedy', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1212']), status_code=400), ]) self.assertRaises( shade_exc.OpenStackCloudException, self.cloud.delete_server, 'speedy', wait=False) self.assert_calls() def test_delete_server_no_cinder(self): """ Test that deleting server works when cinder is not available """ orig_has_service = self.cloud.has_service def fake_has_service(service_type): if service_type == 'volume': return False return orig_has_service(service_type) self.cloud.has_service = fake_has_service server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), ]) self.assertTrue(self.cloud.delete_server('porky', wait=False)) self.assert_calls() def test_delete_server_delete_ips(self): """ Test that deleting server and fips works """ server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') fip_id = uuid.uuid4().hex self.register_uris([ dict(method='GET', uri=self.get_mock_url( 
'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json'], qs_elements=['floating_ip_address=172.24.5.5']), complete_qs=True, json={'floatingips': [{ 'router_id': 'd23abc8d-2991-4a55-ba98-2aaea84cc72f', 'tenant_id': '4969c491a3c74ee4af974e6d800c62de', 'floating_network_id': '376da547-b977-4cfe-9cba7', 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.5.5', 'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'id': fip_id, 'status': 'ACTIVE'}]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips', '{fip_id}.json'.format(fip_id=fip_id)])), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), complete_qs=True, json={'floatingips': []}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server( 'porky', wait=True, delete_ips=True)) self.assert_calls() def test_delete_server_delete_ips_bad_neutron(self): """ Test that deleting server with a borked neutron doesn't bork """ server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json'], qs_elements=['floating_ip_address=172.24.5.5']), complete_qs=True, status_code=404), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server( 'porky', wait=True, delete_ips=True)) 
self.assert_calls() def test_delete_server_delete_fips_nova(self): """ Test that deleting a server also deletes its floating ips when nova is the floating ip source """ self.cloud._floating_ip_source = 'nova' server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-floating-ips']), json={'floating_ips': [ { 'fixed_ip': None, 'id': 1, 'instance_id': None, 'ip': '172.24.5.5', 'pool': 'nova' }]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-floating-ips', '1'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-floating-ips']), json={'floating_ips': []}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server( 'porky', wait=True, delete_ips=True)) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_usage.py0000666000175100017510000000505713236151340024253 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
import datetime import uuid from openstack.tests.unit import base class TestUsage(base.RequestsMockTestCase): def test_get_usage(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] start = end = datetime.datetime.now() self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-simple-tenant-usage', project.project_id], qs_elements=[ 'start={now}'.format(now=start.isoformat()), 'end={now}'.format(now=end.isoformat()), ]), json={"tenant_usage": { "server_usages": [ { "ended_at": None, "flavor": "m1.tiny", "hours": 1.0, "instance_id": uuid.uuid4().hex, "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "2012-10-08T20:10:44.541277", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "2012-10-08T20:10:44.587336", "stop": "2012-10-08T21:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 }}) ]) self.cloud.get_compute_usage(project.project_id, start, end) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_stack.py0000666000175100017510000005500513236151340024252 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import tempfile import testtools import openstack.cloud from openstack.cloud import meta from openstack.tests import fakes from openstack.tests.unit import base class TestStack(base.RequestsMockTestCase): def setUp(self): super(TestStack, self).setUp() self.stack_id = self.getUniqueString('id') self.stack_name = self.getUniqueString('name') self.stack_tag = self.getUniqueString('tag') self.stack = fakes.make_fake_stack(self.stack_id, self.stack_name) def test_list_stacks(self): fake_stacks = [ self.stack, fakes.make_fake_stack( self.getUniqueString('id'), self.getUniqueString('name')) ] self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stacks": fake_stacks}), ]) stacks = self.cloud.list_stacks() self.assertEqual( [f.toDict() for f in self.cloud._normalize_stacks(fake_stacks)], [f.toDict() for f in stacks]) self.assert_calls() def test_list_stacks_exception(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), status_code=404) ]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudURINotFound): self.cloud.list_stacks() self.assert_calls() def test_search_stacks(self): fake_stacks = [ self.stack, fakes.make_fake_stack( self.getUniqueString('id'), self.getUniqueString('name')) ] self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stacks": fake_stacks}), ]) stacks = self.cloud.search_stacks() self.assertEqual( self.cloud._normalize_stacks(meta.obj_list_to_munch(fake_stacks)), stacks) self.assert_calls() def test_search_stacks_filters(self): fake_stacks = [ self.stack, fakes.make_fake_stack( self.getUniqueString('id'), self.getUniqueString('name'), status='CREATE_FAILED') ] self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stacks": fake_stacks}), ]) filters = {'status': 'FAILED'} stacks = 
self.cloud.search_stacks(filters=filters) self.assertEqual( self.cloud._normalize_stacks( meta.obj_list_to_munch(fake_stacks[1:])), stacks) self.assert_calls() def test_search_stacks_exception(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), status_code=404) ]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudURINotFound): self.cloud.search_stacks() def test_delete_stack(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id)), ]) self.assertTrue(self.cloud.delete_stack(self.stack_name)) self.assert_calls() def test_delete_stack_not_found(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/stack_name'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), status_code=404), ]) self.assertFalse(self.cloud.delete_stack('stack_name')) self.assert_calls() def test_delete_stack_exception(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( 
endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id), status_code=400, reason="ouch"), ]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudBadRequest): self.cloud.delete_stack(self.stack_id) self.assert_calls() def test_delete_stack_wait(self): marker_event = fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE') marker_qs = 'marker={e_id}&sort_dir=asc'.format( e_id=marker_event['id']) self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs='limit=1&sort_dir=desc'), complete_qs=True, json={"events": [marker_event]}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id)), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs=marker_qs), complete_qs=True, json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='DELETE_COMPLETE'), ]}), dict(method='GET', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), status_code=404), ]) self.assertTrue(self.cloud.delete_stack(self.stack_id, wait=True)) self.assert_calls() def test_delete_stack_wait_failed(self): failed_stack = self.stack.copy() failed_stack['stack_status'] = 'DELETE_FAILED' marker_event = fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE') marker_qs = 
'marker={e_id}&sort_dir=asc'.format( e_id=marker_event['id']) self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs='limit=1&sort_dir=desc'), complete_qs=True, json={"events": [marker_event]}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id)), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs=marker_qs), complete_qs=True, json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='DELETE_COMPLETE'), ]}), dict(method='GET', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": failed_stack}), ]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudException): self.cloud.delete_stack(self.stack_id, wait=True) self.assert_calls() def test_create_stack(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='POST', uri='{endpoint}/stacks'.format( 
endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stack": self.stack}, validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'stack_name': self.stack_name, 'tags': self.stack_tag, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60} )), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.create_stack( self.stack_name, tags=self.stack_tag, template_file=test_template.name ) self.assert_calls() def test_create_stack_wait(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='POST', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stack": self.stack}, validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'stack_name': self.stack_name, 'tags': self.stack_tag, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60} )), dict( method='GET', uri='{endpoint}/stacks/{name}/events?sort_dir=asc'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE', resource_name='name'), ]}), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', 
uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.create_stack( self.stack_name, tags=self.stack_tag, template_file=test_template.name, wait=True) self.assert_calls() def test_update_stack(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='PUT', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60})), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.update_stack( self.stack_name, template_file=test_template.name) self.assert_calls() def test_update_stack_wait(self): marker_event = fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE', resource_name='name') marker_qs = 'marker={e_id}&sort_dir=asc'.format( e_id=marker_event['id']) test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='GET', uri='{endpoint}/stacks/{name}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name, qs='limit=1&sort_dir=desc'), json={"events": [marker_event]}), dict( method='PUT', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, 
name=self.stack_name), validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60})), dict( method='GET', uri='{endpoint}/stacks/{name}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name, qs=marker_qs), json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='UPDATE_COMPLETE', resource_name='name'), ]}), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.update_stack( self.stack_name, template_file=test_template.name, wait=True) self.assert_calls() def test_get_stack(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) res = self.cloud.get_stack(self.stack_name) self.assertIsNotNone(res) self.assertEqual(self.stack['stack_name'], res['stack_name']) self.assertEqual(self.stack['stack_name'], res['name']) self.assertEqual(self.stack['stack_status'], res['stack_status']) self.assertEqual('COMPLETE', res['status']) self.assert_calls() def test_get_stack_in_progress(self): in_progress = self.stack.copy() in_progress['stack_status'] = 'CREATE_IN_PROGRESS' self.register_uris([ 
dict(method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": in_progress}), ]) res = self.cloud.get_stack(self.stack_name) self.assertIsNotNone(res) self.assertEqual(in_progress['stack_name'], res['stack_name']) self.assertEqual(in_progress['stack_name'], res['name']) self.assertEqual(in_progress['stack_status'], res['stack_status']) self.assertEqual('CREATE', res['action']) self.assertEqual('IN_PROGRESS', res['status']) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_qos_minimum_bandwidth_rule.py0000666000175100017510000003017313236151340030554 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import copy from openstack.cloud import exc from openstack.tests.unit import base class TestQosMinimumBandwidthRule(base.RequestsMockTestCase): policy_name = 'qos test policy' policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486' project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9' rule_id = 'ed1a2b05-0ad7-45d7-873f-008b575a02b3' rule_min_kbps = 1000 mock_policy = { 'id': policy_id, 'name': policy_name, 'description': '', 'rules': [], 'project_id': project_id, 'tenant_id': project_id, 'shared': False, 'is_default': False } mock_rule = { 'id': rule_id, 'min_kbps': rule_min_kbps, 'direction': 'egress' } qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." } enabled_neutron_extensions = [qos_extension] def test_get_qos_minimum_bandwidth_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={'minimum_bandwidth_rule': self.mock_rule}) ]) r = self.cloud.get_qos_minimum_bandwidth_rule(self.policy_name, self.rule_id) self.assertDictEqual(self.mock_rule, r) self.assert_calls() def test_get_qos_minimum_bandwidth_rule_no_qos_policy_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 
'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertRaises( exc.OpenStackCloudResourceNotFound, self.cloud.get_qos_minimum_bandwidth_rule, self.policy_name, self.rule_id) self.assert_calls() def test_get_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_minimum_bandwidth_rule, self.policy_name, self.rule_id) self.assert_calls() def test_create_qos_minimum_bandwidth_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules']), json={'minimum_bandwidth_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_minimum_bandwidth_rule( self.policy_name, min_kbps=self.rule_min_kbps) self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_create_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_minimum_bandwidth_rule, self.policy_name, min_kbps=100) self.assert_calls() def 
test_update_qos_minimum_bandwidth_rule(self): expected_rule = copy.copy(self.mock_rule) expected_rule['min_kbps'] = self.rule_min_kbps + 100 self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={'minimum_bandwidth_rule': self.mock_rule}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={'minimum_bandwidth_rule': expected_rule}, validate=dict( json={'minimum_bandwidth_rule': { 'min_kbps': self.rule_min_kbps + 100}})) ]) rule = self.cloud.update_qos_minimum_bandwidth_rule( self.policy_id, self.rule_id, min_kbps=self.rule_min_kbps + 100) self.assertDictEqual(expected_rule, rule) self.assert_calls() def test_update_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) 
self.assertRaises( exc.OpenStackCloudException, self.cloud.update_qos_minimum_bandwidth_rule, self.policy_id, self.rule_id, min_kbps=2000) self.assert_calls() def test_delete_qos_minimum_bandwidth_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={}) ]) self.assertTrue( self.cloud.delete_qos_minimum_bandwidth_rule( self.policy_name, self.rule_id)) self.assert_calls() def test_delete_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_qos_minimum_bandwidth_rule, self.policy_name, self.rule_id) self.assert_calls() def test_delete_qos_minimum_bandwidth_rule_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 
'minimum_bandwidth_rules', '%s.json' % self.rule_id]), status_code=404) ]) self.assertFalse( self.cloud.delete_qos_minimum_bandwidth_rule( self.policy_name, self.rule_id)) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_keypair.py0000666000175100017510000000724513236151340024614 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from openstack.cloud import exc from openstack.tests import fakes from openstack.tests.unit import base class TestKeypair(base.RequestsMockTestCase): def setUp(self): super(TestKeypair, self).setUp() self.keyname = self.getUniqueString('key') self.key = fakes.make_fake_keypair(self.keyname) def test_create_keypair(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), json={'keypair': self.key}, validate=dict(json={ 'keypair': { 'name': self.key['name'], 'public_key': self.key['public_key']}})), ]) new_key = self.cloud.create_keypair( self.keyname, self.key['public_key']) self.assertEqual(new_key, self.cloud._normalize_keypair(self.key)) self.assert_calls() def test_create_keypair_exception(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), status_code=400, validate=dict(json={ 'keypair': { 'name': self.key['name'], 'public_key': self.key['public_key']}})), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_keypair, self.keyname, self.key['public_key']) 
self.assert_calls() def test_delete_keypair(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs', self.keyname]), status_code=202), ]) self.assertTrue(self.cloud.delete_keypair(self.keyname)) self.assert_calls() def test_delete_keypair_not_found(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs', self.keyname]), status_code=404), ]) self.assertFalse(self.cloud.delete_keypair(self.keyname)) self.assert_calls() def test_list_keypairs(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), json={'keypairs': [{'keypair': self.key}]}), ]) keypairs = self.cloud.list_keypairs() self.assertEqual(keypairs, self.cloud._normalize_keypairs([self.key])) self.assert_calls() def test_list_keypairs_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), status_code=400), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.list_keypairs) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_server_delete_metadata.py0000666000175100017510000000505513236151340027635 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_server_delete_metadata ---------------------------------- Tests for the `delete_server_metadata` command. 
""" import uuid from openstack.cloud.exc import OpenStackCloudURINotFound from openstack.tests import fakes from openstack.tests.unit import base class TestServerDeleteMetadata(base.RequestsMockTestCase): def setUp(self): super(TestServerDeleteMetadata, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_server_delete_metadata_with_exception(self): """ Test that a missing metadata throws an exception. """ self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata', 'key']), status_code=404), ]) self.assertRaises( OpenStackCloudURINotFound, self.cloud.delete_server_metadata, self.server_name, ['key']) self.assert_calls() def test_server_delete_metadata(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata', 'key']), status_code=200), ]) self.cloud.delete_server_metadata(self.server_id, ['key']) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_shade.py0000666000175100017510000003747213236151340024241 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock import uuid import testtools import openstack.cloud from openstack.cloud import exc from openstack.tests import fakes from openstack.tests.unit import base from openstack import utils RANGE_DATA = [ dict(id=1, key1=1, key2=5), dict(id=2, key1=1, key2=20), dict(id=3, key1=2, key2=10), dict(id=4, key1=2, key2=30), dict(id=5, key1=3, key2=40), dict(id=6, key1=3, key2=40), ] class TestShade(base.RequestsMockTestCase): def setUp(self): # This set of tests is not testing neutron; it's testing # rebuilding servers, but we do several network calls in service # of a NORMAL rebuild to find the default_network. Putting # in all of the neutron mocks for that will make the tests harder # to read. SO - we're going to mock neutron into the off position # and then turn it back on in the few tests that specifically need it. # Maybe we should reorg these into two classes - one with neutron # mocked out - and one with it not mocked out. super(TestShade, self).setUp() self.has_neutron = False def fake_has_service(*args, **kwargs): return self.has_neutron self.cloud.has_service = fake_has_service def test_openstack_cloud(self): self.assertIsInstance(self.cloud, openstack.cloud.OpenStackCloud) @mock.patch.object(openstack.cloud.OpenStackCloud, 'search_images') def test_get_images(self, mock_search): image1 = dict(id='123', name='mickey') mock_search.return_value = [image1] r = self.cloud.get_image('mickey') self.assertIsNotNone(r) self.assertDictEqual(image1, r) @mock.patch.object(openstack.cloud.OpenStackCloud, 'search_images') def test_get_image_not_found(self, mock_search): mock_search.return_value = [] r = self.cloud.get_image('doesNotExist') self.assertIsNone(r) def test_get_server(self): server1 = fakes.make_fake_server('123', 'mickey') server2 = fakes.make_fake_server('345', 'mouse') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', 
append=['servers', 'detail']), json={'servers': [server1, server2]}), ]) r = self.cloud.get_server('mickey') self.assertIsNotNone(r) self.assertEqual(server1['name'], r['name']) self.assert_calls() def test_get_server_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) r = self.cloud.get_server('doesNotExist') self.assertIsNone(r) self.assert_calls() def test_list_servers_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), status_code=400) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.list_servers) self.assert_calls() def test__neutron_exceptions_resource_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), status_code=404) ]) self.assertRaises(exc.OpenStackCloudResourceNotFound, self.cloud.list_networks) self.assert_calls() def test__neutron_exceptions_url_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), status_code=404) ]) self.assertRaises(exc.OpenStackCloudURINotFound, self.cloud.list_networks) self.assert_calls() def test_list_servers(self): server_id = str(uuid.uuid4()) server_name = self.getUniqueString('name') fake_server = fakes.make_fake_server(server_id, server_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), ]) r = self.cloud.list_servers() self.assertEqual(1, len(r)) self.assertEqual(server_name, r[0]['name']) self.assert_calls() def test_list_servers_all_projects(self): '''This test verifies that when list_servers is called with `all_projects=True` that it passes `all_tenants=True` to nova.''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', 
append=['servers', 'detail'], qs_elements=['all_tenants=True']), complete_qs=True, json={'servers': []}), ]) self.cloud.list_servers(all_projects=True) self.assert_calls() def test_list_servers_filters(self): '''This test verifies that when list_servers is called with `filters` dict that it passes it to nova.''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail'], qs_elements=[ 'deleted=True', 'changes-since=2014-12-03T00:00:00Z' ]), complete_qs=True, json={'servers': []}), ]) self.cloud.list_servers(filters={ 'deleted': True, 'changes-since': '2014-12-03T00:00:00Z' }) self.assert_calls() def test_iterate_timeout_bad_wait(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Wait value must be an int or float value."): for count in utils.iterate_timeout( 1, "test_iterate_timeout_bad_wait", wait="timeishard"): pass @mock.patch('time.sleep') def test_iterate_timeout_str_wait(self, mock_sleep): iter = utils.iterate_timeout( 10, "test_iterate_timeout_str_wait", wait="1.6") next(iter) next(iter) mock_sleep.assert_called_with(1.6) @mock.patch('time.sleep') def test_iterate_timeout_int_wait(self, mock_sleep): iter = utils.iterate_timeout( 10, "test_iterate_timeout_int_wait", wait=1) next(iter) next(iter) mock_sleep.assert_called_with(1.0) @mock.patch('time.sleep') def test_iterate_timeout_timeout(self, mock_sleep): message = "timeout test" with testtools.ExpectedException( exc.OpenStackCloudTimeout, message): for count in utils.iterate_timeout(0.1, message, wait=1): pass mock_sleep.assert_called_with(1.0) def test__nova_extensions(self): body = [ { "updated": "2014-12-03T00:00:00Z", "name": "Multinic", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "NMN", "description": "Multiple network support." 
}, { "updated": "2014-12-03T00:00:00Z", "name": "DiskConfig", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "OS-DCF", "description": "Disk Management Extension." }, ] self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=dict(extensions=body)) ]) extensions = self.cloud._nova_extensions() self.assertEqual(set(['NMN', 'OS-DCF']), extensions) self.assert_calls() def test__nova_extensions_fails(self): self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), status_code=404), ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, "Error fetching extension list for nova" ): self.cloud._nova_extensions() self.assert_calls() def test__has_nova_extension(self): body = [ { "updated": "2014-12-03T00:00:00Z", "name": "Multinic", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "NMN", "description": "Multiple network support." }, { "updated": "2014-12-03T00:00:00Z", "name": "DiskConfig", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "OS-DCF", "description": "Disk Management Extension." }, ] self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=dict(extensions=body)) ]) self.assertTrue(self.cloud._has_nova_extension('NMN')) self.assert_calls() def test__has_nova_extension_missing(self): body = [ { "updated": "2014-12-03T00:00:00Z", "name": "Multinic", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "NMN", "description": "Multiple network support." }, { "updated": "2014-12-03T00:00:00Z", "name": "DiskConfig", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "OS-DCF", "description": "Disk Management Extension." 
}, ] self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=dict(extensions=body)) ]) self.assertFalse(self.cloud._has_nova_extension('invalid')) self.assert_calls() def test__neutron_extensions(self): body = [ { "updated": "2014-06-1T10:00:00-00:00", "name": "Distributed Virtual Router", "links": [], "alias": "dvr", "description": "Enables configuration of Distributed Virtual Routers." }, { "updated": "2013-07-23T10:00:00-00:00", "name": "Allowed Address Pairs", "links": [], "alias": "allowed-address-pairs", "description": "Provides allowed address pairs" }, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json=dict(extensions=body)) ]) extensions = self.cloud._neutron_extensions() self.assertEqual(set(['dvr', 'allowed-address-pairs']), extensions) self.assert_calls() def test__neutron_extensions_fails(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), status_code=404) ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, "Error fetching extension list for neutron" ): self.cloud._neutron_extensions() self.assert_calls() def test__has_neutron_extension(self): body = [ { "updated": "2014-06-1T10:00:00-00:00", "name": "Distributed Virtual Router", "links": [], "alias": "dvr", "description": "Enables configuration of Distributed Virtual Routers." 
}, { "updated": "2013-07-23T10:00:00-00:00", "name": "Allowed Address Pairs", "links": [], "alias": "allowed-address-pairs", "description": "Provides allowed address pairs" }, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json=dict(extensions=body)) ]) self.assertTrue(self.cloud._has_neutron_extension('dvr')) self.assert_calls() def test__has_neutron_extension_missing(self): body = [ { "updated": "2014-06-1T10:00:00-00:00", "name": "Distributed Virtual Router", "links": [], "alias": "dvr", "description": "Enables configuration of Distributed Virtual Routers." }, { "updated": "2013-07-23T10:00:00-00:00", "name": "Allowed Address Pairs", "links": [], "alias": "allowed-address-pairs", "description": "Provides allowed address pairs" }, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json=dict(extensions=body)) ]) self.assertFalse(self.cloud._has_neutron_extension('invalid')) self.assert_calls() def test_range_search(self): filters = {"key1": "min", "key2": "20"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(1, len(retval)) self.assertEqual([RANGE_DATA[1]], retval) def test_range_search_2(self): filters = {"key1": "<=2", "key2": ">10"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual([RANGE_DATA[1], RANGE_DATA[3]], retval) def test_range_search_3(self): filters = {"key1": "2", "key2": "min"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(0, len(retval)) def test_range_search_4(self): filters = {"key1": "max", "key2": "min"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(0, len(retval)) def test_range_search_5(self): filters = {"key1": "min", "key2": 
"min"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(1, len(retval)) self.assertEqual([RANGE_DATA[0]], retval) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_meta.py0000666000175100017510000012101513236151364024074 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import openstack.cloud from openstack.cloud import meta from openstack.tests import fakes from openstack.tests.unit import base PRIVATE_V4 = '198.51.100.3' PUBLIC_V4 = '192.0.2.99' PUBLIC_V6 = '2001:0db8:face:0da0:face::0b00:1c' # rfc3849 class FakeCloud(object): region_name = 'test-region' name = 'test-name' private = False force_ipv4 = False service_val = True _unused = "useless" _local_ipv6 = True def get_flavor_name(self, id): return 'test-flavor-name' def get_image_name(self, id): return 'test-image-name' def get_volumes(self, server): return [] def has_service(self, service_name): return self.service_val def use_internal_network(self): return True def use_external_network(self): return True def get_internal_networks(self): return [] def get_external_networks(self): return [] def get_internal_ipv4_networks(self): return [] def get_external_ipv4_networks(self): return [] def get_internal_ipv6_networks(self): return [] def get_external_ipv6_networks(self): return [] def list_server_security_groups(self, server): return [] def get_default_network(self): 
return None standard_fake_server = fakes.make_fake_server( server_id='test-id-0', name='test-id-0', status='ACTIVE', addresses={'private': [{'OS-EXT-IPS:type': 'fixed', 'addr': PRIVATE_V4, 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'addr': PUBLIC_V4, 'version': 4}]}, flavor={'id': '101'}, image={'id': '471c2475-da2f-47ac-aba5-cb4aa3d546f5'}, ) standard_fake_server['metadata'] = {'group': 'test-group'} SUBNETS_WITH_NAT = [ { u'name': u'', u'enable_dhcp': True, u'network_id': u'5ef0358f-9403-4f7b-9151-376ca112abf7', u'tenant_id': u'29c79f394b2946f1a0f8446d715dc301', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [ { u'start': u'10.10.10.2', u'end': u'10.10.10.254' } ], u'gateway_ip': u'10.10.10.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.10.10.0/24', u'id': u'14025a85-436e-4418-b0ee-f5b12a50f9b4' }, ] OSIC_NETWORKS = [ { u'admin_state_up': True, u'id': u'7004a83a-13d3-4dcd-8cf5-52af1ace4cae', u'mtu': 0, u'name': u'GATEWAY_NET', u'router:external': True, u'shared': True, u'status': u'ACTIVE', u'subnets': [u'cf785ee0-6cc9-4712-be3d-0bf6c86cf455'], u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' }, { u'admin_state_up': True, u'id': u'405abfcc-77dc-49b2-a271-139619ac9b26', u'mtu': 0, u'name': u'openstackjenkins-network1', u'router:external': False, u'shared': False, u'status': u'ACTIVE', u'subnets': [u'a47910bc-f649-45db-98ec-e2421c413f4e'], u'tenant_id': u'7e9c4d5842b3451d94417bd0af03a0f4' }, { u'admin_state_up': True, u'id': u'54753d2c-0a58-4928-9b32-084c59dd20a6', u'mtu': 0, u'name': u'GATEWAY_NET_V6', u'router:external': True, u'shared': True, u'status': u'ACTIVE', u'subnets': [u'9c21d704-a8b9-409a-b56d-501cb518d380', u'7cb0ce07-64c3-4a3d-92d3-6f11419b45b9'], u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' } ] OSIC_SUBNETS = [ { u'allocation_pools': [{ u'end': u'172.99.106.254', u'start': u'172.99.106.5'}], u'cidr': u'172.99.106.0/24', u'dns_nameservers': [u'69.20.0.164', 
u'69.20.0.196'], u'enable_dhcp': True, u'gateway_ip': u'172.99.106.1', u'host_routes': [], u'id': u'cf785ee0-6cc9-4712-be3d-0bf6c86cf455', u'ip_version': 4, u'ipv6_address_mode': None, u'ipv6_ra_mode': None, u'name': u'GATEWAY_NET', u'network_id': u'7004a83a-13d3-4dcd-8cf5-52af1ace4cae', u'subnetpool_id': None, u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' }, { u'allocation_pools': [{ u'end': u'10.0.1.254', u'start': u'10.0.1.2'}], u'cidr': u'10.0.1.0/24', u'dns_nameservers': [u'8.8.8.8', u'8.8.4.4'], u'enable_dhcp': True, u'gateway_ip': u'10.0.1.1', u'host_routes': [], u'id': u'a47910bc-f649-45db-98ec-e2421c413f4e', u'ip_version': 4, u'ipv6_address_mode': None, u'ipv6_ra_mode': None, u'name': u'openstackjenkins-subnet1', u'network_id': u'405abfcc-77dc-49b2-a271-139619ac9b26', u'subnetpool_id': None, u'tenant_id': u'7e9c4d5842b3451d94417bd0af03a0f4' }, { u'allocation_pools': [{ u'end': u'10.255.255.254', u'start': u'10.0.0.2'}], u'cidr': u'10.0.0.0/8', u'dns_nameservers': [u'8.8.8.8', u'8.8.4.4'], u'enable_dhcp': True, u'gateway_ip': u'10.0.0.1', u'host_routes': [], u'id': u'9c21d704-a8b9-409a-b56d-501cb518d380', u'ip_version': 4, u'ipv6_address_mode': None, u'ipv6_ra_mode': None, u'name': u'GATEWAY_SUBNET_V6V4', u'network_id': u'54753d2c-0a58-4928-9b32-084c59dd20a6', u'subnetpool_id': None, u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' }, { u'allocation_pools': [{ u'end': u'2001:4800:1ae1:18:ffff:ffff:ffff:ffff', u'start': u'2001:4800:1ae1:18::2'}], u'cidr': u'2001:4800:1ae1:18::/64', u'dns_nameservers': [u'2001:4860:4860::8888'], u'enable_dhcp': True, u'gateway_ip': u'2001:4800:1ae1:18::1', u'host_routes': [], u'id': u'7cb0ce07-64c3-4a3d-92d3-6f11419b45b9', u'ip_version': 6, u'ipv6_address_mode': u'dhcpv6-stateless', u'ipv6_ra_mode': None, u'name': u'GATEWAY_SUBNET_V6V6', u'network_id': u'54753d2c-0a58-4928-9b32-084c59dd20a6', u'subnetpool_id': None, u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' } ] class TestMeta(base.RequestsMockTestCase): def 
test_find_nova_addresses_key_name(self): # Note 198.51.100.0/24 is TEST-NET-2 from rfc5737 addrs = {'public': [{'addr': '198.51.100.1', 'version': 4}], 'private': [{'addr': '192.0.2.5', 'version': 4}]} self.assertEqual( ['198.51.100.1'], meta.find_nova_addresses(addrs, key_name='public')) self.assertEqual([], meta.find_nova_addresses(addrs, key_name='foo')) def test_find_nova_addresses_ext_tag(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses(addrs, ext_tag='fixed')) self.assertEqual([], meta.find_nova_addresses(addrs, ext_tag='foo')) def test_find_nova_addresses_key_name_and_ext_tag(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed')) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='public', ext_tag='foo')) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='bar', ext_tag='fixed')) def test_find_nova_addresses_all(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed', version=4)) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed', version=6)) def test_find_nova_addresses_floating_first(self): # Note 198.51.100.0/24 is TEST-NET-2 from rfc5737 addrs = { 'private': [{ 'addr': '192.0.2.5', 'version': 4, 'OS-EXT-IPS:type': 'fixed'}], 'public': [{ 'addr': '198.51.100.1', 'version': 4, 'OS-EXT-IPS:type': 'floating'}]} self.assertEqual( ['198.51.100.1', '192.0.2.5'], meta.find_nova_addresses(addrs)) def test_get_server_ip(self): srv = meta.obj_to_munch(standard_fake_server) self.assertEqual( PRIVATE_V4, meta.get_server_ip(srv, ext_tag='fixed')) self.assertEqual( PUBLIC_V4, meta.get_server_ip(srv, 
ext_tag='floating')) def test_get_server_private_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net-name'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'private': [{'OS-EXT-IPS:type': 'fixed', 'addr': PRIVATE_V4, 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'addr': PUBLIC_V4, 'version': 4}]} ) self.assertEqual( PRIVATE_V4, meta.get_server_private_ip(srv, self.cloud)) self.assert_calls() def test_get_server_multiple_private_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) shared_mac = '11:22:33:44:55:66' distinct_mac = '66:55:44:33:22:11' srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': distinct_mac, 'addr': '10.0.0.100', 'version': 4}, {'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': shared_mac, 'addr': '10.0.0.101', 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'OS-EXT-IPS-MAC:mac_addr': shared_mac, 'addr': PUBLIC_V4, 'version': 4}]} ) self.assertEqual( '10.0.0.101', meta.get_server_private_ip(srv, self.cloud)) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'has_service') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_volumes') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_image_name') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_flavor_name') def test_get_server_private_ip_devstack( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes, 
mock_has_service): mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] mock_has_service.return_value = True self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/ports.json?' 'device_id=test-id'), json={'ports': [{ 'id': 'test_port_id', 'mac_address': 'fa:16:3e:ae:7d:42', 'device_id': 'test-id'}]} ), dict(method='GET', uri=('https://network.example.com/v2.0/' 'floatingips.json?port_id=test_port_id'), json={'floatingips': []}), dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ {'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False }, {'id': 'private', 'name': 'private'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': PRIVATE_V4, u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42' }]} )) self.assertEqual(PRIVATE_V4, srv['private_v4']) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_volumes') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_image_name') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_flavor_name') def test_get_server_private_ip_no_fip( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud._floating_ip_source = None mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ 
dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ {'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False, }, {'id': 'private', 'name': 'private'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': PRIVATE_V4, u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42' }]} )) self.assertEqual(PRIVATE_V4, srv['private_v4']) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_volumes') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_image_name') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_no_fips( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud._floating_ip_source = None mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ { 'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False, }, { 'id': 'private', 'name': 'private'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', 
name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'addr': PRIVATE_V4, u'version': 4, }]} )) self.assertEqual(PRIVATE_V4, srv['private_v4']) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'has_service') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_volumes') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_image_name') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_missing_fips( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes, mock_has_service): mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] mock_has_service.return_value = True self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/ports.json?' 'device_id=test-id'), json={'ports': [{ 'id': 'test_port_id', 'mac_address': 'fa:16:3e:ae:7d:42', 'device_id': 'test-id'}]} ), dict(method='GET', uri=('https://network.example.com/v2.0/floatingips.json' '?port_id=test_port_id'), json={'floatingips': [{ 'id': 'floating-ip-id', 'port_id': 'test_port_id', 'fixed_ip_address': PRIVATE_V4, 'floating_ip_address': PUBLIC_V4, }]}), dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ { 'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False, }, { 'id': 'private', 'name': 'private', } ]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', 
u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'addr': PRIVATE_V4, u'version': 4, 'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:ae:7d:42', }]} )) self.assertEqual(PUBLIC_V4, srv['public_v4']) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_volumes') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_image_name') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_rackspace_v6( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud.cloud_config.config['has_network'] = False self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } )) self.assertEqual("10.223.160.141", srv['private_v4']) self.assertEqual("104.130.246.91", srv['public_v4']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['public_v6']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['interface_ip']) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_volumes') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_image_name') @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_osic_split( self, mock_get_flavor_name, 
mock_get_image_name, mock_get_volumes): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True self.cloud._external_ipv4_names = ['GATEWAY_NET'] self.cloud._external_ipv6_names = ['GATEWAY_NET_V6'] self.cloud._internal_ipv4_names = ['GATEWAY_NET_V6'] self.cloud._internal_ipv6_names = [] mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': OSIC_NETWORKS}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': OSIC_SUBNETS}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } )) self.assertEqual("10.223.160.141", srv['private_v4']) self.assertEqual("104.130.246.91", srv['public_v4']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['public_v6']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['interface_ip']) self.assert_calls() def test_get_server_external_ipv4_neutron(self): # Testing Clouds with Neutron self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net', 'router:external': True }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', 
            name='test-name',
            status='ACTIVE',
            addresses={'test-net': [{
                'addr': PUBLIC_V4, 'version': 4}]},
        )
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertEqual(PUBLIC_V4, ip)
        self.assert_calls()

    def test_get_server_external_provider_ipv4_neutron(self):
        # Testing Clouds with Neutron
        self.register_uris([
            dict(method='GET',
                 uri='https://network.example.com/v2.0/networks.json',
                 json={'networks': [{
                     'id': 'test-net-id',
                     'name': 'test-net',
                     'provider:network_type': 'vlan',
                     'provider:physical_network': 'vlan',
                 }]}),
            dict(method='GET',
                 uri='https://network.example.com/v2.0/subnets.json',
                 json={'subnets': SUBNETS_WITH_NAT})
        ])
        srv = fakes.make_fake_server(
            server_id='test-id',
            name='test-name',
            status='ACTIVE',
            addresses={'test-net': [{
                'addr': PUBLIC_V4, 'version': 4}]},
        )
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertEqual(PUBLIC_V4, ip)
        self.assert_calls()

    def test_get_server_internal_provider_ipv4_neutron(self):
        # Testing Clouds with Neutron
        self.register_uris([
            dict(method='GET',
                 uri='https://network.example.com/v2.0/networks.json',
                 json={'networks': [{
                     'id': 'test-net-id',
                     'name': 'test-net',
                     'router:external': False,
                     'provider:network_type': 'vxlan',
                     'provider:physical_network': None,
                 }]}),
            dict(method='GET',
                 uri='https://network.example.com/v2.0/subnets.json',
                 json={'subnets': SUBNETS_WITH_NAT})
        ])
        srv = fakes.make_fake_server(
            server_id='test-id',
            name='test-name',
            status='ACTIVE',
            addresses={'test-net': [{
                'addr': PRIVATE_V4, 'version': 4}]},
        )
        self.assertIsNone(
            meta.get_server_external_ipv4(cloud=self.cloud, server=srv))
        int_ip = meta.get_server_private_ip(cloud=self.cloud, server=srv)
        self.assertEqual(PRIVATE_V4, int_ip)
        self.assert_calls()

    def test_get_server_external_none_ipv4_neutron(self):
        # Testing Clouds with Neutron
        self.register_uris([
            dict(method='GET',
                 uri='https://network.example.com/v2.0/networks.json',
                 json={'networks': [{
                     'id': 'test-net-id',
                     'name': 'test-net',
                     'router:external': False,
                 }]}),
            dict(method='GET',
                 uri='https://network.example.com/v2.0/subnets.json',
                 json={'subnets': SUBNETS_WITH_NAT})
        ])
        srv = fakes.make_fake_server(
            server_id='test-id',
            name='test-name',
            status='ACTIVE',
            addresses={'test-net': [{
                'addr': PUBLIC_V4, 'version': 4}]},
        )
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertIsNone(ip)
        self.assert_calls()

    def test_get_server_external_ipv4_neutron_accessIPv4(self):
        srv = fakes.make_fake_server(
            server_id='test-id', name='test-name', status='ACTIVE')
        srv['accessIPv4'] = PUBLIC_V4
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertEqual(PUBLIC_V4, ip)

    def test_get_server_external_ipv4_neutron_accessIPv6(self):
        srv = fakes.make_fake_server(
            server_id='test-id', name='test-name', status='ACTIVE')
        srv['accessIPv6'] = PUBLIC_V6
        ip = meta.get_server_external_ipv6(server=srv)
        self.assertEqual(PUBLIC_V6, ip)

    def test_get_server_external_ipv4_neutron_exception(self):
        # Testing Clouds with a non-working Neutron
        self.register_uris([
            dict(method='GET',
                 uri='https://network.example.com/v2.0/networks.json',
                 status_code=404)])
        srv = fakes.make_fake_server(
            server_id='test-id', name='test-name', status='ACTIVE',
            addresses={'public': [{'addr': PUBLIC_V4, 'version': 4}]}
        )
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertEqual(PUBLIC_V4, ip)
        self.assert_calls()

    def test_get_server_external_ipv4_nova_public(self):
        # Testing Clouds w/o Neutron and a network named public
        self.cloud.cloud_config.config['has_network'] = False
        srv = fakes.make_fake_server(
            server_id='test-id', name='test-name', status='ACTIVE',
            addresses={'public': [{'addr': PUBLIC_V4, 'version': 4}]})
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertEqual(PUBLIC_V4, ip)

    def test_get_server_external_ipv4_nova_none(self):
        # Testing Clouds w/o Neutron or a globally routable IP
        self.cloud.cloud_config.config['has_network'] = False
        srv = fakes.make_fake_server(
            server_id='test-id', name='test-name', status='ACTIVE',
            addresses={'test-net': [{'addr': PRIVATE_V4}]})
        ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertIsNone(ip)

    def test_get_server_external_ipv6(self):
        srv = fakes.make_fake_server(
            server_id='test-id', name='test-name', status='ACTIVE',
            addresses={
                'test-net': [
                    {'addr': PUBLIC_V4, 'version': 4},
                    {'addr': PUBLIC_V6, 'version': 6}
                ]
            }
        )
        ip = meta.get_server_external_ipv6(srv)
        self.assertEqual(PUBLIC_V6, ip)

    def test_get_groups_from_server(self):
        server_vars = {'flavor': 'test-flavor',
                       'image': 'test-image',
                       'az': 'test-az'}
        self.assertEqual(
            ['test-name',
             'test-region',
             'test-name_test-region',
             'test-group',
             'instance-test-id-0',
             'meta-group_test-group',
             'test-az',
             'test-region_test-az',
             'test-name_test-region_test-az'],
            meta.get_groups_from_server(
                FakeCloud(),
                meta.obj_to_munch(standard_fake_server),
                server_vars
            )
        )

    def test_obj_list_to_munch(self):
        """Test conversion of a list of objects to a list of dictionaries"""
        class obj0(object):
            value = 0

        class obj1(object):
            value = 1

        list = [obj0, obj1]
        new_list = meta.obj_list_to_munch(list)
        self.assertEqual(new_list[0]['value'], 0)
        self.assertEqual(new_list[1]['value'], 1)

    @mock.patch.object(FakeCloud, 'list_server_security_groups')
    def test_get_security_groups(self, mock_list_server_security_groups):
        '''This test verifies that calling get_hostvars_from_server
        ultimately calls list_server_security_groups, and that the return
        value from list_server_security_groups ends up in
        server['security_groups'].'''
        mock_list_server_security_groups.return_value = [
            {'name': 'testgroup', 'id': '1'}]
        server = meta.obj_to_munch(standard_fake_server)

        hostvars = meta.get_hostvars_from_server(FakeCloud(), server)
        mock_list_server_security_groups.assert_called_once_with(server)
        self.assertEqual('testgroup',
                         hostvars['security_groups'][0]['name'])

    @mock.patch.object(openstack.cloud.meta, 'get_server_external_ipv6')
    @mock.patch.object(openstack.cloud.meta, 'get_server_external_ipv4')
    def test_basic_hostvars(
            self, mock_get_server_external_ipv4,
            mock_get_server_external_ipv6):
        mock_get_server_external_ipv4.return_value = PUBLIC_V4
        mock_get_server_external_ipv6.return_value = PUBLIC_V6

        hostvars = meta.get_hostvars_from_server(
            FakeCloud(),
            self.cloud._normalize_server(
                meta.obj_to_munch(standard_fake_server)))
        self.assertNotIn('links', hostvars)
        self.assertEqual(PRIVATE_V4, hostvars['private_v4'])
        self.assertEqual(PUBLIC_V4, hostvars['public_v4'])
        self.assertEqual(PUBLIC_V6, hostvars['public_v6'])
        self.assertEqual(PUBLIC_V6, hostvars['interface_ip'])
        self.assertEqual('RegionOne', hostvars['region'])
        self.assertEqual('_test_cloud_', hostvars['cloud'])
        self.assertIn('location', hostvars)
        self.assertEqual('_test_cloud_', hostvars['location']['cloud'])
        self.assertEqual('RegionOne', hostvars['location']['region_name'])
        self.assertEqual('admin', hostvars['location']['project']['name'])
        self.assertEqual("test-image-name", hostvars['image']['name'])
        self.assertEqual(
            standard_fake_server['image']['id'], hostvars['image']['id'])
        self.assertNotIn('links', hostvars['image'])
        self.assertEqual(
            standard_fake_server['flavor']['id'], hostvars['flavor']['id'])
        self.assertEqual("test-flavor-name", hostvars['flavor']['name'])
        self.assertNotIn('links', hostvars['flavor'])
        # test having volumes
        # test volume exception
        self.assertEqual([], hostvars['volumes'])

    @mock.patch.object(openstack.cloud.meta, 'get_server_external_ipv6')
    @mock.patch.object(openstack.cloud.meta, 'get_server_external_ipv4')
    def test_ipv4_hostvars(
            self, mock_get_server_external_ipv4,
            mock_get_server_external_ipv6):
        mock_get_server_external_ipv4.return_value = PUBLIC_V4
        mock_get_server_external_ipv6.return_value = PUBLIC_V6

        fake_cloud = FakeCloud()
        fake_cloud.force_ipv4 = True
        hostvars = meta.get_hostvars_from_server(
            fake_cloud, meta.obj_to_munch(standard_fake_server))
        self.assertEqual(PUBLIC_V4, hostvars['interface_ip'])

    @mock.patch.object(openstack.cloud.meta, 'get_server_external_ipv4')
    def test_private_interface_ip(self, mock_get_server_external_ipv4):
        mock_get_server_external_ipv4.return_value = PUBLIC_V4

        cloud = FakeCloud()
        cloud.private = True
        hostvars = meta.get_hostvars_from_server(
            cloud, meta.obj_to_munch(standard_fake_server))
        self.assertEqual(PRIVATE_V4, hostvars['interface_ip'])

    @mock.patch.object(openstack.cloud.meta, 'get_server_external_ipv4')
    def test_image_string(self, mock_get_server_external_ipv4):
        mock_get_server_external_ipv4.return_value = PUBLIC_V4

        server = standard_fake_server
        server['image'] = 'fake-image-id'
        hostvars = meta.get_hostvars_from_server(
            FakeCloud(), meta.obj_to_munch(server))
        self.assertEqual('fake-image-id', hostvars['image']['id'])

    def test_az(self):
        server = standard_fake_server
        server['OS-EXT-AZ:availability_zone'] = 'az1'
        hostvars = self.cloud._normalize_server(meta.obj_to_munch(server))
        self.assertEqual('az1', hostvars['az'])

    def test_current_location(self):
        self.assertEqual({
            'cloud': '_test_cloud_',
            'project': {
                'id': mock.ANY,
                'name': 'admin',
                'domain_id': None,
                'domain_name': 'default'
            },
            'region_name': u'RegionOne',
            'zone': None},
            self.cloud.current_location)

    def test_current_project(self):
        self.assertEqual({
            'id': mock.ANY,
            'name': 'admin',
            'domain_id': None,
            'domain_name': 'default'},
            self.cloud.current_project)

    def test_has_volume(self):
        mock_cloud = mock.MagicMock()

        fake_volume = fakes.FakeVolume(
            id='volume1', status='available',
            name='Volume 1 Display Name',
            attachments=[{'device': '/dev/sda0'}])
        fake_volume_dict = meta.obj_to_munch(fake_volume)
        mock_cloud.get_volumes.return_value = [fake_volume_dict]
        hostvars = meta.get_hostvars_from_server(
            mock_cloud, meta.obj_to_munch(standard_fake_server))
        self.assertEqual('volume1', hostvars['volumes'][0]['id'])
        self.assertEqual('/dev/sda0', hostvars['volumes'][0]['device'])

    def test_has_no_volume_service(self):
        fake_cloud = FakeCloud()
        fake_cloud.service_val = False
        hostvars = meta.get_hostvars_from_server(
            fake_cloud, meta.obj_to_munch(standard_fake_server))
        self.assertEqual([], hostvars['volumes'])

    def test_unknown_volume_exception(self):
        mock_cloud = mock.MagicMock()

        class FakeException(Exception):
            pass

        def side_effect(*args):
            raise FakeException("No Volumes")
        mock_cloud.get_volumes.side_effect = side_effect
        self.assertRaises(
            FakeException,
            meta.get_hostvars_from_server,
            mock_cloud,
            meta.obj_to_munch(standard_fake_server))

    def test_obj_to_munch(self):
        cloud = FakeCloud()
        cloud.subcloud = FakeCloud()
        cloud_dict = meta.obj_to_munch(cloud)
        self.assertEqual(FakeCloud.name, cloud_dict['name'])
        self.assertNotIn('_unused', cloud_dict)
        self.assertNotIn('get_flavor_name', cloud_dict)
        self.assertNotIn('subcloud', cloud_dict)
        self.assertTrue(hasattr(cloud_dict, 'name'))
        self.assertEqual(cloud_dict.name, cloud_dict['name'])

    def test_obj_to_munch_subclass(self):
        class FakeObjDict(dict):
            additional = 1
        obj = FakeObjDict(foo='bar')
        obj_dict = meta.obj_to_munch(obj)
        self.assertIn('additional', obj_dict)
        self.assertIn('foo', obj_dict)
        self.assertEqual(obj_dict['additional'], 1)
        self.assertEqual(obj_dict['foo'], 'bar')
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_volume_backups.py0000666000175100017510000001177313236151340026170 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.cloud import meta
from openstack.tests.unit import base


class TestVolumeBackups(base.RequestsMockTestCase):
    def test_search_volume_backups(self):
        name = 'Volume1'
        vol1 = {'name': name, 'availability_zone': 'az1'}
        vol2 = {'name': name, 'availability_zone': 'az1'}
        vol3 = {'name': 'Volume2', 'availability_zone': 'az2'}
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": [vol1, vol2, vol3]})])
        result = self.cloud.search_volume_backups(
            name, {'availability_zone': 'az1'})
        self.assertEqual(len(result), 2)
        self.assertEqual(
            meta.obj_list_to_munch([vol1, vol2]),
            result)
        self.assert_calls()

    def test_get_volume_backup(self):
        name = 'Volume1'
        vol1 = {'name': name, 'availability_zone': 'az1'}
        vol2 = {'name': name, 'availability_zone': 'az2'}
        vol3 = {'name': 'Volume2', 'availability_zone': 'az1'}
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": [vol1, vol2, vol3]})])
        result = self.cloud.get_volume_backup(
            name, {'availability_zone': 'az1'})
        result = meta.obj_to_munch(result)
        self.assertEqual(
            meta.obj_to_munch(vol1),
            result)
        self.assert_calls()

    def test_list_volume_backups(self):
        backup = {'id': '6ff16bdf-44d5-4bf9-b0f3-687549c76414',
                  'status': 'available'}
        search_opts = {'status': 'available'}
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail'],
                     qs_elements=['='.join(i) for i in search_opts.items()]),
                 json={"backups": [backup]})])
        result = self.cloud.list_volume_backups(True, search_opts)
        self.assertEqual(len(result), 1)
        self.assertEqual(
            meta.obj_list_to_munch([backup]),
            result)
        self.assert_calls()

    def test_delete_volume_backup_wait(self):
        backup_id = '6ff16bdf-44d5-4bf9-b0f3-687549c76414'
        backup = {'id': backup_id}
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": [backup]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', backup_id])),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": [backup]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": []})])
        self.cloud.delete_volume_backup(backup_id, False, True, 1)
        self.assert_calls()

    def test_delete_volume_backup_force(self):
        backup_id = '6ff16bdf-44d5-4bf9-b0f3-687549c76414'
        backup = {'id': backup_id}
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": [backup]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', backup_id, 'action']),
                 json={'os-force_delete': {}},
                 validate=dict(json={u'os-force_delete': None})),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": [backup]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['backups', 'detail']),
                 json={"backups": []})
        ])
        self.cloud.delete_volume_backup(backup_id, True, True, 1)
        self.assert_calls()
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_security_groups.py0000666000175100017510000007267013236151340026412 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy

import openstack.cloud
from openstack.tests.unit import base
from openstack.tests import fakes


# TODO(mordred): Move id and name to using a getUniqueString() value

neutron_grp_dict = fakes.make_fake_neutron_security_group(
    id='1',
    name='neutron-sec-group',
    description='Test Neutron security group',
    rules=[
        dict(id='1', port_range_min=80, port_range_max=81,
             protocol='tcp', remote_ip_prefix='0.0.0.0/0')
    ]
)


nova_grp_dict = fakes.make_fake_nova_security_group(
    id='2',
    name='nova-sec-group',
    description='Test Nova security group #1',
    rules=[
        fakes.make_fake_nova_security_group_rule(
            id='2', from_port=8000, to_port=8001, ip_protocol='tcp',
            cidr='0.0.0.0/0'),
    ]
)


class TestSecurityGroups(base.RequestsMockTestCase):

    def setUp(self):
        super(TestSecurityGroups, self).setUp()
        self.has_neutron = True

        def fake_has_service(*args, **kwargs):
            return self.has_neutron
        self.cloud.has_service = fake_has_service

    def test_list_security_groups_neutron(self):
        project_id = 42
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json'],
                     qs_elements=["project_id=%s" % project_id]),
                 json={'security_groups': [neutron_grp_dict]})
        ])
        self.cloud.list_security_groups(filters={'project_id': project_id})
        self.assert_calls()

    def test_list_security_groups_nova(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups?project_id=42'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': []}),
        ])
        self.cloud.secgroup_source = 'nova'
        self.has_neutron = False
        self.cloud.list_security_groups(filters={'project_id': 42})
        self.assert_calls()

    def test_list_security_groups_none(self):
        self.cloud.secgroup_source = None
        self.has_neutron = False
        self.assertRaises(openstack.cloud.OpenStackCloudUnavailableFeature,
                          self.cloud.list_security_groups)

    def test_delete_security_group_neutron(self):
        sg_id = neutron_grp_dict['id']
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups', '%s.json' % sg_id]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_security_group('1'))
        self.assert_calls()

    def test_delete_security_group_nova(self):
        self.cloud.secgroup_source = 'nova'
        self.has_neutron = False
        nova_return = [nova_grp_dict]
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': nova_return}),
            dict(method='DELETE',
                 uri='{endpoint}/os-security-groups/2'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT)),
        ])
        self.cloud.delete_security_group('2')
        self.assert_calls()

    def test_delete_security_group_neutron_not_found(self):
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]})
        ])
        self.assertFalse(self.cloud.delete_security_group('10'))
        self.assert_calls()

    def test_delete_security_group_nova_not_found(self):
        self.cloud.secgroup_source = 'nova'
        self.has_neutron = False
        nova_return = [nova_grp_dict]
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': nova_return}),
        ])
        self.assertFalse(self.cloud.delete_security_group('doesNotExist'))

    def test_delete_security_group_none(self):
        self.cloud.secgroup_source = None
        self.assertRaises(openstack.cloud.OpenStackCloudUnavailableFeature,
                          self.cloud.delete_security_group,
                          'doesNotExist')

    def test_create_security_group_neutron(self):
        self.cloud.secgroup_source = 'neutron'
        group_name = self.getUniqueString()
        group_desc = self.getUniqueString('description')
        new_group = fakes.make_fake_neutron_security_group(
            id='2',
            name=group_name,
            description=group_desc,
            rules=[])
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_group': new_group},
                 validate=dict(
                     json={'security_group': {
                         'name': group_name,
                         'description': group_desc
                     }}))
        ])
        r = self.cloud.create_security_group(group_name, group_desc)
        self.assertEqual(group_name, r['name'])
        self.assertEqual(group_desc, r['description'])
        self.assert_calls()

    def test_create_security_group_neutron_specific_tenant(self):
        self.cloud.secgroup_source = 'neutron'
        project_id = "861808a93da0484ea1767967c4df8a23"
        group_name = self.getUniqueString()
        group_desc = 'security group from' \
                     ' test_create_security_group_neutron_specific_tenant'
        new_group = fakes.make_fake_neutron_security_group(
            id='2',
            name=group_name,
            description=group_desc,
            project_id=project_id,
            rules=[])
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_group': new_group},
                 validate=dict(
                     json={'security_group': {
                         'name': group_name,
                         'description': group_desc,
                         'tenant_id': project_id
                     }}))
        ])
        r = self.cloud.create_security_group(
            group_name,
            group_desc,
            project_id
        )
        self.assertEqual(group_name, r['name'])
        self.assertEqual(group_desc, r['description'])
        self.assertEqual(project_id, r['tenant_id'])
        self.assert_calls()

    def test_create_security_group_nova(self):
        group_name = self.getUniqueString()
        self.has_neutron = False
        group_desc = self.getUniqueString('description')
        new_group = fakes.make_fake_nova_security_group(
            id='2',
            name=group_name,
            description=group_desc,
            rules=[])
        self.register_uris([
            dict(method='POST',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_group': new_group},
                 validate=dict(json={
                     'security_group': {
                         'name': group_name,
                         'description': group_desc
                     }})),
        ])
        self.cloud.secgroup_source = 'nova'
        r = self.cloud.create_security_group(group_name, group_desc)
        self.assertEqual(group_name, r['name'])
        self.assertEqual(group_desc, r['description'])
        self.assert_calls()

    def test_create_security_group_none(self):
        self.cloud.secgroup_source = None
        self.has_neutron = False
        self.assertRaises(openstack.cloud.OpenStackCloudUnavailableFeature,
                          self.cloud.create_security_group,
                          '', '')

    def test_update_security_group_neutron(self):
        self.cloud.secgroup_source = 'neutron'
        new_name = self.getUniqueString()
        sg_id = neutron_grp_dict['id']
        update_return = neutron_grp_dict.copy()
        update_return['name'] = new_name
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]}),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups', '%s.json' % sg_id]),
                 json={'security_group': update_return},
                 validate=dict(json={
                     'security_group': {'name': new_name}}))
        ])
        r = self.cloud.update_security_group(sg_id, name=new_name)
        self.assertEqual(r['name'], new_name)
        self.assert_calls()

    def test_update_security_group_nova(self):
        self.has_neutron = False
        new_name = self.getUniqueString()
        self.cloud.secgroup_source = 'nova'
        nova_return = [nova_grp_dict]
        update_return = nova_grp_dict.copy()
        update_return['name'] = new_name
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': nova_return}),
            dict(method='PUT',
                 uri='{endpoint}/os-security-groups/2'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_group': update_return}),
        ])
        r = self.cloud.update_security_group(
            nova_grp_dict['id'], name=new_name)
        self.assertEqual(r['name'], new_name)
        self.assert_calls()

    def test_update_security_group_bad_kwarg(self):
        self.assertRaises(TypeError,
                          self.cloud.update_security_group,
                          'doesNotExist', bad_arg='')

    def test_create_security_group_rule_neutron(self):
        self.cloud.secgroup_source = 'neutron'
        args = dict(
            port_range_min=-1,
            port_range_max=40000,
            protocol='tcp',
            remote_ip_prefix='0.0.0.0/0',
            remote_group_id='456',
            direction='egress',
            ethertype='IPv6'
        )
        expected_args = copy.copy(args)
        # For neutron, -1 port should be converted to None
        expected_args['port_range_min'] = None
        expected_args['security_group_id'] = neutron_grp_dict['id']

        expected_new_rule = copy.copy(expected_args)
        expected_new_rule['id'] = '1234'
        expected_new_rule['tenant_id'] = ''
        expected_new_rule['project_id'] = expected_new_rule['tenant_id']

        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-group-rules.json']),
                 json={'security_group_rule': expected_new_rule},
                 validate=dict(json={
                     'security_group_rule': expected_args}))
        ])
        new_rule = self.cloud.create_security_group_rule(
            secgroup_name_or_id=neutron_grp_dict['id'], **args)
        # NOTE(slaweq): don't check location and properties in new rule
        new_rule.pop("location")
        new_rule.pop("properties")
        self.assertEqual(expected_new_rule, new_rule)
        self.assert_calls()

    def test_create_security_group_rule_neutron_specific_tenant(self):
        self.cloud.secgroup_source = 'neutron'
        args = dict(
            port_range_min=-1,
            port_range_max=40000,
            protocol='tcp',
            remote_ip_prefix='0.0.0.0/0',
            remote_group_id='456',
            direction='egress',
            ethertype='IPv6',
            project_id='861808a93da0484ea1767967c4df8a23'
        )
        expected_args = copy.copy(args)
        # For neutron, -1 port should be converted to None
        expected_args['port_range_min'] = None
        expected_args['security_group_id'] = neutron_grp_dict['id']
        expected_args['tenant_id'] = expected_args['project_id']
        expected_args.pop('project_id')

        expected_new_rule = copy.copy(expected_args)
        expected_new_rule['id'] = '1234'
        expected_new_rule['project_id'] = expected_new_rule['tenant_id']

        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-group-rules.json']),
                 json={'security_group_rule': expected_new_rule},
                 validate=dict(json={
                     'security_group_rule': expected_args}))
        ])
        new_rule = self.cloud.create_security_group_rule(
            secgroup_name_or_id=neutron_grp_dict['id'], **args)
        # NOTE(slaweq): don't check location and properties in new rule
        new_rule.pop("location")
        new_rule.pop("properties")
        self.assertEqual(expected_new_rule, new_rule)
        self.assert_calls()

    def test_create_security_group_rule_nova(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'

        nova_return = [nova_grp_dict]

        new_rule = fakes.make_fake_nova_security_group_rule(
            id='xyz', from_port=1, to_port=2000, ip_protocol='tcp',
            cidr='1.2.3.4/32')

        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': nova_return}),
            dict(method='POST',
                 uri='{endpoint}/os-security-group-rules'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_group_rule': new_rule},
                 validate=dict(json={
                     "security_group_rule": {
                         "from_port": 1,
                         "ip_protocol": "tcp",
                         "to_port": 2000,
                         "parent_group_id": "2",
                         "cidr": "1.2.3.4/32",
                         "group_id": "123"}})),
        ])
        self.cloud.create_security_group_rule(
            '2', port_range_min=1, port_range_max=2000, protocol='tcp',
            remote_ip_prefix='1.2.3.4/32', remote_group_id='123')
        self.assert_calls()

    def test_create_security_group_rule_nova_no_ports(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'

        new_rule = fakes.make_fake_nova_security_group_rule(
            id='xyz', from_port=1, to_port=65535, ip_protocol='tcp',
            cidr='1.2.3.4/32')

        nova_return = [nova_grp_dict]

        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': nova_return}),
            dict(method='POST',
                 uri='{endpoint}/os-security-group-rules'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_group_rule': new_rule},
                 validate=dict(json={
                     "security_group_rule": {
                         "from_port": 1,
                         "ip_protocol": "tcp",
                         "to_port": 65535,
                         "parent_group_id": "2",
                         "cidr": "1.2.3.4/32",
                         "group_id": "123"}})),
        ])
        self.cloud.create_security_group_rule(
            '2', protocol='tcp',
            remote_ip_prefix='1.2.3.4/32', remote_group_id='123')
        self.assert_calls()

    def test_create_security_group_rule_none(self):
        self.has_neutron = False
        self.cloud.secgroup_source = None
        self.assertRaises(openstack.cloud.OpenStackCloudUnavailableFeature,
                          self.cloud.create_security_group_rule,
                          '')

    def test_delete_security_group_rule_neutron(self):
        rule_id = "xyz"
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-group-rules',
                             '%s.json' % rule_id]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_security_group_rule(rule_id))
        self.assert_calls()

    def test_delete_security_group_rule_nova(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'
        self.register_uris([
            dict(method='DELETE',
                 uri='{endpoint}/os-security-group-rules/xyz'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT)),
        ])
        r = self.cloud.delete_security_group_rule('xyz')
        self.assertTrue(r)
        self.assert_calls()

    def test_delete_security_group_rule_none(self):
        self.has_neutron = False
        self.cloud.secgroup_source = None
        self.assertRaises(openstack.cloud.OpenStackCloudUnavailableFeature,
                          self.cloud.delete_security_group_rule,
                          '')

    def test_delete_security_group_rule_not_found(self):
        rule_id = "doesNotExist"
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]})
        ])
        self.assertFalse(self.cloud.delete_security_group(rule_id))
        self.assert_calls()

    def test_delete_security_group_rule_not_found_nova(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': [nova_grp_dict]}),
        ])
        r = self.cloud.delete_security_group('doesNotExist')
        self.assertFalse(r)
        self.assert_calls()

    def test_nova_egress_security_group_rule(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'security_groups': [nova_grp_dict]}),
        ])
        self.assertRaises(openstack.cloud.OpenStackCloudException,
                          self.cloud.create_security_group_rule,
                          secgroup_name_or_id='nova-sec-group',
                          direction='egress')
        self.assert_calls()

    def test_list_server_security_groups_nova(self):
        self.has_neutron = False
        server = dict(id='server_id')
        self.register_uris([
            dict(
                method='GET',
                uri='{endpoint}/servers/{id}/os-security-groups'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT, id='server_id'),
                json={'security_groups': [nova_grp_dict]}),
        ])
        groups = self.cloud.list_server_security_groups(server)
        self.assertIn('location', groups[0])
        self.assertEqual(
            groups[0]['security_group_rules'][0]['remote_ip_prefix'],
            nova_grp_dict['rules'][0]['ip_range']['cidr'])
        self.assert_calls()

    def test_list_server_security_groups_bad_source(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'invalid'
        server = dict(id='server_id')
        ret = self.cloud.list_server_security_groups(server)
        self.assertEqual([], ret)

    def test_add_security_group_to_server_nova(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'
        self.register_uris([
            dict(
                method='GET',
                uri='{endpoint}/os-security-groups'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT, id='server_id'),
                json={'security_groups': [nova_grp_dict]}),
            dict(
                method='POST',
                uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'),
                validate=dict(
                    json={'addSecurityGroup': {'name': 'nova-sec-group'}}),
                status_code=202,
            ),
        ])
        ret = self.cloud.add_server_security_groups(
            dict(id='1234'), 'nova-sec-group')
        self.assertTrue(ret)
        self.assert_calls()

    def test_add_security_group_to_server_neutron(self):
        # fake to get server by name, server-name must match
        fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE')

        # use neutron for secgroup list and return an existing fake
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'compute', 'public', append=['servers', 'detail']),
                 json={'servers': [fake_server]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]}),
            dict(method='POST',
                 uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'),
                 validate=dict(
                     json={'addSecurityGroup': {'name': 'neutron-sec-group'}}),
                 status_code=202),
        ])
        self.assertTrue(self.cloud.add_server_security_groups(
            'server-name', 'neutron-sec-group'))
        self.assert_calls()

    def test_remove_security_group_from_server_nova(self):
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'
        self.register_uris([
            dict(
                method='GET',
                uri='{endpoint}/os-security-groups'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT),
                json={'security_groups': [nova_grp_dict]}),
            dict(
                method='POST',
                uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'),
                validate=dict(
                    json={'removeSecurityGroup': {'name': 'nova-sec-group'}}),
            ),
        ])
        ret = self.cloud.remove_server_security_groups(
            dict(id='1234'), 'nova-sec-group')
        self.assertTrue(ret)
        self.assert_calls()

    def test_remove_security_group_from_server_neutron(self):
        # fake to get server by name, server-name must match
        fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE')

        # use neutron for secgroup list and return an existing fake
        self.cloud.secgroup_source = 'neutron'
        validate = {'removeSecurityGroup': {'name': 'neutron-sec-group'}}
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'compute', 'public', append=['servers', 'detail']),
                 json={'servers': [fake_server]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]}),
            dict(method='POST',
                 uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'),
                 validate=dict(json=validate)),
        ])
        self.assertTrue(self.cloud.remove_server_security_groups(
            'server-name', 'neutron-sec-group'))
        self.assert_calls()

    def test_add_bad_security_group_to_server_nova(self):
        # fake to get server by name, server-name must match
        fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE')

        # use nova for secgroup list and return an existing fake
        self.has_neutron = False
        self.cloud.secgroup_source = 'nova'
        self.register_uris([
            dict(
                method='GET',
                uri='{endpoint}/servers/detail'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT),
                json={'servers': [fake_server]}),
            dict(
                method='GET',
                uri='{endpoint}/os-security-groups'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT),
                json={'security_groups': [nova_grp_dict]}),
        ])
        ret = self.cloud.add_server_security_groups('server-name',
                                                    'unknown-sec-group')
        self.assertFalse(ret)
        self.assert_calls()

    def test_add_bad_security_group_to_server_neutron(self):
        # fake to get server by name, server-name must match
        fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE')

        # use neutron for secgroup list and return an existing fake
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'compute', 'public', append=['servers', 'detail']),
                 json={'servers': [fake_server]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups.json']),
                 json={'security_groups': [neutron_grp_dict]})
        ])
        self.assertFalse(self.cloud.add_server_security_groups(
            'server-name', 'unknown-sec-group'))
        self.assert_calls()

    def test_add_security_group_to_bad_server(self):
        # fake to get server by name, server-name must match
        fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE')
        self.register_uris([
            dict(
                method='GET',
                uri='{endpoint}/servers/detail'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT),
                json={'servers': [fake_server]}),
        ])
        ret = self.cloud.add_server_security_groups('unknown-server-name',
                                                    'nova-sec-group')
        self.assertFalse(ret)
        self.assert_calls()

    def test_get_security_group_by_id_neutron(self):
        self.cloud.secgroup_source = 'neutron'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'security-groups',
                             neutron_grp_dict['id']]),
                 json={'security_group': neutron_grp_dict})
        ])
        ret_sg = self.cloud.get_security_group_by_id(neutron_grp_dict['id'])
        self.assertEqual(neutron_grp_dict['id'], ret_sg['id'])
        self.assertEqual(neutron_grp_dict['name'], ret_sg['name'])
        self.assertEqual(neutron_grp_dict['description'],
                         ret_sg['description'])
        self.assert_calls()

    def test_get_security_group_by_id_nova(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/os-security-groups/{id}'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT,
                     id=nova_grp_dict['id']),
                 json={'security_group': nova_grp_dict}),
        ])
        self.cloud.secgroup_source = 'nova'
        self.has_neutron = False
        ret_sg = self.cloud.get_security_group_by_id(nova_grp_dict['id'])
        self.assertEqual(nova_grp_dict['id'], ret_sg['id'])
        self.assertEqual(nova_grp_dict['name'], ret_sg['name'])
        self.assert_calls()
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_domain_params.py0000666000175100017510000000617213236151340025760 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import munch

import openstack.cloud
from openstack.cloud import exc
from openstack.tests.unit import base


class TestDomainParams(base.TestCase):

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version')
    @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_project')
    def test_identity_params_v3(self, mock_get_project,
                                mock_is_client_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_is_client_version.return_value = True
        ret = self.cloud._get_identity_params(domain_id='5678', project='bar')
        self.assertIn('default_project_id', ret)
        self.assertEqual(ret['default_project_id'], 1234)
        self.assertIn('domain_id', ret)
        self.assertEqual(ret['domain_id'], '5678')

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version')
    @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_project')
    def test_identity_params_v3_no_domain(
            self, mock_get_project, mock_is_client_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_is_client_version.return_value = True
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud._get_identity_params,
            domain_id=None, project='bar')

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version')
    @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_project')
    def test_identity_params_v2(self, mock_get_project,
                                mock_is_client_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_is_client_version.return_value = False
        ret = self.cloud._get_identity_params(domain_id='foo', project='bar')
        self.assertIn('tenant_id', ret)
        self.assertEqual(ret['tenant_id'], 1234)
        self.assertNotIn('domain', ret)

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version')
    @mock.patch.object(openstack.cloud.OpenStackCloud, 'get_project')
    def test_identity_params_v2_no_domain(self, mock_get_project,
                                          mock_is_client_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_is_client_version.return_value = False
        ret = self.cloud._get_identity_params(domain_id=None, project='bar')
        api_calls = [mock.call('identity', 3), mock.call('identity', 3)]
        mock_is_client_version.assert_has_calls(api_calls)
        self.assertIn('tenant_id', ret)
        self.assertEqual(ret['tenant_id'], 1234)
        self.assertNotIn('domain', ret)
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_flavors.py0000666000175100017510000002422413236151364024626 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import openstack.cloud
from openstack.tests import fakes
from openstack.tests.unit import base


class TestFlavors(base.RequestsMockTestCase):

    def test_create_flavor(self):
        self.register_uris([
            dict(method='POST',
                 uri='{endpoint}/flavors'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavor': fakes.FAKE_FLAVOR},
                 validate=dict(
                     json={
                         'flavor': {
                             "name": "vanilla",
                             "ram": 65536,
                             "vcpus": 24,
                             "swap": 0,
                             "os-flavor-access:is_public": True,
                             "rxtx_factor": 1.0,
                             "OS-FLV-EXT-DATA:ephemeral": 0,
                             "disk": 1600,
                             "id": None}}))])

        self.cloud.create_flavor(
            'vanilla', ram=65536, disk=1600, vcpus=24,
        )
        self.assert_calls()

    def test_delete_flavor(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
            dict(method='DELETE',
                 uri='{endpoint}/flavors/{id}'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=fakes.FLAVOR_ID))])

        self.assertTrue(self.cloud.delete_flavor('vanilla'))
        self.assert_calls()

    def test_delete_flavor_not_found(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST})])

        self.assertFalse(self.cloud.delete_flavor('invalid'))
        self.assert_calls()

    def test_delete_flavor_exception(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
            dict(method='DELETE',
                 uri='{endpoint}/flavors/{id}'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=fakes.FLAVOR_ID),
                 status_code=503)])

        self.assertRaises(openstack.cloud.OpenStackCloudException,
                          self.cloud.delete_flavor, 'vanilla')

    def test_list_flavors(self):
        uris_to_mock = [
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
        ]
        uris_to_mock.extend([
            dict(method='GET',
                 uri='{endpoint}/flavors/{id}/os-extra_specs'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']),
                 json={'extra_specs': {}})
            for flavor in fakes.FAKE_FLAVOR_LIST])
        self.register_uris(uris_to_mock)

        flavors = self.cloud.list_flavors()

        # test that new flavor is created correctly
        found = False
        for flavor in flavors:
            if flavor['name'] == 'vanilla':
                found = True
                break
        self.assertTrue(found)
        needed_keys = {'name', 'ram', 'vcpus', 'id', 'is_public', 'disk'}
        if found:
            # check flavor content
            self.assertTrue(needed_keys.issubset(flavor.keys()))
        self.assert_calls()

    def test_get_flavor_by_ram(self):
        uris_to_mock = [
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
        ]
        uris_to_mock.extend([
            dict(method='GET',
                 uri='{endpoint}/flavors/{id}/os-extra_specs'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']),
                 json={'extra_specs': {}})
            for flavor in fakes.FAKE_FLAVOR_LIST])
        self.register_uris(uris_to_mock)

        flavor = self.cloud.get_flavor_by_ram(ram=250)
        self.assertEqual(fakes.STRAWBERRY_FLAVOR_ID, flavor['id'])

    def test_get_flavor_by_ram_and_include(self):
        uris_to_mock = [
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
        ]
        uris_to_mock.extend([
            dict(method='GET',
                 uri='{endpoint}/flavors/{id}/os-extra_specs'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']),
                 json={'extra_specs': {}})
            for flavor in fakes.FAKE_FLAVOR_LIST])
        self.register_uris(uris_to_mock)

        flavor = self.cloud.get_flavor_by_ram(ram=150, include='strawberry')
        self.assertEqual(fakes.STRAWBERRY_FLAVOR_ID, flavor['id'])

    def test_get_flavor_by_ram_not_found(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': []})])
        self.assertRaises(
            openstack.cloud.OpenStackCloudException,
            self.cloud.get_flavor_by_ram,
            ram=100)

    def test_get_flavor_string_and_int(self):
        flavor_list_uri = '{endpoint}/flavors/detail?is_public=None'.format(
            endpoint=fakes.COMPUTE_ENDPOINT)
        flavor_resource_uri = '{endpoint}/flavors/1/os-extra_specs'.format(
            endpoint=fakes.COMPUTE_ENDPOINT)
        flavor_list_json = {'flavors': [fakes.make_fake_flavor(
            '1', 'vanilla')]}
        flavor_json = {'extra_specs': {}}

        self.register_uris([
            dict(method='GET', uri=flavor_list_uri, json=flavor_list_json),
            dict(method='GET', uri=flavor_resource_uri, json=flavor_json),
            dict(method='GET', uri=flavor_list_uri, json=flavor_list_json),
            dict(method='GET', uri=flavor_resource_uri, json=flavor_json)])

        flavor1 = self.cloud.get_flavor('1')
        self.assertEqual('1', flavor1['id'])
        flavor2 = self.cloud.get_flavor(1)
        self.assertEqual('1', flavor2['id'])

    def test_set_flavor_specs(self):
        extra_specs = dict(key1='value1')
        self.register_uris([
            dict(method='POST',
                 uri='{endpoint}/flavors/{id}/os-extra_specs'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=1),
                 json=dict(extra_specs=extra_specs))])

        self.cloud.set_flavor_specs(1, extra_specs)
        self.assert_calls()

    def test_unset_flavor_specs(self):
        keys = ['key1', 'key2']
        self.register_uris([
            dict(method='DELETE',
                 uri='{endpoint}/flavors/{id}/os-extra_specs/{key}'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=1, key=key))
            for key in keys])

        self.cloud.unset_flavor_specs(1, keys)
        self.assert_calls()

    def test_add_flavor_access(self):
        self.register_uris([
            dict(method='POST',
                 uri='{endpoint}/flavors/{id}/action'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id='flavor_id'),
                 json={
                     'flavor_access': [{
                         'flavor_id': 'flavor_id',
                         'tenant_id': 'tenant_id'}]},
                 validate=dict(
                     json={'addTenantAccess': {'tenant': 'tenant_id'}}))])

        self.cloud.add_flavor_access('flavor_id', 'tenant_id')
        self.assert_calls()

    def test_remove_flavor_access(self):
        self.register_uris([
            dict(method='POST',
                 uri='{endpoint}/flavors/{id}/action'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id='flavor_id'),
                 json={'flavor_access': []},
                 validate=dict(
                     json={'removeTenantAccess': {'tenant': 'tenant_id'}}))])

        self.cloud.remove_flavor_access('flavor_id', 'tenant_id')
        self.assert_calls()

    def test_list_flavor_access(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/vanilla/os-flavor-access'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={
                     'flavor_access': [
                         {'flavor_id': 'vanilla', 'tenant_id': 'tenant_id'}]})
        ])
        self.cloud.list_flavor_access('vanilla')
        self.assert_calls()

    def test_get_flavor_by_id(self):
        flavor_uri = '{endpoint}/flavors/1'.format(
            endpoint=fakes.COMPUTE_ENDPOINT)
        flavor_extra_uri = '{endpoint}/flavors/1/os-extra_specs'.format(
            endpoint=fakes.COMPUTE_ENDPOINT)
        flavor_json = {'flavor': fakes.make_fake_flavor('1', 'vanilla')}
        flavor_extra_json = {'extra_specs': {'name': 'test'}}

        self.register_uris([
            dict(method='GET', uri=flavor_uri, json=flavor_json),
            dict(method='GET', uri=flavor_extra_uri, json=flavor_extra_json),
        ])

        flavor1 = self.cloud.get_flavor_by_id('1')
        self.assertEqual('1', flavor1['id'])
        self.assertEqual({'name': 'test'}, flavor1.extra_specs)
        flavor2 = self.cloud.get_flavor_by_id('1', get_extra=False)
        self.assertEqual('1', flavor2['id'])
        self.assertEqual({}, flavor2.extra_specs)

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_qos_policy.py

# Copyright 2017 OVH SAS
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy

from openstack.cloud import exc
from openstack.tests.unit import base


class TestQosPolicy(base.RequestsMockTestCase):

    policy_name = 'qos test policy'
    policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486'
    project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9'

    mock_policy = {
        'id': policy_id,
        'name': policy_name,
        'description': '',
        'rules': [],
        'project_id': project_id,
        'tenant_id': project_id,
        'shared': False,
        'is_default': False
    }

    qos_extension = {
        "updated": "2015-06-08T10:00:00-00:00",
        "name": "Quality of Service",
        "links": [],
        "alias": "qos",
        "description": "The Quality of Service extension."
    }

    qos_default_extension = {
        "updated": "2017-041-06T10:00:00-00:00",
        "name": "QoS default policy",
        "links": [],
        "alias": "qos-default",
        "description": "Expose the QoS default policy per project"
    }

    enabled_neutron_extensions = [qos_extension, qos_default_extension]

    def test_get_qos_policy(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]})
        ])
        r = self.cloud.get_qos_policy(self.policy_name)
        self.assertIsNotNone(r)
        self.assertDictEqual(self.mock_policy, r)
        self.assert_calls()

    def test_get_qos_policy_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.get_qos_policy, self.policy_name)
        self.assert_calls()

    def test_create_qos_policy(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policy': self.mock_policy})
        ])
        policy = self.cloud.create_qos_policy(
            name=self.policy_name, project_id=self.project_id)
        self.assertDictEqual(self.mock_policy, policy)
        self.assert_calls()

    def test_create_qos_policy_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.create_qos_policy, name=self.policy_name)
        self.assert_calls()

    def test_create_qos_policy_no_qos_default_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policy': self.mock_policy},
                 validate=dict(
                     json={'policy': {
                         'name': self.policy_name,
                         'project_id': self.project_id}}))
        ])
        policy = self.cloud.create_qos_policy(
            name=self.policy_name, project_id=self.project_id, default=True)
        self.assertDictEqual(self.mock_policy, policy)
        self.assert_calls()

    def test_delete_qos_policy(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies',
                             '%s.json' % self.policy_id]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_qos_policy(self.policy_name))
        self.assert_calls()

    def test_delete_qos_policy_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.delete_qos_policy, self.policy_name)
        self.assert_calls()

    def test_delete_qos_policy_not_found(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': []})
        ])
        self.assertFalse(self.cloud.delete_qos_policy('goofy'))
        self.assert_calls()

    def test_delete_qos_policy_multiple_found(self):
        policy1 = dict(id='123', name=self.policy_name)
        policy2 = dict(id='456', name=self.policy_name)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [policy1, policy2]})
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.delete_qos_policy,
                          self.policy_name)
        self.assert_calls()

    def test_delete_qos_policy_multiple_using_id(self):
        policy1 = self.mock_policy
        policy2 = dict(id='456', name=self.policy_name)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions':
                       self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [policy1, policy2]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies',
                             '%s.json' % self.policy_id]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_qos_policy(policy1['id']))
        self.assert_calls()

    def test_update_qos_policy(self):
        expected_policy = copy.copy(self.mock_policy)
        expected_policy['name'] = 'goofy'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies',
                             '%s.json' % self.policy_id]),
                 json={'policy': expected_policy},
                 validate=dict(
                     json={'policy': {'name': 'goofy'}}))
        ])
        policy = self.cloud.update_qos_policy(
            self.policy_id, name='goofy')
        self.assertDictEqual(expected_policy, policy)
        self.assert_calls()

    def test_update_qos_policy_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.update_qos_policy, self.policy_id, name="goofy")
        self.assert_calls()

    def test_update_qos_policy_no_qos_default_extension(self):
        expected_policy = copy.copy(self.mock_policy)
        expected_policy['name'] = 'goofy'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies',
                             '%s.json' % self.policy_id]),
                 json={'policy': expected_policy},
                 validate=dict(
                     json={'policy': {'name': "goofy"}}))
        ])
        policy = self.cloud.update_qos_policy(
            self.policy_id, name='goofy', default=True)
        self.assertDictEqual(expected_policy, policy)
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_endpoints.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_cloud_endpoints
----------------------------------

Tests Keystone endpoints commands.
""" import uuid from openstack.cloud.exc import OpenStackCloudException from openstack.cloud.exc import OpenStackCloudUnavailableFeature from openstack.tests.unit import base from testtools import matchers class TestCloudEndpoints(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource='endpoints', append=None, base_url_append='v3'): return super(TestCloudEndpoints, self).get_mock_url( service_type, interface, resource, append, base_url_append) def _dummy_url(self): return 'https://%s.example.com/' % uuid.uuid4().hex def test_create_endpoint_v2(self): self.use_keystone_v2() service_data = self._get_service_data() endpoint_data = self._get_endpoint_v2_data( service_data.service_id, public_url=self._dummy_url(), internal_url=self._dummy_url(), admin_url=self._dummy_url()) other_endpoint_data = self._get_endpoint_v2_data( service_data.service_id, region=endpoint_data.region, public_url=endpoint_data.public_url) # correct the keys self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='services', base_url_append='OS-KSADM'), status_code=200, json={'OS-KSADM:services': [ service_data.json_response_v2['OS-KSADM:service']]}), dict(method='POST', uri=self.get_mock_url(base_url_append=None), status_code=200, json=endpoint_data.json_response, validate=dict(json=endpoint_data.json_request)), dict(method='GET', uri=self.get_mock_url( resource='services', base_url_append='OS-KSADM'), status_code=200, json={'OS-KSADM:services': [ service_data.json_response_v2['OS-KSADM:service']]}), # NOTE(notmorgan): There is a stupid happening here, we do two # gets on the services for some insane reason (read: keystoneclient # is bad and should feel bad). 
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='services', base_url_append='OS-KSADM'),
                 status_code=200,
                 json={'OS-KSADM:services': [
                     service_data.json_response_v2['OS-KSADM:service']]}),
            dict(method='POST',
                 uri=self.get_mock_url(base_url_append=None),
                 status_code=200,
                 json=other_endpoint_data.json_response,
                 validate=dict(json=other_endpoint_data.json_request))
        ])

        endpoints = self.cloud.create_endpoint(
            service_name_or_id=service_data.service_id,
            region=endpoint_data.region,
            public_url=endpoint_data.public_url,
            internal_url=endpoint_data.internal_url,
            admin_url=endpoint_data.admin_url
        )

        self.assertThat(endpoints[0].id,
                        matchers.Equals(endpoint_data.endpoint_id))
        self.assertThat(endpoints[0].region,
                        matchers.Equals(endpoint_data.region))
        self.assertThat(endpoints[0].publicURL,
                        matchers.Equals(endpoint_data.public_url))
        self.assertThat(endpoints[0].internalURL,
                        matchers.Equals(endpoint_data.internal_url))
        self.assertThat(endpoints[0].adminURL,
                        matchers.Equals(endpoint_data.admin_url))

        # test v3 semantics on v2.0 endpoint
        self.assertRaises(OpenStackCloudException,
                          self.cloud.create_endpoint,
                          service_name_or_id='service1',
                          interface='mock_admin_url',
                          url='admin')

        endpoints_3on2 = self.cloud.create_endpoint(
            service_name_or_id=service_data.service_id,
            region=endpoint_data.region,
            interface='public',
            url=endpoint_data.public_url
        )

        # test keys and values are correct
        self.assertThat(
            endpoints_3on2[0].region,
            matchers.Equals(other_endpoint_data.region))
        self.assertThat(
            endpoints_3on2[0].publicURL,
            matchers.Equals(other_endpoint_data.public_url))
        self.assertThat(endpoints_3on2[0].get('internalURL'),
                        matchers.Equals(None))
        self.assertThat(endpoints_3on2[0].get('adminURL'),
                        matchers.Equals(None))
        self.assert_calls()

    def test_create_endpoint_v3(self):
        service_data = self._get_service_data()
        public_endpoint_data = self._get_endpoint_v3_data(
            service_id=service_data.service_id, interface='public',
            url=self._dummy_url())
        public_endpoint_data_disabled = self._get_endpoint_v3_data(
            service_id=service_data.service_id, interface='public',
            url=self._dummy_url(), enabled=False)
        admin_endpoint_data = self._get_endpoint_v3_data(
            service_id=service_data.service_id, interface='admin',
            url=self._dummy_url(), region=public_endpoint_data.region)
        internal_endpoint_data = self._get_endpoint_v3_data(
            service_id=service_data.service_id, interface='internal',
            url=self._dummy_url(), region=public_endpoint_data.region)

        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='services'),
                 status_code=200,
                 json={'services': [
                     service_data.json_response_v3['service']]}),
            dict(method='POST',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json=public_endpoint_data_disabled.json_response,
                 validate=dict(
                     json=public_endpoint_data_disabled.json_request)),
            dict(method='GET',
                 uri=self.get_mock_url(resource='services'),
                 status_code=200,
                 json={'services': [
                     service_data.json_response_v3['service']]}),
            dict(method='POST',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json=public_endpoint_data.json_response,
                 validate=dict(json=public_endpoint_data.json_request)),
            dict(method='POST',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json=internal_endpoint_data.json_response,
                 validate=dict(json=internal_endpoint_data.json_request)),
            dict(method='POST',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json=admin_endpoint_data.json_response,
                 validate=dict(json=admin_endpoint_data.json_request)),
        ])

        endpoints = self.cloud.create_endpoint(
            service_name_or_id=service_data.service_id,
            region=public_endpoint_data_disabled.region,
            url=public_endpoint_data_disabled.url,
            interface=public_endpoint_data_disabled.interface,
            enabled=False)

        # Test endpoint values
        self.assertThat(
            endpoints[0].id,
            matchers.Equals(public_endpoint_data_disabled.endpoint_id))
        self.assertThat(endpoints[0].url,
                        matchers.Equals(public_endpoint_data_disabled.url))
        self.assertThat(
            endpoints[0].interface,
            matchers.Equals(public_endpoint_data_disabled.interface))
        self.assertThat(
            endpoints[0].region,
            matchers.Equals(public_endpoint_data_disabled.region))
        self.assertThat(
            endpoints[0].region_id,
            matchers.Equals(public_endpoint_data_disabled.region))
        self.assertThat(endpoints[0].enabled,
                        matchers.Equals(public_endpoint_data_disabled.enabled))

        endpoints_2on3 = self.cloud.create_endpoint(
            service_name_or_id=service_data.service_id,
            region=public_endpoint_data.region,
            public_url=public_endpoint_data.url,
            internal_url=internal_endpoint_data.url,
            admin_url=admin_endpoint_data.url)

        # Three endpoints should be returned, public, internal, and admin
        self.assertThat(len(endpoints_2on3), matchers.Equals(3))

        # test keys and values are correct for each endpoint created
        for result, reference in zip(
                endpoints_2on3,
                [public_endpoint_data,
                 internal_endpoint_data,
                 admin_endpoint_data]
        ):
            self.assertThat(result.id, matchers.Equals(reference.endpoint_id))
            self.assertThat(result.url, matchers.Equals(reference.url))
            self.assertThat(result.interface,
                            matchers.Equals(reference.interface))
            self.assertThat(result.region, matchers.Equals(reference.region))
            self.assertThat(result.enabled,
                            matchers.Equals(reference.enabled))
        self.assert_calls()

    def test_update_endpoint_v2(self):
        self.use_keystone_v2()
        self.assertRaises(OpenStackCloudUnavailableFeature,
                          self.cloud.update_endpoint, 'endpoint_id')

    def test_update_endpoint_v3(self):
        service_data = self._get_service_data()
        dummy_url = self._dummy_url()
        endpoint_data = self._get_endpoint_v3_data(
            service_id=service_data.service_id,
            interface='admin',
            enabled=False)
        reference_request = endpoint_data.json_request.copy()
        reference_request['endpoint']['url'] = dummy_url
        self.register_uris([
            dict(method='PATCH',
                 uri=self.get_mock_url(append=[endpoint_data.endpoint_id]),
                 status_code=200,
                 json=endpoint_data.json_response,
                 validate=dict(json=reference_request))
        ])
        endpoint = self.cloud.update_endpoint(
            endpoint_data.endpoint_id,
            service_name_or_id=service_data.service_id,
            region=endpoint_data.region,
            url=dummy_url,
            interface=endpoint_data.interface,
            enabled=False
        )

        # test keys and values are correct
        self.assertThat(endpoint.id,
                        matchers.Equals(endpoint_data.endpoint_id))
        self.assertThat(endpoint.service_id,
                        matchers.Equals(service_data.service_id))
        self.assertThat(endpoint.url, matchers.Equals(endpoint_data.url))
        self.assertThat(endpoint.interface,
                        matchers.Equals(endpoint_data.interface))
        self.assert_calls()

    def test_list_endpoints(self):
        endpoints_data = [self._get_endpoint_v3_data() for e in range(1, 10)]
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'endpoints': [e.json_response['endpoint']
                                     for e in endpoints_data]})
        ])
        endpoints = self.cloud.list_endpoints()

        # test we are getting exactly len(self.mock_endpoints) elements
        self.assertThat(len(endpoints), matchers.Equals(len(endpoints_data)))

        # test keys and values are correct
        for i, ep in enumerate(endpoints_data):
            self.assertThat(endpoints[i].id, matchers.Equals(ep.endpoint_id))
            self.assertThat(endpoints[i].service_id,
                            matchers.Equals(ep.service_id))
            self.assertThat(endpoints[i].url, matchers.Equals(ep.url))
            self.assertThat(endpoints[i].interface,
                            matchers.Equals(ep.interface))
        self.assert_calls()

    def test_search_endpoints(self):
        endpoints_data = [self._get_endpoint_v3_data(region='region1')
                          for e in range(0, 2)]
        endpoints_data.extend([self._get_endpoint_v3_data()
                               for e in range(1, 8)])
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'endpoints': [e.json_response['endpoint']
                                     for e in endpoints_data]}),
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'endpoints': [e.json_response['endpoint']
                                     for e in endpoints_data]}),
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'endpoints': [e.json_response['endpoint']
                                     for e in endpoints_data]}),
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'endpoints': [e.json_response['endpoint']
                                     for e in endpoints_data]})
        ])

        # Search by id
        endpoints = self.cloud.search_endpoints(
            id=endpoints_data[-1].endpoint_id)
        # # test we are getting exactly 1 element
        self.assertEqual(1, len(endpoints))
        self.assertThat(endpoints[0].id,
                        matchers.Equals(endpoints_data[-1].endpoint_id))
        self.assertThat(endpoints[0].service_id,
                        matchers.Equals(endpoints_data[-1].service_id))
        self.assertThat(endpoints[0].url,
                        matchers.Equals(endpoints_data[-1].url))
        self.assertThat(endpoints[0].interface,
                        matchers.Equals(endpoints_data[-1].interface))

        # Not found
        endpoints = self.cloud.search_endpoints(id='!invalid!')
        self.assertEqual(0, len(endpoints))

        # Multiple matches
        endpoints = self.cloud.search_endpoints(
            filters={'region_id': 'region1'})
        # # test we are getting exactly 2 elements
        self.assertEqual(2, len(endpoints))

        # test we are getting the correct response for region/region_id compat
        endpoints = self.cloud.search_endpoints(
            filters={'region': 'region1'})
        # # test we are getting exactly 2 elements, this is v3
        self.assertEqual(2, len(endpoints))
        self.assert_calls()

    def test_delete_endpoint(self):
        endpoint_data = self._get_endpoint_v3_data()
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'endpoints': [
                     endpoint_data.json_response['endpoint']]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(append=[endpoint_data.endpoint_id]),
                 status_code=204)
        ])

        # Delete by id
        self.cloud.delete_endpoint(id=endpoint_data.endpoint_id)
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test__utils.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import random
import string
import tempfile
from uuid import uuid4

import mock
import testtools

from openstack.cloud import _utils
from openstack.cloud import exc
from openstack.tests.unit import base


RANGE_DATA = [
    dict(id=1, key1=1, key2=5),
    dict(id=2, key1=1, key2=20),
    dict(id=3, key1=2, key2=10),
    dict(id=4, key1=2, key2=30),
    dict(id=5, key1=3, key2=40),
    dict(id=6, key1=3, key2=40),
]


class TestUtils(base.TestCase):

    def test__filter_list_name_or_id(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'donald', None)
        self.assertEqual([el1], ret)

    def test__filter_list_name_or_id_special(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto[2017-01-10]', None)
        self.assertEqual([el2], ret)

    def test__filter_list_name_or_id_partial_bad(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto[2017-01]', None)
        self.assertEqual([], ret)

    def test__filter_list_name_or_id_partial_glob(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto*', None)
        self.assertEqual([el2], ret)

    def test__filter_list_name_or_id_non_glob_glob(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto', None)
        self.assertEqual([], ret)

    def test__filter_list_name_or_id_glob(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto')
        el3 = dict(id=200, name='pluto-2')
        data = [el1, el2, el3]
        ret = _utils._filter_list(data, 'pluto*', None)
        self.assertEqual([el2, el3], ret)

    def test__filter_list_name_or_id_glob_not_found(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto')
        el3 = dict(id=200, name='pluto-2')
        data = [el1, el2, el3]
        ret = _utils._filter_list(data, 'q*', None)
        self.assertEqual([], ret)

    def test__filter_list_unicode(self):
        el1 = dict(id=100, name=u'中文', last='duck',
                   other=dict(category='duck',
                              financial=dict(status='poor')))
        el2 = dict(id=200, name=u'中文', last='trump',
                   other=dict(category='human',
                              financial=dict(status='rich')))
        el3 = dict(id=300, name='donald', last='ronald mac',
                   other=dict(category='clown',
                              financial=dict(status='rich')))
        data = [el1, el2, el3]
        ret = _utils._filter_list(
            data, u'中文',
            {'other': {
                'financial': {'status': 'rich'}
            }})
        self.assertEqual([el2], ret)

    def test__filter_list_filter(self):
        el1 = dict(id=100, name='donald', other='duck')
        el2 = dict(id=200, name='donald', other='trump')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'donald', {'other': 'duck'})
        self.assertEqual([el1], ret)

    def test__filter_list_filter_jmespath(self):
        el1 = dict(id=100, name='donald', other='duck')
        el2 = dict(id=200, name='donald', other='trump')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'donald', "[?other == `duck`]")
        self.assertEqual([el1], ret)

    def test__filter_list_dict1(self):
        el1 = dict(id=100, name='donald', last='duck',
                   other=dict(category='duck'))
        el2 = dict(id=200, name='donald', last='trump',
                   other=dict(category='human'))
        el3 = dict(id=300, name='donald', last='ronald mac',
                   other=dict(category='clown'))
        data = [el1, el2, el3]
        ret = _utils._filter_list(
            data, 'donald', {'other': {'category': 'clown'}})
        self.assertEqual([el3], ret)

    def test__filter_list_dict2(self):
        el1 = dict(id=100, name='donald', last='duck',
                   other=dict(category='duck',
                              financial=dict(status='poor')))
        el2 = dict(id=200, name='donald', last='trump',
                   other=dict(category='human',
                              financial=dict(status='rich')))
        el3 = dict(id=300, name='donald', last='ronald mac',
                   other=dict(category='clown',
                              financial=dict(status='rich')))
        data = [el1, el2, el3]
        ret = _utils._filter_list(
            data, 'donald',
            {'other': {
                'financial': {'status': 'rich'}
            }})
        self.assertEqual([el2, el3], ret)

    def test_safe_dict_min_ints(self):
        """Test integer comparison"""
        data = [{'f1': 3}, {'f1': 2}, {'f1': 1}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_strs(self):
        """Test integer as strings comparison"""
        data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_None(self):
        """Test None values"""
        data = [{'f1': 3}, {'f1': None}, {'f1': 1}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_key_missing(self):
        """Test missing key for an entry still works"""
        data = [{'f1': 3}, {'x': 2}, {'f1': 1}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_key_not_found(self):
        """Test key not found in any elements returns None"""
        data = [{'f1': 3}, {'f1': 2}, {'f1': 1}]
        retval = _utils.safe_dict_min('doesnotexist', data)
        self.assertIsNone(retval)

    def test_safe_dict_min_not_int(self):
        """Test non-integer key value raises OSCE"""
        data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}]
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "Search for minimum value failed.
" "Value for f1 is not an integer: aaa" ): _utils.safe_dict_min('f1', data) def test_safe_dict_max_ints(self): """Test integer comparison""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_strs(self): """Test integer as strings comparison""" data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_None(self): """Test None values""" data = [{'f1': 3}, {'f1': None}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_key_missing(self): """Test missing key for an entry still works""" data = [{'f1': 3}, {'x': 2}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_key_not_found(self): """Test key not found in any elements returns None""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_max('doesnotexist', data) self.assertIsNone(retval) def test_safe_dict_max_not_int(self): """Test non-integer key value raises OSCE""" data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}] with testtools.ExpectedException( exc.OpenStackCloudException, "Search for maximum value failed. 
" "Value for f1 is not an integer: aaa" ): _utils.safe_dict_max('f1', data) def test_parse_range_None(self): self.assertIsNone(_utils.parse_range(None)) def test_parse_range_invalid(self): self.assertIsNone(_utils.parse_range("1024") self.assertIsInstance(retval, tuple) self.assertEqual(">", retval[0]) self.assertEqual(1024, retval[1]) def test_parse_range_le(self): retval = _utils.parse_range("<=1024") self.assertIsInstance(retval, tuple) self.assertEqual("<=", retval[0]) self.assertEqual(1024, retval[1]) def test_parse_range_ge(self): retval = _utils.parse_range(">=1024") self.assertIsInstance(retval, tuple) self.assertEqual(">=", retval[0]) self.assertEqual(1024, retval[1]) def test_range_filter_min(self): retval = _utils.range_filter(RANGE_DATA, "key1", "min") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[:2], retval) def test_range_filter_max(self): retval = _utils.range_filter(RANGE_DATA, "key1", "max") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[-2:], retval) def test_range_filter_range(self): retval = _utils.range_filter(RANGE_DATA, "key1", "<3") self.assertIsInstance(retval, list) self.assertEqual(4, len(retval)) self.assertEqual(RANGE_DATA[:4], retval) def test_range_filter_exact(self): retval = _utils.range_filter(RANGE_DATA, "key1", "2") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[2:4], retval) def test_range_filter_invalid_int(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Invalid range value: <1A0" ): _utils.range_filter(RANGE_DATA, "key1", "<1A0") def test_range_filter_invalid_op(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Invalid range value: <>100" ): _utils.range_filter(RANGE_DATA, "key1", "<>100") def test_file_segment(self): file_size = 4200 content = ''.join(random.SystemRandom().choice( string.ascii_uppercase + string.digits) for _ in 
range(file_size)).encode('latin-1') self.imagefile = tempfile.NamedTemporaryFile(delete=False) self.imagefile.write(content) self.imagefile.close() segments = self.cloud._get_file_segments( endpoint='test_container/test_image', filename=self.imagefile.name, file_size=file_size, segment_size=1000) self.assertEqual(len(segments), 5) segment_content = b'' for (index, (name, segment)) in enumerate(segments.items()): self.assertEqual( 'test_container/test_image/{index:0>6}'.format(index=index), name) segment_content += segment.read() self.assertEqual(content, segment_content) def test_get_entity_pass_object(self): obj = mock.Mock(id=uuid4().hex) self.cloud.use_direct_get = True self.assertEqual(obj, _utils._get_entity(self.cloud, '', obj, {})) def test_get_entity_no_use_direct_get(self): # test we are defaulting to the search_ methods # if the use_direct_get flag is set to False(default). uuid = uuid4().hex resource = 'network' func = 'search_%ss' % resource filters = {} with mock.patch.object(self.cloud, func) as search: _utils._get_entity(self.cloud, resource, uuid, filters) search.assert_called_once_with(uuid, filters) def test_get_entity_no_uuid_like(self): # test we are defaulting to the search_ methods # if the name_or_id param is a name(string) but not a uuid. 
self.cloud.use_direct_get = True name = 'name_no_uuid' resource = 'network' func = 'search_%ss' % resource filters = {} with mock.patch.object(self.cloud, func) as search: _utils._get_entity(self.cloud, resource, name, filters) search.assert_called_once_with(name, filters) def test_get_entity_pass_uuid(self): uuid = uuid4().hex self.cloud.use_direct_get = True resources = ['flavor', 'image', 'volume', 'network', 'subnet', 'port', 'floating_ip', 'security_group'] for r in resources: f = 'get_%s_by_id' % r with mock.patch.object(self.cloud, f) as get: _utils._get_entity(self.cloud, r, uuid, {}) get.assert_called_once_with(uuid) def test_get_entity_pass_search_methods(self): self.cloud.use_direct_get = True resources = ['flavor', 'image', 'volume', 'network', 'subnet', 'port', 'floating_ip', 'security_group'] filters = {} name = 'name_no_uuid' for r in resources: f = 'search_%ss' % r with mock.patch.object(self.cloud, f) as search: _utils._get_entity(self.cloud, r, name, {}) search.assert_called_once_with(name, filters) def test_get_entity_get_and_search(self): resources = ['flavor', 'image', 'volume', 'network', 'subnet', 'port', 'floating_ip', 'security_group'] for r in resources: self.assertTrue(hasattr(self.cloud, 'get_%s_by_id' % r)) self.assertTrue(hasattr(self.cloud, 'search_%ss' % r)) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_operator_noauth.py # Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import openstack.cloud from openstack.tests.unit import base class TestOpenStackCloudOperatorNoAuth(base.RequestsMockTestCase): def setUp(self): """Setup Noauth OpenStackCloud tests Setup the test to utilize no authentication and an endpoint URL in the auth data. This permits testing of the basic mechanism that enables Ironic noauth mode to be utilized with Shade. Uses base.RequestsMockTestCase instead of IronicTestCase because we need to do completely different things with discovery. """ super(TestOpenStackCloudOperatorNoAuth, self).setUp() # By clearing the URI registry, we remove all calls to a keystone # catalog or getting a token self._uri_registry.clear() self.register_uris([ dict(method='GET', uri=self.get_mock_url( service_type='baremetal', base_url_append='v1', resource='nodes'), json={'nodes': []}), ]) def test_ironic_noauth_none_auth_type(self): """Test noauth selection for Ironic in OpenStackCloud The new way of doing this is with the keystoneauth none plugin. """ # NOTE(TheJulia): When we are using the python-ironicclient # library, the library will automatically prepend the URI path # with 'v1'. As such, since we are overriding the endpoint, # we must explicitly do the same as we move away from the # client library. self.cloud_noauth = openstack.cloud.openstack_cloud( auth_type='none', baremetal_endpoint_override="https://bare-metal.example.com/v1") self.cloud_noauth.list_machines() self.assert_calls() def test_ironic_noauth_admin_token_auth_type(self): """Test noauth selection for Ironic in OpenStackCloud The old way of doing this was to abuse admin_token. 
""" self.cloud_noauth = openstack.cloud.openstack_cloud( auth_type='admin_token', auth=dict( endpoint='https://bare-metal.example.com/v1', token='ignored')) self.cloud_noauth.list_machines() self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_normalize.py0000666000175100017510000011716313236151340025151 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from openstack.tests.unit import base RAW_SERVER_DICT = { 'HUMAN_ID': True, 'NAME_ATTR': 'name', 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-AZ:availability_zone': u'ca-ymq-2', 'OS-EXT-STS:power_state': 1, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': u'active', 'OS-SRV-USG:launched_at': u'2015-08-01T19:52:02.000000', 'OS-SRV-USG:terminated_at': None, 'accessIPv4': u'', 'accessIPv6': u'', 'addresses': { u'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'2604:e100:1:0:f816:3eff:fe9f:463e', u'version': 6 }, { u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'162.253.54.192', u'version': 4}]}, 'config_drive': u'True', 'created': u'2015-08-01T19:52:16Z', 'flavor': { u'id': u'bbcb7eb5-5c8d-498f-9d7e-307c575d3566', u'links': [{ u'href': u'https://compute-ca-ymq-1.vexxhost.net/db9/flavors/bbc', u'rel': u'bookmark'}]}, 'hostId': u'bd37', 'human_id': u'mordred-irc', 'id': u'811c5197-dba7-4d3a-a3f6-68ca5328b9a7', 'image': { u'id': u'69c99b45-cd53-49de-afdc-f24789eb8f83', u'links': [{ 
u'href': u'https://compute-ca-ymq-1.vexxhost.net/db9/images/69c', u'rel': u'bookmark'}]}, 'key_name': u'mordred', 'links': [{ u'href': u'https://compute-ca-ymq-1.vexxhost.net/v2/db9/servers/811', u'rel': u'self' }, { u'href': u'https://compute-ca-ymq-1.vexxhost.net/db9/servers/811', u'rel': u'bookmark'}], 'metadata': {u'group': u'irc', u'groups': u'irc,enabled'}, 'name': u'mordred-irc', 'networks': {u'public': [u'2604:e100:1:0:f816:3eff:fe9f:463e', u'162.253.54.192']}, 'os-extended-volumes:volumes_attached': [], 'progress': 0, 'request_ids': [], 'security_groups': [{u'name': u'default'}], 'status': u'ACTIVE', 'tenant_id': u'db92b20496ae4fbda850a689ea9d563f', 'updated': u'2016-10-15T15:49:29Z', 'user_id': u'e9b21dc437d149858faee0898fb08e92'} RAW_GLANCE_IMAGE_DICT = { u'auto_disk_config': u'False', u'checksum': u'774f48af604ab1ec319093234c5c0019', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'container_format': u'ovf', u'created_at': u'2015-02-15T22:58:45Z', u'disk_format': u'vhd', u'file': u'/v2/images/f2868d7c-63e1-4974-a64d-8670a86df21e/file', u'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', u'image_type': u'import', u'min_disk': 20, u'min_ram': 0, u'name': u'Test Monty Ubuntu', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'owner': u'610275', u'protected': False, u'schema': u'/v2/schemas/image', u'size': 323004185, u'status': u'active', u'tags': [], u'updated_at': u'2015-02-15T23:04:34Z', u'user_id': u'156284', u'virtual_size': None, u'visibility': u'private', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'} RAW_NOVA_IMAGE_DICT = { 'HUMAN_ID': True, 'NAME_ATTR': 'name', 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-IMG-SIZE:size': 323004185, 'created': 
u'2015-02-15T22:58:45Z', 'human_id': u'test-monty-ubuntu', 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'links': [{ u'href': u'https://example.com/v2/610275/images/f2868d7c', u'rel': u'self' }, { u'href': u'https://example.com/610275/images/f2868d7c', u'rel': u'bookmark' }, { u'href': u'https://example.com/images/f2868d7c', u'rel': u'alternate', u'type': u'application/vnd.openstack.image'}], 'metadata': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'minDisk': 20, 'minRam': 0, 'name': u'Test Monty Ubuntu', 'progress': 100, 'request_ids': [], 'status': u'ACTIVE', 'updated': u'2015-02-15T23:04:34Z'} RAW_FLAVOR_DICT = { 'HUMAN_ID': True, 'NAME_ATTR': 'name', 'OS-FLV-EXT-DATA:ephemeral': 80, 'OS-FLV-WITH-EXT-SPECS:extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'disk': 40, 'ephemeral': 80, 'human_id': u'8-gb-performance', 'id': u'performance1-8', 'is_public': 'N/A', 'links': [{ u'href': u'https://example.com/v2/610275/flavors/performance1-8', u'rel': u'self' }, { u'href': u'https://example.com/610275/flavors/performance1-8', u'rel': u'bookmark'}], 'name': u'8 GB Performance', 'ram': 8192, 'request_ids': [], 'rxtx_factor': 1600.0, 'swap': u'', 'vcpus': 8} # TODO(shade) Convert this to RequestsMockTestCase class TestUtils(base.TestCase): def test_normalize_flavors(self): raw_flavor = RAW_FLAVOR_DICT.copy() expected = { 'OS-FLV-EXT-DATA:ephemeral': 80, 
'OS-FLV-WITH-EXT-SPECS:extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'disk': 40, 'ephemeral': 80, 'extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'id': u'performance1-8', 'is_disabled': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'name': u'8 GB Performance', 'properties': { 'OS-FLV-EXT-DATA:ephemeral': 80, 'OS-FLV-WITH-EXT-SPECS:extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}}, 'ram': 8192, 'rxtx_factor': 1600.0, 'swap': 0, 'vcpus': 8} retval = self.cloud._normalize_flavor(raw_flavor) self.assertEqual(expected, retval) def test_normalize_flavors_strict(self): raw_flavor = RAW_FLAVOR_DICT.copy() expected = { 'disk': 40, 'ephemeral': 80, 'extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'id': u'performance1-8', 'is_disabled': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'name': u'8 GB Performance', 'properties': {}, 'ram': 8192, 'rxtx_factor': 1600.0, 'swap': 0, 'vcpus': 8} retval = self.strict_cloud._normalize_flavor(raw_flavor) self.assertEqual(expected, retval) def test_normalize_nova_images(self): raw_image = RAW_NOVA_IMAGE_DICT.copy() expected = { u'auto_disk_config': u'False', 
u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'checksum': None, 'container_format': None, 'created': u'2015-02-15T22:58:45Z', 'created_at': '2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': None, 'file': None, 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'metadata': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'progress': 100}, 'minDisk': 20, 'minRam': 0, 'min_disk': 20, 'min_ram': 0, 'name': u'Test Monty Ubuntu', 'owner': None, 'progress': 100, 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', 
u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'progress': 100}, 'protected': False, 'size': 323004185, 'status': u'active', 'tags': [], 'updated': u'2015-02-15T23:04:34Z', 'updated_at': u'2015-02-15T23:04:34Z', 'virtual_size': 0, 'visibility': 'private'} retval = self.cloud._normalize_image(raw_image) self.assertEqual(expected, retval) def test_normalize_nova_images_strict(self): raw_image = RAW_NOVA_IMAGE_DICT.copy() expected = { 'checksum': None, 'container_format': None, 'created_at': '2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': None, 'file': None, 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'min_disk': 20, 'min_ram': 0, 'name': u'Test Monty Ubuntu', 'owner': None, 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'progress': 100}, 'size': 323004185, 'status': u'active', 'tags': [], 'updated_at': u'2015-02-15T23:04:34Z', 'virtual_size': 0, 'visibility': 'private'} retval = self.strict_cloud._normalize_image(raw_image) 
self.assertEqual(sorted(expected.keys()), sorted(retval.keys())) self.assertEqual(expected, retval) def test_normalize_glance_images(self): raw_image = RAW_GLANCE_IMAGE_DICT.copy() expected = { u'auto_disk_config': u'False', 'checksum': u'774f48af604ab1ec319093234c5c0019', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', 'container_format': u'ovf', 'created': u'2015-02-15T22:58:45Z', 'created_at': u'2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': u'vhd', 'file': u'/v2/images/f2868d7c-63e1-4974-a64d-8670a86df21e/file', 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', u'image_type': u'import', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'610275', 'name': None}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'metadata': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'schema': u'/v2/schemas/image', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'minDisk': 20, 'min_disk': 20, 'minRam': 0, 'min_ram': 0, 'name': u'Test Monty Ubuntu', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', 'owner': u'610275', 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': 
u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'schema': u'/v2/schemas/image', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'protected': False, u'schema': u'/v2/schemas/image', 'size': 323004185, 'status': u'active', 'tags': [], 'updated': u'2015-02-15T23:04:34Z', 'updated_at': u'2015-02-15T23:04:34Z', u'user_id': u'156284', 'virtual_size': 0, 'visibility': u'private', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'} retval = self.cloud._normalize_image(raw_image) self.assertEqual(expected, retval) def test_normalize_glance_images_strict(self): raw_image = RAW_GLANCE_IMAGE_DICT.copy() expected = { 'checksum': u'774f48af604ab1ec319093234c5c0019', 'container_format': u'ovf', 'created_at': u'2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': u'vhd', 'file': u'/v2/images/f2868d7c-63e1-4974-a64d-8670a86df21e/file', 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'610275', 'name': None}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'min_disk': 20, 'min_ram': 0, 'name': u'Test Monty Ubuntu', 'owner': u'610275', 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', 
u'schema': u'/v2/schemas/image', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'size': 323004185, 'status': u'active', 'tags': [], 'updated_at': u'2015-02-15T23:04:34Z', 'virtual_size': 0, 'visibility': 'private'} retval = self.strict_cloud._normalize_image(raw_image) self.assertEqual(sorted(expected.keys()), sorted(retval.keys())) self.assertEqual(expected, retval) def test_normalize_servers_strict(self): raw_server = RAW_SERVER_DICT.copy() expected = { 'accessIPv4': u'', 'accessIPv6': u'', 'addresses': { u'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'2604:e100:1:0:f816:3eff:fe9f:463e', u'version': 6 }, { u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'162.253.54.192', u'version': 4}]}, 'adminPass': None, 'created': u'2015-08-01T19:52:16Z', 'disk_config': u'MANUAL', 'flavor': {u'id': u'bbcb7eb5-5c8d-498f-9d7e-307c575d3566'}, 'has_config_drive': True, 'host_id': u'bd37', 'id': u'811c5197-dba7-4d3a-a3f6-68ca5328b9a7', 'image': {u'id': u'69c99b45-cd53-49de-afdc-f24789eb8f83'}, 'interface_ip': u'', 'key_name': u'mordred', 'launched_at': u'2015-08-01T19:52:02.000000', 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'db92b20496ae4fbda850a689ea9d563f', 'name': None}, 'region_name': u'RegionOne', 'zone': u'ca-ymq-2'}, 'metadata': {u'group': u'irc', u'groups': u'irc,enabled'}, 'name': u'mordred-irc', 'networks': { u'public': [ u'2604:e100:1:0:f816:3eff:fe9f:463e', u'162.253.54.192']}, 'power_state': 1, 'private_v4': None, 'progress': 0, 'properties': {}, 'public_v4': None, 'public_v6': None, 'security_groups': [{u'name': u'default'}], 'status': u'ACTIVE', 'task_state': None, 'terminated_at': None, 'updated': u'2016-10-15T15:49:29Z', 'user_id': u'e9b21dc437d149858faee0898fb08e92', 'vm_state': u'active', 'volumes': []} retval = self.strict_cloud._normalize_server(raw_server) 
self.assertEqual(expected, retval) def test_normalize_servers_normal(self): raw_server = RAW_SERVER_DICT.copy() expected = { 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-AZ:availability_zone': u'ca-ymq-2', 'OS-EXT-STS:power_state': 1, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': u'active', 'OS-SRV-USG:launched_at': u'2015-08-01T19:52:02.000000', 'OS-SRV-USG:terminated_at': None, 'accessIPv4': u'', 'accessIPv6': u'', 'addresses': { u'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'2604:e100:1:0:f816:3eff:fe9f:463e', u'version': 6 }, { u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'162.253.54.192', u'version': 4}]}, 'adminPass': None, 'az': u'ca-ymq-2', 'cloud': '_test_cloud_', 'config_drive': u'True', 'created': u'2015-08-01T19:52:16Z', 'disk_config': u'MANUAL', 'flavor': {u'id': u'bbcb7eb5-5c8d-498f-9d7e-307c575d3566'}, 'has_config_drive': True, 'hostId': u'bd37', 'host_id': u'bd37', 'id': u'811c5197-dba7-4d3a-a3f6-68ca5328b9a7', 'image': {u'id': u'69c99b45-cd53-49de-afdc-f24789eb8f83'}, 'interface_ip': '', 'key_name': u'mordred', 'launched_at': u'2015-08-01T19:52:02.000000', 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'db92b20496ae4fbda850a689ea9d563f', 'name': None}, 'region_name': u'RegionOne', 'zone': u'ca-ymq-2'}, 'metadata': {u'group': u'irc', u'groups': u'irc,enabled'}, 'name': u'mordred-irc', 'networks': { u'public': [ u'2604:e100:1:0:f816:3eff:fe9f:463e', u'162.253.54.192']}, 'os-extended-volumes:volumes_attached': [], 'power_state': 1, 'private_v4': None, 'progress': 0, 'project_id': u'db92b20496ae4fbda850a689ea9d563f', 'properties': { 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-AZ:availability_zone': u'ca-ymq-2', 'OS-EXT-STS:power_state': 1, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': u'active', 'OS-SRV-USG:launched_at': u'2015-08-01T19:52:02.000000', 'OS-SRV-USG:terminated_at': None, 
'os-extended-volumes:volumes_attached': []}, 'public_v4': None, 'public_v6': None, 'region': u'RegionOne', 'security_groups': [{u'name': u'default'}], 'status': u'ACTIVE', 'task_state': None, 'tenant_id': u'db92b20496ae4fbda850a689ea9d563f', 'terminated_at': None, 'updated': u'2016-10-15T15:49:29Z', 'user_id': u'e9b21dc437d149858faee0898fb08e92', 'vm_state': u'active', 'volumes': []} retval = self.cloud._normalize_server(raw_server) self.assertEqual(expected, retval) def test_normalize_secgroups_strict(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group', rules=[ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) expected = dict( id='abc123', name='nova_secgroup', description='A Nova security group', properties={}, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_'), security_group_rules=[ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123', properties={}, remote_group_id=None, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_')) ] ) retval = self.strict_cloud._normalize_secgroup(nova_secgroup) self.assertEqual(expected, retval) def test_normalize_secgroups(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group', rules=[ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) expected = dict( id='abc123', name='nova_secgroup', description='A Nova security group', tenant_id='', project_id='', properties={}, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, 
name='admin'), cloud='_test_cloud_'), security_group_rules=[ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123', properties={}, tenant_id='', project_id='', remote_group_id=None, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_')) ] ) retval = self.cloud._normalize_secgroup(nova_secgroup) self.assertEqual(expected, retval) def test_normalize_secgroups_negone_port(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group with -1 ports', rules=[ dict(id='123', from_port=-1, to_port=-1, ip_protocol='icmp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) retval = self.cloud._normalize_secgroup(nova_secgroup) self.assertIsNone(retval['security_group_rules'][0]['port_range_min']) self.assertIsNone(retval['security_group_rules'][0]['port_range_max']) def test_normalize_secgroup_rules(self): nova_rules = [ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] expected = [ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123', tenant_id='', project_id='', remote_group_id=None, properties={}, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_')) ] retval = self.cloud._normalize_secgroup_rules(nova_rules) self.assertEqual(expected, retval) def test_normalize_volumes_v1(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', display_name='test', display_description='description', bootable=u'false', # unicode type multiattach='true', # str type status='in-use', created_at='2015-08-27T09:49:58-05:00', ) expected = { 'attachments': 
[], 'availability_zone': None, 'bootable': False, 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': vol['display_description'], 'display_description': vol['display_description'], 'display_name': vol['display_name'], 'encrypted': False, 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'metadata': {}, 'migration_status': None, 'multiattach': True, 'name': vol['display_name'], 'properties': {}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.cloud._normalize_volume(vol) self.assertEqual(expected, retval) def test_normalize_volumes_v2(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', name='test', description='description', bootable=False, multiattach=True, status='in-use', created_at='2015-08-27T09:49:58-05:00', availability_zone='my-zone', ) vol['os-vol-tenant-attr:tenant_id'] = 'my-project' expected = { 'attachments': [], 'availability_zone': vol['availability_zone'], 'bootable': False, 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': vol['description'], 'display_description': vol['description'], 'display_name': vol['name'], 'encrypted': False, 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': vol['os-vol-tenant-attr:tenant_id'], 'name': None}, 'region_name': u'RegionOne', 'zone': vol['availability_zone']}, 'metadata': {}, 'migration_status': None, 'multiattach': True, 'name': vol['name'], 
'os-vol-tenant-attr:tenant_id': vol[ 'os-vol-tenant-attr:tenant_id'], 'properties': { 'os-vol-tenant-attr:tenant_id': vol[ 'os-vol-tenant-attr:tenant_id']}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.cloud._normalize_volume(vol) self.assertEqual(expected, retval) def test_normalize_volumes_v1_strict(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', display_name='test', display_description='description', bootable=u'false', # unicode type multiattach='true', # str type status='in-use', created_at='2015-08-27T09:49:58-05:00', ) expected = { 'attachments': [], 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': vol['display_description'], 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'metadata': {}, 'migration_status': None, 'name': vol['display_name'], 'properties': {}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.strict_cloud._normalize_volume(vol) self.assertEqual(expected, retval) def test_normalize_volumes_v2_strict(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', name='test', description='description', bootable=False, multiattach=True, status='in-use', created_at='2015-08-27T09:49:58-05:00', availability_zone='my-zone', ) vol['os-vol-tenant-attr:tenant_id'] = 'my-project' expected = { 'attachments': [], 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 
            'description': vol['description'],
            'host': None,
            'id': '55db9e89-9cb4-4202-af88-d8c4a174998e',
            'is_bootable': False,
            'is_encrypted': False,
            'location': {
                'cloud': '_test_cloud_',
                'project': {
                    'domain_id': None,
                    'domain_name': None,
                    'id': vol['os-vol-tenant-attr:tenant_id'],
                    'name': None},
                'region_name': u'RegionOne',
                'zone': vol['availability_zone']},
            'metadata': {},
            'migration_status': None,
            'name': vol['name'],
            'properties': {},
            'replication_driver': None,
            'replication_extended_status': None,
            'replication_status': None,
            'size': 0,
            'snapshot_id': None,
            'source_volume_id': None,
            'status': vol['status'],
            'updated_at': None,
            'volume_type': None,
        }
        retval = self.strict_cloud._normalize_volume(vol)
        self.assertEqual(expected, retval)

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_qos_dscp_marking_rule.py

# Copyright 2017 OVH SAS
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy from openstack.cloud import exc from openstack.tests.unit import base class TestQosDscpMarkingRule(base.RequestsMockTestCase): policy_name = 'qos test policy' policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486' project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9' rule_id = 'ed1a2b05-0ad7-45d7-873f-008b575a02b3' rule_dscp_mark = 32 mock_policy = { 'id': policy_id, 'name': policy_name, 'description': '', 'rules': [], 'project_id': project_id, 'tenant_id': project_id, 'shared': False, 'is_default': False } mock_rule = { 'id': rule_id, 'dscp_mark': rule_dscp_mark, } qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." } enabled_neutron_extensions = [qos_extension] def test_get_qos_dscp_marking_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={'dscp_marking_rule': self.mock_rule}) ]) r = self.cloud.get_qos_dscp_marking_rule(self.policy_name, self.rule_id) self.assertDictEqual(self.mock_rule, r) self.assert_calls() def test_get_qos_dscp_marking_rule_no_qos_policy_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), 
json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertRaises( exc.OpenStackCloudResourceNotFound, self.cloud.get_qos_dscp_marking_rule, self.policy_name, self.rule_id) self.assert_calls() def test_get_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_dscp_marking_rule, self.policy_name, self.rule_id) self.assert_calls() def test_create_qos_dscp_marking_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules']), json={'dscp_marking_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_dscp_marking_rule( self.policy_name, dscp_mark=self.rule_dscp_mark) self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_create_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_dscp_marking_rule, self.policy_name, dscp_mark=16) self.assert_calls() def test_update_qos_dscp_marking_rule(self): new_dscp_mark_value = 16 expected_rule = 
copy.copy(self.mock_rule) expected_rule['dscp_mark'] = new_dscp_mark_value self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={'dscp_marking_rule': self.mock_rule}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={'dscp_marking_rule': expected_rule}, validate=dict( json={'dscp_marking_rule': { 'dscp_mark': new_dscp_mark_value}})) ]) rule = self.cloud.update_qos_dscp_marking_rule( self.policy_id, self.rule_id, dscp_mark=new_dscp_mark_value) self.assertDictEqual(expected_rule, rule) self.assert_calls() def test_update_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.update_qos_dscp_marking_rule, self.policy_id, 
self.rule_id, dscp_mark=8) self.assert_calls() def test_delete_qos_dscp_marking_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={}) ]) self.assertTrue( self.cloud.delete_qos_dscp_marking_rule( self.policy_name, self.rule_id)) self.assert_calls() def test_delete_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_qos_dscp_marking_rule, self.policy_name, self.rule_id) self.assert_calls() def test_delete_qos_dscp_marking_rule_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), status_code=404) ]) self.assertFalse( self.cloud.delete_qos_dscp_marking_rule( 
                self.policy_name, self.rule_id))
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_quotas.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.cloud import exc
from openstack.tests.unit import base


fake_quota_set = {
    "cores": 20,
    "fixed_ips": -1,
    "floating_ips": 10,
    "injected_file_content_bytes": 10240,
    "injected_file_path_bytes": 255,
    "injected_files": 5,
    "instances": 10,
    "key_pairs": 100,
    "metadata_items": 128,
    "ram": 51200,
    "security_group_rules": 20,
    "security_groups": 45,
    "server_groups": 10,
    "server_group_members": 10
}


class TestQuotas(base.RequestsMockTestCase):

    def setUp(self, cloud_config_fixture='clouds.yaml'):
        super(TestQuotas, self).setUp(
            cloud_config_fixture=cloud_config_fixture)

    def test_update_quotas(self):
        project = self.mock_for_keystone_projects(project_count=1,
                                                  list_get=True)[0]
        self.register_uris([
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'compute', 'public',
                     append=['os-quota-sets', project.project_id]),
                 json={'quota_set': fake_quota_set},
                 validate=dict(
                     json={
                         'quota_set': {
                             'cores': 1,
                             'force': True
                         }})),
        ])
        self.cloud.set_compute_quotas(project.project_id, cores=1)
        self.assert_calls()

    def test_update_quotas_bad_request(self):
        project = self.mock_for_keystone_projects(project_count=1,
                                                  list_get=True)[0]
        self.register_uris([
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'compute', 'public',
                     append=['os-quota-sets', project.project_id]),
status_code=400), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.set_compute_quotas, project.project_id) self.assert_calls() def test_get_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-quota-sets', project.project_id]), json={'quota_set': fake_quota_set}), ]) self.cloud.get_compute_quotas(project.project_id) self.assert_calls() def test_delete_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-quota-sets', project.project_id])), ]) self.cloud.delete_compute_quotas(project.project_id) self.assert_calls() def test_cinder_update_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'volumev2', 'public', append=['os-quota-sets', project.project_id]), json=dict(quota_set={'volumes': 1}), validate=dict( json={'quota_set': { 'volumes': 1, 'tenant_id': project.project_id}}))]) self.cloud.set_volume_quotas(project.project_id, volumes=1) self.assert_calls() def test_cinder_get_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['os-quota-sets', project.project_id]), json=dict(quota_set={'snapshots': 10, 'volumes': 20}))]) self.cloud.get_volume_quotas(project.project_id) self.assert_calls() def test_cinder_delete_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['os-quota-sets', project.project_id]))]) self.cloud.delete_volume_quotas(project.project_id) self.assert_calls() def test_neutron_update_quotas(self): project = 
self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s.json' % project.project_id]), json={}, validate=dict( json={'quota': {'network': 1}})) ]) self.cloud.set_network_quotas(project.project_id, network=1) self.assert_calls() def test_neutron_get_quotas(self): quota = { 'subnet': 100, 'network': 100, 'floatingip': 50, 'subnetpool': -1, 'security_group_rule': 100, 'security_group': 10, 'router': 10, 'rbac_policy': 10, 'port': 500 } project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s.json' % project.project_id]), json={'quota': quota}) ]) received_quota = self.cloud.get_network_quotas(project.project_id) self.assertDictEqual(quota, received_quota) self.assert_calls() def test_neutron_get_quotas_details(self): quota_details = { 'subnet': { 'limit': 100, 'used': 7, 'reserved': 0}, 'network': { 'limit': 100, 'used': 6, 'reserved': 0}, 'floatingip': { 'limit': 50, 'used': 0, 'reserved': 0}, 'subnetpool': { 'limit': -1, 'used': 2, 'reserved': 0}, 'security_group_rule': { 'limit': 100, 'used': 4, 'reserved': 0}, 'security_group': { 'limit': 10, 'used': 1, 'reserved': 0}, 'router': { 'limit': 10, 'used': 2, 'reserved': 0}, 'rbac_policy': { 'limit': 10, 'used': 2, 'reserved': 0}, 'port': { 'limit': 500, 'used': 7, 'reserved': 0} } project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s/details.json' % project.project_id]), json={'quota': quota_details}) ]) received_quota_details = self.cloud.get_network_quotas( project.project_id, details=True) self.assertDictEqual(quota_details, received_quota_details) self.assert_calls() def test_neutron_delete_quotas(self): project = 
            self.mock_for_keystone_projects(project_count=1,
                                            list_get=True)[0]
        self.register_uris([
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'quotas',
                             '%s.json' % project.project_id]),
                 json={})
        ])
        self.cloud.delete_network_quotas(project.project_id)
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_qos_bandwidth_limit_rule.py

# Copyright 2017 OVH SAS
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy

from openstack.cloud import exc
from openstack.tests.unit import base


class TestQosBandwidthLimitRule(base.RequestsMockTestCase):

    policy_name = 'qos test policy'
    policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486'
    project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9'
    rule_id = 'ed1a2b05-0ad7-45d7-873f-008b575a02b3'
    rule_max_kbps = 1000
    rule_max_burst = 100

    mock_policy = {
        'id': policy_id,
        'name': policy_name,
        'description': '',
        'rules': [],
        'project_id': project_id,
        'tenant_id': project_id,
        'shared': False,
        'is_default': False
    }
    mock_rule = {
        'id': rule_id,
        'max_kbps': rule_max_kbps,
        'max_burst_kbps': rule_max_burst,
        'direction': 'egress'
    }
    qos_extension = {
        "updated": "2015-06-08T10:00:00-00:00",
        "name": "Quality of Service",
        "links": [],
        "alias": "qos",
        "description": "The Quality of Service extension."
} qos_bw_limit_direction_extension = { "updated": "2017-04-10T10:00:00-00:00", "name": "Direction for QoS bandwidth limit rule", "links": [], "alias": "qos-bw-limit-direction", "description": ("Allow to configure QoS bandwidth limit rule with " "specific direction: ingress or egress") } enabled_neutron_extensions = [qos_extension, qos_bw_limit_direction_extension] def test_get_qos_bandwidth_limit_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % self.rule_id]), json={'bandwidth_limit_rule': self.mock_rule}) ]) r = self.cloud.get_qos_bandwidth_limit_rule(self.policy_name, self.rule_id) self.assertDictEqual(self.mock_rule, r) self.assert_calls() def test_get_qos_bandwidth_limit_rule_no_qos_policy_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertRaises( exc.OpenStackCloudResourceNotFound, self.cloud.get_qos_bandwidth_limit_rule, self.policy_name, self.rule_id) self.assert_calls() def test_get_qos_bandwidth_limit_rule_no_qos_extension(self): self.register_uris([ 
dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_bandwidth_limit_rule, self.policy_name, self.rule_id) self.assert_calls() def test_create_qos_bandwidth_limit_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules']), json={'bandwidth_limit_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_bandwidth_limit_rule( self.policy_name, max_kbps=self.rule_max_kbps) self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_create_qos_bandwidth_limit_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_bandwidth_limit_rule, self.policy_name, max_kbps=100) self.assert_calls() def test_create_qos_bandwidth_limit_rule_no_qos_direction_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), 
json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules']), json={'bandwidth_limit_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_bandwidth_limit_rule( self.policy_name, max_kbps=self.rule_max_kbps, direction="ingress") self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_update_qos_bandwidth_limit_rule(self): expected_rule = copy.copy(self.mock_rule) expected_rule['max_kbps'] = self.rule_max_kbps + 100 self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % self.rule_id]), json={'bandwidth_limit_rule': self.mock_rule}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % 
                     self.rule_id]),
                 json={'bandwidth_limit_rule': expected_rule},
                 validate=dict(
                     json={'bandwidth_limit_rule': {
                         'max_kbps': self.rule_max_kbps + 100}}))
        ])
        rule = self.cloud.update_qos_bandwidth_limit_rule(
            self.policy_id,
            self.rule_id,
            max_kbps=self.rule_max_kbps + 100)
        self.assertDictEqual(expected_rule, rule)
        self.assert_calls()

    def test_update_qos_bandwidth_limit_rule_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.update_qos_bandwidth_limit_rule,
            self.policy_id, self.rule_id, max_kbps=2000)
        self.assert_calls()

    def test_update_qos_bandwidth_limit_rule_no_qos_direction_extension(self):
        expected_rule = copy.copy(self.mock_rule)
        expected_rule['max_kbps'] = self.rule_max_kbps + 100
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 json={'bandwidth_limit_rule': self.mock_rule}),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 json={'bandwidth_limit_rule': expected_rule},
                 validate=dict(
                     json={'bandwidth_limit_rule': {
                         'max_kbps': self.rule_max_kbps + 100}}))
        ])
        rule = self.cloud.update_qos_bandwidth_limit_rule(
            self.policy_id,
            self.rule_id,
            max_kbps=self.rule_max_kbps + 100,
            direction="ingress")
        # Even if there was an attempt to change direction to 'ingress', it
        # should not be changed in the returned rule.
        self.assertDictEqual(expected_rule, rule)
        self.assert_calls()

    def test_delete_qos_bandwidth_limit_rule(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 json={})
        ])
        self.assertTrue(
            self.cloud.delete_qos_bandwidth_limit_rule(
                self.policy_name, self.rule_id))
        self.assert_calls()

    def test_delete_qos_bandwidth_limit_rule_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.delete_qos_bandwidth_limit_rule,
            self.policy_name, self.rule_id)
        self.assert_calls()

    def test_delete_qos_bandwidth_limit_rule_not_found(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 status_code=404)
        ])
        self.assertFalse(
            self.cloud.delete_qos_bandwidth_limit_rule(
                self.policy_name, self.rule_id))
        self.assert_calls()
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_services.py0000666000175100017510000002667713236151340025005 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_cloud_services
----------------------------------

Tests Keystone services commands.
""" from openstack.cloud.exc import OpenStackCloudException from openstack.cloud.exc import OpenStackCloudUnavailableFeature from openstack.tests.unit import base from testtools import matchers class CloudServices(base.RequestsMockTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(CloudServices, self).setUp(cloud_config_fixture) def get_mock_url(self, service_type='identity', interface='admin', resource='services', append=None, base_url_append='v3'): return super(CloudServices, self).get_mock_url( service_type, interface, resource, append, base_url_append) def test_create_service_v2(self): self.use_keystone_v2() service_data = self._get_service_data(name='a service', type='network', description='A test service') reference_req = service_data.json_request.copy() reference_req.pop('enabled') self.register_uris([ dict(method='POST', uri=self.get_mock_url(base_url_append='OS-KSADM'), status_code=200, json=service_data.json_response_v2, validate=dict(json={'OS-KSADM:service': reference_req})) ]) service = self.cloud.create_service( name=service_data.service_name, service_type=service_data.service_type, description=service_data.description) self.assertThat(service.name, matchers.Equals(service_data.service_name)) self.assertThat(service.id, matchers.Equals(service_data.service_id)) self.assertThat(service.description, matchers.Equals(service_data.description)) self.assertThat(service.type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_create_service_v3(self): service_data = self._get_service_data(name='a service', type='network', description='A test service') self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=service_data.json_response_v3, validate=dict(json={'service': service_data.json_request})) ]) service = self.cloud.create_service( name=service_data.service_name, service_type=service_data.service_type, description=service_data.description) self.assertThat(service.name, 
matchers.Equals(service_data.service_name)) self.assertThat(service.id, matchers.Equals(service_data.service_id)) self.assertThat(service.description, matchers.Equals(service_data.description)) self.assertThat(service.type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_update_service_v2(self): self.use_keystone_v2() # NOTE(SamYaple): Update service only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.cloud.update_service, 'service_id', name='new name') def test_update_service_v3(self): service_data = self._get_service_data(name='a service', type='network', description='A test service') request = service_data.json_request.copy() request['enabled'] = False resp = service_data.json_response_v3.copy() resp['enabled'] = False request.pop('description') request.pop('name') request.pop('type') self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [resp['service']]}), dict(method='PATCH', uri=self.get_mock_url(append=[service_data.service_id]), status_code=200, json=resp, validate=dict(json={'service': request})) ]) service = self.cloud.update_service( service_data.service_id, enabled=False) self.assertThat(service.name, matchers.Equals(service_data.service_name)) self.assertThat(service.id, matchers.Equals(service_data.service_id)) self.assertThat(service.description, matchers.Equals(service_data.description)) self.assertThat(service.type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_list_services(self): service_data = self._get_service_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [service_data.json_response_v3['service']]}) ]) services = self.cloud.list_services() self.assertThat(len(services), matchers.Equals(1)) self.assertThat(services[0].id, matchers.Equals(service_data.service_id)) self.assertThat(services[0].name, matchers.Equals(service_data.service_name)) 
self.assertThat(services[0].type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_get_service(self): service_data = self._get_service_data() service2_data = self._get_service_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=400), ]) # Search by id service = self.cloud.get_service(name_or_id=service_data.service_id) self.assertThat(service.id, matchers.Equals(service_data.service_id)) # Search by name service = self.cloud.get_service( name_or_id=service_data.service_name) # test we are getting exactly 1 element self.assertThat(service.id, matchers.Equals(service_data.service_id)) # Not found service = self.cloud.get_service(name_or_id='INVALID SERVICE') self.assertIs(None, service) # Multiple matches # test we are getting an Exception self.assertRaises(OpenStackCloudException, self.cloud.get_service, name_or_id=None, filters={'type': 'type2'}) self.assert_calls() def test_search_services(self): service_data = self._get_service_data() service2_data = self._get_service_data(type=service_data.service_type) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), 
status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), ]) # Search by id services = self.cloud.search_services( name_or_id=service_data.service_id) # test we are getting exactly 1 element self.assertThat(len(services), matchers.Equals(1)) self.assertThat(services[0].id, matchers.Equals(service_data.service_id)) # Search by name services = self.cloud.search_services( name_or_id=service_data.service_name) # test we are getting exactly 1 element self.assertThat(len(services), matchers.Equals(1)) self.assertThat(services[0].name, matchers.Equals(service_data.service_name)) # Not found services = self.cloud.search_services(name_or_id='!INVALID!') self.assertThat(len(services), matchers.Equals(0)) # Multiple matches services = self.cloud.search_services( filters={'type': service_data.service_type}) # test we are getting exactly 2 elements self.assertThat(len(services), matchers.Equals(2)) self.assertThat(services[0].id, matchers.Equals(service_data.service_id)) self.assertThat(services[1].id, matchers.Equals(service2_data.service_id)) self.assert_calls() def test_delete_service(self): service_data = self._get_service_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service']]}), dict(method='DELETE', uri=self.get_mock_url(append=[service_data.service_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service']]}), dict(method='DELETE', uri=self.get_mock_url(append=[service_data.service_id]), status_code=204) ]) # Delete by name self.cloud.delete_service(name_or_id=service_data.service_name) # Delete by id self.cloud.delete_service(service_data.service_id) 
self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_shade_operator.py0000666000175100017510000000136313236151340026142 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(shade) Port this content back in from shade repo as tests don't have # references to ironic_client. from openstack.tests.unit import base class TestShadeOperator(base.RequestsMockTestCase): pass openstacksdk-0.11.3/openstack/tests/unit/cloud/test_magnum_services.py0000666000175100017510000000244413236151340026333 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.tests.unit import base magnum_service_obj = dict( binary='fake-service', created_at='2015-08-27T09:49:58-05:00', disabled_reason=None, host='fake-host', human_id=None, id=1, report_count=1, state='up', updated_at=None, ) class TestMagnumServices(base.RequestsMockTestCase): def test_list_magnum_services(self): self.register_uris([dict( method='GET', uri='https://container-infra.example.com/v1/mservices', json=dict(mservices=[magnum_service_obj]))]) mservices_list = self.cloud.list_magnum_services() self.assertEqual( mservices_list[0], self.cloud._normalize_magnum_service(magnum_service_obj)) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_update_server.py0000666000175100017510000000626513236151340026021 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_update_server ---------------------------------- Tests for the `update_server` command. 
""" import uuid from openstack.cloud.exc import OpenStackCloudException from openstack.tests import fakes from openstack.tests.unit import base class TestUpdateServer(base.RequestsMockTestCase): def setUp(self): super(TestUpdateServer, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.updated_server_name = self.getUniqueString('name2') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_update_server_with_update_exception(self): """ Test that an exception in the update raises an exception in update_server. """ self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id]), status_code=400, validate=dict( json={'server': {'name': self.updated_server_name}})), ]) self.assertRaises( OpenStackCloudException, self.cloud.update_server, self.server_name, name=self.updated_server_name) self.assert_calls() def test_update_server_name(self): """ Test that update_server updates the name without raising any exception """ fake_update_server = fakes.make_fake_server( self.server_id, self.updated_server_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id]), json={'server': fake_update_server}, validate=dict( json={'server': {'name': self.updated_server_name}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( self.updated_server_name, self.cloud.update_server( self.server_name, name=self.updated_server_name)['name']) self.assert_calls() 
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_server_group.py0000666000175100017510000000435013236151340025664 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from openstack.tests.unit import base from openstack.tests import fakes class TestServerGroup(base.RequestsMockTestCase): def setUp(self): super(TestServerGroup, self).setUp() self.group_id = uuid.uuid4().hex self.group_name = self.getUniqueString('server-group') self.policies = ['affinity'] self.fake_group = fakes.make_fake_server_group( self.group_id, self.group_name, self.policies) def test_create_server_group(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-server-groups']), json={'server_group': self.fake_group}, validate=dict( json={'server_group': { 'name': self.group_name, 'policies': self.policies, }})), ]) self.cloud.create_server_group(name=self.group_name, policies=self.policies) self.assert_calls() def test_delete_server_group(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-server-groups']), json={'server_groups': [self.fake_group]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-server-groups', self.group_id]), json={'server_groups': [self.fake_group]}), ]) self.assertTrue(self.cloud.delete_server_group(self.group_name)) self.assert_calls() 
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_operator.py0000666000175100017510000001062413236151364025004 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import testtools from openstack.cloud import exc from openstack.config import cloud_region from openstack.tests import fakes from openstack.tests.unit import base class TestOperatorCloud(base.RequestsMockTestCase): @mock.patch.object(cloud_region.CloudRegion, 'get_endpoint') def test_get_session_endpoint_provided(self, fake_get_endpoint): fake_get_endpoint.return_value = 'http://fake.url' self.assertEqual( 'http://fake.url', self.cloud.get_session_endpoint('image')) @mock.patch.object(cloud_region.CloudRegion, 'get_session') def test_get_session_endpoint_session(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = 'http://fake.url' get_session_mock.return_value = session_mock self.assertEqual( 'http://fake.url', self.cloud.get_session_endpoint('image')) @mock.patch.object(cloud_region.CloudRegion, 'get_session') def test_get_session_endpoint_exception(self, get_session_mock): class FakeException(Exception): pass def side_effect(*args, **kwargs): raise FakeException("No service") session_mock = mock.Mock() session_mock.get_endpoint.side_effect = side_effect get_session_mock.return_value = session_mock self.cloud.name = 'testcloud' self.cloud.region_name = 'testregion' with testtools.ExpectedException( exc.OpenStackCloudException, "Error getting 
image endpoint on testcloud:testregion:" " No service"): self.cloud.get_session_endpoint("image") @mock.patch.object(cloud_region.CloudRegion, 'get_session') def test_get_session_endpoint_unavailable(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = None get_session_mock.return_value = session_mock image_endpoint = self.cloud.get_session_endpoint("image") self.assertIsNone(image_endpoint) @mock.patch.object(cloud_region.CloudRegion, 'get_session') def test_get_session_endpoint_identity(self, get_session_mock): session_mock = mock.Mock() get_session_mock.return_value = session_mock self.cloud.get_session_endpoint('identity') kwargs = dict( interface='public', region_name='RegionOne', service_name=None, service_type='identity') session_mock.get_endpoint.assert_called_with(**kwargs) @mock.patch.object(cloud_region.CloudRegion, 'get_session') def test_has_service_no(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = None get_session_mock.return_value = session_mock self.assertFalse(self.cloud.has_service("image")) @mock.patch.object(cloud_region.CloudRegion, 'get_session') def test_has_service_yes(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = 'http://fake.url' get_session_mock.return_value = session_mock self.assertTrue(self.cloud.has_service("image")) def test_list_hypervisors(self): '''This test verifies that calling list_hypervisors results in a call to nova client.''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-hypervisors', 'detail']), json={'hypervisors': [ fakes.make_fake_hypervisor('1', 'testserver1'), fakes.make_fake_hypervisor('2', 'testserver2'), ]}), ]) r = self.cloud.list_hypervisors() self.assertEqual(2, len(r)) self.assertEqual('testserver1', r[0]['hypervisor_hostname']) self.assertEqual('testserver2', r[1]['hypervisor_hostname']) self.assert_calls() 
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_availability_zones.py0000666000175100017510000000442013236151340027030 0ustar zuulzuul00000000000000# Copyright (c) 2017 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.tests.unit import base from openstack.tests import fakes _fake_zone_list = { "availabilityZoneInfo": [ { "hosts": None, "zoneName": "az1", "zoneState": { "available": True } }, { "hosts": None, "zoneName": "nova", "zoneState": { "available": False } } ] } class TestAvailabilityZoneNames(base.RequestsMockTestCase): def test_list_availability_zone_names(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-availability-zone'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=_fake_zone_list), ]) self.assertEqual( ['az1'], self.cloud.list_availability_zone_names()) self.assert_calls() def test_unauthorized_availability_zone_names(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-availability-zone'.format( endpoint=fakes.COMPUTE_ENDPOINT), status_code=403), ]) self.assertEqual( [], self.cloud.list_availability_zone_names()) self.assert_calls() def test_list_all_availability_zone_names(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-availability-zone'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=_fake_zone_list), ]) self.assertEqual( ['az1', 'nova'], self.cloud.list_availability_zone_names(unavailable=True)) self.assert_calls() 
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_router.py0000666000175100017510000003616413236151340024472 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import testtools from openstack.cloud import exc from openstack.tests.unit import base class TestRouter(base.RequestsMockTestCase): router_name = 'goofy' router_id = '57076620-dcfb-42ed-8ad6-79ccb4a79ed2' subnet_id = '1f1696eb-7f47-47f6-835c-4889bff88604' mock_router_rep = { 'admin_state_up': True, 'availability_zone_hints': [], 'availability_zones': [], 'description': u'', 'distributed': False, 'external_gateway_info': None, 'flavor_id': None, 'ha': False, 'id': router_id, 'name': router_name, 'project_id': u'861808a93da0484ea1767967c4df8a23', 'routes': [], 'status': u'ACTIVE', 'tenant_id': u'861808a93da0484ea1767967c4df8a23' } mock_router_interface_rep = { 'network_id': '53aee281-b06d-47fc-9e1a-37f045182b8e', 'subnet_id': '1f1696eb-7f47-47f6-835c-4889bff88604', 'tenant_id': '861808a93da0484ea1767967c4df8a23', 'subnet_ids': [subnet_id], 'port_id': '23999891-78b3-4a6b-818d-d1b713f67848', 'id': '57076620-dcfb-42ed-8ad6-79ccb4a79ed2', 'request_ids': ['req-f1b0b1b4-ae51-4ef9-b371-0cc3c3402cf7'] } router_availability_zone_extension = { "alias": "router_availability_zone", "updated": "2015-01-01T10:00:00-00:00", "description": "Availability zone support for router.", "links": [], "name": "Router Availability Zone" } enabled_neutron_extensions 
= [router_availability_zone_extension] def test_get_router(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [self.mock_router_rep]}) ]) r = self.cloud.get_router(self.router_name) self.assertIsNotNone(r) self.assertDictEqual(self.mock_router_rep, r) self.assert_calls() def test_get_router_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': []}) ]) r = self.cloud.get_router('mickey') self.assertIsNone(r) self.assert_calls() def test_create_router(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True}})) ]) new_router = self.cloud.create_router(name=self.router_name, admin_state_up=True) self.assertDictEqual(self.mock_router_rep, new_router) self.assert_calls() def test_create_router_specific_tenant(self): new_router_tenant_id = "project_id_value" mock_router_rep = copy.copy(self.mock_router_rep) mock_router_rep['tenant_id'] = new_router_tenant_id mock_router_rep['project_id'] = new_router_tenant_id self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True, 'tenant_id': new_router_tenant_id}})) ]) self.cloud.create_router(self.router_name, project_id=new_router_tenant_id) self.assert_calls() def test_create_router_with_availability_zone_hints(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 
'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True, 'availability_zone_hints': ['nova']}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True, availability_zone_hints=['nova']) self.assert_calls() def test_create_router_with_enable_snat_True(self): """Do not send enable_snat when same as neutron default.""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True, enable_snat=True) self.assert_calls() def test_create_router_with_enable_snat_False(self): """Send enable_snat when it is False.""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'external_gateway_info': {'enable_snat': False}, 'admin_state_up': True}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True, enable_snat=False) self.assert_calls() def test_create_router_wrong_availability_zone_hints_type(self): azh_opts = "invalid" with testtools.ExpectedException( exc.OpenStackCloudException, "Parameter 'availability_zone_hints' must be a list" ): self.cloud.create_router( name=self.router_name, admin_state_up=True, availability_zone_hints=azh_opts) def test_add_router_interface(self): self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', self.router_id, 'add_router_interface.json']), json={'port': self.mock_router_interface_rep}, validate=dict( json={'subnet_id': self.subnet_id})) ]) self.cloud.add_router_interface( {'id': self.router_id}, subnet_id=self.subnet_id) self.assert_calls() def 
test_remove_router_interface(self): self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', self.router_id, 'remove_router_interface.json']), json={'port': self.mock_router_interface_rep}, validate=dict( json={'subnet_id': self.subnet_id})) ]) self.cloud.remove_router_interface( {'id': self.router_id}, subnet_id=self.subnet_id) self.assert_calls() def test_remove_router_interface_missing_argument(self): self.assertRaises(ValueError, self.cloud.remove_router_interface, {'id': '123'}) def test_update_router(self): new_router_name = "mickey" expected_router_rep = copy.copy(self.mock_router_rep) expected_router_rep['name'] = new_router_name self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [self.mock_router_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', '%s.json' % self.router_id]), json={'router': expected_router_rep}, validate=dict( json={'router': { 'name': new_router_name}})) ]) new_router = self.cloud.update_router( self.router_id, name=new_router_name) self.assertDictEqual(expected_router_rep, new_router) self.assert_calls() def test_delete_router(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [self.mock_router_rep]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', '%s.json' % self.router_id]), json={}) ]) self.assertTrue(self.cloud.delete_router(self.router_name)) self.assert_calls() def test_delete_router_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': []}), ]) self.assertFalse(self.cloud.delete_router(self.router_name)) self.assert_calls() def test_delete_router_multiple_found(self): router1 = dict(id='123', name='mickey') router2 = 
dict(id='456', name='mickey') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [router1, router2]}), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.delete_router, 'mickey') self.assert_calls() def test_delete_router_multiple_using_id(self): router1 = dict(id='123', name='mickey') router2 = dict(id='456', name='mickey') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [router1, router2]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', '123.json']), json={}) ]) self.assertTrue(self.cloud.delete_router("123")) self.assert_calls() def _get_mock_dict(self, owner, json): return dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=["device_id=%s" % self.router_id, "device_owner=network:%s" % owner]), json=json) def _test_list_router_interfaces(self, router, interface_type, router_type="normal", expected_result=None): if router_type == "normal": device_owner = 'router_interface' elif router_type == "ha": device_owner = 'ha_router_replicated_interface' elif router_type == "dvr": device_owner = 'router_interface_distributed' internal_port = { 'id': 'internal_port_id', 'fixed_ips': [{ 'subnet_id': 'internal_subnet_id', 'ip_address': "10.0.0.1" }], 'device_id': self.router_id, 'device_owner': 'network:%s' % device_owner } external_port = { 'id': 'external_port_id', 'fixed_ips': [{ 'subnet_id': 'external_subnet_id', 'ip_address': "1.2.3.4" }], 'device_id': self.router_id, 'device_owner': 'network:router_gateway' } if expected_result is None: if interface_type == "internal": expected_result = [internal_port] elif interface_type == "external": expected_result = [external_port] else: expected_result = [internal_port, external_port] mock_uris = [] for port_type in ['router_interface', 
'router_interface_distributed', 'ha_router_replicated_interface']: ports = {} if port_type == device_owner: ports = {'ports': [internal_port]} mock_uris.append(self._get_mock_dict(port_type, ports)) mock_uris.append(self._get_mock_dict('router_gateway', {'ports': [external_port]})) self.register_uris(mock_uris) ret = self.cloud.list_router_interfaces(router, interface_type) self.assertEqual(expected_result, ret) self.assert_calls() router = { 'id': router_id, 'external_gateway_info': { 'external_fixed_ips': [{ 'subnet_id': 'external_subnet_id', 'ip_address': '1.2.3.4'}] } } def test_list_router_interfaces_all(self): self._test_list_router_interfaces(self.router, interface_type=None) def test_list_router_interfaces_internal(self): self._test_list_router_interfaces(self.router, interface_type="internal") def test_list_router_interfaces_external(self): self._test_list_router_interfaces(self.router, interface_type="external") def test_list_router_interfaces_internal_ha(self): self._test_list_router_interfaces(self.router, router_type="ha", interface_type="internal") def test_list_router_interfaces_internal_dvr(self): self._test_list_router_interfaces(self.router, router_type="dvr", interface_type="internal") openstacksdk-0.11.3/openstack/tests/unit/cloud/__init__.py0000666000175100017510000000000013236151340023626 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/cloud/test_delete_volume_snapshot.py0000666000175100017510000000755613236151340027725 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ test_delete_volume_snapshot ---------------------------------- Tests for the `delete_volume_snapshot` command. """ from openstack.cloud import exc from openstack.cloud import meta from openstack.tests import fakes from openstack.tests.unit import base class TestDeleteVolumeSnapshot(base.RequestsMockTestCase): def test_delete_volume_snapshot(self): """ Test that delete_volume_snapshot without a wait returns True when the volume snapshot is deleted. """ fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', 'detail']), json={'snapshots': [fake_snapshot_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', fake_snapshot_dict['id']]))]) self.assertTrue( self.cloud.delete_volume_snapshot(name_or_id='1234', wait=False)) self.assert_calls() def test_delete_volume_snapshot_with_error(self): """ Test that an exception while deleting a volume snapshot will cause an OpenStackCloudException. """ fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', 'detail']), json={'snapshots': [fake_snapshot_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', fake_snapshot_dict['id']]), status_code=404)]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_volume_snapshot, name_or_id='1234') self.assert_calls() def test_delete_volume_snapshot_with_timeout(self): """ Test that a timeout while waiting for the volume snapshot to delete raises an exception in delete_volume_snapshot.
""" fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', 'detail']), json={'snapshots': [fake_snapshot_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', fake_snapshot_dict['id']]))]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.delete_volume_snapshot, name_or_id='1234', wait=True, timeout=0.01) self.assert_calls(do_count=False) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_server_console.py0000666000175100017510000000542513236151340026176 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import uuid from openstack.tests.unit import base from openstack.tests import fakes class TestServerConsole(base.RequestsMockTestCase): def setUp(self): super(TestServerConsole, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.server = fakes.make_fake_server( server_id=self.server_id, name=self.server_name) self.output = self.getUniqueString('output') def test_get_server_console_dict(self): self.register_uris([ dict(method='POST', uri='{endpoint}/servers/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=self.server_id), json={"output": self.output}, validate=dict( json={'os-getConsoleOutput': {'length': None}})) ]) self.assertEqual( self.output, self.cloud.get_server_console(self.server)) self.assert_calls() def test_get_server_console_name_or_id(self): self.register_uris([ dict(method='GET', uri='{endpoint}/servers/detail'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={"servers": [self.server]}), dict(method='POST', uri='{endpoint}/servers/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=self.server_id), json={"output": self.output}, validate=dict( json={'os-getConsoleOutput': {'length': None}})) ]) self.assertEqual( self.output, self.cloud.get_server_console(self.server['id'])) self.assert_calls() def test_get_server_console_no_console(self): self.register_uris([ dict(method='POST', uri='{endpoint}/servers/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=self.server_id), status_code=400, validate=dict( json={'os-getConsoleOutput': {'length': None}})) ]) self.assertEqual('', self.cloud.get_server_console(self.server)) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_server_set_metadata.py0000666000175100017510000000511013236151340027156 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_server_set_metadata ---------------------------------- Tests for the `set_server_metadata` command. """ import uuid from openstack.cloud.exc import OpenStackCloudBadRequest from openstack.tests import fakes from openstack.tests.unit import base class TestServerSetMetadata(base.RequestsMockTestCase): def setUp(self): super(TestServerSetMetadata, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_server_set_metadata_with_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata']), validate=dict(json={'metadata': {'meta': 'data'}}), json={}, status_code=400), ]) self.assertRaises( OpenStackCloudBadRequest, self.cloud.set_server_metadata, self.server_name, {'meta': 'data'}) self.assert_calls() def test_server_set_metadata(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata']), validate=dict(json={'metadata': {'meta': 'data'}}), status_code=200), ]) self.cloud.set_server_metadata(self.server_id, {'meta': 'data'}) self.assert_calls() 
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_qos_rule_type.py0000666000175100017510000001303413236151340026033 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from openstack.cloud import exc from openstack.tests.unit import base class TestQosRuleType(base.RequestsMockTestCase): rule_type_name = "bandwidth_limit" qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." 
} qos_rule_type_details_extension = { "updated": "2017-06-22T10:00:00-00:00", "name": "Details of QoS rule types", "links": [], "alias": "qos-rule-type-details", "description": ("Expose details about QoS rule types supported by " "loaded backend drivers") } mock_rule_type_bandwidth_limit = { 'type': 'bandwidth_limit' } mock_rule_type_dscp_marking = { 'type': 'dscp_marking' } mock_rule_types = [ mock_rule_type_bandwidth_limit, mock_rule_type_dscp_marking] mock_rule_type_details = { 'drivers': [{ 'name': 'linuxbridge', 'supported_parameters': [{ 'parameter_values': {'start': 0, 'end': 2147483647}, 'parameter_type': 'range', 'parameter_name': u'max_kbps' }, { 'parameter_values': ['ingress', 'egress'], 'parameter_type': 'choices', 'parameter_name': u'direction' }, { 'parameter_values': {'start': 0, 'end': 2147483647}, 'parameter_type': 'range', 'parameter_name': 'max_burst_kbps' }] }], 'type': rule_type_name } def test_list_qos_rule_types(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'rule-types.json']), json={'rule_types': self.mock_rule_types}) ]) rule_types = self.cloud.list_qos_rule_types() self.assertEqual(self.mock_rule_types, rule_types) self.assert_calls() def test_list_qos_rule_types_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.list_qos_rule_types) self.assert_calls() def test_get_qos_rule_type_details(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [ self.qos_extension, self.qos_rule_type_details_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 
'public', append=['v2.0', 'extensions.json']), json={'extensions': [ self.qos_extension, self.qos_rule_type_details_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'rule-types', '%s.json' % self.rule_type_name]), json={'rule_type': self.mock_rule_type_details}) ]) self.assertEqual( self.mock_rule_type_details, self.cloud.get_qos_rule_type_details(self.rule_type_name) ) self.assert_calls() def test_get_qos_rule_type_details_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_rule_type_details, self.rule_type_name) self.assert_calls() def test_get_qos_rule_type_details_no_qos_details_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_rule_type_details, self.rule_type_name) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_volume_access.py0000666000175100017510000002043213236151340025771 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools import openstack.cloud from openstack.tests.unit import base class TestVolumeAccess(base.RequestsMockTestCase): def test_list_volume_types(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]})]) self.assertTrue(self.cloud.list_volume_types()) self.assert_calls() def test_get_volume_type(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]})]) volume_type_got = self.cloud.get_volume_type(volume_type['name']) self.assertEqual(volume_type_got.id, volume_type['id']) def test_get_volume_type_access(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) volume_type_access = [ dict(volume_type_id='voltype01', name='name', project_id='prj01'), dict(volume_type_id='voltype01', name='name', project_id='prj02') ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': volume_type_access})]) self.assertEqual( len(self.cloud.get_volume_type_access(volume_type['name'])), 2) self.assert_calls() def test_remove_volume_type_access(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) project_001 = dict(volume_type_id='voltype01', name='name', project_id='prj01') project_002 = dict(volume_type_id='voltype01', name='name', 
project_id='prj02') volume_type_access = [project_001, project_002] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': volume_type_access}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'action']), json={'removeProjectAccess': { 'project': project_001['project_id']}}, validate=dict( json={'removeProjectAccess': { 'project': project_001['project_id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': [project_001]})]) self.assertEqual( len(self.cloud.get_volume_type_access( volume_type['name'])), 2) self.cloud.remove_volume_type_access( volume_type['name'], project_001['project_id']) self.assertEqual( len(self.cloud.get_volume_type_access(volume_type['name'])), 1) self.assert_calls() def test_add_volume_type_access(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) project_001 = dict(volume_type_id='voltype01', name='name', project_id='prj01') project_002 = dict(volume_type_id='voltype01', name='name', project_id='prj02') volume_type_access = [project_001, project_002] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='POST', 
uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'action']), json={'addProjectAccess': { 'project': project_002['project_id']}}, validate=dict( json={'addProjectAccess': { 'project': project_002['project_id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': volume_type_access})]) self.cloud.add_volume_type_access( volume_type['name'], project_002['project_id']) self.assertEqual( len(self.cloud.get_volume_type_access(volume_type['name'])), 2) self.assert_calls() def test_add_volume_type_access_missing(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) project_001 = dict(volume_type_id='voltype01', name='name', project_id='prj01') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]})]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "VolumeType not found: MISSING"): self.cloud.add_volume_type_access( "MISSING", project_001['project_id']) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_floating_ip_neutron.py0000666000175100017510000012030713236151340027210 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip_neutron ---------------------------------- Tests Floating IP resource methods for Neutron """ import copy import datetime import munch from openstack.cloud import exc from openstack.tests import fakes from openstack.tests.unit import base class TestFloatingIP(base.RequestsMockTestCase): mock_floating_ip_list_rep = { 'floatingips': [ { 'router_id': 'd23abc8d-2991-4a55-ba98-2aaea84cc72f', 'tenant_id': '4969c491a3c74ee4af974e6d800c62de', 'floating_network_id': '376da547-b977-4cfe-9cba-275c80debf57', 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda7', 'status': 'ACTIVE' }, { 'router_id': None, 'tenant_id': '4969c491a3c74ee4af974e6d800c62de', 'floating_network_id': '376da547-b977-4cfe-9cba-275c80debf57', 'fixed_ip_address': None, 'floating_ip_address': '203.0.113.30', 'port_id': None, 'id': '61cea855-49cb-4846-997d-801b70c71bdd', 'status': 'DOWN' } ] } mock_floating_ip_new_rep = { 'floatingip': { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': None, 'router_id': None, 'status': 'ACTIVE', 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } } mock_floating_ip_port_rep = { 'floatingip': { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'router_id': None, 'status': 
'ACTIVE', 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } } mock_get_network_rep = { 'status': 'ACTIVE', 'subnets': [ '54d6f61d-db07-451c-9ab3-b9609b6b6f0b' ], 'name': 'my-network', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': '4fd44f30292945e481c7b8a0c8908869', 'provider:network_type': 'local', 'router:external': True, 'shared': True, 'id': 'my-network-id', 'provider:segmentation_id': None } mock_search_ports_rep = [ { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': 'first-port', 'created_at': datetime.datetime.now().isoformat(), 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': '70c1db1f-b701-45bd-96e0-a313ee3430b3', 'tenant_id': '', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'compute:None', 'mac_address': 'fa:16:3e:58:42:ed', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '008ba151-0b8c-4a67-98b5-0d2b87666062', 'ip_address': u'172.24.4.2' } ], 'id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'security_groups': [], 'device_id': 'server-id' } ] def assertAreInstances(self, elements, elem_type): for e in elements: self.assertIsInstance(e, elem_type) def setUp(self): super(TestFloatingIP, self).setUp() self.fake_server = fakes.make_fake_server( 'server-id', '', 'ACTIVE', addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': '192.0.2.129', u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42'}]}) self.floating_ip = self.cloud._normalize_floating_ips( self.mock_floating_ip_list_rep['floatingips'])[0] def test_float_no_status(self): floating_ips = [ { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': None, 'router_id': None, 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } ] normalized = self.cloud._normalize_floating_ips(floating_ips) 
self.assertEqual('UNKNOWN', normalized[0]['status']) def test_list_floating_ips(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_list_rep)]) floating_ips = self.cloud.list_floating_ips() self.assertIsInstance(floating_ips, list) self.assertAreInstances(floating_ips, dict) self.assertEqual(2, len(floating_ips)) self.assert_calls() def test_list_floating_ips_with_filters(self): self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/floatingips.json?' 'Foo=42'), json={'floatingips': []})]) self.cloud.list_floating_ips(filters={'Foo': 42}) self.assert_calls() def test_search_floating_ips(self): self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/floatingips.json' '?attached=False'), json=self.mock_floating_ip_list_rep)]) floating_ips = self.cloud.search_floating_ips( filters={'attached': False}) self.assertIsInstance(floating_ips, list) self.assertAreInstances(floating_ips, dict) self.assertEqual(1, len(floating_ips)) self.assert_calls() def test_get_floating_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_list_rep)]) floating_ip = self.cloud.get_floating_ip( id='2f245a7b-796b-4f26-9cf9-9e82d248fda7') self.assertIsInstance(floating_ip, dict) self.assertEqual('172.24.4.229', floating_ip['floating_ip_address']) self.assertEqual( self.mock_floating_ip_list_rep['floatingips'][0]['tenant_id'], floating_ip['project_id'] ) self.assertEqual( self.mock_floating_ip_list_rep['floatingips'][0]['tenant_id'], floating_ip['tenant_id'] ) self.assertIn('location', floating_ip) self.assert_calls() def test_get_floating_ip_not_found(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_list_rep)]) floating_ip = self.cloud.get_floating_ip(id='non-existent') self.assertIsNone(floating_ip) 
self.assert_calls() def test_get_floating_ip_by_id(self): fid = self.mock_floating_ip_new_rep['floatingip']['id'] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips/' '{id}'.format(id=fid), json=self.mock_floating_ip_new_rep)]) floating_ip = self.cloud.get_floating_ip_by_id(id=fid) self.assertIsInstance(floating_ip, dict) self.assertEqual('172.24.4.229', floating_ip['floating_ip_address']) self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['tenant_id'], floating_ip['project_id'] ) self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['tenant_id'], floating_ip['tenant_id'] ) self.assertIn('location', floating_ip) self.assert_calls() def test_create_floating_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_new_rep, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id'}})) ]) ip = self.cloud.create_floating_ip(network='my-network') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) self.assert_calls() def test_create_floating_ip_port_bad_response(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_new_rep, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id', 'port_id': u'ce705c24-c1ef-408a-bda3-7bbd946164ab'}})) ]) # Fails because we requested a port and the returned FIP has no port self.assertRaises( exc.OpenStackCloudException, self.cloud.create_floating_ip, network='my-network', port='ce705c24-c1ef-408a-bda3-7bbd946164ab') self.assert_calls() def test_create_floating_ip_port(self): 
self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_port_rep, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id', 'port_id': u'ce705c24-c1ef-408a-bda3-7bbd946164ac'}})) ]) ip = self.cloud.create_floating_ip( network='my-network', port='ce705c24-c1ef-408a-bda3-7bbd946164ac') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) self.assert_calls() def test_neutron_available_floating_ips(self): """ Test without specifying a network name. """ fips_mock_uri = 'https://network.example.com/v2.0/floatingips.json' self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}), dict(method='GET', uri=fips_mock_uri, json={'floatingips': []}), dict(method='POST', uri=fips_mock_uri, json=self.mock_floating_ip_new_rep, validate=dict(json={ 'floatingip': { 'floating_network_id': self.mock_get_network_rep['id'] }})) ]) # Test if first network is selected if no network is given self.cloud._neutron_available_floating_ips() self.assert_calls() def test_neutron_available_floating_ips_network(self): """ Test when specifying a network name.
""" fips_mock_uri = 'https://network.example.com/v2.0/floatingips.json' self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}), dict(method='GET', uri=fips_mock_uri, json={'floatingips': []}), dict(method='POST', uri=fips_mock_uri, json=self.mock_floating_ip_new_rep, validate=dict(json={ 'floatingip': { 'floating_network_id': self.mock_get_network_rep['id'] }})) ]) # Test if first network is selected if no network is given self.cloud._neutron_available_floating_ips( network=self.mock_get_network_rep['name'] ) self.assert_calls() def test_neutron_available_floating_ips_invalid_network(self): """ Test with an invalid network name. """ self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud._neutron_available_floating_ips, network='INVALID') self.assert_calls() def test_auto_ip_pool_no_reuse(self): # payloads taken from citycloud self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={"networks": [{ "status": "ACTIVE", "subnets": [ "df3e17fa-a4b2-47ae-9015-bc93eb076ba2", "6b0c3dc9-b0b8-4d87-976a-7f2ebf13e7ec", "fc541f48-fc7f-48c0-a063-18de6ee7bdd7"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "ext-net", "admin_state_up": True, "tenant_id": "a564613210ee43708b8a7fc6274ebd63", "tags": [], "ipv6_address_scope": "9f03124f-89af-483a-b6fd-10f08079db4d", # noqa "mtu": 0, "is_default": False, "router:external": True, "ipv4_address_scope": None, "shared": False, "id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", "description": None }, { "status": "ACTIVE", "subnets": 
["f0ad1df5-53ee-473f-b86b-3604ea5591e9"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "private", "admin_state_up": True, "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26", "tags": [], "updated_at": "2016-10-22T13:46:26", "ipv6_address_scope": None, "router:external": False, "ipv4_address_scope": None, "shared": False, "mtu": 1450, "id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "description": "" }]}), dict(method='GET', uri='https://network.example.com/v2.0/ports.json' '?device_id=f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7', json={"ports": [{ "status": "ACTIVE", "created_at": "2017-02-06T20:59:45", "description": "", "allowed_address_pairs": [], "admin_state_up": True, "network_id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "dns_name": None, "extra_dhcp_opts": [], "mac_address": "fa:16:3e:e8:7f:03", "updated_at": "2017-02-06T20:59:49", "name": "", "device_owner": "compute:None", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "binding:vnic_type": "normal", "fixed_ips": [{ "subnet_id": "f0ad1df5-53ee-473f-b86b-3604ea5591e9", "ip_address": "10.4.0.16"}], "id": "a767944e-057a-47d1-a669-824a21b8fb7b", "security_groups": [ "9fb5ba44-5c46-4357-8e60-8b55526cab54"], "device_id": "f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7", }]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json={"floatingip": { "router_id": "9de9c787-8f89-4a53-8468-a5533d6d7fd1", "status": "DOWN", "description": "", "dns_domain": "", "floating_network_id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", # noqa "fixed_ip_address": "10.4.0.16", "floating_ip_address": "89.40.216.153", "port_id": "a767944e-057a-47d1-a669-824a21b8fb7b", "id": "e69179dc-a904-4c9a-a4c9-891e2ecb984c", "dns_name": "", "tenant_id": "65222a4d09ea4c68934fa1028c77f394" }}, validate=dict(json={"floatingip": { "floating_network_id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", # noqa "fixed_ip_address": "10.4.0.16", "port_id": "a767944e-057a-47d1-a669-824a21b8fb7b", }})), 
dict(method='GET', uri='{endpoint}/servers/detail'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={"servers": [{ "status": "ACTIVE", "updated": "2017-02-06T20:59:49Z", "addresses": { "private": [{ "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e8:7f:03", "version": 4, "addr": "10.4.0.16", "OS-EXT-IPS:type": "fixed" }, { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e8:7f:03", "version": 4, "addr": "89.40.216.153", "OS-EXT-IPS:type": "floating" }]}, "key_name": None, "image": {"id": "95e4c449-8abf-486e-97d9-dc3f82417d2d"}, "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-06T20:59:48.000000", "flavor": {"id": "2186bd79-a05e-4953-9dde-ddefb63c88d4"}, "id": "f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7", "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": None, "OS-EXT-AZ:availability_zone": "nova", "user_id": "c17534835f8f42bf98fc367e0bf35e09", "name": "testmt", "created": "2017-02-06T20:59:44Z", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "metadata": {} }]}), dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={"networks": [{ "status": "ACTIVE", "subnets": [ "df3e17fa-a4b2-47ae-9015-bc93eb076ba2", "6b0c3dc9-b0b8-4d87-976a-7f2ebf13e7ec", "fc541f48-fc7f-48c0-a063-18de6ee7bdd7"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "ext-net", "admin_state_up": True, "tenant_id": "a564613210ee43708b8a7fc6274ebd63", "tags": [], "ipv6_address_scope": "9f03124f-89af-483a-b6fd-10f08079db4d", # noqa "mtu": 0, "is_default": False, "router:external": True, "ipv4_address_scope": None, "shared": False, "id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", "description": None }, { "status": "ACTIVE", "subnets": ["f0ad1df5-53ee-473f-b86b-3604ea5591e9"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": 
"private", "admin_state_up": True, "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26", "tags": [], "updated_at": "2016-10-22T13:46:26", "ipv6_address_scope": None, "router:external": False, "ipv4_address_scope": None, "shared": False, "mtu": 1450, "id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "description": "" }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={"subnets": [{ "description": "", "enable_dhcp": True, "network_id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26", "dns_nameservers": [ "89.36.90.101", "89.36.90.102"], "updated_at": "2016-10-22T13:46:26", "gateway_ip": "10.4.0.1", "ipv6_ra_mode": None, "allocation_pools": [{ "start": "10.4.0.2", "end": "10.4.0.200"}], "host_routes": [], "ip_version": 4, "ipv6_address_mode": None, "cidr": "10.4.0.0/24", "id": "f0ad1df5-53ee-473f-b86b-3604ea5591e9", "subnetpool_id": None, "name": "private-subnet-ipv4", }]})]) self.cloud.add_ips_to_server( munch.Munch( id='f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7', addresses={ "private": [{ "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e8:7f:03", "version": 4, "addr": "10.4.0.16", "OS-EXT-IPS:type": "fixed" }]}), ip_pool='ext-net', reuse=False) self.assert_calls() def test_available_floating_ip_new(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': []}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id'}}), json=self.mock_floating_ip_new_rep) ]) ip = 
self.cloud.available_floating_ip(network='my-network') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) self.assert_calls() def test_delete_floating_ip_existing(self): fip_id = '2f245a7b-796b-4f26-9cf9-9e82d248fda7' fake_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'ACTIVE', } self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': []}), ]) self.assertTrue( self.cloud.delete_floating_ip(floating_ip_id=fip_id, retry=2)) self.assert_calls() def test_delete_floating_ip_existing_down(self): fip_id = '2f245a7b-796b-4f26-9cf9-9e82d248fda7' fake_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'ACTIVE', } down_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'DOWN', } self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), 
dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [down_fip]}), ]) self.assertTrue( self.cloud.delete_floating_ip(floating_ip_id=fip_id, retry=2)) self.assert_calls() def test_delete_floating_ip_existing_no_delete(self): fip_id = '2f245a7b-796b-4f26-9cf9-9e82d248fda7' fake_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'ACTIVE', } self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_floating_ip, floating_ip_id=fip_id, retry=2) self.assert_calls() def test_delete_floating_ip_not_found(self): self.register_uris([ dict(method='DELETE', uri=('https://network.example.com/v2.0/floatingips/' 'a-wild-id-appears.json'), status_code=404)]) ret = self.cloud.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertFalse(ret) self.assert_calls() def test_attach_ip_to_server(self): fip = self.mock_floating_ip_list_rep['floatingips'][0] device_id = self.fake_server['id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=["device_id={0}".format(device_id)]), 
json={'ports': self.mock_search_ports_rep}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( fip['id'])]), json={'floatingip': fip}, validate=dict( json={'floatingip': { 'port_id': self.mock_search_ports_rep[0]['id'], 'fixed_ip_address': self.mock_search_ports_rep[0][ 'fixed_ips'][0]['ip_address']}})), ]) self.cloud._attach_ip_to_server( server=self.fake_server, floating_ip=self.floating_ip) self.assert_calls() def test_add_ip_refresh_timeout(self): device_id = self.fake_server['id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=["device_id={0}".format(device_id)]), json={'ports': self.mock_search_ports_rep}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json={'floatingip': self.floating_ip}, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id', 'fixed_ip_address': self.mock_search_ports_rep[0][ 'fixed_ips'][0]['ip_address'], 'port_id': self.mock_search_ports_rep[0]['id']}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [self.floating_ip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( self.floating_ip['id'])]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': []}), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud._add_auto_ip, server=self.fake_server, wait=True, timeout=0.01, reuse=False) self.assert_calls() def test_detach_ip_from_server(self): fip = self.mock_floating_ip_new_rep['floatingip'] 
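# Illustrative sketch (ours, not part of the original test): detaching a
# neutron floating IP is a PUT whose body nulls out port_id, which is
# exactly what the PUT stub registered in this test validates.
def _detach_request_body():
    # Body sent to PUT /v2.0/floatingips/<fip-id>.json on detach.
    return {'floatingip': {'port_id': None}}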
attached_fip = copy.copy(fip) attached_fip['port_id'] = 'server-port-id' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [attached_fip]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( fip['id'])]), json={'floatingip': fip}, validate=dict( json={'floatingip': {'port_id': None}})) ]) self.cloud.detach_ip_from_server( server_id='server-id', floating_ip_id=fip['id']) self.assert_calls() def test_add_ip_from_pool(self): network = self.mock_get_network_rep fip = self.mock_floating_ip_new_rep['floatingip'] fixed_ip = self.mock_search_ports_rep[0]['fixed_ips'][0]['ip_address'] port_id = self.mock_search_ports_rep[0]['id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fip]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingip': fip}, validate=dict( json={'floatingip': { 'floating_network_id': network['id']}})), dict(method="GET", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=[ "device_id={0}".format(self.fake_server['id'])]), json={'ports': self.mock_search_ports_rep}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( fip['id'])]), json={'floatingip': fip}, validate=dict( json={'floatingip': { 'fixed_ip_address': fixed_ip, 'port_id': port_id}})), ]) server = self.cloud._add_ip_from_pool( server=self.fake_server, network=network['id'], fixed_address=fixed_ip) self.assertEqual(server, self.fake_server) 
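# Illustrative sketch (ours, not used by the test itself): attaching the
# floating IP obtained from the pool is a PUT carrying the target port id
# and fixed address, matching the final PUT stub registered above.
def _attach_request_body(port_id, fixed_ip):
    # Body sent to PUT /v2.0/floatingips/<fip-id>.json on attach.
    return {'floatingip': {'fixed_ip_address': fixed_ip, 'port_id': port_id}}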
self.assert_calls() def test_cleanup_floating_ips(self): floating_ips = [{ "id": "this-is-a-floating-ip-id", "fixed_ip_address": None, "internal_network": None, "floating_ip_address": "203.0.113.29", "network": "this-is-a-net-or-pool-id", "port_id": None, "status": "ACTIVE" }, { "id": "this-is-an-attached-floating-ip-id", "fixed_ip_address": None, "internal_network": None, "floating_ip_address": "203.0.113.29", "network": "this-is-a-net-or-pool-id", "attached": True, "port_id": "this-is-id-of-port-with-fip", "status": "ACTIVE" }] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': floating_ips}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( floating_ips[0]['id'])]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [floating_ips[1]]}), ]) self.cloud.delete_unattached_floating_ips() self.assert_calls() def test_create_floating_ip_no_port(self): server_port = { "id": "port-id", "device_id": "some-server", 'created_at': datetime.datetime.now().isoformat(), 'fixed_ips': [ { 'subnet_id': 'subnet-id', 'ip_address': '172.24.4.2' } ], } floating_ip = { "id": "floating-ip-id", "port_id": None } self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), dict(method="GET", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=['device_id=some-server']), json={'ports': [server_port]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingip': floating_ip}) ]) self.assertRaises( exc.OpenStackCloudException, 
self.cloud._neutron_create_floating_ip, server=dict(id='some-server')) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_image_snapshot.py0000666000175100017510000000776513236151340026160 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from openstack.cloud import exc from openstack.tests import fakes from openstack.tests.unit import base class TestImageSnapshot(base.RequestsMockTestCase): def setUp(self): super(TestImageSnapshot, self).setUp() self.server_id = str(uuid.uuid4()) self.image_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_create_image_snapshot_wait_until_active_never_active(self): snapshot_name = 'test-snapshot' fake_image = fakes.make_fake_image(self.image_id, status='pending') self.register_uris([ dict( method='POST', uri='{endpoint}/servers/{server_id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, server_id=self.server_id), headers=dict( Location='{endpoint}/images/{image_id}'.format( endpoint='https://images.example.com', image_id=self.image_id)), validate=dict( json={ "createImage": { "name": snapshot_name, "metadata": {}, }})), self.get_glance_discovery_mock_dict(), dict( method='GET', uri='https://image.example.com/v2/images', json=dict(images=[fake_image])), ]) self.assertRaises( exc.OpenStackCloudTimeout, 
self.cloud.create_image_snapshot, snapshot_name, dict(id=self.server_id), wait=True, timeout=0.01) # After the fifth call, we just keep polling get images for status. # Due to mocking sleep, we have no clue how many times we'll call it. self.assert_calls(stop_after=5, do_count=False) def test_create_image_snapshot_wait_active(self): snapshot_name = 'test-snapshot' pending_image = fakes.make_fake_image(self.image_id, status='pending') fake_image = fakes.make_fake_image(self.image_id) self.register_uris([ dict( method='POST', uri='{endpoint}/servers/{server_id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, server_id=self.server_id), headers=dict( Location='{endpoint}/images/{image_id}'.format( endpoint='https://images.example.com', image_id=self.image_id)), validate=dict( json={ "createImage": { "name": snapshot_name, "metadata": {}, }})), self.get_glance_discovery_mock_dict(), dict( method='GET', uri='https://image.example.com/v2/images', json=dict(images=[pending_image])), dict( method='GET', uri='https://image.example.com/v2/images', json=dict(images=[fake_image])), ]) image = self.cloud.create_image_snapshot( 'test-snapshot', dict(id=self.server_id), wait=True, timeout=2) self.assertEqual(image['id'], self.image_id) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_create_server.py0000666000175100017510000010134513236151340025775 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" test_create_server ---------------------------------- Tests for the `create_server` command. """ import base64 import uuid import mock import openstack.cloud from openstack.cloud import exc from openstack.cloud import meta from openstack.tests import fakes from openstack.tests.unit import base class TestCreateServer(base.RequestsMockTestCase): def test_create_server_with_get_exception(self): """ Test that a bad status code when attempting to get the server instance raises an exception in create_server. """ build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), status_code=404), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', {'id': 'image-id'}, {'id': 'flavor-id'}) self.assert_calls() def test_create_server_with_server_error(self): """ Test that a server error before we return or begin waiting for the server instance spawn raises an exception in create_server. 
""" build_server = fakes.make_fake_server('1234', '', 'BUILD') error_server = fakes.make_fake_server('1234', '', 'ERROR') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': error_server}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', {'id': 'image-id'}, {'id': 'flavor-id'}) self.assert_calls() def test_create_server_wait_server_error(self): """ Test that a server error while waiting for the server to spawn raises an exception in create_server. """ build_server = fakes.make_fake_server('1234', '', 'BUILD') error_server = fakes.make_fake_server('1234', '', 'ERROR') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [build_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [error_server]}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', dict(id='image-id'), dict(id='flavor-id'), wait=True) self.assert_calls() def test_create_server_with_timeout(self): """ Test that a timeout while waiting 
for the server to spawn raises an exception in create_server. """ fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.create_server, 'server-name', dict(id='image-id'), dict(id='flavor-id'), wait=True, timeout=0.01) # We poll at the end, so we don't know real counts self.assert_calls(do_count=False) def test_create_server_no_wait(self): """ Test that create_server with no wait and no exception in the create call returns the server instance. 
""" fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) normalized = self.cloud._expand_server( self.cloud._normalize_server(fake_server), False, False) self.assertEqual( normalized, self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'))) self.assert_calls() def test_create_server_config_drive(self): """ Test that config_drive gets passed in properly """ fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'config_drive': True, u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) normalized = self.cloud._expand_server( self.cloud._normalize_server(fake_server), False, False) self.assertEqual( normalized, self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), config_drive=True)) self.assert_calls() def test_create_server_config_drive_none(self): """ Test that config_drive gets not passed in properly """ fake_server = fakes.make_fake_server('1234', '', 'BUILD') 
self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) normalized = self.cloud._expand_server( self.cloud._normalize_server(fake_server), False, False) self.assertEqual( normalized, self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), config_drive=None)) self.assert_calls() def test_create_server_with_admin_pass_no_wait(self): """ Test that a server with an admin_pass passed returns the password """ admin_pass = self.getUniqueString('password') fake_server = fakes.make_fake_server('1234', '', 'BUILD') fake_create_server = fakes.make_fake_server( '1234', '', 'BUILD', admin_pass=admin_pass) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_create_server}, validate=dict( json={'server': { u'adminPass': admin_pass, u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) self.assertEqual( self.cloud._normalize_server(fake_create_server)['adminPass'], self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), admin_pass=admin_pass)['adminPass']) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 
"wait_for_server") def test_create_server_with_admin_pass_wait(self, mock_wait): """ Test that a server with an admin_pass passed returns the password """ admin_pass = self.getUniqueString('password') fake_server = fakes.make_fake_server('1234', '', 'BUILD') fake_server_with_pass = fakes.make_fake_server( '1234', '', 'BUILD', admin_pass=admin_pass) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server_with_pass}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'adminPass': admin_pass, u'name': u'server-name'}})), ]) # The wait returns non-password server mock_wait.return_value = self.cloud._normalize_server(fake_server) server = self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), admin_pass=admin_pass, wait=True) # Assert that we did wait self.assertTrue(mock_wait.called) # Even with the wait, we should still get back a passworded server self.assertEqual( server['adminPass'], self.cloud._normalize_server(fake_server_with_pass)['adminPass'] ) self.assert_calls() def test_create_server_user_data_base64(self): """ Test that a server passed user-data sends it base64 encoded. 
""" user_data = self.getUniqueString('user_data') user_data_b64 = base64.b64encode( user_data.encode('utf-8')).decode('utf-8') fake_server = fakes.make_fake_server('1234', '', 'BUILD') fake_server['user_data'] = user_data self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'user_data': user_data_b64, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), userdata=user_data, wait=False) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, "get_active_server") @mock.patch.object(openstack.cloud.OpenStackCloud, "get_server") def test_wait_for_server(self, mock_get_server, mock_get_active_server): """ Test that waiting for a server returns the server instance when its status changes to "ACTIVE". 
""" # TODO(mordred) Rework this to not mock methods building_server = {'id': 'fake_server_id', 'status': 'BUILDING'} active_server = {'id': 'fake_server_id', 'status': 'ACTIVE'} mock_get_server.side_effect = iter([building_server, active_server]) mock_get_active_server.side_effect = iter([ building_server, active_server]) server = self.cloud.wait_for_server(building_server) self.assertEqual(2, mock_get_server.call_count) mock_get_server.assert_has_calls([ mock.call(building_server['id']), mock.call(active_server['id']), ]) self.assertEqual(2, mock_get_active_server.call_count) mock_get_active_server.assert_has_calls([ mock.call(server=building_server, reuse=True, auto_ip=True, ips=None, ip_pool=None, wait=True, timeout=mock.ANY, nat_destination=None), mock.call(server=active_server, reuse=True, auto_ip=True, ips=None, ip_pool=None, wait=True, timeout=mock.ANY, nat_destination=None), ]) self.assertEqual('ACTIVE', server['status']) @mock.patch.object(openstack.cloud.OpenStackCloud, 'wait_for_server') def test_create_server_wait(self, mock_wait): """ Test that create_server with a wait actually does the wait. 
""" # TODO(mordred) Make this a full proper response fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), wait=True), mock_wait.assert_called_once_with( fake_server, auto_ip=True, ips=None, ip_pool=None, reuse=True, timeout=180, nat_destination=None, ) self.assert_calls() @mock.patch.object(openstack.cloud.OpenStackCloud, 'add_ips_to_server') @mock.patch('time.sleep') def test_create_server_no_addresses( self, mock_sleep, mock_add_ips_to_server): """ Test that create_server with a wait throws an exception if the server doesn't have addresses. 
""" build_server = fakes.make_fake_server('1234', '', 'BUILD') fake_server = fakes.make_fake_server( '1234', '', 'ACTIVE', addresses={}) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [build_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=['device_id=1234']), json={'ports': []}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) mock_add_ips_to_server.return_value = fake_server self.cloud._SERVER_AGE = 0 self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', {'id': 'image-id'}, {'id': 'flavor-id'}, wait=True) self.assert_calls() def test_create_server_network_with_no_nics(self): """ Verify that if 'network' is supplied, and 'nics' is not, that we attempt to get the network for the server. 
""" build_server = fakes.make_fake_server('1234', '', 'BUILD') network = { 'id': 'network-id', 'name': 'network-name' } self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{u'uuid': u'network-id'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), network='network-name') self.assert_calls() def test_create_server_network_with_empty_nics(self): """ Verify that if 'network' is supplied, along with an empty 'nics' list, it's treated the same as if 'nics' were not included. 
""" network = { 'id': 'network-id', 'name': 'network-name' } build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{u'uuid': u'network-id'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), network='network-name', nics=[]) self.assert_calls() def test_create_server_get_flavor_image(self): self.use_glance() image_id = str(uuid.uuid4()) fake_image_dict = fakes.make_fake_image(image_id=image_id) fake_image_search_return = {'images': [fake_image_dict]} build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=fake_image_search_return), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['flavors', 'detail'], qs_elements=['is_public=None']), json={'flavors': fakes.FAKE_FLAVOR_LIST}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': fakes.FLAVOR_ID, u'imageRef': image_id, u'max_count': 1, u'min_count': 1, u'networks': [{u'uuid': u'some-network'}], u'name': 
u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.cloud.create_server( 'server-name', image_id, 'vanilla', nics=[{'net-id': 'some-network'}], wait=False) self.assert_calls() def test_create_server_nics_port_id(self): '''Verify port-id in nics input turns into port in REST.''' build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') image_id = uuid.uuid4().hex port_id = uuid.uuid4().hex self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': fakes.FLAVOR_ID, u'imageRef': image_id, u'max_count': 1, u'min_count': 1, u'networks': [{u'port': port_id}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.cloud.create_server( 'server-name', dict(id=image_id), dict(id=fakes.FLAVOR_ID), nics=[{'port-id': port_id}], wait=False) self.assert_calls() def test_create_boot_attach_volume(self): build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-volumes_boot']), json={'server': build_server}, validate=dict( json={'server': { 
u'flavorRef': 'flavor-id', u'imageRef': 'image-id', u'max_count': 1, u'min_count': 1, u'block_device_mapping_v2': [ { u'boot_index': 0, u'delete_on_termination': True, u'destination_type': u'local', u'source_type': u'image', u'uuid': u'image-id' }, { u'boot_index': u'-1', u'delete_on_termination': False, u'destination_type': u'volume', u'source_type': u'volume', u'uuid': u'volume001' } ], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), ]) self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), boot_from_volume=False, volumes=[volume], wait=False) self.assert_calls() def test_create_boot_from_volume_image_terminate(self): build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-volumes_boot']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': 'flavor-id', u'imageRef': '', u'max_count': 1, u'min_count': 1, u'block_device_mapping_v2': [{ u'boot_index': u'0', u'delete_on_termination': True, u'destination_type': u'volume', u'source_type': u'image', u'uuid': u'image-id', u'volume_size': u'1'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), ]) self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), boot_from_volume=True, terminate_volume=True, volume_size=1, wait=False) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_volume.py0000666000175100017510000005435613236151340024464 0ustar zuulzuul00000000000000# Licensed under the Apache License, 
Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools import openstack.cloud from openstack.cloud import meta from openstack.tests import fakes from openstack.tests.unit import base class TestVolume(base.RequestsMockTestCase): def test_attach_volume(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) rattach = {'server_id': server['id'], 'device': 'device001', 'volumeId': volume['id'], 'id': 'attachmentId'} self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), json={'volumeAttachment': rattach}, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}}) )]) ret = self.cloud.attach_volume(server, volume, wait=False) self.assertEqual(rattach, ret) self.assert_calls() def test_attach_volume_exception(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), status_code=404, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}}) )]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudURINotFound, "Error attaching volume %s to server %s" % ( volume['id'], server['id']) ): self.cloud.attach_volume(server, volume, wait=False) 
self.assert_calls() def test_attach_volume_wait(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) vol['attachments'] = [{'server_id': server['id'], 'device': 'device001'}] vol['status'] = 'attached' attached_volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) rattach = {'server_id': server['id'], 'device': 'device001', 'volumeId': volume['id'], 'id': 'attachmentId'} self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), json={'volumeAttachment': rattach}, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [attached_volume]})]) # defaults to wait=True ret = self.cloud.attach_volume(server, volume) self.assertEqual(rattach, ret) self.assert_calls() def test_attach_volume_wait_error(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) vol['status'] = 'error' errored_volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) rattach = {'server_id': server['id'], 'device': 'device001', 'volumeId': volume['id'], 'id': 'attachmentId'} self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), json={'volumeAttachment': rattach}, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [errored_volume]})]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Error in attaching 
volume %s" % errored_volume['id'] ): self.cloud.attach_volume(server, volume) self.assert_calls() def test_attach_volume_not_available(self): server = dict(id='server001') volume = dict(id='volume001', status='error', attachments=[]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Volume %s is not available. Status is '%s'" % ( volume['id'], volume['status']) ): self.cloud.attach_volume(server, volume) self.assertEqual(0, len(self.adapter.request_history)) def test_attach_volume_already_attached(self): device_id = 'device001' server = dict(id='server001') volume = dict(id='volume001', attachments=[ {'server_id': 'server001', 'device': device_id} ]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Volume %s already attached to server %s on device %s" % ( volume['id'], server['id'], device_id) ): self.cloud.attach_volume(server, volume) self.assertEqual(0, len(self.adapter.request_history)) def test_detach_volume(self): server = dict(id='server001') volume = dict(id='volume001', attachments=[ {'server_id': 'server001', 'device': 'device001'} ]) self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments', volume['id']]))]) self.cloud.detach_volume(server, volume, wait=False) self.assert_calls() def test_detach_volume_exception(self): server = dict(id='server001') volume = dict(id='volume001', attachments=[ {'server_id': 'server001', 'device': 'device001'} ]) self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments', volume['id']]), status_code=404)]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudURINotFound, "Error detaching volume %s from server %s" % ( volume['id'], server['id']) ): self.cloud.detach_volume(server, volume, wait=False) self.assert_calls() def test_detach_volume_wait(self): server = dict(id='server001') 
        attachments = [{'server_id': 'server001', 'device': 'device001'}]
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': attachments}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        vol['status'] = 'available'
        vol['attachments'] = []
        avail_volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'compute', 'public',
                     append=['servers', server['id'],
                             'os-volume_attachments', volume.id])),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [avail_volume]})])
        self.cloud.detach_volume(server, volume)
        self.assert_calls()

    def test_detach_volume_wait_error(self):
        server = dict(id='server001')
        attachments = [{'server_id': 'server001', 'device': 'device001'}]
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': attachments}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        vol['status'] = 'error'
        vol['attachments'] = []
        errored_volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'compute', 'public',
                     append=['servers', server['id'],
                             'os-volume_attachments', volume.id])),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [errored_volume]})])

        with testtools.ExpectedException(
                openstack.cloud.OpenStackCloudException,
                "Error in detaching volume %s" % errored_volume['id']
        ):
            self.cloud.detach_volume(server, volume)
        self.assert_calls()

    def test_delete_volume_deletes(self):
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': []}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [volume]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['volumes', volume.id])),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': []})])
        self.assertTrue(self.cloud.delete_volume(volume['id']))
        self.assert_calls()

    def test_delete_volume_gone_away(self):
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': []}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [volume]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['volumes', volume.id]),
                 status_code=404)])
        self.assertFalse(self.cloud.delete_volume(volume['id']))
        self.assert_calls()

    def test_delete_volume_force(self):
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': []}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [volume]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['volumes', volume.id, 'action']),
                 validate=dict(
                     json={'os-force_delete': None})),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': []})])
        self.assertTrue(self.cloud.delete_volume(volume['id'], force=True))
        self.assert_calls()

    def test_set_volume_bootable(self):
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': []}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [volume]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['volumes', volume.id, 'action']),
                 json={'os-set_bootable': {'bootable': True}}),
        ])
        self.cloud.set_volume_bootable(volume['id'])
        self.assert_calls()

    def test_set_volume_bootable_false(self):
        vol = {'id': 'volume001', 'status': 'attached',
               'name': '', 'attachments': []}
        volume = meta.obj_to_munch(fakes.FakeVolume(**vol))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [volume]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['volumes', volume.id, 'action']),
                 json={'os-set_bootable': {'bootable': False}}),
        ])
        self.cloud.set_volume_bootable(volume['id'])
        self.assert_calls()

    def test_list_volumes_with_pagination(self):
        vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1'))
        vol2 = meta.obj_to_munch(fakes.FakeVolume('02', 'available', 'vol2'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={
                     'volumes': [vol1],
                     'volumes_links': [
                         {'href': self.get_mock_url(
                             'volumev2', 'public',
                             append=['volumes', 'detail'],
                             qs_elements=['marker=01']),
                          'rel': 'next'}]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail'],
                     qs_elements=['marker=01']),
                 json={
                     'volumes': [vol2],
                     'volumes_links': [
                         {'href': self.get_mock_url(
                             'volumev2', 'public',
                             append=['volumes', 'detail'],
                             qs_elements=['marker=02']),
                          'rel': 'next'}]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail'],
                     qs_elements=['marker=02']),
                 json={'volumes': []})])
        self.assertEqual(
            [self.cloud._normalize_volume(vol1),
             self.cloud._normalize_volume(vol2)],
            self.cloud.list_volumes())
        self.assert_calls()

    def test_list_volumes_with_pagination_next_link_fails_once(self):
        vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1'))
        vol2 = meta.obj_to_munch(fakes.FakeVolume('02', 'available', 'vol2'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={
                     'volumes': [vol1],
                     'volumes_links': [
                         {'href': self.get_mock_url(
                             'volumev2', 'public',
                             append=['volumes', 'detail'],
                             qs_elements=['marker=01']),
                          'rel': 'next'}]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail'],
                     qs_elements=['marker=01']),
                 status_code=404),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={
                     'volumes': [vol1],
                     'volumes_links': [
                         {'href': self.get_mock_url(
                             'volumev2', 'public',
                             append=['volumes', 'detail'],
                             qs_elements=['marker=01']),
                          'rel': 'next'}]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail'],
                     qs_elements=['marker=01']),
                 json={
                     'volumes': [vol2],
                     'volumes_links': [
                         {'href': self.get_mock_url(
                             'volumev2', 'public',
                             append=['volumes', 'detail'],
                             qs_elements=['marker=02']),
                          'rel': 'next'}]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail'],
                     qs_elements=['marker=02']),
                 json={'volumes': []})])
        self.assertEqual(
            [self.cloud._normalize_volume(vol1),
             self.cloud._normalize_volume(vol2)],
            self.cloud.list_volumes())
        self.assert_calls()

    def test_list_volumes_with_pagination_next_link_fails_all_attempts(self):
        vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1'))
        uris = []
        attempts = 5
        for i in range(attempts):
            uris.extend([
                dict(method='GET',
                     uri=self.get_mock_url(
                         'volumev2', 'public', append=['volumes', 'detail']),
                     json={
                         'volumes': [vol1],
                         'volumes_links': [
                             {'href': self.get_mock_url(
                                 'volumev2', 'public',
                                 append=['volumes', 'detail'],
                                 qs_elements=['marker=01']),
                              'rel': 'next'}]}),
                dict(method='GET',
                     uri=self.get_mock_url(
                         'volumev2', 'public', append=['volumes', 'detail'],
                         qs_elements=['marker=01']),
                     status_code=404)])
        self.register_uris(uris)
        # Check that found volumes are returned even if pagination didn't
        # complete because call to get next link 404'ed for all the allowed
        # attempts
        self.assertEqual(
            [self.cloud._normalize_volume(vol1)],
            self.cloud.list_volumes())
        self.assert_calls()

    def test_get_volume_by_id(self):
        vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1'))
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', '01']),
                 json={'volume': vol1}
                 )
        ])
        self.assertEqual(
            self.cloud._normalize_volume(vol1),
            self.cloud.get_volume_by_id('01'))
        self.assert_calls()

    def test_create_volume(self):
        vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1'))
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes']),
                 json={'volume': vol1},
                 validate=dict(json={
                     'volume': {
                         'size': 50,
                         'name': 'vol1',
                     }})),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [vol1]}),
        ])
        self.cloud.create_volume(50, name='vol1')
        self.assert_calls()

    def test_create_bootable_volume(self):
        vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1'))
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes']),
                 json={'volume': vol1},
                 validate=dict(json={
                     'volume': {
                         'size': 50,
                         'name': 'vol1',
                     }})),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['volumes', 'detail']),
                 json={'volumes': [vol1]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public',
                     append=['volumes', '01', 'action']),
                 validate=dict(
                     json={'os-set_bootable': {'bootable': True}})),
        ])
        self.cloud.create_volume(50, name='vol1', bootable=True)
        self.assert_calls()
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_create_volume_snapshot.py0000666000175100017510000001255113236151340027715 0ustar  zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_create_volume_snapshot
----------------------------------

Tests for the `create_volume_snapshot` command.
"""

from openstack.cloud import exc
from openstack.cloud import meta
from openstack.tests import fakes
from openstack.tests.unit import base


class TestCreateVolumeSnapshot(base.RequestsMockTestCase):

    def test_create_volume_snapshot_wait(self):
        """
        Test that create_volume_snapshot with a wait returns the volume
        snapshot when its status changes to "available".
        """
        snapshot_id = '5678'
        volume_id = '1234'
        build_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'creating',
                                                  'foo', 'derpysnapshot')
        build_snapshot_dict = meta.obj_to_munch(build_snapshot)
        fake_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'available',
                                                 'foo', 'derpysnapshot')
        fake_snapshot_dict = meta.obj_to_munch(fake_snapshot)

        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['snapshots']),
                 json={'snapshot': build_snapshot_dict},
                 validate=dict(json={
                     'snapshot': {'force': False, 'volume_id': '1234'}})),
            dict(method='GET',
                 uri=self.get_mock_url('volumev2', 'public',
                                       append=['snapshots', snapshot_id]),
                 json={'snapshot': build_snapshot_dict}),
            dict(method='GET',
                 uri=self.get_mock_url('volumev2', 'public',
                                       append=['snapshots', snapshot_id]),
                 json={'snapshot': fake_snapshot_dict})])

        self.assertEqual(
            self.cloud._normalize_volume(fake_snapshot_dict),
            self.cloud.create_volume_snapshot(volume_id=volume_id, wait=True)
        )
        self.assert_calls()

    def test_create_volume_snapshot_with_timeout(self):
        """
        Test that a timeout while waiting for the volume snapshot to create
        raises an exception in create_volume_snapshot.
        """
        snapshot_id = '5678'
        volume_id = '1234'
        build_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'creating',
                                                  'foo', 'derpysnapshot')
        build_snapshot_dict = meta.obj_to_munch(build_snapshot)

        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['snapshots']),
                 json={'snapshot': build_snapshot_dict},
                 validate=dict(json={
                     'snapshot': {'force': False, 'volume_id': '1234'}})),
            dict(method='GET',
                 uri=self.get_mock_url('volumev2', 'public',
                                       append=['snapshots', snapshot_id]),
                 json={'snapshot': build_snapshot_dict})])

        self.assertRaises(
            exc.OpenStackCloudTimeout,
            self.cloud.create_volume_snapshot, volume_id=volume_id,
            wait=True, timeout=0.01)
        self.assert_calls(do_count=False)

    def test_create_volume_snapshot_with_error(self):
        """
        Test that an error status while waiting for the volume snapshot to
        create raises an exception in create_volume_snapshot.
        """
        snapshot_id = '5678'
        volume_id = '1234'
        build_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'creating',
                                                  'bar', 'derpysnapshot')
        build_snapshot_dict = meta.obj_to_munch(build_snapshot)
        error_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'error',
                                                  'blah', 'derpysnapshot')
        error_snapshot_dict = meta.obj_to_munch(error_snapshot)

        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'volumev2', 'public', append=['snapshots']),
                 json={'snapshot': build_snapshot_dict},
                 validate=dict(json={
                     'snapshot': {'force': False, 'volume_id': '1234'}})),
            dict(method='GET',
                 uri=self.get_mock_url('volumev2', 'public',
                                       append=['snapshots', snapshot_id]),
                 json={'snapshot': build_snapshot_dict}),
            dict(method='GET',
                 uri=self.get_mock_url('volumev2', 'public',
                                       append=['snapshots', snapshot_id]),
                 json={'snapshot': error_snapshot_dict})])

        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.create_volume_snapshot, volume_id=volume_id,
            wait=True, timeout=5)
        self.assert_calls()
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_subnet.py0000666000175100017510000003755113236151340024447 0ustar
zuulzuul00000000000000# Copyright 2017 OVH SAS
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy

import testtools

from openstack.cloud import exc
from openstack.tests.unit import base


class TestSubnet(base.RequestsMockTestCase):

    network_name = 'network_name'
    subnet_name = 'subnet_name'
    subnet_id = '1f1696eb-7f47-47f6-835c-4889bff88604'
    subnet_cidr = '192.168.199.0/24'

    mock_network_rep = {
        'id': '881d1bb7-a663-44c0-8f9f-ee2765b74486',
        'name': network_name,
    }

    mock_subnet_rep = {
        'allocation_pools': [{
            'start': u'192.168.199.2',
            'end': u'192.168.199.254'
        }],
        'cidr': subnet_cidr,
        'created_at': '2017-04-24T20:22:23Z',
        'description': '',
        'dns_nameservers': [],
        'enable_dhcp': False,
        'gateway_ip': '192.168.199.1',
        'host_routes': [],
        'id': subnet_id,
        'ip_version': 4,
        'ipv6_address_mode': None,
        'ipv6_ra_mode': None,
        'name': subnet_name,
        'network_id': mock_network_rep['id'],
        'project_id': '861808a93da0484ea1767967c4df8a23',
        'revision_number': 2,
        'service_types': [],
        'subnetpool_id': None,
        'tags': []
    }

    def test_get_subnet(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnets': [self.mock_subnet_rep]})
        ])
        r = self.cloud.get_subnet(self.subnet_name)
        self.assertIsNotNone(r)
        self.assertDictEqual(self.mock_subnet_rep, r)
        self.assert_calls()

    def test_get_subnet_by_id(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'subnets', self.subnet_id]),
                 json={'subnet': self.mock_subnet_rep})
        ])
        r = self.cloud.get_subnet_by_id(self.subnet_id)
        self.assertIsNotNone(r)
        self.assertDictEqual(self.mock_subnet_rep, r)
        self.assert_calls()

    def test_create_subnet(self):
        pool = [{'start': '192.168.199.2', 'end': '192.168.199.254'}]
        dns = ['8.8.8.8']
        routes = [{"destination": "0.0.0.0/0", "nexthop": "123.456.78.9"}]
        mock_subnet_rep = copy.copy(self.mock_subnet_rep)
        mock_subnet_rep['allocation_pools'] = pool
        mock_subnet_rep['dns_nameservers'] = dns
        mock_subnet_rep['host_routes'] = routes
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnet': mock_subnet_rep},
                 validate=dict(
                     json={'subnet': {
                         'cidr': self.subnet_cidr,
                         'enable_dhcp': False,
                         'ip_version': 4,
                         'network_id': self.mock_network_rep['id'],
                         'allocation_pools': pool,
                         'dns_nameservers': dns,
                         'host_routes': routes}}))
        ])
        subnet = self.cloud.create_subnet(self.network_name, self.subnet_cidr,
                                          allocation_pools=pool,
                                          dns_nameservers=dns,
                                          host_routes=routes)
        self.assertDictEqual(mock_subnet_rep, subnet)
        self.assert_calls()

    def test_create_subnet_string_ip_version(self):
        '''Allow ip_version as a string'''
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnet': self.mock_subnet_rep},
                 validate=dict(
                     json={'subnet': {
                         'cidr': self.subnet_cidr,
                         'enable_dhcp': False,
                         'ip_version': 4,
                         'network_id': self.mock_network_rep['id']}}))
        ])
        subnet = self.cloud.create_subnet(
            self.network_name, self.subnet_cidr, ip_version='4')
        self.assertDictEqual(self.mock_subnet_rep, subnet)
        self.assert_calls()

    def test_create_subnet_bad_ip_version(self):
        '''String ip_versions must be convertible to int'''
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "ip_version must be an integer"
        ):
            self.cloud.create_subnet(
                self.network_name, self.subnet_cidr, ip_version='4x')
        self.assert_calls()

    def test_create_subnet_without_gateway_ip(self):
        pool = [{'start': '192.168.199.2', 'end': '192.168.199.254'}]
        dns = ['8.8.8.8']
        mock_subnet_rep = copy.copy(self.mock_subnet_rep)
        mock_subnet_rep['allocation_pools'] = pool
        mock_subnet_rep['dns_nameservers'] = dns
        mock_subnet_rep['gateway_ip'] = None
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnet': mock_subnet_rep},
                 validate=dict(
                     json={'subnet': {
                         'cidr': self.subnet_cidr,
                         'enable_dhcp': False,
                         'ip_version': 4,
                         'network_id': self.mock_network_rep['id'],
                         'allocation_pools': pool,
                         'gateway_ip': None,
                         'dns_nameservers': dns}}))
        ])
        subnet = self.cloud.create_subnet(self.network_name, self.subnet_cidr,
                                          allocation_pools=pool,
                                          dns_nameservers=dns,
                                          disable_gateway_ip=True)
        self.assertDictEqual(mock_subnet_rep, subnet)
        self.assert_calls()

    def test_create_subnet_with_gateway_ip(self):
        pool = [{'start': '192.168.199.8', 'end': '192.168.199.254'}]
        gateway = '192.168.199.2'
        dns = ['8.8.8.8']
        mock_subnet_rep = copy.copy(self.mock_subnet_rep)
        mock_subnet_rep['allocation_pools'] = pool
        mock_subnet_rep['dns_nameservers'] = dns
        mock_subnet_rep['gateway_ip'] = gateway
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]}),
            dict(method='POST',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnet': mock_subnet_rep},
                 validate=dict(
                     json={'subnet': {
                         'cidr': self.subnet_cidr,
                         'enable_dhcp': False,
                         'ip_version': 4,
                         'network_id': self.mock_network_rep['id'],
                         'allocation_pools': pool,
                         'gateway_ip': gateway,
                         'dns_nameservers': dns}}))
        ])
        subnet = self.cloud.create_subnet(self.network_name, self.subnet_cidr,
                                          allocation_pools=pool,
                                          dns_nameservers=dns,
                                          gateway_ip=gateway)
        self.assertDictEqual(mock_subnet_rep, subnet)
        self.assert_calls()

    def test_create_subnet_conflict_gw_ops(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]})
        ])
        gateway = '192.168.200.3'
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_subnet, 'kooky',
                          self.subnet_cidr, gateway_ip=gateway,
                          disable_gateway_ip=True)
        self.assert_calls()

    def test_create_subnet_bad_network(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [self.mock_network_rep]})
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_subnet,
                          'duck', self.subnet_cidr)
        self.assert_calls()

    def test_create_subnet_non_unique_network(self):
        net1 = dict(id='123', name=self.network_name)
        net2 = dict(id='456', name=self.network_name)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'networks.json']),
                 json={'networks': [net1, net2]})
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_subnet,
                          self.network_name, self.subnet_cidr)
        self.assert_calls()

    def test_delete_subnet(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnets': [self.mock_subnet_rep]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'subnets', '%s.json' % self.subnet_id]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_subnet(self.subnet_name))
        self.assert_calls()

    def test_delete_subnet_not_found(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnets': []})
        ])
        self.assertFalse(self.cloud.delete_subnet('goofy'))
        self.assert_calls()

    def test_delete_subnet_multiple_found(self):
        subnet1 = dict(id='123', name=self.subnet_name)
        subnet2 = dict(id='456', name=self.subnet_name)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnets': [subnet1, subnet2]})
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.delete_subnet,
                          self.subnet_name)
        self.assert_calls()

    def test_delete_subnet_multiple_using_id(self):
        subnet1 = dict(id='123', name=self.subnet_name)
        subnet2 = dict(id='456', name=self.subnet_name)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnets': [subnet1, subnet2]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'subnets', '%s.json' % subnet1['id']]),
                 json={})
        ])
        self.assertTrue(self.cloud.delete_subnet(subnet1['id']))
        self.assert_calls()

    def test_update_subnet(self):
        expected_subnet = copy.copy(self.mock_subnet_rep)
        expected_subnet['name'] = 'goofy'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'subnets.json']),
                 json={'subnets': [self.mock_subnet_rep]}),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'subnets', '%s.json' % self.subnet_id]),
                 json={'subnet': expected_subnet},
                 validate=dict(
                     json={'subnet': {'name': 'goofy'}}))
        ])
        subnet = self.cloud.update_subnet(self.subnet_id, subnet_name='goofy')
        self.assertDictEqual(expected_subnet, subnet)
        self.assert_calls()

    def test_update_subnet_gateway_ip(self):
        expected_subnet = copy.copy(self.mock_subnet_rep)
gateway = '192.168.199.3' expected_subnet['gateway_ip'] = gateway self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % self.subnet_id]), json={'subnet': expected_subnet}, validate=dict( json={'subnet': {'gateway_ip': gateway}})) ]) subnet = self.cloud.update_subnet(self.subnet_id, gateway_ip=gateway) self.assertDictEqual(expected_subnet, subnet) self.assert_calls() def test_update_subnet_disable_gateway_ip(self): expected_subnet = copy.copy(self.mock_subnet_rep) expected_subnet['gateway_ip'] = None self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % self.subnet_id]), json={'subnet': expected_subnet}, validate=dict( json={'subnet': {'gateway_ip': None}})) ]) subnet = self.cloud.update_subnet(self.subnet_id, disable_gateway_ip=True) self.assertDictEqual(expected_subnet, subnet) self.assert_calls() def test_update_subnet_conflict_gw_ops(self): self.assertRaises(exc.OpenStackCloudException, self.cloud.update_subnet, self.subnet_id, gateway_ip="192.168.199.3", disable_gateway_ip=True) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_domains.py0000666000175100017510000002326713236151340024604 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import uuid import testtools from testtools import matchers import openstack.cloud from openstack.tests.unit import base class TestDomains(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource='domains', append=None, base_url_append='v3'): return super(TestDomains, self).get_mock_url( service_type=service_type, interface=interface, resource=resource, append=append, base_url_append=base_url_append) def test_list_domains(self): domain_data = self._get_domain_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'domains': [domain_data.json_response['domain']]})]) domains = self.cloud.list_domains() self.assertThat(len(domains), matchers.Equals(1)) self.assertThat(domains[0].name, matchers.Equals(domain_data.domain_name)) self.assertThat(domains[0].id, matchers.Equals(domain_data.domain_id)) self.assert_calls() def test_get_domain(self): domain_data = self._get_domain_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(append=[domain_data.domain_id]), status_code=200, json=domain_data.json_response)]) domain = self.cloud.get_domain(domain_id=domain_data.domain_id) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assert_calls() def test_get_domain_with_name_or_id(self): domain_data = self._get_domain_data() response = {'domains': [domain_data.json_response['domain']]} self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json=response), 
dict(method='GET', uri=self.get_mock_url(), status_code=200, json=response)]) domain = self.cloud.get_domain(name_or_id=domain_data.domain_id) domain_by_name = self.cloud.get_domain( name_or_id=domain_data.domain_name) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assertThat(domain_by_name.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain_by_name.name, matchers.Equals(domain_data.domain_name)) self.assert_calls() def test_create_domain(self): domain_data = self._get_domain_data(description=uuid.uuid4().hex, enabled=True) self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=domain_data.json_response, validate=dict(json=domain_data.json_request))]) domain = self.cloud.create_domain( domain_data.domain_name, domain_data.description) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assertThat( domain.description, matchers.Equals(domain_data.description)) self.assert_calls() def test_create_domain_exception(self): domain_data = self._get_domain_data(domain_name='domain_name', enabled=True) with testtools.ExpectedException( openstack.cloud.OpenStackCloudBadRequest, "Failed to create domain domain_name" ): self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=400, json=domain_data.json_response, validate=dict(json=domain_data.json_request))]) self.cloud.create_domain('domain_name') self.assert_calls() def test_delete_domain(self): domain_data = self._get_domain_data() new_resp = domain_data.json_response.copy() new_resp['domain']['enabled'] = False domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id]) self.register_uris([ dict(method='PATCH', uri=domain_resource_uri, status_code=200, json=new_resp, validate=dict(json={'domain': {'enabled': False}})), dict(method='DELETE', 
uri=domain_resource_uri, status_code=204)]) self.cloud.delete_domain(domain_data.domain_id) self.assert_calls() def test_delete_domain_name_or_id(self): domain_data = self._get_domain_data() new_resp = domain_data.json_response.copy() new_resp['domain']['enabled'] = False domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id]) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'domains': [domain_data.json_response['domain']]}), dict(method='PATCH', uri=domain_resource_uri, status_code=200, json=new_resp, validate=dict(json={'domain': {'enabled': False}})), dict(method='DELETE', uri=domain_resource_uri, status_code=204)]) self.cloud.delete_domain(name_or_id=domain_data.domain_id) self.assert_calls() def test_delete_domain_exception(self): # NOTE(notmorgan): This test does not reflect the case where the domain # cannot be updated to be disabled; shade raises that as an "unable to # update domain" error even though it was called via delete_domain. This # should be fixed in shade to catch either a failure on the PATCH, the # subsequent GET, or the DELETE call(s).
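The NOTE above concerns shade's disable-then-delete flow for domains: Keystone v3 refuses to DELETE an enabled domain, so the client must first PATCH ``{'enabled': False}``. A minimal sketch of that flow, assuming a requests-style session that already carries an auth token (the standalone ``delete_domain`` helper and its arguments are illustrative, not shade's actual API):

```python
# Illustrative sketch of the disable-then-delete flow the tests above mock.
# `session` is assumed to be a requests-style session with auth configured;
# `base_url` is the identity v3 endpoint.
def delete_domain(session, base_url, domain_id):
    url = '%s/domains/%s' % (base_url, domain_id)
    # Keystone v3 rejects deletion of an enabled domain, so disable it first.
    resp = session.patch(url, json={'domain': {'enabled': False}})
    resp.raise_for_status()
    # Now the DELETE is permitted (204 No Content on success).
    resp = session.delete(url)
    resp.raise_for_status()
```

As the NOTE points out, a failure on the PATCH step surfaces differently from a failure on the DELETE step, which is why the exception test below only exercises the 404-on-DELETE case.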
domain_data = self._get_domain_data() new_resp = domain_data.json_response.copy() new_resp['domain']['enabled'] = False domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id]) self.register_uris([ dict(method='PATCH', uri=domain_resource_uri, status_code=200, json=new_resp, validate=dict(json={'domain': {'enabled': False}})), dict(method='DELETE', uri=domain_resource_uri, status_code=404)]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudURINotFound, "Failed to delete domain %s" % domain_data.domain_id ): self.cloud.delete_domain(domain_data.domain_id) self.assert_calls() def test_update_domain(self): domain_data = self._get_domain_data( description=self.getUniqueString('domainDesc')) domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id]) self.register_uris([ dict(method='PATCH', uri=domain_resource_uri, status_code=200, json=domain_data.json_response, validate=dict(json=domain_data.json_request))]) domain = self.cloud.update_domain( domain_data.domain_id, name=domain_data.domain_name, description=domain_data.description) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assertThat( domain.description, matchers.Equals(domain_data.description)) self.assert_calls() def test_update_domain_name_or_id(self): domain_data = self._get_domain_data( description=self.getUniqueString('domainDesc')) domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id]) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'domains': [domain_data.json_response['domain']]}), dict(method='PATCH', uri=domain_resource_uri, status_code=200, json=domain_data.json_response, validate=dict(json=domain_data.json_request))]) domain = self.cloud.update_domain( name_or_id=domain_data.domain_id, name=domain_data.domain_name, description=domain_data.description) self.assertThat(domain.id, 
matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assertThat( domain.description, matchers.Equals(domain_data.description)) self.assert_calls() def test_update_domain_exception(self): domain_data = self._get_domain_data( description=self.getUniqueString('domainDesc')) self.register_uris([ dict(method='PATCH', uri=self.get_mock_url(append=[domain_data.domain_id]), status_code=409, json=domain_data.json_response, validate=dict(json={'domain': {'enabled': False}}))]) with testtools.ExpectedException( openstack.cloud.OpenStackCloudHTTPError, "Error in updating domain %s" % domain_data.domain_id ): self.cloud.delete_domain(domain_data.domain_id) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_role_assignment.py0000666000175100017510000041701513236151340026341 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from openstack.cloud import exc from openstack.tests.unit import base import testtools from testtools import matchers class TestRoleAssignment(base.RequestsMockTestCase): def _build_role_assignment_response(self, role_id, scope_type, scope_id, entity_type, entity_id): self.assertThat(['group', 'user'], matchers.Contains(entity_type)) self.assertThat(['project', 'domain'], matchers.Contains(scope_type)) # NOTE(notmorgan): Links are thrown out by shade, but we construct them # for correctness.
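The assignment links constructed just below follow the URL shape of Keystone v3's ``role_assignments`` response. As a standalone sketch (the function name is hypothetical; the format string and example host are taken from the fixture):

```python
# Hypothetical standalone version of the link construction used in the
# fixture below; the host is the test's example identity endpoint.
def build_assignment_link(role_id, scope_type, scope_id, entity_type,
                          entity_id):
    link_str = ('https://identity.example.com/identity/v3/{scope_t}s'
                '/{scopeid}/{entity_t}s/{entityid}/roles/{roleid}')
    return link_str.format(scope_t=scope_type, scopeid=scope_id,
                           entity_t=entity_type, entityid=entity_id,
                           roleid=role_id)
```

For example, a user's role on a project yields ``.../projects/<project>/users/<user>/roles/<role>``, while a group's role on a domain yields ``.../domains/<domain>/groups/<group>/roles/<role>``.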
link_str = ('https://identity.example.com/identity/v3/{scope_t}s' '/{scopeid}/{entity_t}s/{entityid}/roles/{roleid}') return [{ 'links': {'assignment': link_str.format( scope_t=scope_type, scopeid=scope_id, entity_t=entity_type, entityid=entity_id, roleid=role_id)}, 'role': {'id': role_id}, 'scope': {scope_type: {'id': scope_id}}, entity_type: {'id': entity_id} }] def setUp(self, cloud_config_fixture='clouds.yaml'): super(TestRoleAssignment, self).setUp(cloud_config_fixture) self.role_data = self._get_role_data() self.domain_data = self._get_domain_data() self.user_data = self._get_user_data( domain_id=self.domain_data.domain_id) self.project_data = self._get_project_data( domain_id=self.domain_data.domain_id) self.project_data_v2 = self._get_project_data( project_name=self.project_data.project_name, project_id=self.project_data.project_id, v3=False) self.group_data = self._get_group_data( domain_id=self.domain_data.domain_id) self.user_project_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id) self.group_project_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id) self.user_domain_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id) self.group_domain_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id) # Cleanup of instances to ensure garbage collection/no leaking memory # in tests. 
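The cleanups registered below rely on ``TestCase.addCleanup`` running its callables after each test finishes, in LIFO order. A minimal self-contained illustration of the same ``addCleanup(delattr, self, name)`` pattern (class and attribute names invented for the example):

```python
import unittest

class CleanupExample(unittest.TestCase):
    def test_fixture_is_cleaned_up(self):
        # Attach a fixture to the test instance...
        self.role_data = {'id': 'r1'}
        # ...and schedule its removal once the test finishes, mirroring the
        # addCleanup(delattr, self, 'role_data') calls in setUp below.
        self.addCleanup(delattr, self, 'role_data')
        self.assertEqual('r1', self.role_data['id'])
```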
self.addCleanup(delattr, self, 'role_data') self.addCleanup(delattr, self, 'user_data') self.addCleanup(delattr, self, 'domain_data') self.addCleanup(delattr, self, 'group_data') self.addCleanup(delattr, self, 'project_data') self.addCleanup(delattr, self, 'project_data_v2') self.addCleanup(delattr, self, 'user_project_assignment') self.addCleanup(delattr, self, 'group_project_assignment') self.addCleanup(delattr, self, 'user_domain_assignment') self.addCleanup(delattr, self, 'group_domain_assignment') def get_mock_url(self, service_type='identity', interface='admin', resource='role_assignments', append=None, base_url_append='v3', qs_elements=None): return super(TestRoleAssignment, self).get_mock_url( service_type, interface, resource, append, base_url_append, qs_elements) def test_grant_role_user_v2(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', status_code=201, uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', 
uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201) ]) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_user_project_v2(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', 
self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201, ), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': 
[self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201) ]) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data_v2.project_id)) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data_v2.project_id)) self.assertTrue( self.cloud.grant_role( self.role_data.role_id, user=self.user_data.name, project=self.project_data_v2.project_id)) self.assertTrue( self.cloud.grant_role( self.role_data.role_id, user=self.user_data.user_id, project=self.project_data_v2.project_id)) self.assert_calls() def test_grant_role_user_project_v2_exists(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, 
resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), ]) self.assertFalse(self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data_v2.project_id)) self.assert_calls() def test_grant_role_user_project(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), 
dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), ]) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_user_project_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % 
self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), ]) self.assertFalse(self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertFalse(self.cloud.grant_role( self.role_data.role_id, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_group_project(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), 
dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), ]) self.assertTrue(self.cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, project=self.project_data.project_id)) self.assertTrue(self.cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_group_project_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ 
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'group.id=%s' % self.group_data.group_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='project',
                         scope_id=self.project_data.project_id,
                         entity_type='group',
                         entity_id=self.group_data.group_id)}),
        ])
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            project=self.project_data.project_id))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_grant_role_user_domain(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'users', self.user_data.user_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles':
                       [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'users', self.user_data.user_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'users', self.user_data.user_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'users', self.user_data.user_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
        ])
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.name,
            domain=self.domain_data.domain_id))
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            domain=self.domain_data.domain_id))
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.name,
            domain=self.domain_data.domain_name))
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            domain=self.domain_data.domain_name))
        self.assert_calls()

    def test_grant_role_user_domain_exists(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='user',
                         entity_id=self.user_data.user_id)}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='user',
                         entity_id=self.user_data.user_id)}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='user',
                         entity_id=self.user_data.user_id)}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'user.id=%s' % self.user_data.user_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='user',
                         entity_id=self.user_data.user_id)}),
        ])
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.name,
            domain=self.domain_data.domain_id))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            domain=self.domain_data.domain_id))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.name,
            domain=self.domain_data.domain_name))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            domain=self.domain_data.domain_name))
        self.assert_calls()

    def test_grant_role_group_domain(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' %
                         self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'groups',
                                               self.group_data.group_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'groups',
                                               self.group_data.group_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'groups',
                                               self.group_data.group_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={'role_assignments': []}),
            dict(method='PUT',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id,
                                               'groups',
                                               self.group_data.group_id,
                                               'roles',
                                               self.role_data.role_id]),
                 status_code=204),
        ])
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            domain=self.domain_data.domain_id))
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            domain=self.domain_data.domain_id))
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            domain=self.domain_data.domain_name))
        self.assertTrue(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            domain=self.domain_data.domain_name))
        self.assert_calls()

    def test_grant_role_group_domain_exists(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups':
                       [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='group',
                         entity_id=self.group_data.group_id)}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='group',
                         entity_id=self.group_data.group_id)}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' %
                             self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='group',
                         entity_id=self.group_data.group_id)}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_name]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'role.id=%s' % self.role_data.role_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'group.id=%s' % self.group_data.group_id]),
                 complete_qs=True,
                 status_code=200,
                 json={
                     'role_assignments': self._build_role_assignment_response(
                         role_id=self.role_data.role_id,
                         scope_type='domain',
                         scope_id=self.domain_data.domain_id,
                         entity_type='group',
                         entity_id=self.group_data.group_id)}),
        ])
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            domain=self.domain_data.domain_id))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            domain=self.domain_data.domain_id))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            domain=self.domain_data.domain_name))
        self.assertFalse(self.cloud.grant_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            domain=self.domain_data.domain_name))
        self.assert_calls()

    def test_revoke_role_user_v2(self):
        self.use_keystone_v2()
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append='OS-KSADM', resource='roles'),
                 status_code=200,
                 json={'roles':
                       [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append=None, resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append=None, resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append=None,
                     resource='tenants',
                     append=[self.project_data_v2.project_id,
                             'users', self.user_data.user_id, 'roles']),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles', 'OS-KSADM',
                                               self.role_data.role_id]),
                 status_code=204),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append='OS-KSADM', resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append=None, resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append=None, resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     base_url_append=None,
                     resource='tenants',
                     append=[self.project_data_v2.project_id,
                             'users', self.user_data.user_id, 'roles']),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles', 'OS-KSADM',
                                               self.role_data.role_id]),
                 status_code=204),
        ])
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.name,
            project=self.project_data.project_id))
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_user_project_v2(self):
        self.use_keystone_v2()
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append='OS-KSADM',
                                       resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles']),
                 status_code=200,
                 json={'roles': []}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append='OS-KSADM',
                                       resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles']),
                 status_code=200,
                 json={'roles': []}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append='OS-KSADM',
                                       resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles']),
                 status_code=200,
                 json={'roles': []}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append='OS-KSADM',
                                       resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles']),
                 status_code=200,
                 json={'roles': []})
        ])
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.name,
            project=self.project_data.project_id))
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            project=self.project_data.project_id))
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_id,
            user=self.user_data.name,
            project=self.project_data.project_id))
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_id,
            user=self.user_data.user_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_user_project_v2_exists(self):
        self.use_keystone_v2()
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append='OS-KSADM',
                                       resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants'),
                 status_code=200,
                 json={
                     'tenants': [
                         self.project_data_v2.json_response['tenant']]}),
            dict(method='GET',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles']),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(base_url_append=None,
                                       resource='tenants',
                                       append=[self.project_data_v2.project_id,
                                               'users', self.user_data.user_id,
                                               'roles', 'OS-KSADM',
                                               self.role_data.role_id]),
                 status_code=204),
        ])
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.name,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_user_project(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'user.id=%s' % self.user_data.user_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments': []}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'user.id=%s' % self.user_data.user_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments': []}),
        ])
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.name,
            project=self.project_data.project_id))
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.user_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_user_project_exists(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'user.id=%s' % self.user_data.user_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments':
                       self._build_role_assignment_response(
                           role_id=self.role_data.role_id,
                           scope_type='project',
                           scope_id=self.project_data.project_id,
                           entity_type='user',
                           entity_id=self.user_data.user_id)}),
            dict(method='DELETE',
                 uri=self.get_mock_url(resource='projects',
                                       append=[self.project_data.project_id,
                                               'users', self.user_data.user_id,
                                               'roles',
                                               self.role_data.role_id])),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'user.id=%s' % self.user_data.user_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments':
                       self._build_role_assignment_response(
                           role_id=self.role_data.role_id,
                           scope_type='project',
                           scope_id=self.project_data.project_id,
                           entity_type='user',
                           entity_id=self.user_data.user_id)}),
            dict(method='DELETE',
                 uri=self.get_mock_url(resource='projects',
                                       append=[self.project_data.project_id,
                                               'users', self.user_data.user_id,
                                               'roles',
                                               self.role_data.role_id])),
        ])
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_name,
            user=self.user_data.name,
            project=self.project_data.project_id))
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_id,
            user=self.user_data.user_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_group_project(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'group.id=%s' % self.group_data.group_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments': []}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'group.id=%s' % self.group_data.group_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments': []}),
        ])
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            project=self.project_data.project_id))
        self.assertFalse(self.cloud.revoke_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_group_project_exists(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'group.id=%s' % self.group_data.group_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments':
                       self._build_role_assignment_response(
                           role_id=self.role_data.role_id,
                           scope_type='project',
                           scope_id=self.project_data.project_id,
                           entity_type='group',
                           entity_id=self.group_data.group_id)}),
            dict(method='DELETE',
                 uri=self.get_mock_url(resource='projects',
                                       append=[self.project_data.project_id,
                                               'groups',
                                               self.group_data.group_id,
                                               'roles',
                                               self.role_data.role_id])),
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='projects'),
                 status_code=200,
                 json={'projects': [
                     self.project_data.json_response['project']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'group.id=%s' % self.group_data.group_id,
                         'scope.project.id=%s' % self.project_data.project_id,
                         'role.id=%s' % self.role_data.role_id]),
                 status_code=200,
                 complete_qs=True,
                 json={'role_assignments':
                       self._build_role_assignment_response(
                           role_id=self.role_data.role_id,
                           scope_type='project',
                           scope_id=self.project_data.project_id,
                           entity_type='group',
                           entity_id=self.group_data.group_id)}),
            dict(method='DELETE',
                 uri=self.get_mock_url(resource='projects',
                                       append=[self.project_data.project_id,
                                               'groups',
                                               self.group_data.group_id,
                                               'roles',
                                               self.role_data.role_id])),
        ])
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_name,
            group=self.group_data.group_name,
            project=self.project_data.project_id))
        self.assertTrue(self.cloud.revoke_role(
            self.role_data.role_name,
            group=self.group_data.group_id,
            project=self.project_data.project_id))
        self.assert_calls()

    def test_revoke_role_user_domain(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='domains',
                                       append=[self.domain_data.domain_id]),
                 status_code=200,
                 json=self.domain_data.json_response),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='role_assignments',
                     qs_elements=[
                         'user.id=%s' % self.user_data.user_id,
                         'scope.domain.id=%s' % self.domain_data.domain_id,
                         'role.id=%s' % self.role_data.role_id]),
status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 
'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), ]) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_id)) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_id)) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_name)) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_revoke_role_user_domain_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': 
[self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', 
self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), ]) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_name)) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_name)) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_id)) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_id)) self.assert_calls() def test_revoke_role_group_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', 
uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', 
uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), ]) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name)) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_name)) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_id)) self.assertFalse(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_id)) self.assert_calls() def test_revoke_role_group_domain_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', 
scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, 
complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), ]) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name)) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_name)) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_id)) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, 
            domain=self.domain_data.domain_id))
        self.assert_calls()

    def test_grant_no_role(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': []})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Role {0} not found'.format(self.role_data.role_name)
        ):
            self.cloud.grant_role(
                self.role_data.role_name,
                group=self.group_data.group_name,
                domain=self.domain_data.domain_name)
        self.assert_calls()

    def test_revoke_no_role(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': []})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Role {0} not found'.format(self.role_data.role_name)
        ):
            self.cloud.revoke_role(
                self.role_data.role_name,
                group=self.group_data.group_name,
                domain=self.domain_data.domain_name)
        self.assert_calls()

    def test_grant_no_user_or_group_specified(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Must specify either a user or a group'
        ):
            self.cloud.grant_role(self.role_data.role_name)
        self.assert_calls()

    def test_revoke_no_user_or_group_specified(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Must specify either a user or a group'
        ):
            self.cloud.revoke_role(self.role_data.role_name)
        self.assert_calls()

    def test_grant_no_user_or_group(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': []})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Must specify either a user or a group'
        ):
            self.cloud.grant_role(
                self.role_data.role_name,
                user=self.user_data.name)
        self.assert_calls()

    def test_revoke_no_user_or_group(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': []})
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Must specify either a user or a group'
        ):
            self.cloud.revoke_role(
                self.role_data.role_name,
                user=self.user_data.name)
        self.assert_calls()

    def test_grant_both_user_and_group(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Specify either a group or a user, not both'
        ):
            self.cloud.grant_role(
                self.role_data.role_name,
                user=self.user_data.name,
                group=self.group_data.group_name)
        self.assert_calls()

    def test_revoke_both_user_and_group(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='roles'),
                 status_code=200,
                 json={'roles': [self.role_data.json_response['role']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='users'),
                 status_code=200,
                 json={'users': [self.user_data.json_response['user']]}),
            dict(method='GET',
                 uri=self.get_mock_url(resource='groups'),
                 status_code=200,
                 json={'groups': [self.group_data.json_response['group']]}),
        ])
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                'Specify either a group or a user, not both'
        ):
            self.cloud.revoke_role(
                self.role_data.role_name,
user=self.user_data.name, group=self.group_data.group_name) self.assert_calls() def test_grant_both_project_and_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % self.domain_data.domain_id)), status_code=200, json={'projects': [self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204) ]) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_revoke_both_project_and_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % 
self.domain_data.domain_id)), status_code=200, json={'projects': [self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204) ]) self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_grant_no_project_or_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=['user.id=%s' % self.user_data.user_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a domain or project' ): self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name) self.assert_calls() def test_revoke_no_project_or_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), 
status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=['user.id=%s' % self.user_data.user_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a domain or project' ): self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name) self.assert_calls() def test_grant_bad_domain_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=['baddomain']), status_code=404, headers={'Content-Type': 'text/plain'}, text='Could not find domain: baddomain') ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, 'Failed to get domain baddomain' ): self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, domain='baddomain') self.assert_calls() def test_revoke_bad_domain_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=['baddomain']), status_code=404, headers={'Content-Type': 'text/plain'}, text='Could not find domain: baddomain') ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, 'Failed to get domain baddomain' ): self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain='baddomain') self.assert_calls() def test_grant_role_user_project_v2_wait(self): self.use_keystone_v2() self.register_uris([ 
dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), ]) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True)) self.assert_calls() def test_grant_role_user_project_v2_wait_exception(self): self.use_keystone_v2() with testtools.ExpectedException( exc.OpenStackCloudTimeout, 'Timeout waiting for role to be granted' ): self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ 
self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[ self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[ self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), ]) self.assertTrue( self.cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True, timeout=0.01)) self.assert_calls(do_count=False) def test_revoke_role_user_project_v2_wait(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(base_url_append=None, 
resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), ]) self.assertTrue( self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True)) self.assert_calls(do_count=False) def test_revoke_role_user_project_v2_wait_exception(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), ]) with testtools.ExpectedException( exc.OpenStackCloudTimeout, 'Timeout waiting for role to be revoked' ): self.assertTrue(self.cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True, timeout=0.01)) self.assert_calls(do_count=False) 
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_floating_ip_pool.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_floating_ip_pool
----------------------------------

Test floating IP pool resource (managed by nova)
"""

from openstack.cloud.exc import OpenStackCloudException
from openstack.tests.unit import base
from openstack.tests import fakes


class TestFloatingIPPool(base.RequestsMockTestCase):

    pools = [{'name': u'public'}]

    def test_list_floating_ip_pools(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/extensions'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'extensions': [{
                     u'alias': u'os-floating-ip-pools',
                     u'updated': u'2014-12-03T00:00:00Z',
                     u'name': u'FloatingIpPools',
                     u'links': [],
                     u'namespace':
                         u'http://docs.openstack.org/compute/ext/fake_xml',
                     u'description': u'Floating IPs support.'}]}),
            dict(method='GET',
                 uri='{endpoint}/os-floating-ip-pools'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={"floating_ip_pools": [{"name": "public"}]})
        ])

        floating_ip_pools = self.cloud.list_floating_ip_pools()

        self.assertItemsEqual(floating_ip_pools, self.pools)

        self.assert_calls()

    def test_list_floating_ip_pools_exception(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/extensions'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'extensions': [{
                     u'alias': u'os-floating-ip-pools',
                     u'updated': u'2014-12-03T00:00:00Z',
                     u'name': u'FloatingIpPools',
                     u'links': [],
                     u'namespace':
                         u'http://docs.openstack.org/compute/ext/fake_xml',
                     u'description': u'Floating IPs support.'}]}),
            dict(method='GET',
                 uri='{endpoint}/os-floating-ip-pools'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 status_code=404)])

        self.assertRaises(
            OpenStackCloudException, self.cloud.list_floating_ip_pools)

        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_baremetal_ports.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_baremetal_ports
----------------------------------

Tests for baremetal port related operations
"""

from testscenarios import load_tests_apply_scenarios as load_tests  # noqa

from openstack.cloud import exc
from openstack.tests import fakes
from openstack.tests.unit import base


class TestBaremetalPort(base.IronicTestCase):

    def setUp(self):
        super(TestBaremetalPort, self).setUp()
        self.fake_baremetal_node = fakes.make_fake_machine(
            self.name, self.uuid)
        # TODO(TheJulia): Some tests below have fake ports,
        # since they are required in some processes. Let's refactor
        # them at some point to use self.fake_baremetal_port.
        self.fake_baremetal_port = fakes.make_fake_port(
            '00:01:02:03:04:05',
            node_id=self.uuid)
        self.fake_baremetal_port2 = fakes.make_fake_port(
            '0a:0b:0c:0d:0e:0f',
            node_id=self.uuid)

    def test_list_nics(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='ports'),
                 json={'ports': [self.fake_baremetal_port,
                                 self.fake_baremetal_port2]}),
        ])

        return_value = self.cloud.list_nics()
        self.assertEqual(2, len(return_value))
        self.assertEqual(self.fake_baremetal_port, return_value[0])
        self.assert_calls()

    def test_list_nics_failure(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='ports'),
                 status_code=400)
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.list_nics)
        self.assert_calls()

    def test_list_nics_for_machine(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='nodes',
                     append=[self.fake_baremetal_node['uuid'], 'ports']),
                 json={'ports': [self.fake_baremetal_port,
                                 self.fake_baremetal_port2]}),
        ])
        return_value = self.cloud.list_nics_for_machine(
            self.fake_baremetal_node['uuid'])
        self.assertEqual(2, len(return_value))
        self.assertEqual(self.fake_baremetal_port, return_value[0])
        self.assert_calls()

    def test_list_nics_for_machine_failure(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     resource='nodes',
                     append=[self.fake_baremetal_node['uuid'], 'ports']),
                 status_code=400)
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.list_nics_for_machine,
                          self.fake_baremetal_node['uuid'])
        self.assert_calls()

    def test_get_nic_by_mac(self):
        mac = self.fake_baremetal_port['address']
        query = 'detail?address=%s' % mac
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='ports', append=[query]),
                 json={'ports': [self.fake_baremetal_port]}),
        ])

        return_value = self.cloud.get_nic_by_mac(mac)

        self.assertEqual(self.fake_baremetal_port, return_value)
        self.assert_calls()

    def test_get_nic_by_mac_failure(self):
        mac = self.fake_baremetal_port['address']
        query = 'detail?address=%s' % mac
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(resource='ports', append=[query]),
                 json={'ports': []}),
        ])

        self.assertIsNone(self.cloud.get_nic_by_mac(mac))
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/cloud/test_image.py

# Copyright 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# TODO(mordred) There are mocks of the image_client in here that are not
# using requests_mock. Eradicate them.
import operator
import tempfile
import uuid

import mock
import munch
import six

import openstack.cloud
from openstack.cloud import exc
from openstack.cloud import meta
from openstack.cloud import openstackcloud
from openstack.tests import fakes
from openstack.tests.unit import base

CINDER_URL = 'https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0'


class BaseTestImage(base.RequestsMockTestCase):

    def setUp(self):
        super(BaseTestImage, self).setUp()
        self.image_id = str(uuid.uuid4())
        self.image_name = self.getUniqueString('image')
        self.object_name = u'images/{name}'.format(name=self.image_name)
        self.imagefile = tempfile.NamedTemporaryFile(delete=False)
        self.imagefile.write(b'\0')
        self.imagefile.close()
        self.fake_image_dict = fakes.make_fake_image(
            image_id=self.image_id, image_name=self.image_name)
        self.fake_search_return = {'images': [self.fake_image_dict]}
        self.output = uuid.uuid4().bytes
        self.container_name = self.getUniqueString('container')


class TestImage(BaseTestImage):

    def setUp(self):
        super(TestImage, self).setUp()
        self.use_glance()

    def test_config_v1(self):
        self.cloud.cloud_config.config['image_api_version'] = '1'
        # We override the scheme of the endpoint with the scheme of the
        # service because glance has a bug where it doesn't return https
        # properly.
        self.assertEqual(
            'https://image.example.com/v1/',
            self.cloud._image_client.get_endpoint())
        self.assertEqual(
            '1', self.cloud_config.get_api_version('image'))

    def test_config_v2(self):
        self.cloud.cloud_config.config['image_api_version'] = '2'
        # We override the scheme of the endpoint with the scheme of the
        # service because glance has a bug where it doesn't return https
        # properly.
        self.assertEqual(
            'https://image.example.com/v2/',
            self.cloud._image_client.get_endpoint())
        self.assertEqual(
            '2', self.cloud_config.get_api_version('image'))

    def test_download_image_no_output(self):
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.download_image, self.image_name)

    def test_download_image_two_outputs(self):
        fake_fd = six.BytesIO()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.download_image, self.image_name,
                          output_path='fake_path', output_file=fake_fd)

    def test_download_image_no_images_found(self):
        self.register_uris([
            dict(method='GET',
                 uri='https://image.example.com/v2/images',
                 json=dict(images=[]))])
        self.assertRaises(exc.OpenStackCloudResourceNotFound,
                          self.cloud.download_image, self.image_name,
                          output_path='fake_path')
        self.assert_calls()

    def _register_image_mocks(self):
        self.register_uris([
            dict(method='GET',
                 uri='https://image.example.com/v2/images',
                 json=self.fake_search_return),
            dict(method='GET',
                 uri='https://image.example.com/v2/images/{id}/file'.format(
                     id=self.image_id),
                 content=self.output,
                 headers={'Content-Type': 'application/octet-stream'})
        ])

    def test_download_image_with_fd(self):
        self._register_image_mocks()
        output_file = six.BytesIO()
        self.cloud.download_image(self.image_name, output_file=output_file)
        output_file.seek(0)
        self.assertEqual(output_file.read(), self.output)
        self.assert_calls()

    def test_download_image_with_path(self):
        self._register_image_mocks()
        output_file = tempfile.NamedTemporaryFile()
        self.cloud.download_image(
            self.image_name, output_path=output_file.name)
        output_file.seek(0)
        self.assertEqual(output_file.read(), self.output)
        self.assert_calls()

    def test_get_image_name(self, cloud=None):
        cloud = cloud or self.cloud

        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json=self.fake_search_return),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json=self.fake_search_return),
        ])
        self.assertEqual(
            self.image_name, cloud.get_image_name(self.image_id))
        self.assertEqual(
            self.image_name, cloud.get_image_name(self.image_name))

        self.assert_calls()

    def test_get_image_by_id(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images', self.image_id],
                     base_url_append='v2'),
                 json=self.fake_image_dict)
        ])
        self.assertEqual(
            self.cloud._normalize_image(self.fake_image_dict),
            self.cloud.get_image_by_id(self.image_id))
        self.assert_calls()

    def test_get_image_id(self, cloud=None):
        cloud = cloud or self.cloud

        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json=self.fake_search_return),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json=self.fake_search_return),
        ])

        self.assertEqual(
            self.image_id, cloud.get_image_id(self.image_id))
        self.assertEqual(
            self.image_id, cloud.get_image_id(self.image_name))

        self.assert_calls()

    def test_get_image_name_operator(self):
        # This should work the same as non-operator, just verifying it does.
        self.test_get_image_name(cloud=self.cloud)

    def test_get_image_id_operator(self):
        # This should work the same as the other test, just verifying it does.
        self.test_get_image_id(cloud=self.cloud)

    def test_empty_list_images(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json={'images': []})
        ])
        self.assertEqual([], self.cloud.list_images())
        self.assert_calls()

    def test_list_images(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json=self.fake_search_return)
        ])
        self.assertEqual(
            self.cloud._normalize_images([self.fake_image_dict]),
            self.cloud.list_images())
        self.assert_calls()

    def test_list_images_show_all(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2',
                     qs_elements=['member_status=all']),
                 json=self.fake_search_return)
        ])
        self.assertEqual(
            self.cloud._normalize_images([self.fake_image_dict]),
            self.cloud.list_images(show_all=True))
        self.assert_calls()

    def test_list_images_show_all_deleted(self):
        deleted_image = self.fake_image_dict.copy()
        deleted_image['status'] = 'deleted'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2',
                     qs_elements=['member_status=all']),
                 json={'images': [self.fake_image_dict, deleted_image]})
        ])
        self.assertEqual(
            self.cloud._normalize_images([
                self.fake_image_dict, deleted_image]),
            self.cloud.list_images(show_all=True))
        self.assert_calls()

    def test_list_images_no_filter_deleted(self):
        deleted_image = self.fake_image_dict.copy()
        deleted_image['status'] = 'deleted'
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json={'images': [self.fake_image_dict, deleted_image]})
        ])
        self.assertEqual(
            self.cloud._normalize_images([
                self.fake_image_dict, deleted_image]),
            self.cloud.list_images(filter_deleted=False))
        self.assert_calls()

    def test_list_images_filter_deleted(self):
        deleted_image = self.fake_image_dict.copy()
        deleted_image['status'] = 'deleted'
        self.register_uris([
dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': [self.fake_image_dict, deleted_image]}) ]) self.assertEqual( self.cloud._normalize_images([self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_list_images_string_properties(self): image_dict = self.fake_image_dict.copy() image_dict['properties'] = 'list,of,properties' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': [image_dict]}), ]) images = self.cloud.list_images() self.assertEqual( self.cloud._normalize_images([image_dict]), images) self.assertEqual( images[0]['properties']['properties'], 'list,of,properties') self.assert_calls() def test_list_images_paginated(self): marker = str(uuid.uuid4()) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': [self.fake_image_dict], 'next': '/v2/images?marker={marker}'.format( marker=marker)}), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2', qs_elements=['marker={marker}'.format(marker=marker)]), json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([ self.fake_image_dict, self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_create_image_put_v2(self): self.cloud.image_api_use_tasks = False self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': []}), dict(method='POST', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json=self.fake_image_dict, validate=dict( json={ u'container_format': u'bare', u'disk_format': u'qcow2', u'name': self.image_name, u'owner_specified.openstack.md5': fakes.NO_MD5, u'owner_specified.openstack.object': self.object_name, u'owner_specified.openstack.sha256': fakes.NO_SHA256, u'visibility': u'private'}) ), 
dict(method='PUT', uri=self.get_mock_url( 'image', append=['images', self.image_id, 'file'], base_url_append='v2'), request_headers={'Content-Type': 'application/octet-stream'}), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json=self.fake_search_return) ]) self.cloud.create_image( self.image_name, self.imagefile.name, wait=True, timeout=1, is_public=False) self.assert_calls() self.assertEqual(self.adapter.request_history[5].text.read(), b'\x00') def test_create_image_task(self): self.cloud.image_api_use_tasks = True endpoint = self.cloud._object_store_client.get_endpoint() task_id = str(uuid.uuid4()) args = dict( id=task_id, status='success', type='import', result={ 'image_id': self.image_id, }, ) image_no_checksums = self.fake_image_dict.copy() del(image_no_checksums['owner_specified.openstack.md5']) del(image_no_checksums['owner_specified.openstack.sha256']) del(image_no_checksums['owner_specified.openstack.object']) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': []}), dict(method='GET', # This is explicitly not using get_mock_url because that # gets us a project-id oriented URL. 
uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': 1000}, slo={'min_segment_size': 500})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=endpoint, container=self.container_name), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=endpoint, container=self.container_name), status_code=201, headers={'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8'}), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=endpoint, container=self.container_name), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name), status_code=201, validate=dict( headers={'x-object-meta-x-sdk-md5': fakes.NO_MD5, 'x-object-meta-x-sdk-sha256': fakes.NO_SHA256}) ), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': []}), dict(method='POST', uri=self.get_mock_url( 'image', append=['tasks'], base_url_append='v2'), json=args, validate=dict( json=dict( type='import', input={ 'import_from': '{container}/{object}'.format( container=self.container_name, object=self.image_name), 'image_properties': {'name': self.image_name}})) ), dict(method='GET', uri=self.get_mock_url( 'image', append=['tasks', task_id], base_url_append='v2'), status_code=503, text='Random error'), dict(method='GET', uri=self.get_mock_url( 'image', append=['tasks', 
task_id], base_url_append='v2'), json=args), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': [image_no_checksums]}), dict(method='PATCH', uri=self.get_mock_url( 'image', append=['images', self.image_id], base_url_append='v2'), validate=dict( json=sorted([ {u'op': u'add', u'value': '{container}/{object}'.format( container=self.container_name, object=self.image_name), u'path': u'/owner_specified.openstack.object'}, {u'op': u'add', u'value': fakes.NO_MD5, u'path': u'/owner_specified.openstack.md5'}, {u'op': u'add', u'value': fakes.NO_SHA256, u'path': u'/owner_specified.openstack.sha256'}], key=operator.itemgetter('value')), headers={ 'Content-Type': 'application/openstack-images-v2.1-json-patch'}) ), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name), headers={ 'X-Timestamp': '1429036140.50253', 'X-Trans-Id': 'txbbb825960a3243b49a36f-005a0dadaedfw1', 'Content-Length': '1290170880', 'Last-Modified': 'Tue, 14 Apr 2015 18:29:01 GMT', 'X-Object-Meta-X-Sdk-Sha256': fakes.NO_SHA256, 'X-Object-Meta-X-Sdk-Md5': fakes.NO_MD5, 'Date': 'Thu, 16 Nov 2017 15:24:30 GMT', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/octet-stream', 'Etag': fakes.NO_MD5}), dict(method='DELETE', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name)), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json=self.fake_search_return) ]) self.cloud.create_image( self.image_name, self.imagefile.name, wait=True, timeout=1, is_public=False, container=self.container_name) self.assert_calls() def test_delete_autocreated_no_tasks(self): self.use_nothing() self.cloud.image_api_use_tasks = False deleted = self.cloud.delete_autocreated_image_objects( container=self.container_name) self.assertFalse(deleted) self.assert_calls() def 
test_delete_autocreated_image_objects(self): self.use_keystone_v3() self.cloud.image_api_use_tasks = True endpoint = self.cloud._object_store_client.get_endpoint() other_image = self.getUniqueString('no-delete') self.register_uris([ dict(method='GET', uri=self.get_mock_url( service_type='object-store', resource=self.container_name, qs_elements=['format=json']), json=[{ 'content_type': 'application/octet-stream', 'bytes': 1437258240, 'hash': '249219347276c331b87bf1ac2152d9af', 'last_modified': '2015-02-16T17:50:05.289600', 'name': other_image, }, { 'content_type': 'application/octet-stream', 'bytes': 1290170880, 'hash': fakes.NO_MD5, 'last_modified': '2015-04-14T18:29:00.502530', 'name': self.image_name, }]), dict(method='HEAD', uri=self.get_mock_url( service_type='object-store', resource=self.container_name, append=[other_image]), headers={ 'X-Timestamp': '1429036140.50253', 'X-Trans-Id': 'txbbb825960a3243b49a36f-005a0dadaedfw1', 'Content-Length': '1290170880', 'Last-Modified': 'Tue, 14 Apr 2015 18:29:01 GMT', 'X-Object-Meta-X-Shade-Sha256': 'does not matter', 'X-Object-Meta-X-Shade-Md5': 'does not matter', 'Date': 'Thu, 16 Nov 2017 15:24:30 GMT', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/octet-stream', 'Etag': '249219347276c331b87bf1ac2152d9af', }), dict(method='HEAD', uri=self.get_mock_url( service_type='object-store', resource=self.container_name, append=[self.image_name]), headers={ 'X-Timestamp': '1429036140.50253', 'X-Trans-Id': 'txbbb825960a3243b49a36f-005a0dadaedfw1', 'Content-Length': '1290170880', 'Last-Modified': 'Tue, 14 Apr 2015 18:29:01 GMT', 'X-Object-Meta-X-Shade-Sha256': fakes.NO_SHA256, 'X-Object-Meta-X-Shade-Md5': fakes.NO_MD5, 'Date': 'Thu, 16 Nov 2017 15:24:30 GMT', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/octet-stream', openstackcloud.OBJECT_AUTOCREATE_KEY: 'true', 'Etag': fakes.NO_MD5}), dict(method='DELETE', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, 
object=self.image_name)), ]) deleted = self.cloud.delete_autocreated_image_objects( container=self.container_name) self.assertTrue(deleted) self.assert_calls() def _image_dict(self, fake_image): return self.cloud._normalize_image(meta.obj_to_munch(fake_image)) def _munch_images(self, fake_image): return self.cloud._normalize_images([fake_image]) def _call_create_image(self, name, **kwargs): imagefile = tempfile.NamedTemporaryFile(delete=False) imagefile.write(b'\0') imagefile.close() self.cloud.create_image( name, imagefile.name, wait=True, timeout=1, is_public=False, **kwargs) # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_v1( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': 'qcow2', 'properties': { 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'is_public': False}} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.side_effect = [ [], [ret], [ret], ] mock_image_client.post.return_value = ret mock_image_client.put.return_value = ret self._call_create_image('42 name') mock_image_client.post.assert_called_with('/images', json=args) mock_image_client.put.assert_called_with( '/images/42', data=mock.ANY, headers={ 'x-image-meta-checksum': mock.ANY, 'x-glance-registry-purge-props': 'false' }) mock_image_client.get.assert_called_with('/images/detail', params={}) self.assertEqual( self._munch_images(ret), self.cloud.list_images()) # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') 
@mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_v1_bad_delete( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': 'qcow2', 'properties': { 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'is_public': False}} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.side_effect = [ [], [ret], ] mock_image_client.post.return_value = ret mock_image_client.put.side_effect = exc.OpenStackCloudHTTPError( "Some error", {}) self.assertRaises( exc.OpenStackCloudHTTPError, self._call_create_image, '42 name') mock_image_client.post.assert_called_with('/images', json=args) mock_image_client.put.assert_called_with( '/images/42', data=mock.ANY, headers={ 'x-image-meta-checksum': mock.ANY, 'x-glance-registry-purge-props': 'false' }) mock_image_client.delete.assert_called_with('/images/42') # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_update_image_no_patch( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': 'qcow2', 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.side_effect = [ [], [ret], 
[ret], ] self.cloud.update_image_properties( image=self._image_dict(ret), **{'owner_specified.openstack.object': 'images/42 name'}) mock_image_client.get.assert_called_with('/images', params={}) mock_image_client.patch.assert_not_called() # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_v2_bad_delete( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': 'qcow2', 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.side_effect = [ [], [ret], [ret], ] mock_image_client.post.return_value = ret mock_image_client.put.side_effect = exc.OpenStackCloudHTTPError( "Some error", {}) self.assertRaises( exc.OpenStackCloudHTTPError, self._call_create_image, '42 name', min_disk='0', min_ram=0) mock_image_client.post.assert_called_with('/images', json=args) mock_image_client.put.assert_called_with( '/images/42/file', headers={'Content-Type': 'application/octet-stream'}, data=mock.ANY) mock_image_client.delete.assert_called_with('/images/42') # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_bad_int( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False self.assertRaises( exc.OpenStackCloudException, 
self._call_create_image, '42 name', min_disk='fish', min_ram=0) mock_image_client.post.assert_not_called() # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_user_int( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False args = {'name': '42 name', 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'int_v': '12345', 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.side_effect = [ [], [ret], [ret] ] mock_image_client.post.return_value = ret self._call_create_image( '42 name', min_disk='0', min_ram=0, int_v=12345) mock_image_client.post.assert_called_with('/images', json=args) mock_image_client.put.assert_called_with( '/images/42/file', headers={'Content-Type': 'application/octet-stream'}, data=mock.ANY) mock_image_client.get.assert_called_with('/images', params={}) self.assertEqual( self._munch_images(ret), self.cloud.list_images()) # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_meta_int( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) self._call_create_image( '42 name', min_disk='0', min_ram=0, meta={'int_v': 12345}) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.openstack.md5': mock.ANY, 
'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'int_v': 12345, 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.return_value = [ret] mock_image_client.post.return_value = ret mock_image_client.get.assert_called_with('/images', params={}) self.assertEqual( self._munch_images(ret), self.cloud.list_images()) # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def test_create_image_put_protected( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'protected': False, 'int_v': '12345', 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.side_effect = [ [], [ret], [ret], ] mock_image_client.put.return_value = ret mock_image_client.post.return_value = ret self._call_create_image( '42 name', min_disk='0', min_ram=0, properties={'int_v': 12345}, protected=False) mock_image_client.post.assert_called_with('/images', json=args) mock_image_client.put.assert_called_with( '/images/42/file', data=mock.ANY, headers={'Content-Type': 'application/octet-stream'}) self.assertEqual(self._munch_images(ret), self.cloud.list_images()) # TODO(shade) Migrate this to requests-mock @mock.patch.object(openstack.cloud.OpenStackCloud, '_is_client_version') @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client') def 
test_create_image_put_user_prop( self, mock_image_client, mock_is_client_version): mock_is_client_version.return_value = True self.cloud.image_api_use_tasks = False mock_image_client.get.return_value = [] self.assertEqual([], self.cloud.list_images()) args = {'name': '42 name', 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.openstack.md5': mock.ANY, 'owner_specified.openstack.sha256': mock.ANY, 'owner_specified.openstack.object': 'images/42 name', 'int_v': '12345', 'xenapi_use_agent': 'False', 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = munch.Munch(args.copy()) ret['id'] = '42' ret['status'] = 'success' mock_image_client.get.return_value = [ret] mock_image_client.post.return_value = ret self._call_create_image( '42 name', min_disk='0', min_ram=0, properties={'int_v': 12345}) mock_image_client.get.assert_called_with('/images', params={}) self.assertEqual( self._munch_images(ret), self.cloud.list_images()) class TestImageSuburl(BaseTestImage): def setUp(self): super(TestImageSuburl, self).setUp() self.use_keystone_v3(catalog='catalog-v3-suburl.json') self.use_glance( image_version_json='image-version-suburl.json', image_discovery_url='https://example.com/image') def test_list_images(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_list_images_paginated(self): marker = str(uuid.uuid4()) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json={'images': [self.fake_image_dict], 'next': '/v2/images?marker={marker}'.format( marker=marker)}), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2', qs_elements=['marker={marker}'.format(marker=marker)]), json=self.fake_search_return) ]) self.assertEqual( 
self.cloud._normalize_images([ self.fake_image_dict, self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() class TestImageV1Only(base.RequestsMockTestCase): def setUp(self): super(TestImageV1Only, self).setUp() self.use_glance(image_version_json='image-version-v1.json') def test_config_v1(self): self.cloud.cloud_config.config['image_api_version'] = '1' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v1/', self.cloud._image_client.get_endpoint()) self.assertTrue(self.cloud._is_client_version('image', 1)) def test_config_v2(self): self.cloud.cloud_config.config['image_api_version'] = '2' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v1/', self.cloud._image_client.get_endpoint()) self.assertFalse(self.cloud._is_client_version('image', 2)) class TestImageV2Only(base.RequestsMockTestCase): def setUp(self): super(TestImageV2Only, self).setUp() self.use_glance(image_version_json='image-version-v2.json') def test_config_v1(self): self.cloud.cloud_config.config['image_api_version'] = '1' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v2/', self.cloud._image_client.get_endpoint()) self.assertTrue(self.cloud._is_client_version('image', 2)) def test_config_v2(self): self.cloud.cloud_config.config['image_api_version'] = '2' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. 
self.assertEqual( 'https://image.example.com/v2/', self.cloud._image_client.get_endpoint()) self.assertTrue(self.cloud._is_client_version('image', 2)) class TestImageVolume(BaseTestImage): def setUp(self): super(TestImageVolume, self).setUp() self.volume_id = str(uuid.uuid4()) def test_create_image_volume(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', append=['volumes', self.volume_id, 'action']), json={'os-volume_upload_image': {'image_id': self.image_id}}, validate=dict(json={ u'os-volume_upload_image': { u'container_format': u'bare', u'disk_format': u'qcow2', u'force': False, u'image_name': u'fake_image'}}) ), # NOTE(notmorgan): Glance discovery happens here, insert the # glance discovery mock at this point, DO NOT use the # .use_glance() method, that is intended only for use in # .setUp self.get_glance_discovery_mock_dict(), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json=self.fake_search_return) ]) self.cloud.create_image( 'fake_image', self.imagefile.name, wait=True, timeout=1, volume={'id': self.volume_id}) self.assert_calls() def test_create_image_volume_duplicate(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', append=['volumes', self.volume_id, 'action']), json={'os-volume_upload_image': {'image_id': self.image_id}}, validate=dict(json={ u'os-volume_upload_image': { u'container_format': u'bare', u'disk_format': u'qcow2', u'force': True, u'image_name': u'fake_image'}}) ), # NOTE(notmorgan): Glance discovery happens here, insert the # glance discovery mock at this point, DO NOT use the # .use_glance() method, that is intended only for use in # .setUp self.get_glance_discovery_mock_dict(), dict(method='GET', uri=self.get_mock_url( 'image', append=['images'], base_url_append='v2'), json=self.fake_search_return) ]) self.cloud.create_image( 'fake_image', self.imagefile.name, wait=True, timeout=1, volume={'id': self.volume_id}, 
            allow_duplicates=True)
        self.assert_calls()


class TestImageBrokenDiscovery(base.RequestsMockTestCase):

    def setUp(self):
        super(TestImageBrokenDiscovery, self).setUp()
        self.use_glance(image_version_json='image-version-broken.json')

    def test_url_fix(self):
        # image-version-broken.json has both http urls and localhost as the
        # host. This is testing that what is discovered is https, because
        # that's what's in the catalog, and image.example.com for the same
        # reason.
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'image', append=['images'], base_url_append='v2'),
                 json={'images': []})
        ])
        self.assertEqual([], self.cloud.list_images())
        self.assertEqual(
            self.cloud._image_client.get_endpoint(),
            'https://image.example.com/v2/')
        self.assert_calls()

# openstack/tests/unit/cloud/test_zone.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy

import testtools

import openstack.cloud
from openstack.tests.unit import base


zone_dict = {
    'name': 'example.net.',
    'type': 'PRIMARY',
    'email': 'test@example.net',
    'description': 'Example zone',
    'ttl': 3600,
}

new_zone_dict = copy.copy(zone_dict)
new_zone_dict['id'] = '1'


class TestZone(base.RequestsMockTestCase):

    def setUp(self):
        super(TestZone, self).setUp()
        self.use_designate()

    def test_create_zone(self):
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json=new_zone_dict,
                 validate=dict(
                     json=zone_dict))
        ])
        z = self.cloud.create_zone(
            name=zone_dict['name'],
            zone_type=zone_dict['type'],
            email=zone_dict['email'],
            description=zone_dict['description'],
            ttl=zone_dict['ttl'],
            masters=None)
        self.assertEqual(new_zone_dict, z)
        self.assert_calls()

    def test_create_zone_exception(self):
        self.register_uris([
            dict(method='POST',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 status_code=500)
        ])
        with testtools.ExpectedException(
                openstack.cloud.exc.OpenStackCloudHTTPError,
                "Unable to create zone example.net."
        ):
            self.cloud.create_zone('example.net.')
        self.assert_calls()

    def test_update_zone(self):
        new_ttl = 7200
        updated_zone = copy.copy(new_zone_dict)
        updated_zone['ttl'] = new_ttl
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [new_zone_dict],
                     "links": {},
                     "metadata": {
                         'total_count': 1}}),
            dict(method='PATCH',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones', '1']),
                 json=updated_zone,
                 validate=dict(
                     json={"ttl": new_ttl}))
        ])
        z = self.cloud.update_zone('1', ttl=new_ttl)
        self.assertEqual(updated_zone, z)
        self.assert_calls()

    def test_delete_zone(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [new_zone_dict],
                     "links": {},
                     "metadata": {
                         'total_count': 1}}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones', '1']),
                 json=new_zone_dict)
        ])
        self.assertTrue(self.cloud.delete_zone('1'))
        self.assert_calls()

    def test_get_zone_by_id(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [new_zone_dict],
                     "links": {},
                     "metadata": {
                         'total_count': 1}})
        ])
        zone = self.cloud.get_zone('1')
        self.assertEqual(zone['id'], '1')
        self.assert_calls()

    def test_get_zone_by_name(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [new_zone_dict],
                     "links": {},
                     "metadata": {
                         'total_count': 1}})
        ])
        zone = self.cloud.get_zone('example.net.')
        self.assertEqual(zone['name'], 'example.net.')
        self.assert_calls()

    def test_get_zone_not_found_returns_false(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'dns', 'public', append=['v2', 'zones']),
                 json={
                     "zones": [],
                     "links": {},
                     "metadata": {
                         'total_count': 1}})
        ])
        zone = self.cloud.get_zone('nonexistingzone.net.')
        self.assertFalse(zone)
        self.assert_calls()
# openstack/tests/unit/cloud/test_baremetal_node.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_baremetal_node
----------------------------------

Tests for baremetal node related operations
"""

import uuid

from testscenarios import load_tests_apply_scenarios as load_tests  # noqa

from openstack.cloud import exc
from openstack.tests import fakes
from openstack.tests.unit import base


class TestBaremetalNode(base.IronicTestCase):

    def setUp(self):
        super(TestBaremetalNode, self).setUp()
        # TODO(shade) Fix this when we get ironic update to REST
        self.skipTest("Ironic operations not supported yet")
        self.fake_baremetal_node = fakes.make_fake_machine(
            self.name, self.uuid)
        # TODO(TheJulia): Some tests below have fake ports,
        # since they are required in some processes. Let's refactor
        # them at some point to use self.fake_baremetal_port.
self.fake_baremetal_port = fakes.make_fake_port( '00:01:02:03:04:05', node_id=self.uuid) def test_list_machines(self): fake_baremetal_two = fakes.make_fake_machine('two', str(uuid.uuid4())) self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='nodes'), json={'nodes': [self.fake_baremetal_node, fake_baremetal_two]}), ]) machines = self.cloud.list_machines() self.assertEqual(2, len(machines)) self.assertEqual(self.fake_baremetal_node, machines[0]) self.assert_calls() def test_get_machine(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) machine = self.cloud.get_machine(self.fake_baremetal_node['uuid']) self.assertEqual(machine['uuid'], self.fake_baremetal_node['uuid']) self.assert_calls() def test_get_machine_by_mac(self): mac_address = '00:01:02:03:04:05' url_address = 'detail?address=%s' % mac_address node_uuid = self.fake_baremetal_node['uuid'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='ports', append=[url_address]), json={'ports': [{'address': mac_address, 'node_uuid': node_uuid}]}), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) machine = self.cloud.get_machine_by_mac(mac_address) self.assertEqual(machine['uuid'], self.fake_baremetal_node['uuid']) self.assert_calls() def test_validate_node(self): # NOTE(TheJulia): Note: These are only the interfaces # that are validated, and both must be true for an # exception to not be raised. # This should be fixed at some point, as some interfaces # are important in some cases and should be validated, # such as storage. 
validate_return = { 'deploy': { 'result': True, }, 'power': { 'result': True, }, 'foo': { 'result': False, }} self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'validate']), json=validate_return), ]) self.cloud.validate_node(self.fake_baremetal_node['uuid']) self.assert_calls() # FIXME(TheJulia): So, this doesn't presently fail, but should fail. # Placing the test here, so we can sort out the issue in the actual # method later. # def test_validate_node_raises_exception(self): # validate_return = { # 'deploy': { # 'result': False, # 'reason': 'error!', # }, # 'power': { # 'result': False, # 'reason': 'meow!', # }, # 'foo': { # 'result': True # }} # self.register_uris([ # dict(method='GET', # uri=self.get_mock_url( # resource='nodes', # append=[self.fake_baremetal_node['uuid'], # 'validate']), # json=validate_return), # ]) # self.assertRaises( # Exception, # self.cloud.validate_node, # self.fake_baremetal_node['uuid']) # # self.assert_calls() def test_patch_machine(self): test_patch = [{ 'op': 'remove', 'path': '/instance_info'}] self.fake_baremetal_node['instance_info'] = {} self.register_uris([ dict(method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch)), ]) self.cloud.patch_machine( self.fake_baremetal_node['uuid'], test_patch) self.assert_calls() def test_set_node_instance_info(self): test_patch = [{ 'op': 'add', 'path': '/foo', 'value': 'bar'}] self.register_uris([ dict(method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch)), ]) self.cloud.set_node_instance_info( self.fake_baremetal_node['uuid'], test_patch) self.assert_calls() def test_purge_node_instance_info(self): test_patch = [{ 'op': 'remove', 'path': '/instance_info'}] self.fake_baremetal_node['instance_info'] = {} 
self.register_uris([ dict(method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch)), ]) self.cloud.purge_node_instance_info( self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_fail_active(self): self.fake_baremetal_node['provision_state'] = 'active' self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.inspect_machine, self.fake_baremetal_node['uuid'], wait=True, timeout=1) self.assert_calls() def test_inspect_machine_failed(self): inspecting_node = self.fake_baremetal_node.copy() self.fake_baremetal_node['provision_state'] = 'inspect failed' self.fake_baremetal_node['last_error'] = 'kaboom!' inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node) ]) self.cloud.inspect_machine(self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_manageable(self): self.fake_baremetal_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), 
validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), ]) self.cloud.inspect_machine(self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_available(self): available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) self.cloud.inspect_machine(self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_available_wait(self): available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( 
method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) self.cloud.inspect_machine( self.fake_baremetal_node['uuid'], wait=True, timeout=1) self.assert_calls() def test_inspect_machine_wait(self): self.fake_baremetal_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', 
                         append=[self.fake_baremetal_node['uuid']]),
                 json=inspecting_node),
            dict(
                method='GET',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid']]),
                json=inspecting_node),
            dict(
                method='GET',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid']]),
                json=self.fake_baremetal_node),
        ])
        self.cloud.inspect_machine(
            self.fake_baremetal_node['uuid'], wait=True, timeout=1)
        self.assert_calls()

    def test_inspect_machine_inspect_failed(self):
        self.fake_baremetal_node['provision_state'] = 'manageable'
        inspecting_node = self.fake_baremetal_node.copy()
        inspecting_node['provision_state'] = 'inspecting'
        inspect_fail_node = self.fake_baremetal_node.copy()
        inspect_fail_node['provision_state'] = 'inspect failed'
        inspect_fail_node['last_error'] = 'Earth Imploded'
        self.register_uris([
            dict(
                method='GET',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid']]),
                json=self.fake_baremetal_node),
            dict(
                method='PUT',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid'],
                            'states', 'provision']),
                validate=dict(json={'target': 'inspect'})),
            dict(
                method='GET',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid']]),
                json=inspecting_node),
            dict(
                method='GET',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid']]),
                json=inspect_fail_node),
        ])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.inspect_machine,
                          self.fake_baremetal_node['uuid'],
                          wait=True, timeout=1)
        self.assert_calls()

    def test_set_machine_maintenance_state(self):
        self.register_uris([
            dict(
                method='PUT',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid'],
                            'maintenance']),
                validate=dict(json={'reason': 'no reason'})),
        ])
        self.cloud.set_machine_maintenance_state(
            self.fake_baremetal_node['uuid'], True, reason='no reason')
        self.assert_calls()

    def test_set_machine_maintenance_state_false(self):
        self.register_uris([
            dict(
                method='DELETE',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid'],
                            'maintenance'])),
        ])
        self.cloud.set_machine_maintenance_state(
            self.fake_baremetal_node['uuid'], False)
        # NOTE: the parentheses were missing here, so the assertion never ran.
        self.assert_calls()

    def test_remove_machine_from_maintenance(self):
        self.register_uris([
            dict(
                method='DELETE',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid'],
                            'maintenance'])),
        ])
        self.cloud.remove_machine_from_maintenance(
            self.fake_baremetal_node['uuid'])
        self.assert_calls()

    def test_set_machine_power_on(self):
        self.register_uris([
            dict(
                method='PUT',
                uri=self.get_mock_url(
                    resource='nodes',
                    append=[self.fake_baremetal_node['uuid'],
                            'states', 'power']),
                validate=dict(json={'target': 'power on'})),
        ])
        return_value = self.cloud.set_machine_power_on(
            self.fake_baremetal_node['uuid'])
        self.assertIsNone(return_value)
        self.assert_calls()

    def test_set_machine_power_on_with_retries(self):
        # NOTE(TheJulia): This logic ends up testing power on/off and reboot
        # as they all utilize the same helper method.
self.register_uris([ dict( method='PUT', status_code=503, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), dict( method='PUT', status_code=409, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), ]) return_value = self.cloud.set_machine_power_on( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_off(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power off'})), ]) return_value = self.cloud.set_machine_power_off( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_reboot(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'rebooting'})), ]) return_value = self.cloud.set_machine_power_reboot( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_reboot_failure(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), status_code=400, json={'error': 'invalid'}, validate=dict(json={'target': 'rebooting'})), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.set_machine_power_reboot, self.fake_baremetal_node['uuid']) self.assert_calls() def test_node_set_provision_state(self): deploy_node = self.fake_baremetal_node.copy() deploy_node['provision_state'] = 'deploying' 
active_node = self.fake_baremetal_node.copy() active_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', configdrive='http://host/file') self.assert_calls() def test_node_set_provision_state_with_retries(self): deploy_node = self.fake_baremetal_node.copy() deploy_node['provision_state'] = 'deploying' active_node = self.fake_baremetal_node.copy() active_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', status_code=409, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict( method='PUT', status_code=503, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', configdrive='http://host/file') self.assert_calls() def test_node_set_provision_state_wait_timeout(self): deploy_node = self.fake_baremetal_node.copy() deploy_node['provision_state'] = 'deploying' active_node = self.fake_baremetal_node.copy() active_node['provision_state'] = 'active' 
self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=deploy_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=active_node), ]) return_value = self.cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', wait=True) self.assertEqual(active_node, return_value) self.assert_calls() def test_node_set_provision_state_wait_timeout_fails(self): # Intentionally time out. self.fake_baremetal_node['provision_state'] = 'deploy wait' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.node_set_provision_state, self.fake_baremetal_node['uuid'], 'active', wait=True, timeout=0.001) self.assert_calls() def test_node_set_provision_state_wait_success(self): self.fake_baremetal_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) return_value = self.cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', 
wait=True) self.assertEqual(self.fake_baremetal_node, return_value) self.assert_calls() def test_node_set_provision_state_wait_failure_cases(self): self.fake_baremetal_node['provision_state'] = 'foo failed' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.node_set_provision_state, self.fake_baremetal_node['uuid'], 'active', wait=True, timeout=300) self.assert_calls() def test_node_set_provision_state_wait_provide(self): self.fake_baremetal_node['provision_state'] = 'manageable' available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) return_value = self.cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'provide', wait=True) self.assertEqual(available_node, return_value) self.assert_calls() def test_wait_for_baremetal_node_lock_locked(self): self.fake_baremetal_node['reservation'] = 'conductor0' unlocked_node = self.fake_baremetal_node.copy() unlocked_node['reservation'] = None self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', 
append=[self.fake_baremetal_node['uuid']]), json=unlocked_node), ]) self.assertIsNone( self.cloud.wait_for_baremetal_node_lock( self.fake_baremetal_node, timeout=1)) self.assert_calls() def test_wait_for_baremetal_node_lock_not_locked(self): self.fake_baremetal_node['reservation'] = None self.assertIsNone( self.cloud.wait_for_baremetal_node_lock( self.fake_baremetal_node, timeout=1)) self.assertEqual(0, len(self.adapter.request_history)) def test_wait_for_baremetal_node_lock_timeout(self): self.fake_baremetal_node['reservation'] = 'conductor0' self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.wait_for_baremetal_node_lock, self.fake_baremetal_node, timeout=0.001) self.assert_calls() def test_activate_node(self): self.fake_baremetal_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) return_value = self.cloud.activate_node( self.fake_baremetal_node['uuid'], configdrive='http://host/file', wait=True) self.assertIsNone(return_value) self.assert_calls() def test_deactivate_node(self): self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'deleted'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) return_value = self.cloud.deactivate_node( self.fake_baremetal_node['uuid'], wait=True) 
self.assertIsNone(return_value) self.assert_calls() def test_register_machine(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] # TODO(TheJulia): There is a lot of duplication # in testing creation. Surely this should be a helper # or something. We should fix this. node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'available' if 'provision_state' in node_to_post: node_to_post.pop('provision_state') self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), ]) return_value = self.cloud.register_machine(nics, **node_to_post) self.assertDictEqual(self.fake_baremetal_node, return_value) self.assert_calls() # TODO(TheJulia): We need to de-duplicate these tests. # Possibly a dedicated class, although we should do it # then as we may find differences that need to be # accounted for in newer microversions. 
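# The two TODO comments above both point at the same duplication: every
# register_machine test rebuilds the same node_to_post dict by hand. A
# minimal sketch of the helper they ask for — the function name and the
# standalone (non-method) shape are illustrative, not part of this test
# class:

```python
def make_node_to_post(fake_node):
    """Build the node payload shared by the register_machine tests.

    Mirrors the inline dicts above: every field except name/uuid
    defaults to None, and provision_state is never posted.
    """
    return {
        'chassis_uuid': None,
        'driver': None,
        'driver_info': None,
        'name': fake_node['name'],
        'properties': None,
        'uuid': fake_node['uuid'],
    }


# Each test body would then shrink to something like:
fake_node = {'name': 'fake-node', 'uuid': 'fake-uuid',
             'provision_state': 'available'}
node_to_post = make_node_to_post(fake_node)
```

# As the second TODO suggests, hanging such a helper on a shared base
# class (or a dedicated fixture class) would also give one place to
# adjust the payload if newer Ironic microversions change the field set.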
def test_register_machine_enroll(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), validate=dict(json=node_to_post), json=self.fake_baremetal_node), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) # NOTE: When we migrate to a newer microversion, this test # may require revision. It was written for microversion # ?1.13?, which accidentally got reverted to 1.6 at one # point during code being refactored soon after the # change landed. 
# Presently, with the lock at 1.6, # this code is never used in the current code path. return_value = self.cloud.register_machine(nics, **node_to_post) self.assertDictEqual(available_node, return_value) self.assert_calls() def test_register_machine_enroll_wait(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), validate=dict(json=node_to_post), json=self.fake_baremetal_node), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='GET', 
uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) return_value = self.cloud.register_machine( nics, wait=True, **node_to_post) self.assertDictEqual(available_node, return_value) self.assert_calls() def test_register_machine_enroll_failure(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' failed_node = self.fake_baremetal_node.copy() failed_node['reservation'] = 'conductor0' failed_node['provision_state'] = 'verifying' failed_node['last_error'] = 'kaboom!' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=failed_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=failed_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.register_machine, nics, **node_to_post) self.assert_calls() def test_register_machine_enroll_timeout(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} 
self.fake_baremetal_node['provision_state'] = 'enroll' busy_node = self.fake_baremetal_node.copy() busy_node['reservation'] = 'conductor0' busy_node['provision_state'] = 'verifying' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=busy_node), ]) # NOTE(TheJulia): This test shortcircuits the timeout loop # such that it executes only once. The very last returned # state to the API is essentially a busy state that we # want to block on until it has cleared. 
self.assertRaises( exc.OpenStackCloudException, self.cloud.register_machine, nics, timeout=0.001, lock_timeout=0.001, **node_to_post) self.assert_calls() def test_register_machine_enroll_timeout_wait(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.register_machine, nics, wait=True, timeout=0.001, **node_to_post) self.assert_calls() def test_register_machine_port_create_failed(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', 
uri=self.get_mock_url( resource='ports'), status_code=400, json={'error': 'invalid'}, validate=dict(json={'address': mac_address, 'node_uuid': node_uuid})), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.register_machine, nics, **node_to_post) self.assert_calls() def test_unregister_machine(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] port_uuid = self.fake_baremetal_port['uuid'] # NOTE(TheJulia): The two values below should be the same. port_node_uuid = self.fake_baremetal_port['node_uuid'] port_url_address = 'detail?address=%s' % mac_address self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='ports', append=[port_url_address]), json={'ports': [{'address': mac_address, 'node_uuid': port_node_uuid, 'uuid': port_uuid}]}), dict( method='DELETE', uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), ]) self.cloud.unregister_machine( nics, self.fake_baremetal_node['uuid']) self.assert_calls() def test_unregister_machine_timeout(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] port_uuid = self.fake_baremetal_port['uuid'] port_node_uuid = self.fake_baremetal_port['node_uuid'] port_url_address = 'detail?address=%s' % mac_address self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='ports', 
append=[port_url_address]), json={'ports': [{'address': mac_address, 'node_uuid': port_node_uuid, 'uuid': port_uuid}]}), dict( method='DELETE', uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.unregister_machine, nics, self.fake_baremetal_node['uuid'], wait=True, timeout=0.001) self.assert_calls() def test_unregister_machine_locked_timeout(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] self.fake_baremetal_node['provision_state'] = 'available' self.fake_baremetal_node['reservation'] = 'conductor99' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.unregister_machine, nics, self.fake_baremetal_node['uuid'], timeout=0.001) self.assert_calls() def test_unregister_machine_retries(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] port_uuid = self.fake_baremetal_port['uuid'] # NOTE(TheJulia): The two values below should be the same. 
port_node_uuid = self.fake_baremetal_port['node_uuid'] port_url_address = 'detail?address=%s' % mac_address self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='ports', append=[port_url_address]), json={'ports': [{'address': mac_address, 'node_uuid': port_node_uuid, 'uuid': port_uuid}]}), dict( method='DELETE', status_code=503, uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', status_code=409, uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', status_code=409, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), ]) self.cloud.unregister_machine( nics, self.fake_baremetal_node['uuid']) self.assert_calls() def test_unregister_machine_unavailable(self): # This is a list of invalid states that the method # should fail on. 
invalid_states = ['active', 'cleaning', 'clean wait', 'clean failed'] mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] url_list = [] for state in invalid_states: self.fake_baremetal_node['provision_state'] = state url_list.append( dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node)) self.register_uris(url_list) for state in invalid_states: self.assertRaises( exc.OpenStackCloudException, self.cloud.unregister_machine, nics, self.fake_baremetal_node['uuid']) self.assert_calls() def test_update_machine_patch_no_action(self): self.register_uris([dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) # NOTE(TheJulia): This is just testing mechanics. update_dict = self.cloud.update_machine( self.fake_baremetal_node['uuid']) self.assertIsNone(update_dict['changes']) self.assertDictEqual(self.fake_baremetal_node, update_dict['node']) self.assert_calls() class TestUpdateMachinePatch(base.IronicTestCase): # NOTE(TheJulia): As appears, and mordred describes, # this class utilizes black magic, which ultimately # results in additional test runs being executed with # the scenario name appended. Useful for lots of # variables that need to be tested. def setUp(self): super(TestUpdateMachinePatch, self).setUp() # TODO(shade) Fix this when we get ironic update to REST self.skipTest("Ironic operations not supported yet") self.fake_baremetal_node = fakes.make_fake_machine( self.name, self.uuid) def test_update_machine_patch(self): # The model has evolved over time, create the field if # we don't already have it. 
if self.field_name not in self.fake_baremetal_node: self.fake_baremetal_node[self.field_name] = None value_to_send = self.fake_baremetal_node[self.field_name] if self.changed: value_to_send = 'meow' uris = [dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ] if self.changed: test_patch = [{ 'op': 'replace', 'path': '/' + self.field_name, 'value': 'meow'}] uris.append( dict( method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch))) self.register_uris(uris) call_args = {self.field_name: value_to_send} update_dict = self.cloud.update_machine( self.fake_baremetal_node['uuid'], **call_args) if not self.changed: self.assertIsNone(update_dict['changes']) self.assertDictEqual(self.fake_baremetal_node, update_dict['node']) self.assert_calls() scenarios = [ ('chassis_uuid', dict(field_name='chassis_uuid', changed=False)), ('chassis_uuid_changed', dict(field_name='chassis_uuid', changed=True)), ('driver', dict(field_name='driver', changed=False)), ('driver_changed', dict(field_name='driver', changed=True)), ('driver_info', dict(field_name='driver_info', changed=False)), ('driver_info_changed', dict(field_name='driver_info', changed=True)), ('instance_info', dict(field_name='instance_info', changed=False)), ('instance_info_changed', dict(field_name='instance_info', changed=True)), ('instance_uuid', dict(field_name='instance_uuid', changed=False)), ('instance_uuid_changed', dict(field_name='instance_uuid', changed=True)), ('name', dict(field_name='name', changed=False)), ('name_changed', dict(field_name='name', changed=True)), ('properties', dict(field_name='properties', changed=False)), ('properties_changed', dict(field_name='properties', changed=True)) ] openstacksdk-0.11.3/openstack/tests/unit/cloud/test_aggregate.py0000666000175100017510000001517513236151340025077 0ustar 
zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.tests.unit import base from openstack.tests import fakes class TestAggregate(base.RequestsMockTestCase): def setUp(self): super(TestAggregate, self).setUp() self.aggregate_name = self.getUniqueString('aggregate') self.fake_aggregate = fakes.make_fake_aggregate(1, self.aggregate_name) def test_create_aggregate(self): create_aggregate = self.fake_aggregate.copy() del create_aggregate['metadata'] del create_aggregate['hosts'] self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregate': create_aggregate}, validate=dict(json={ 'aggregate': { 'name': self.aggregate_name, 'availability_zone': None, }})), ]) self.cloud.create_aggregate(name=self.aggregate_name) self.assert_calls() def test_create_aggregate_with_az(self): availability_zone = 'az1' az_aggregate = fakes.make_fake_aggregate( 1, self.aggregate_name, availability_zone=availability_zone) create_aggregate = az_aggregate.copy() del create_aggregate['metadata'] del create_aggregate['hosts'] self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregate': create_aggregate}, validate=dict(json={ 'aggregate': { 'name': self.aggregate_name, 'availability_zone': availability_zone, }})), ]) self.cloud.create_aggregate( name=self.aggregate_name, availability_zone=availability_zone) self.assert_calls() def 
test_delete_aggregate(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1'])), ]) self.assertTrue(self.cloud.delete_aggregate('1')) self.assert_calls() def test_update_aggregate_set_az(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1']), json={'aggregate': self.fake_aggregate}, validate=dict( json={ 'aggregate': { 'availability_zone': 'az', }})), ]) self.cloud.update_aggregate(1, availability_zone='az') self.assert_calls() def test_update_aggregate_unset_az(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1']), json={'aggregate': self.fake_aggregate}, validate=dict( json={ 'aggregate': { 'availability_zone': None, }})), ]) self.cloud.update_aggregate(1, availability_zone=None) self.assert_calls() def test_set_aggregate_metadata(self): metadata = {'key': 'value'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1', 'action']), json={'aggregate': self.fake_aggregate}, validate=dict( json={'set_metadata': {'metadata': metadata}})), ]) self.cloud.set_aggregate_metadata('1', metadata) self.assert_calls() def test_add_host_to_aggregate(self): hostname = 'host1' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), 
json={'aggregates': [self.fake_aggregate]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1', 'action']), json={'aggregate': self.fake_aggregate}, validate=dict( json={'add_host': {'host': hostname}})), ]) self.cloud.add_host_to_aggregate('1', hostname) self.assert_calls() def test_remove_host_from_aggregate(self): hostname = 'host1' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1', 'action']), json={'aggregate': self.fake_aggregate}, validate=dict( json={'remove_host': {'host': hostname}})), ]) self.cloud.remove_host_from_aggregate('1', hostname) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_rebuild_server.py0000666000175100017510000002211713236151340026157 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_rebuild_server ---------------------------------- Tests for the `rebuild_server` command. 
""" import uuid from openstack.cloud import exc from openstack.tests import fakes from openstack.tests.unit import base class TestRebuildServer(base.RequestsMockTestCase): def setUp(self): super(TestRebuildServer, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) self.rebuild_server = fakes.make_fake_server( self.server_id, self.server_name, 'REBUILD') self.error_server = fakes.make_fake_server( self.server_id, self.server_name, 'ERROR') def test_rebuild_server_rebuild_exception(self): """ Test that an exception in the rebuild raises an exception in rebuild_server. """ self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), status_code=400, validate=dict( json={ 'rebuild': { 'imageRef': 'a', 'adminPass': 'b'}})), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.rebuild_server, self.fake_server['id'], "a", "b") self.assert_calls() def test_rebuild_server_server_error(self): """ Test that a server error while waiting for the server to rebuild raises an exception in rebuild_server. """ self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.error_server]}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.rebuild_server, self.fake_server['id'], "a", wait=True) self.assert_calls() def test_rebuild_server_timeout(self): """ Test that a timeout while waiting for the server to rebuild raises an exception in rebuild_server. 
""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.rebuild_server]}), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.rebuild_server, self.fake_server['id'], "a", wait=True, timeout=0.001) self.assert_calls(do_count=False) def test_rebuild_server_no_wait(self): """ Test that rebuild_server with no wait and no exception in the rebuild call returns the server instance. """ self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( self.rebuild_server['status'], self.cloud.rebuild_server(self.fake_server['id'], "a")['status']) self.assert_calls() def test_rebuild_server_with_admin_pass_no_wait(self): """ Test that a server with an admin_pass passed returns the password """ password = self.getUniqueString('password') rebuild_server = self.rebuild_server.copy() rebuild_server['adminPass'] = password self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a', 'adminPass': password}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( password, self.cloud.rebuild_server( self.fake_server['id'], 'a', admin_pass=password)['adminPass']) self.assert_calls() def test_rebuild_server_with_admin_pass_wait(self): """ Test that a 
server with an admin_pass passed returns the password """ password = self.getUniqueString('password') rebuild_server = self.rebuild_server.copy() rebuild_server['adminPass'] = password self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a', 'adminPass': password}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.rebuild_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( password, self.cloud.rebuild_server( self.fake_server['id'], 'a', admin_pass=password, wait=True)['adminPass']) self.assert_calls() def test_rebuild_server_wait(self): """ Test that rebuild_server with a wait returns the server instance when its status changes to "ACTIVE". 
""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.rebuild_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( 'ACTIVE', self.cloud.rebuild_server( self.fake_server['id'], 'a', wait=True)['status']) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_floating_ip_nova.py0000666000175100017510000002555413236151340026471 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" test_floating_ip_nova ---------------------------------- Tests Floating IP resource methods for nova-network """ from openstack.tests import fakes from openstack.tests.unit import base def get_fake_has_service(has_service): def fake_has_service(s): if s == 'network': return False return has_service(s) return fake_has_service class TestFloatingIP(base.RequestsMockTestCase): mock_floating_ip_list_rep = [ { 'fixed_ip': None, 'id': 1, 'instance_id': None, 'ip': '203.0.113.1', 'pool': 'nova' }, { 'fixed_ip': None, 'id': 2, 'instance_id': None, 'ip': '203.0.113.2', 'pool': 'nova' }, { 'fixed_ip': '192.0.2.3', 'id': 29, 'instance_id': 'myself', 'ip': '198.51.100.29', 'pool': 'black_hole' } ] mock_floating_ip_pools = [ {'id': 'pool1_id', 'name': 'nova'}, {'id': 'pool2_id', 'name': 'pool2'}] def assertAreInstances(self, elements, elem_type): for e in elements: self.assertIsInstance(e, elem_type) def setUp(self): super(TestFloatingIP, self).setUp() self.fake_server = fakes.make_fake_server( 'server-id', '', 'ACTIVE', addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': '192.0.2.129', u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42'}]}) self.cloud.has_service = get_fake_has_service(self.cloud.has_service) def test_list_floating_ips(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ips = self.cloud.list_floating_ips() self.assertIsInstance(floating_ips, list) self.assertEqual(3, len(floating_ips)) self.assertAreInstances(floating_ips, dict) self.assert_calls() def test_list_floating_ips_with_filters(self): self.assertRaisesRegex( ValueError, "Nova-network don't support server-side", self.cloud.list_floating_ips, filters={'Foo': 42} ) def test_search_floating_ips(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': 
self.mock_floating_ip_list_rep}), ]) floating_ips = self.cloud.search_floating_ips( filters={'attached': False}) self.assertIsInstance(floating_ips, list) self.assertEqual(2, len(floating_ips)) self.assertAreInstances(floating_ips, dict) self.assert_calls() def test_get_floating_ip(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ip = self.cloud.get_floating_ip(id='29') self.assertIsInstance(floating_ip, dict) self.assertEqual('198.51.100.29', floating_ip['floating_ip_address']) self.assert_calls() def test_get_floating_ip_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ip = self.cloud.get_floating_ip(id='666') self.assertIsNone(floating_ip) self.assert_calls() def test_get_floating_ip_by_id(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips', '1']), json={'floating_ip': self.mock_floating_ip_list_rep[0]}), ]) floating_ip = self.cloud.get_floating_ip_by_id(id='1') self.assertIsInstance(floating_ip, dict) self.assertEqual('203.0.113.1', floating_ip['floating_ip_address']) self.assert_calls() def test_create_floating_ip(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ip': self.mock_floating_ip_list_rep[1]}, validate=dict( json={'pool': 'nova'})), dict(method='GET', uri=self.get_mock_url( 'compute', append=['os-floating-ips', '2']), json={'floating_ip': self.mock_floating_ip_list_rep[1]}), ]) self.cloud.create_floating_ip(network='nova') self.assert_calls() def test_available_floating_ip_existing(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep[:1]}), ]) ip = 
self.cloud.available_floating_ip(network='nova') self.assertEqual(self.mock_floating_ip_list_rep[0]['ip'], ip['floating_ip_address']) self.assert_calls() def test_available_floating_ip_new(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': []}), dict(method='POST', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ip': self.mock_floating_ip_list_rep[0]}, validate=dict( json={'pool': 'nova'})), dict(method='GET', uri=self.get_mock_url( 'compute', append=['os-floating-ips', '1']), json={'floating_ip': self.mock_floating_ip_list_rep[0]}), ]) ip = self.cloud.available_floating_ip(network='nova') self.assertEqual(self.mock_floating_ip_list_rep[0]['ip'], ip['floating_ip_address']) self.assert_calls() def test_delete_floating_ip_existing(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', append=['os-floating-ips', 'a-wild-id-appears'])), dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': []}), ]) ret = self.cloud.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertTrue(ret) self.assert_calls() def test_delete_floating_ip_not_found(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', append=['os-floating-ips', 'a-wild-id-appears']), status_code=404), ]) ret = self.cloud.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertFalse(ret) self.assert_calls() def test_attach_ip_to_server(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='POST', uri=self.get_mock_url( 'compute', append=['servers', self.fake_server['id'], 'action']), validate=dict( json={ "addFloatingIp": { "address": "203.0.113.1", "fixed_address": "192.0.2.129", }})), ]) self.cloud._attach_ip_to_server( server=self.fake_server, 
floating_ip=self.cloud._normalize_floating_ip( self.mock_floating_ip_list_rep[0]), fixed_address='192.0.2.129') self.assert_calls() def test_detach_ip_from_server(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='POST', uri=self.get_mock_url( 'compute', append=['servers', self.fake_server['id'], 'action']), validate=dict( json={ "removeFloatingIp": { "address": "203.0.113.1", }})), ]) self.cloud.detach_ip_from_server( server_id='server-id', floating_ip_id=1) self.assert_calls() def test_add_ip_from_pool(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='POST', uri=self.get_mock_url( 'compute', append=['servers', self.fake_server['id'], 'action']), validate=dict( json={ "addFloatingIp": { "address": "203.0.113.1", "fixed_address": "192.0.2.129", }})), ]) server = self.cloud._add_ip_from_pool( server=self.fake_server, network='nova', fixed_address='192.0.2.129') self.assertEqual(server, self.fake_server) self.assert_calls() def test_cleanup_floating_ips(self): # This should not call anything because it's unsafe on nova. self.cloud.delete_unattached_floating_ips() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_network.py0000666000175100017510000002657413236151340024647 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import testtools import openstack import openstack.cloud from openstack.tests.unit import base class TestNetwork(base.RequestsMockTestCase): mock_new_network_rep = { 'provider:physical_network': None, 'ipv6_address_scope': None, 'revision_number': 3, 'port_security_enabled': True, 'provider:network_type': 'local', 'id': '881d1bb7-a663-44c0-8f9f-ee2765b74486', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': [], 'provider:segmentation_id': None, 'ipv4_address_scope': None, 'shared': False, 'project_id': '861808a93da0484ea1767967c4df8a23', 'status': 'ACTIVE', 'subnets': [], 'description': '', 'tags': [], 'updated_at': '2017-04-22T19:22:53Z', 'is_default': False, 'qos_policy_id': None, 'name': 'netname', 'admin_state_up': True, 'tenant_id': '861808a93da0484ea1767967c4df8a23', 'created_at': '2017-04-22T19:22:53Z', 'mtu': 0 } network_availability_zone_extension = { "alias": "network_availability_zone", "updated": "2015-01-01T10:00:00-00:00", "description": "Availability zone support for router.", "links": [], "name": "Network Availability Zone" } enabled_neutron_extensions = [network_availability_zone_extension] def test_list_networks(self): net1 = {'id': '1', 'name': 'net1'} net2 = {'id': '2', 'name': 'net2'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [net1, net2]}) ]) nets = self.cloud.list_networks() self.assertEqual([net1, net2], nets) self.assert_calls() def test_list_networks_filtered(self): self.register_uris([ dict(method='GET', 
uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json'], qs_elements=["name=test"]), json={'networks': []}) ]) self.cloud.list_networks(filters={'name': 'test'}) self.assert_calls() def test_create_network(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': self.mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname'}})) ]) network = self.cloud.create_network("netname") self.assertEqual(self.mock_new_network_rep, network) self.assert_calls() def test_create_network_specific_tenant(self): project_id = "project_id_value" mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep['project_id'] = project_id self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'tenant_id': project_id}})) ]) network = self.cloud.create_network("netname", project_id=project_id) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_external(self): mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep['router:external'] = True self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'router:external': True}})) ]) network = self.cloud.create_network("netname", external=True) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_provider(self): provider_opts = {'physical_network': 'mynet', 'network_type': 'vlan', 'segmentation_id': 'vlan1'} new_network_provider_opts = { 'provider:physical_network': 'mynet', 'provider:network_type': 'vlan', 
'provider:segmentation_id': 'vlan1' } mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep.update(new_network_provider_opts) expected_send_params = { 'admin_state_up': True, 'name': 'netname' } expected_send_params.update(new_network_provider_opts) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': expected_send_params})) ]) network = self.cloud.create_network("netname", provider=provider_opts) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_with_availability_zone_hints(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': self.mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'availability_zone_hints': ['nova']}})) ]) network = self.cloud.create_network("netname", availability_zone_hints=['nova']) self.assertEqual(self.mock_new_network_rep, network) self.assert_calls() def test_create_network_provider_ignored_value(self): provider_opts = {'physical_network': 'mynet', 'network_type': 'vlan', 'segmentation_id': 'vlan1', 'should_not_be_passed': 1} new_network_provider_opts = { 'provider:physical_network': 'mynet', 'provider:network_type': 'vlan', 'provider:segmentation_id': 'vlan1' } mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep.update(new_network_provider_opts) expected_send_params = { 'admin_state_up': True, 'name': 'netname' } expected_send_params.update(new_network_provider_opts) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': 
mock_new_network_rep}, validate=dict( json={'network': expected_send_params})) ]) network = self.cloud.create_network("netname", provider=provider_opts) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_wrong_availability_zone_hints_type(self): azh_opts = "invalid" with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Parameter 'availability_zone_hints' must be a list" ): self.cloud.create_network("netname", availability_zone_hints=azh_opts) def test_create_network_provider_wrong_type(self): provider_opts = "invalid" with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Parameter 'provider' must be a dict" ): self.cloud.create_network("netname", provider=provider_opts) def test_delete_network(self): network_id = "test-net-id" network_name = "network" network = {'id': network_id, 'name': network_name} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks', "%s.json" % network_id]), json={}) ]) self.assertTrue(self.cloud.delete_network(network_name)) self.assert_calls() def test_delete_network_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertFalse(self.cloud.delete_network('test-net')) self.assert_calls() def test_delete_network_exception(self): network_id = "test-net-id" network_name = "network" network = {'id': network_id, 'name': network_name} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks', "%s.json" % network_id]), status_code=503) ]) 
self.assertRaises(openstack.cloud.OpenStackCloudException, self.cloud.delete_network, network_name) self.assert_calls() def test_get_network_by_id(self): network_id = "test-net-id" network_name = "network" network = {'id': network_id, 'name': network_name} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks', "%s" % network_id]), json={'network': network}) ]) self.assertTrue(self.cloud.get_network_by_id(network_id)) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_groups.py0000666000175100017510000000746713236151340024475 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from openstack.tests.unit import base class TestGroups(base.RequestsMockTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(TestGroups, self).setUp( cloud_config_fixture=cloud_config_fixture) self.addCleanup(self.assert_calls) def get_mock_url(self, service_type='identity', interface='admin', resource='groups', append=None, base_url_append='v3'): return super(TestGroups, self).get_mock_url( service_type='identity', interface='admin', resource=resource, append=append, base_url_append=base_url_append) def test_list_groups(self): group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}) ]) self.cloud.list_groups() def test_get_group(self): group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}), ]) self.cloud.get_group(group_data.group_id) def test_delete_group(self): group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='DELETE', uri=self.get_mock_url(append=[group_data.group_id]), status_code=204), ]) self.assertTrue(self.cloud.delete_group(group_data.group_id)) def test_create_group(self): domain_data = self._get_domain_data() group_data = self._get_group_data(domain_id=domain_data.domain_id) self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='domains', append=[domain_data.domain_id]), status_code=200, json=domain_data.json_response), dict(method='POST', uri=self.get_mock_url(), status_code=200, json=group_data.json_response, validate=dict(json=group_data.json_request)) ]) self.cloud.create_group( name=group_data.group_name, description=group_data.description, domain=group_data.domain_id) def test_update_group(self): group_data = self._get_group_data() # Domain ID is not 
sent group_data.json_request['group'].pop('domain_id') self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='PATCH', uri=self.get_mock_url(append=[group_data.group_id]), status_code=200, json=group_data.json_response, validate=dict(json=group_data.json_request)) ]) self.cloud.update_group( group_data.group_id, group_data.group_name, group_data.description) openstacksdk-0.11.3/openstack/tests/unit/cloud/test_caching.py0000666000175100017510000005433113236151364024550 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import concurrent import time import mock import munch import testtools import openstack import openstack.cloud from openstack.cloud import exc from openstack.cloud import meta from openstack.tests import fakes from openstack.tests.unit import base # Mock out the gettext function so that the task schema can be copypasta def _(msg): return msg _TASK_PROPERTIES = { "id": { "description": _("An identifier for the task"), "pattern": _('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}' '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'), "type": "string" }, "type": { "description": _("The type of task represented by this content"), "enum": [ "import", ], "type": "string" }, "status": { "description": _("The current status of this task"), "enum": [ "pending", "processing", "success", "failure" ], "type": "string" }, "input": { "description": _("The parameters required by task, JSON blob"), "type": ["null", "object"], }, "result": { "description": _("The result of current task, JSON blob"), "type": ["null", "object"], }, "owner": { "description": _("An identifier for the owner of this task"), "type": "string" }, "message": { "description": _("Human-readable informative message only included" " when appropriate (usually on failure)"), "type": "string", }, "expires_at": { "description": _("Datetime when this resource would be" " subject to removal"), "type": ["null", "string"] }, "created_at": { "description": _("Datetime when this resource was created"), "type": "string" }, "updated_at": { "description": _("Datetime when this resource was updated"), "type": "string" }, 'self': {'type': 'string'}, 'schema': {'type': 'string'} } _TASK_SCHEMA = dict( name='Task', properties=_TASK_PROPERTIES, additionalProperties=False, ) class TestMemoryCache(base.RequestsMockTestCase): def setUp(self): super(TestMemoryCache, self).setUp( cloud_config_fixture='clouds_cache.yaml') def _image_dict(self, fake_image): return self.cloud._normalize_image(meta.obj_to_munch(fake_image)) def 
_munch_images(self, fake_image): return self.cloud._normalize_images([fake_image]) def test_openstack_cloud(self): self.assertIsInstance(self.cloud, openstack.cloud.OpenStackCloud) def test_list_projects_v3(self): project_one = self._get_project_data() project_two = self._get_project_data() project_list = [project_one, project_two] first_response = {'projects': [project_one.json_response['project']]} second_response = {'projects': [p.json_response['project'] for p in project_list]} mock_uri = self.get_mock_url( service_type='identity', interface='admin', resource='projects', base_url_append='v3') self.register_uris([ dict(method='GET', uri=mock_uri, status_code=200, json=first_response), dict(method='GET', uri=mock_uri, status_code=200, json=second_response)]) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['projects'])), self.cloud.list_projects()) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['projects'])), self.cloud.list_projects()) # invalidate the list_projects cache self.cloud.list_projects.invalidate(self.cloud) # ensure the new values are now retrieved self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(second_response['projects'])), self.cloud.list_projects()) self.assert_calls() def test_list_projects_v2(self): self.use_keystone_v2() project_one = self._get_project_data(v3=False) project_two = self._get_project_data(v3=False) project_list = [project_one, project_two] first_response = {'tenants': [project_one.json_response['tenant']]} second_response = {'tenants': [p.json_response['tenant'] for p in project_list]} mock_uri = self.get_mock_url( service_type='identity', interface='admin', resource='tenants') self.register_uris([ dict(method='GET', uri=mock_uri, status_code=200, json=first_response), dict(method='GET', uri=mock_uri, status_code=200, json=second_response)]) self.assertEqual( self.cloud._normalize_projects( 
meta.obj_list_to_munch(first_response['tenants'])), self.cloud.list_projects()) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['tenants'])), self.cloud.list_projects()) # invalidate the list_projects cache self.cloud.list_projects.invalidate(self.cloud) # ensure the new values are now retrieved self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(second_response['tenants'])), self.cloud.list_projects()) self.assert_calls() def test_list_servers_no_herd(self): self.cloud._SERVER_AGE = 2 fake_server = fakes.make_fake_server('1234', 'name') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), ]) with concurrent.futures.ThreadPoolExecutor(16) as pool: for i in range(16): pool.submit(lambda: self.cloud.list_servers(bare=True)) # It's possible to race-condition 16 threads all in the # single initial lock without a tiny sleep time.sleep(0.001) self.assert_calls() def test_list_volumes(self): fake_volume = fakes.FakeVolume('volume1', 'available', 'Volume 1 Display Name') fake_volume_dict = meta.obj_to_munch(fake_volume) fake_volume2 = fakes.FakeVolume('volume2', 'available', 'Volume 2 Display Name') fake_volume2_dict = meta.obj_to_munch(fake_volume2) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict, fake_volume2_dict]})]) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict)], self.cloud.list_volumes()) # this call should hit the cache self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict)], self.cloud.list_volumes()) self.cloud.list_volumes.invalidate(self.cloud) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict), 
self.cloud._normalize_volume(fake_volume2_dict)], self.cloud.list_volumes()) self.assert_calls() def test_list_volumes_creating_invalidates(self): fake_volume = fakes.FakeVolume('volume1', 'creating', 'Volume 1 Display Name') fake_volume_dict = meta.obj_to_munch(fake_volume) fake_volume2 = fakes.FakeVolume('volume2', 'available', 'Volume 2 Display Name') fake_volume2_dict = meta.obj_to_munch(fake_volume2) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict, fake_volume2_dict]})]) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict)], self.cloud.list_volumes()) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict), self.cloud._normalize_volume(fake_volume2_dict)], self.cloud.list_volumes()) self.assert_calls() def test_create_volume_invalidates(self): fake_volb4 = meta.obj_to_munch( fakes.FakeVolume('volume1', 'available', '')) _id = '12345' fake_vol_creating = meta.obj_to_munch( fakes.FakeVolume(_id, 'creating', '')) fake_vol_avail = meta.obj_to_munch( fakes.FakeVolume(_id, 'available', '')) def now_deleting(request, context): fake_vol_avail['status'] = 'deleting' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes']), json={'volume': fake_vol_creating}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4, fake_vol_creating]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4, fake_vol_avail]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), 
json={'volumes': [fake_volb4, fake_vol_avail]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', _id]), json=now_deleting), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4]})]) self.assertEqual( [self.cloud._normalize_volume(fake_volb4)], self.cloud.list_volumes()) volume = dict(display_name='junk_vol', size=1, display_description='test junk volume') self.cloud.create_volume(wait=True, timeout=None, **volume) # If cache was not invalidated, we would not see our own volume here # because the first volume was available and thus would already be # cached. self.assertEqual( [self.cloud._normalize_volume(fake_volb4), self.cloud._normalize_volume(fake_vol_avail)], self.cloud.list_volumes()) self.cloud.delete_volume(_id) # And now delete and check same thing since list is cached as all # available self.assertEqual( [self.cloud._normalize_volume(fake_volb4)], self.cloud.list_volumes()) self.assert_calls() def test_list_users(self): user_data = self._get_user_data(email='test@example.com') self.register_uris([ dict(method='GET', uri=self.get_mock_url( service_type='identity', interface='admin', resource='users', base_url_append='v3'), status_code=200, json={'users': [user_data.json_response['user']]})]) users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual(user_data.user_id, users[0]['id']) self.assertEqual(user_data.name, users[0]['name']) self.assertEqual(user_data.email, users[0]['email']) self.assert_calls() def test_modify_user_invalidates_cache(self): self.use_keystone_v2() user_data = self._get_user_data(email='test@example.com') new_resp = {'user': user_data.json_response['user'].copy()} new_resp['user']['email'] = 'Nope@Nope.Nope' new_req = {'user': {'email': new_resp['user']['email']}} mock_users_url = self.get_mock_url( service_type='identity', interface='admin', resource='users') mock_user_resource_url = self.get_mock_url( 
service_type='identity', interface='admin', resource='users', append=[user_data.user_id]) empty_user_list_resp = {'users': []} users_list_resp = {'users': [user_data.json_response['user']]} updated_users_list_resp = {'users': [new_resp['user']]} # Password is None in the original create below user_data.json_request['user']['password'] = None uris_to_mock = [ # Initial User List is Empty dict(method='GET', uri=mock_users_url, status_code=200, json=empty_user_list_resp), # POST to create the user # GET to get the user data after POST dict(method='POST', uri=mock_users_url, status_code=200, json=user_data.json_response, validate=dict(json=user_data.json_request)), # List Users Call dict(method='GET', uri=mock_users_url, status_code=200, json=users_list_resp), # List users to get ID for update # Get user using user_id from list # Update user # Get updated user dict(method='GET', uri=mock_users_url, status_code=200, json=users_list_resp), dict(method='PUT', uri=mock_user_resource_url, status_code=200, json=new_resp, validate=dict(json=new_req)), # List Users Call dict(method='GET', uri=mock_users_url, status_code=200, json=updated_users_list_resp), # List User to get ID for delete # Get user using user_id from list # delete user dict(method='GET', uri=mock_users_url, status_code=200, json=updated_users_list_resp), dict(method='GET', uri=mock_user_resource_url, status_code=200, json=new_resp), dict(method='DELETE', uri=mock_user_resource_url, status_code=204), # List Users Call (empty post delete) dict(method='GET', uri=mock_users_url, status_code=200, json=empty_user_list_resp) ] self.register_uris(uris_to_mock) # first cache an empty list self.assertEqual([], self.cloud.list_users()) # now add one created = self.cloud.create_user(name=user_data.name, email=user_data.email) self.assertEqual(user_data.user_id, created['id']) self.assertEqual(user_data.name, created['name']) self.assertEqual(user_data.email, created['email']) # Cache should have been invalidated users = 
self.cloud.list_users() self.assertEqual(user_data.user_id, users[0]['id']) self.assertEqual(user_data.name, users[0]['name']) self.assertEqual(user_data.email, users[0]['email']) # Update and check to see if it is updated updated = self.cloud.update_user(user_data.user_id, email=new_resp['user']['email']) self.assertEqual(user_data.user_id, updated.id) self.assertEqual(user_data.name, updated.name) self.assertEqual(new_resp['user']['email'], updated.email) users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual(user_data.user_id, users[0]['id']) self.assertEqual(user_data.name, users[0]['name']) self.assertEqual(new_resp['user']['email'], users[0]['email']) # Now delete and ensure it disappears self.cloud.delete_user(user_data.user_id) self.assertEqual([], self.cloud.list_users()) self.assert_calls() def test_list_flavors(self): mock_uri = '{endpoint}/flavors/detail?is_public=None'.format( endpoint=fakes.COMPUTE_ENDPOINT) uris_to_mock = [ dict(method='GET', uri=mock_uri, json={'flavors': []}), dict(method='GET', uri=mock_uri, json={'flavors': fakes.FAKE_FLAVOR_LIST}) ] uris_to_mock.extend([ dict(method='GET', uri='{endpoint}/flavors/{id}/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']), json={'extra_specs': {}}) for flavor in fakes.FAKE_FLAVOR_LIST]) self.register_uris(uris_to_mock) self.assertEqual([], self.cloud.list_flavors()) self.assertEqual([], self.cloud.list_flavors()) fake_flavor_dicts = self.cloud._normalize_flavors( fakes.FAKE_FLAVOR_LIST) self.cloud.list_flavors.invalidate(self.cloud) self.assertEqual(fake_flavor_dicts, self.cloud.list_flavors()) self.assert_calls() def test_list_images(self): self.use_glance() fake_image = fakes.make_fake_image(image_id='42') self.register_uris([ dict(method='GET', uri=self.get_mock_url('image', 'public', append=['v2', 'images']), json={'images': []}), dict(method='GET', uri=self.get_mock_url('image', 'public', append=['v2', 'images']), json={'images': [fake_image]}), 
        ])
        self.assertEqual([], self.cloud.list_images())
        self.assertEqual([], self.cloud.list_images())
        self.cloud.list_images.invalidate(self.cloud)
        self.assertEqual(
            self._munch_images(fake_image), self.cloud.list_images())
        self.assert_calls()

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client')
    def test_list_images_ignores_unsteady_status(self, mock_image_client):
        steady_image = munch.Munch(id='68', name='Jagr', status='active')
        for status in ('queued', 'saving', 'pending_delete'):
            active_image = munch.Munch(
                id=self.getUniqueString(),
                name=self.getUniqueString(),
                status=status)
            mock_image_client.get.return_value = [active_image]
            self.assertEqual(
                self._munch_images(active_image),
                self.cloud.list_images())
            mock_image_client.get.return_value = [
                active_image, steady_image]
            # The unsteady status above kept the previous listing out of the
            # cache, so steady_image should appear in this fresh listing
            self.assertEqual(
                [self._image_dict(active_image),
                 self._image_dict(steady_image)],
                self.cloud.list_images())

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client')
    def test_list_images_caches_steady_status(self, mock_image_client):
        steady_image = munch.Munch(id='91', name='Federov', status='active')
        first_image = None
        for status in ('active', 'deleted', 'killed'):
            active_image = munch.Munch(
                id=self.getUniqueString(),
                name=self.getUniqueString(),
                status=status)
            mock_image_client.get.return_value = [active_image]
            if not first_image:
                first_image = active_image
            self.assertEqual(
                self._munch_images(first_image),
                self.cloud.list_images())
            mock_image_client.get.return_value = [
                active_image, steady_image]
            # Because we skipped the create_image code path, no invalidation
            # was done, so we _SHOULD_ expect steady-state images to cache and
            # therefore we should _not_ expect to see the new one here
            self.assertEqual(
                self._munch_images(first_image), self.cloud.list_images())

    @mock.patch.object(openstack.cloud.OpenStackCloud, '_image_client')
    def test_cache_no_cloud_name(self, mock_image_client):
        self.cloud.name = None
        fi = munch.Munch(
            id='1', name='None Test Image',
            status='active', visibility='private')
        mock_image_client.get.return_value = [fi]
        self.assertEqual(
            self._munch_images(fi), self.cloud.list_images())
        # Now test that the list was cached
        fi2 = munch.Munch(
            id='2', name='None Test Image',
            status='active', visibility='private')
        mock_image_client.get.return_value = [fi, fi2]
        self.assertEqual(
            self._munch_images(fi), self.cloud.list_images())
        # Invalidation too
        self.cloud.list_images.invalidate(self.cloud)
        self.assertEqual(
            [self._image_dict(fi), self._image_dict(fi2)],
            self.cloud.list_images())


class TestBogusAuth(base.TestCase):
    def setUp(self):
        super(TestBogusAuth, self).setUp(
            cloud_config_fixture='clouds_cache.yaml')

    def test_get_auth_bogus(self):
        with testtools.ExpectedException(exc.OpenStackCloudException):
            openstack.cloud.openstack_cloud(
                cloud='_bogus_test_', config=self.config)
openstacksdk-0.11.3/openstack/tests/unit/cloud/test_users.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid import testtools import openstack.cloud from openstack.tests.unit import base class TestUsers(base.RequestsMockTestCase): def _get_keystone_mock_url(self, resource, append=None, v3=True): base_url_append = None if v3: base_url_append = 'v3' return self.get_mock_url( service_type='identity', interface='admin', resource=resource, append=append, base_url_append=base_url_append) def _get_user_list(self, user_data): uri = self._get_keystone_mock_url(resource='users') return { 'users': [ user_data.json_response['user'], ], 'links': { 'self': uri, 'previous': None, 'next': None, } } def test_create_user_v2(self): self.use_keystone_v2() user_data = self._get_user_data() self.register_uris([ dict(method='POST', uri=self._get_keystone_mock_url(resource='users', v3=False), status_code=200, json=user_data.json_response, validate=dict(json=user_data.json_request)), ]) user = self.cloud.create_user( name=user_data.name, email=user_data.email, password=user_data.password) self.assertEqual(user_data.name, user.name) self.assertEqual(user_data.email, user.email) self.assertEqual(user_data.user_id, user.id) self.assert_calls() def test_create_user_v3(self): user_data = self._get_user_data( domain_id=uuid.uuid4().hex, description=self.getUniqueString('description')) self.register_uris([ dict(method='POST', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=user_data.json_response, validate=dict(json=user_data.json_request)), ]) user = self.cloud.create_user( name=user_data.name, email=user_data.email, password=user_data.password, description=user_data.description, domain_id=user_data.domain_id) self.assertEqual(user_data.name, user.name) self.assertEqual(user_data.email, user.email) self.assertEqual(user_data.description, user.description) self.assertEqual(user_data.user_id, user.id) self.assert_calls() def test_update_user_password_v2(self): self.use_keystone_v2() user_data = self._get_user_data(email='test@example.com') mock_user_resource_uri = 
self._get_keystone_mock_url( resource='users', append=[user_data.user_id], v3=False) mock_users_uri = self._get_keystone_mock_url( resource='users', v3=False) self.register_uris([ # GET list to find user id # PUT user with password update # PUT empty update (password change is different than update) # but is always chained together [keystoneclient oddity] dict(method='GET', uri=mock_users_uri, status_code=200, json=self._get_user_list(user_data)), dict(method='PUT', uri=self._get_keystone_mock_url( resource='users', v3=False, append=[user_data.user_id, 'OS-KSADM', 'password']), status_code=200, json=user_data.json_response, validate=dict( json={'user': {'password': user_data.password}})), dict(method='PUT', uri=mock_user_resource_uri, status_code=200, json=user_data.json_response, validate=dict(json={'user': {}}))]) user = self.cloud.update_user( user_data.user_id, password=user_data.password) self.assertEqual(user_data.name, user.name) self.assertEqual(user_data.email, user.email) self.assert_calls() def test_create_user_v3_no_domain(self): user_data = self._get_user_data(domain_id=uuid.uuid4().hex, email='test@example.com') with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "User or project creation requires an explicit" " domain_id argument." 
): self.cloud.create_user( name=user_data.name, email=user_data.email, password=user_data.password) def test_delete_user(self): user_data = self._get_user_data(domain_id=uuid.uuid4().hex) user_resource_uri = self._get_keystone_mock_url( resource='users', append=[user_data.user_id]) self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=self._get_user_list(user_data)), dict(method='GET', uri=user_resource_uri, status_code=200, json=user_data.json_response), dict(method='DELETE', uri=user_resource_uri, status_code=204)]) self.cloud.delete_user(user_data.name) self.assert_calls() def test_delete_user_not_found(self): self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json={'users': []})]) self.assertFalse(self.cloud.delete_user(self.getUniqueString())) def test_add_user_to_group(self): user_data = self._get_user_data() group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=self._get_user_list(user_data)), dict(method='GET', uri=self._get_keystone_mock_url(resource='groups'), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='PUT', uri=self._get_keystone_mock_url( resource='groups', append=[group_data.group_id, 'users', user_data.user_id]), status_code=200)]) self.cloud.add_user_to_group(user_data.user_id, group_data.group_id) self.assert_calls() def test_is_user_in_group(self): user_data = self._get_user_data() group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=self._get_user_list(user_data)), dict(method='GET', uri=self._get_keystone_mock_url(resource='groups'), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='HEAD', uri=self._get_keystone_mock_url( resource='groups', append=[group_data.group_id, 
'users', user_data.user_id]), status_code=204)]) self.assertTrue(self.cloud.is_user_in_group( user_data.user_id, group_data.group_id)) self.assert_calls() def test_remove_user_from_group(self): user_data = self._get_user_data() group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), json=self._get_user_list(user_data)), dict(method='GET', uri=self._get_keystone_mock_url(resource='groups'), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='DELETE', uri=self._get_keystone_mock_url( resource='groups', append=[group_data.group_id, 'users', user_data.user_id]), status_code=204)]) self.cloud.remove_user_from_group( user_data.user_id, group_data.group_id) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_identity_roles.py0000666000175100017510000002772713236151340026214 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import testtools import openstack.cloud from openstack.tests.unit import base from testtools import matchers RAW_ROLE_ASSIGNMENTS = [ { "links": {"assignment": "http://example"}, "role": {"id": "123456"}, "scope": {"domain": {"id": "161718"}}, "user": {"id": "313233"} }, { "links": {"assignment": "http://example"}, "group": {"id": "101112"}, "role": {"id": "123456"}, "scope": {"project": {"id": "456789"}} } ] class TestIdentityRoles(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource='roles', append=None, base_url_append='v3', qs_elements=None): return super(TestIdentityRoles, self).get_mock_url( service_type, interface, resource, append, base_url_append, qs_elements) def test_list_roles(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) self.cloud.list_roles() self.assert_calls() def test_get_role_by_name(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) role = self.cloud.get_role(role_data.role_name) self.assertIsNotNone(role) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assert_calls() def test_get_role_by_id(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) role = self.cloud.get_role(role_data.role_id) self.assertIsNotNone(role) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assert_calls() def test_create_role(self): role_data = self._get_role_data() self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=role_data.json_response, 
validate=dict(json=role_data.json_request)) ]) role = self.cloud.create_role(role_data.role_name) self.assertIsNotNone(role) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assert_calls() def test_update_role(self): role_data = self._get_role_data() req = {'role_id': role_data.role_id, 'role': {'name': role_data.role_name}} self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}), dict(method='PATCH', uri=self.get_mock_url(), status_code=200, json=role_data.json_response, validate=dict(json=req)) ]) role = self.cloud.update_role( role_data.role_id, role_data.role_name) self.assertIsNotNone(role) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assert_calls() def test_delete_role_by_id(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(append=[role_data.role_id]), status_code=204) ]) role = self.cloud.delete_role(role_data.role_id) self.assertThat(role, matchers.Equals(True)) self.assert_calls() def test_delete_role_by_name(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(append=[role_data.role_id]), status_code=204) ]) role = self.cloud.delete_role(role_data.role_name) self.assertThat(role, matchers.Equals(True)) self.assert_calls() def test_list_role_assignments(self): domain_data = self._get_domain_data() user_data = self._get_user_data(domain_id=domain_data.domain_id) group_data = self._get_group_data(domain_id=domain_data.domain_id) project_data = 
self._get_project_data(domain_id=domain_data.domain_id) role_data = self._get_role_data() response = [ {'links': 'https://example.com', 'role': {'id': role_data.role_id}, 'scope': {'domain': {'id': domain_data.domain_id}}, 'user': {'id': user_data.user_id}}, {'links': 'https://example.com', 'role': {'id': role_data.role_id}, 'scope': {'project': {'id': project_data.project_id}}, 'group': {'id': group_data.group_id}}, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='role_assignments'), status_code=200, json={'role_assignments': response}, complete_qs=True) ]) ret = self.cloud.list_role_assignments() self.assertThat(len(ret), matchers.Equals(2)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assertThat(ret[0].id, matchers.Equals(role_data.role_id)) self.assertThat(ret[0].domain, matchers.Equals(domain_data.domain_id)) self.assertThat(ret[1].group, matchers.Equals(group_data.group_id)) self.assertThat(ret[1].id, matchers.Equals(role_data.role_id)) self.assertThat(ret[1].project, matchers.Equals(project_data.project_id)) def test_list_role_assignments_filters(self): domain_data = self._get_domain_data() user_data = self._get_user_data(domain_id=domain_data.domain_id) role_data = self._get_role_data() response = [ {'links': 'https://example.com', 'role': {'id': role_data.role_id}, 'scope': {'domain': {'id': domain_data.domain_id}}, 'user': {'id': user_data.user_id}} ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=['scope.domain.id=%s' % domain_data.domain_id, 'user.id=%s' % user_data.user_id, 'effective=True']), status_code=200, json={'role_assignments': response}, complete_qs=True) ]) params = dict(user=user_data.user_id, domain=domain_data.domain_id, effective=True) ret = self.cloud.list_role_assignments(filters=params) self.assertThat(len(ret), matchers.Equals(1)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assertThat(ret[0].id, 
matchers.Equals(role_data.role_id)) self.assertThat(ret[0].domain, matchers.Equals(domain_data.domain_id)) def test_list_role_assignments_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='role_assignments'), status_code=403) ]) with testtools.ExpectedException( openstack.cloud.exc.OpenStackCloudHTTPError, "Failed to list role assignments" ): self.cloud.list_role_assignments() self.assert_calls() def test_list_role_assignments_keystone_v2(self): self.use_keystone_v2() role_data = self._get_role_data() user_data = self._get_user_data() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='tenants', append=[project_data.project_id, 'users', user_data.user_id, 'roles'], base_url_append=None), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) ret = self.cloud.list_role_assignments( filters={ 'user': user_data.user_id, 'project': project_data.project_id}) self.assertThat(len(ret), matchers.Equals(1)) self.assertThat(ret[0].project, matchers.Equals(project_data.project_id)) self.assertThat(ret[0].id, matchers.Equals(role_data.role_id)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assert_calls() def test_list_role_assignments_keystone_v2_with_role(self): self.use_keystone_v2() roles_data = [self._get_role_data() for r in range(0, 2)] user_data = self._get_user_data() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='tenants', append=[project_data.project_id, 'users', user_data.user_id, 'roles'], base_url_append=None), status_code=200, json={'roles': [r.json_response['role'] for r in roles_data]}) ]) ret = self.cloud.list_role_assignments( filters={ 'role': roles_data[0].role_id, 'user': user_data.user_id, 'project': project_data.project_id}) self.assertThat(len(ret), matchers.Equals(1)) self.assertThat(ret[0].project, 
matchers.Equals(project_data.project_id)) self.assertThat(ret[0].id, matchers.Equals(roles_data[0].role_id)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assert_calls() def test_list_role_assignments_exception_v2(self): self.use_keystone_v2() with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Must provide project and user for keystone v2" ): self.cloud.list_role_assignments() self.assert_calls() def test_list_role_assignments_exception_v2_no_project(self): self.use_keystone_v2() with testtools.ExpectedException( openstack.cloud.OpenStackCloudException, "Must provide project and user for keystone v2" ): self.cloud.list_role_assignments(filters={'user': '12345'}) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/cloud/test_port.py0000666000175100017510000003274113236151340024133 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" test_port ---------------------------------- Test port resource (managed by neutron) """ from openstack.cloud.exc import OpenStackCloudException from openstack.tests.unit import base class TestPort(base.RequestsMockTestCase): mock_neutron_port_create_rep = { 'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'test-port-name', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'test-net-id', 'tenant_id': 'test-tenant-id', 'binding:vif_details': {}, 'binding:vnic_type': 'normal', 'binding:vif_type': 'unbound', 'device_owner': '', 'mac_address': '50:1c:0d:e4:f0:0d', 'binding:profile': {}, 'fixed_ips': [ { 'subnet_id': 'test-subnet-id', 'ip_address': '29.29.29.29' } ], 'id': 'test-port-id', 'security_groups': [], 'device_id': '' } } mock_neutron_port_update_rep = { 'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'test-port-name-updated', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'test-net-id', 'tenant_id': 'test-tenant-id', 'binding:vif_details': {}, 'binding:vnic_type': 'normal', 'binding:vif_type': 'unbound', 'device_owner': '', 'mac_address': '50:1c:0d:e4:f0:0d', 'binding:profile': {}, 'fixed_ips': [ { 'subnet_id': 'test-subnet-id', 'ip_address': '29.29.29.29' } ], 'id': 'test-port-id', 'security_groups': [], 'device_id': '' } } mock_neutron_port_list_rep = { 'ports': [ { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': 'first-port', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': '70c1db1f-b701-45bd-96e0-a313ee3430b3', 'tenant_id': '', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'network:router_gateway', 'mac_address': 'fa:16:3e:58:42:ed', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '008ba151-0b8c-4a67-98b5-0d2b87666062', 'ip_address': '172.24.4.2' } ], 'id': 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', 'security_groups': [], 'device_id': 
'9ae135f4-b6e0-4dad-9e91-3c223e385824' }, { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': '', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'f27aa545-cbdd-4907-b0c6-c9e8b039dcc2', 'tenant_id': 'd397de8a63f341818f198abb0966f6f3', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'network:router_interface', 'mac_address': 'fa:16:3e:bb:3c:e4', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '288bf4a1-51ba-43b6-9d0a-520e9005db17', 'ip_address': '10.0.0.1' } ], 'id': 'f71a6703-d6de-4be1-a91a-a570ede1d159', 'security_groups': [], 'device_id': '9ae135f4-b6e0-4dad-9e91-3c223e385824' } ] } def test_create_port(self): self.register_uris([ dict(method="POST", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_create_rep, validate=dict( json={'port': { 'network_id': 'test-net-id', 'name': 'test-port-name', 'admin_state_up': True}})) ]) port = self.cloud.create_port( network_id='test-net-id', name='test-port-name', admin_state_up=True) self.assertEqual(self.mock_neutron_port_create_rep['port'], port) self.assert_calls() def test_create_port_parameters(self): """Test that we detect invalid arguments passed to create_port""" self.assertRaises( TypeError, self.cloud.create_port, network_id='test-net-id', nome='test-port-name', stato_amministrativo_porta=True) def test_create_port_exception(self): self.register_uris([ dict(method="POST", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), status_code=500, validate=dict( json={'port': { 'network_id': 'test-net-id', 'name': 'test-port-name', 'admin_state_up': True}})) ]) self.assertRaises( OpenStackCloudException, self.cloud.create_port, network_id='test-net-id', name='test-port-name', admin_state_up=True) self.assert_calls() def test_update_port(self): port_id = 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b' 
self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port_id]), json=self.mock_neutron_port_update_rep, validate=dict( json={'port': {'name': 'test-port-name-updated'}})) ]) port = self.cloud.update_port( name_or_id=port_id, name='test-port-name-updated') self.assertEqual(self.mock_neutron_port_update_rep['port'], port) self.assert_calls() def test_update_port_parameters(self): """Test that we detect invalid arguments passed to update_port""" self.assertRaises( TypeError, self.cloud.update_port, name_or_id='test-port-id', nome='test-port-name-updated') def test_update_port_exception(self): port_id = 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port_id]), status_code=500, validate=dict( json={'port': {'name': 'test-port-name-updated'}})) ]) self.assertRaises( OpenStackCloudException, self.cloud.update_port, name_or_id='d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', name='test-port-name-updated') self.assert_calls() def test_list_ports(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.list_ports() self.assertItemsEqual(self.mock_neutron_port_list_rep['ports'], ports) self.assert_calls() def test_list_ports_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), status_code=500) ]) self.assertRaises(OpenStackCloudException, self.cloud.list_ports) def test_search_ports_by_id(self): port_id = 
'f71a6703-d6de-4be1-a91a-a570ede1d159' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.search_ports(name_or_id=port_id) self.assertEqual(1, len(ports)) self.assertEqual('fa:16:3e:bb:3c:e4', ports[0]['mac_address']) self.assert_calls() def test_search_ports_by_name(self): port_name = "first-port" self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.search_ports(name_or_id=port_name) self.assertEqual(1, len(ports)) self.assertEqual('fa:16:3e:58:42:ed', ports[0]['mac_address']) self.assert_calls() def test_search_ports_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.search_ports(name_or_id='non-existent') self.assertEqual(0, len(ports)) self.assert_calls() def test_delete_port(self): port_id = 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port_id]), json={}) ]) self.assertTrue(self.cloud.delete_port(name_or_id='first-port')) def test_delete_port_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) self.assertFalse(self.cloud.delete_port(name_or_id='non-existent')) self.assert_calls() def test_delete_subnet_multiple_found(self): port_name = "port-name" port1 = dict(id='123', name=port_name) port2 = dict(id='456', name=port_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', 
append=['v2.0', 'ports.json']), json={'ports': [port1, port2]}) ]) self.assertRaises(OpenStackCloudException, self.cloud.delete_port, port_name) self.assert_calls() def test_delete_subnet_multiple_using_id(self): port_name = "port-name" port1 = dict(id='123', name=port_name) port2 = dict(id='456', name=port_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json={'ports': [port1, port2]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port1['id']]), json={}) ]) self.assertTrue(self.cloud.delete_port(name_or_id=port1['id'])) self.assert_calls() def test_get_port_by_id(self): fake_port = dict(id='123', name='456') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', fake_port['id']]), json={'port': fake_port}) ]) r = self.cloud.get_port_by_id(fake_port['id']) self.assertIsNotNone(r) self.assertDictEqual(fake_port, r) self.assert_calls() openstacksdk-0.11.3/openstack/tests/unit/compute/0000775000175100017510000000000013236151501022072 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/compute/test_version.py0000666000175100017510000000277613236151340025207 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.compute import version

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'status': '3',
    'updated': '4',
}


class TestVersion(testtools.TestCase):

    def test_basic(self):
        sot = version.Version()
        self.assertEqual('version', sot.resource_key)
        self.assertEqual('versions', sot.resources_key)
        self.assertEqual('/', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = version.Version(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['updated'], sot.updated)
openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_flavor.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.compute.v2 import flavor

IDENTIFIER = 'IDENTIFIER'
BASIC_EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'name': '3',
    'disk': 4,
    'os-flavor-access:is_public': True,
    'ram': 6,
    'vcpus': 7,
    'swap': 8,
    'OS-FLV-EXT-DATA:ephemeral': 9,
    'OS-FLV-DISABLED:disabled': False,
    'rxtx_factor': 11.0
}


class TestFlavor(testtools.TestCase):

    def test_basic(self):
        sot = flavor.Flavor()
        self.assertEqual('flavor', sot.resource_key)
        self.assertEqual('flavors', sot.resources_key)
        self.assertEqual('/flavors', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertFalse(sot.allow_update)
        self.assertDictEqual({"sort_key": "sort_key",
                              "sort_dir": "sort_dir",
                              "min_disk": "minDisk",
                              "min_ram": "minRam",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_make_basic(self):
        sot = flavor.Flavor(**BASIC_EXAMPLE)
        self.assertEqual(BASIC_EXAMPLE['id'], sot.id)
        self.assertEqual(BASIC_EXAMPLE['links'], sot.links)
        self.assertEqual(BASIC_EXAMPLE['name'], sot.name)
        self.assertEqual(BASIC_EXAMPLE['disk'], sot.disk)
        self.assertEqual(BASIC_EXAMPLE['os-flavor-access:is_public'],
                         sot.is_public)
        self.assertEqual(BASIC_EXAMPLE['ram'], sot.ram)
        self.assertEqual(BASIC_EXAMPLE['vcpus'], sot.vcpus)
        self.assertEqual(BASIC_EXAMPLE['swap'], sot.swap)
        self.assertEqual(BASIC_EXAMPLE['OS-FLV-EXT-DATA:ephemeral'],
                         sot.ephemeral)
        self.assertEqual(BASIC_EXAMPLE['OS-FLV-DISABLED:disabled'],
                         sot.is_disabled)
        self.assertEqual(BASIC_EXAMPLE['rxtx_factor'], sot.rxtx_factor)

    def test_detail(self):
        sot = flavor.FlavorDetail()
        self.assertEqual('flavor', sot.resource_key)
        self.assertEqual('flavors', sot.resources_key)
        self.assertEqual('/flavors/detail', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)
openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_limits.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from keystoneauth1 import adapter
import mock
import testtools

from openstack.compute.v2 import limits

ABSOLUTE_LIMITS = {
    "maxImageMeta": 128,
    "maxPersonality": 5,
    "maxPersonalitySize": 10240,
    "maxSecurityGroupRules": 20,
    "maxSecurityGroups": 10,
    "maxServerMeta": 128,
    "maxTotalCores": 20,
    "maxTotalFloatingIps": 10,
    "maxTotalInstances": 10,
    "maxTotalKeypairs": 100,
    "maxTotalRAMSize": 51200,
    "maxServerGroups": 10,
    "maxServerGroupMembers": 10,
    "totalFloatingIpsUsed": 1,
    "totalSecurityGroupsUsed": 2,
    "totalRAMUsed": 4,
    "totalInstancesUsed": 5,
    "totalServerGroupsUsed": 6,
    "totalCoresUsed": 7
}

RATE_LIMIT = {
    "limit": [
        {
            "next-available": "2012-11-27T17:22:18Z",
            "remaining": 120,
            "unit": "MINUTE",
            "value": 120,
            "verb": "POST"
        },
    ],
    "regex": ".*",
    "uri": "*"
}

LIMITS_BODY = {
    "limits": {
        "absolute": ABSOLUTE_LIMITS,
        "rate": [RATE_LIMIT]
    }
}


class TestAbsoluteLimits(testtools.TestCase):

    def test_basic(self):
        sot = limits.AbsoluteLimits()
        self.assertIsNone(sot.resource_key)
        self.assertIsNone(sot.resources_key)
        self.assertEqual("", sot.base_path)
        self.assertIsNone(sot.service)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
self.assertFalse(sot.allow_list) def test_make_it(self): sot = limits.AbsoluteLimits(**ABSOLUTE_LIMITS) self.assertEqual(ABSOLUTE_LIMITS["maxImageMeta"], sot.image_meta) self.assertEqual(ABSOLUTE_LIMITS["maxPersonality"], sot.personality) self.assertEqual(ABSOLUTE_LIMITS["maxPersonalitySize"], sot.personality_size) self.assertEqual(ABSOLUTE_LIMITS["maxSecurityGroupRules"], sot.security_group_rules) self.assertEqual(ABSOLUTE_LIMITS["maxSecurityGroups"], sot.security_groups) self.assertEqual(ABSOLUTE_LIMITS["maxServerMeta"], sot.server_meta) self.assertEqual(ABSOLUTE_LIMITS["maxTotalCores"], sot.total_cores) self.assertEqual(ABSOLUTE_LIMITS["maxTotalFloatingIps"], sot.floating_ips) self.assertEqual(ABSOLUTE_LIMITS["maxTotalInstances"], sot.instances) self.assertEqual(ABSOLUTE_LIMITS["maxTotalKeypairs"], sot.keypairs) self.assertEqual(ABSOLUTE_LIMITS["maxTotalRAMSize"], sot.total_ram) self.assertEqual(ABSOLUTE_LIMITS["maxServerGroups"], sot.server_groups) self.assertEqual(ABSOLUTE_LIMITS["maxServerGroupMembers"], sot.server_group_members) self.assertEqual(ABSOLUTE_LIMITS["totalFloatingIpsUsed"], sot.floating_ips_used) self.assertEqual(ABSOLUTE_LIMITS["totalSecurityGroupsUsed"], sot.security_groups_used) self.assertEqual(ABSOLUTE_LIMITS["totalRAMUsed"], sot.total_ram_used) self.assertEqual(ABSOLUTE_LIMITS["totalInstancesUsed"], sot.instances_used) self.assertEqual(ABSOLUTE_LIMITS["totalServerGroupsUsed"], sot.server_groups_used) self.assertEqual(ABSOLUTE_LIMITS["totalCoresUsed"], sot.total_cores_used) class TestRateLimit(testtools.TestCase): def test_basic(self): sot = limits.RateLimit() self.assertIsNone(sot.resource_key) self.assertIsNone(sot.resources_key) self.assertEqual("", sot.base_path) self.assertIsNone(sot.service) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertFalse(sot.allow_list) def test_make_it(self): sot = limits.RateLimit(**RATE_LIMIT) 
self.assertEqual(RATE_LIMIT["regex"], sot.regex) self.assertEqual(RATE_LIMIT["uri"], sot.uri) self.assertEqual(RATE_LIMIT["limit"], sot.limits) class TestLimits(testtools.TestCase): def test_basic(self): sot = limits.Limits() self.assertEqual("limits", sot.resource_key) self.assertEqual("/limits", sot.base_path) self.assertEqual("compute", sot.service.service_type) self.assertTrue(sot.allow_get) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertFalse(sot.allow_list) def test_get(self): sess = mock.Mock(spec=adapter.Adapter) resp = mock.Mock() sess.get.return_value = resp resp.json.return_value = copy.deepcopy(LIMITS_BODY) resp.headers = {} resp.status_code = 200 sot = limits.Limits().get(sess) self.assertEqual(ABSOLUTE_LIMITS["maxImageMeta"], sot.absolute.image_meta) self.assertEqual(ABSOLUTE_LIMITS["maxPersonality"], sot.absolute.personality) self.assertEqual(ABSOLUTE_LIMITS["maxPersonalitySize"], sot.absolute.personality_size) self.assertEqual(ABSOLUTE_LIMITS["maxSecurityGroupRules"], sot.absolute.security_group_rules) self.assertEqual(ABSOLUTE_LIMITS["maxSecurityGroups"], sot.absolute.security_groups) self.assertEqual(ABSOLUTE_LIMITS["maxServerMeta"], sot.absolute.server_meta) self.assertEqual(ABSOLUTE_LIMITS["maxTotalCores"], sot.absolute.total_cores) self.assertEqual(ABSOLUTE_LIMITS["maxTotalFloatingIps"], sot.absolute.floating_ips) self.assertEqual(ABSOLUTE_LIMITS["maxTotalInstances"], sot.absolute.instances) self.assertEqual(ABSOLUTE_LIMITS["maxTotalKeypairs"], sot.absolute.keypairs) self.assertEqual(ABSOLUTE_LIMITS["maxTotalRAMSize"], sot.absolute.total_ram) self.assertEqual(ABSOLUTE_LIMITS["maxServerGroups"], sot.absolute.server_groups) self.assertEqual(ABSOLUTE_LIMITS["maxServerGroupMembers"], sot.absolute.server_group_members) self.assertEqual(ABSOLUTE_LIMITS["totalFloatingIpsUsed"], sot.absolute.floating_ips_used) self.assertEqual(ABSOLUTE_LIMITS["totalSecurityGroupsUsed"], 
sot.absolute.security_groups_used) self.assertEqual(ABSOLUTE_LIMITS["totalRAMUsed"], sot.absolute.total_ram_used) self.assertEqual(ABSOLUTE_LIMITS["totalInstancesUsed"], sot.absolute.instances_used) self.assertEqual(ABSOLUTE_LIMITS["totalServerGroupsUsed"], sot.absolute.server_groups_used) self.assertEqual(ABSOLUTE_LIMITS["totalCoresUsed"], sot.absolute.total_cores_used) self.assertEqual(RATE_LIMIT["uri"], sot.rate[0].uri) self.assertEqual(RATE_LIMIT["regex"], sot.rate[0].regex) self.assertEqual(RATE_LIMIT["limit"], sot.rate[0].limits) dsot = sot.to_dict() self.assertIsInstance(dsot['rate'][0], dict) self.assertIsInstance(dsot['absolute'], dict) self.assertEqual(RATE_LIMIT["uri"], dsot['rate'][0]['uri']) self.assertEqual( ABSOLUTE_LIMITS["totalSecurityGroupsUsed"], dsot['absolute']['security_groups_used']) openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_server_ip.py0000666000175100017510000000752313236151340026042 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
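The limits test above drives the resource against a stubbed session rather than a live cloud: a `mock.Mock()` stands in for the keystoneauth adapter, and `resp.json.return_value` supplies the body a real GET would have returned. A minimal self-contained sketch of that stubbing pattern, using a hypothetical `fetch_limits` helper instead of the SDK's own classes:

```python
from unittest import mock


def fetch_limits(session):
    # Issue a GET and unpack the "limits" envelope, mirroring how the
    # mocked response feeds Limits().get(sess) in the test above.
    resp = session.get("/limits")
    return resp.json()["limits"]


# Stub the session exactly as the tests do: no network is involved.
sess = mock.Mock()
resp = mock.Mock()
resp.json.return_value = {"limits": {"absolute": {"maxTotalCores": 20}}}
sess.get.return_value = resp

result = fetch_limits(sess)
```

Because the stub records its calls, a test can also assert on how the session was used (e.g. `sess.get.assert_called_once_with("/limits")`).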
import mock
import testtools

from openstack.compute.v2 import server_ip

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'addr': '1',
    'network_label': '2',
    'version': '4',
}


class TestServerIP(testtools.TestCase):

    def test_basic(self):
        sot = server_ip.ServerIP()
        self.assertEqual('addresses', sot.resources_key)
        self.assertEqual('/servers/%(server_id)s/ips', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = server_ip.ServerIP(**EXAMPLE)
        self.assertEqual(EXAMPLE['addr'], sot.address)
        self.assertEqual(EXAMPLE['network_label'], sot.network_label)
        self.assertEqual(EXAMPLE['version'], sot.version)

    def test_list(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp
        resp.json.return_value = {
            "addresses": {"label1": [{"version": 1, "addr": "a1"},
                                     {"version": 2, "addr": "a2"}],
                          "label2": [{"version": 3, "addr": "a3"},
                                     {"version": 4, "addr": "a4"}]}}

        ips = list(server_ip.ServerIP.list(sess, server_id=IDENTIFIER))

        self.assertEqual(4, len(ips))
        ips = sorted(ips, key=lambda ip: ip.version)
        self.assertEqual(type(ips[0]), server_ip.ServerIP)
        self.assertEqual(ips[0].network_label, "label1")
        self.assertEqual(ips[0].address, "a1")
        self.assertEqual(ips[0].version, 1)
        self.assertEqual(type(ips[1]), server_ip.ServerIP)
        self.assertEqual(ips[1].network_label, "label1")
        self.assertEqual(ips[1].address, "a2")
        self.assertEqual(ips[1].version, 2)
        self.assertEqual(type(ips[2]), server_ip.ServerIP)
        self.assertEqual(ips[2].network_label, "label2")
        self.assertEqual(ips[2].address, "a3")
        self.assertEqual(ips[2].version, 3)
        self.assertEqual(type(ips[3]), server_ip.ServerIP)
        self.assertEqual(ips[3].network_label, "label2")
        self.assertEqual(ips[3].address, "a4")
        self.assertEqual(ips[3].version, 4)

    def test_list_network_label(self):
        label = "label1"
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp
        resp.json.return_value = {label: [{"version": 1, "addr": "a1"},
                                          {"version": 2, "addr": "a2"}]}

        ips = list(server_ip.ServerIP.list(sess, server_id=IDENTIFIER,
                                           network_label=label))

        self.assertEqual(2, len(ips))
        ips = sorted(ips, key=lambda ip: ip.version)
        self.assertEqual(type(ips[0]), server_ip.ServerIP)
        self.assertEqual(ips[0].network_label, label)
        self.assertEqual(ips[0].address, "a1")
        self.assertEqual(ips[0].version, 1)
        self.assertEqual(type(ips[1]), server_ip.ServerIP)
        self.assertEqual(ips[1].network_label, label)
        self.assertEqual(ips[1].address, "a2")
        self.assertEqual(ips[1].version, 2)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_keypair.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
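The `ServerIP.list` tests above verify that the nested `addresses` body (network label mapped to a list of address dicts) comes back as one flat sequence of IP records. A standalone sketch of that flattening, with plain dicts standing in for the SDK resource (the helper name is illustrative, not the SDK's):

```python
def flatten_addresses(body):
    """Flatten {"addresses": {label: [{"version": ..., "addr": ...}]}}
    into a flat, sorted list of per-IP records."""
    ips = []
    for label, entries in body["addresses"].items():
        for entry in entries:
            ips.append({"network_label": label,
                        "address": entry["addr"],
                        "version": entry["version"]})
    # Sort for a deterministic order, as the test does before asserting.
    return sorted(ips, key=lambda ip: ip["version"])


body = {"addresses": {"label1": [{"version": 1, "addr": "a1"},
                                 {"version": 2, "addr": "a2"}],
                      "label2": [{"version": 3, "addr": "a3"}]}}
ips = flatten_addresses(body)
```

Sorting by version before asserting, as the tests do, avoids depending on dict iteration order across Python versions.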
import testtools

from openstack.compute.v2 import keypair

EXAMPLE = {
    'fingerprint': '1',
    'name': '2',
    'public_key': '3',
    'private_key': '3',
}


class TestKeypair(testtools.TestCase):

    def test_basic(self):
        sot = keypair.Keypair()
        self.assertEqual('keypair', sot.resource_key)
        self.assertEqual('keypairs', sot.resources_key)
        self.assertEqual('/os-keypairs', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = keypair.Keypair(**EXAMPLE)
        self.assertEqual(EXAMPLE['fingerprint'], sot.fingerprint)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['public_key'], sot.public_key)
        self.assertEqual(EXAMPLE['private_key'], sot.private_key)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_extension.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
import testtools

from openstack.compute.v2 import extension

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'alias': '1',
    'description': '2',
    'links': '3',
    'name': '4',
    'namespace': '5',
    'updated': '2015-03-09T12:14:57.233772',
}


class TestExtension(testtools.TestCase):

    def test_basic(self):
        sot = extension.Extension()
        self.assertEqual('extension', sot.resource_key)
        self.assertEqual('extensions', sot.resources_key)
        self.assertEqual('/extensions', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = extension.Extension(**EXAMPLE)
        self.assertEqual(EXAMPLE['alias'], sot.alias)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['namespace'], sot.namespace)
        self.assertEqual(EXAMPLE['updated'], sot.updated_at)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_metadata.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.

import mock
import testtools

from openstack.compute.v2 import server

IDENTIFIER = 'IDENTIFIER'


# NOTE: The implementation for metadata is done via a mixin class that both
# the server and image resources inherit from. Currently this test class
# uses the Server resource to test it. Ideally it would be parameterized
# to run with both Server and Image when the tooling for subtests starts
# working.
class TestMetadata(testtools.TestCase):

    def setUp(self):
        super(TestMetadata, self).setUp()
        self.metadata_result = {"metadata": {"go": "cubs", "boo": "sox"}}
        self.meta_result = {"meta": {"oh": "yeah"}}

    def test_get_all_metadata_Server(self):
        self._test_get_all_metadata(server.Server(id=IDENTIFIER))

    def test_get_all_metadata_ServerDetail(self):
        # This is tested explicitly so we know ServerDetail items are
        # properly having /detail stripped out of their base_path.
        self._test_get_all_metadata(server.ServerDetail(id=IDENTIFIER))

    def _test_get_all_metadata(self, sot):
        response = mock.Mock()
        response.json.return_value = self.metadata_result
        sess = mock.Mock()
        sess.get.return_value = response

        result = sot.get_metadata(sess)

        self.assertEqual(result, self.metadata_result["metadata"])
        sess.get.assert_called_once_with(
            "servers/IDENTIFIER/metadata", headers={})

    def test_set_metadata(self):
        response = mock.Mock()
        response.json.return_value = self.metadata_result
        sess = mock.Mock()
        sess.post.return_value = response

        sot = server.Server(id=IDENTIFIER)
        set_meta = {"lol": "rofl"}

        result = sot.set_metadata(sess, **set_meta)

        self.assertEqual(result, self.metadata_result["metadata"])
        sess.post.assert_called_once_with("servers/IDENTIFIER/metadata",
                                          headers={},
                                          json={"metadata": set_meta})

    def test_delete_metadata(self):
        sess = mock.Mock()
        sess.delete.return_value = None

        sot = server.Server(id=IDENTIFIER)
        key = "hey"

        sot.delete_metadata(sess, [key])

        sess.delete.assert_called_once_with(
            "servers/IDENTIFIER/metadata/" + key,
            headers={"Accept": ""},
        )


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_hypervisor.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.

import testtools

from openstack.compute.v2 import hypervisor

EXAMPLE = {
    "status": "enabled",
    "service": {
        "host": "fake-mini",
        "disabled_reason": None,
        "id": 6
    },
    "vcpus_used": 0,
    "hypervisor_type": "QEMU",
    "local_gb_used": 0,
    "vcpus": 8,
    "hypervisor_hostname": "fake-mini",
    "memory_mb_used": 512,
    "memory_mb": 7980,
    "current_workload": 0,
    "state": "up",
    "host_ip": "23.253.248.171",
    "cpu_info": "some cpu info",
    "running_vms": 0,
    "free_disk_gb": 157,
    "hypervisor_version": 2000000,
    "disk_available_least": 140,
    "local_gb": 157,
    "free_ram_mb": 7468,
    "id": 1
}


class TestHypervisor(testtools.TestCase):

    def test_basic(self):
        sot = hypervisor.Hypervisor()
        self.assertEqual('hypervisor', sot.resource_key)
        self.assertEqual('hypervisors', sot.resources_key)
        self.assertEqual('/os-hypervisors', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = hypervisor.Hypervisor(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['hypervisor_hostname'], sot.name)
        self.assertEqual(EXAMPLE['state'], sot.state)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['service'], sot.service_details)
        self.assertEqual(EXAMPLE['vcpus_used'], sot.vcpus_used)
        self.assertEqual(EXAMPLE['hypervisor_type'], sot.hypervisor_type)
        self.assertEqual(EXAMPLE['local_gb_used'], sot.local_disk_used)
        self.assertEqual(EXAMPLE['vcpus'], sot.vcpus)
        self.assertEqual(EXAMPLE['vcpus_used'], sot.vcpus_used)
        self.assertEqual(EXAMPLE['memory_mb_used'], sot.memory_used)
        self.assertEqual(EXAMPLE['memory_mb'], sot.memory_size)
        self.assertEqual(EXAMPLE['current_workload'], sot.current_workload)
        self.assertEqual(EXAMPLE['host_ip'], sot.host_ip)
        self.assertEqual(EXAMPLE['cpu_info'], sot.cpu_info)
        self.assertEqual(EXAMPLE['running_vms'], sot.running_vms)
        self.assertEqual(EXAMPLE['free_disk_gb'], sot.local_disk_free)
        self.assertEqual(EXAMPLE['hypervisor_version'],
                         sot.hypervisor_version)
        self.assertEqual(EXAMPLE['disk_available_least'], sot.disk_available)
        self.assertEqual(EXAMPLE['local_gb'], sot.local_disk_size)
        self.assertEqual(EXAMPLE['free_ram_mb'], sot.memory_free)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_server_group.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
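The hypervisor test above checks that wire-format field names (`memory_mb_used`, `free_ram_mb`, `hypervisor_hostname`) surface as friendlier attribute names (`memory_used`, `memory_free`, `name`). The SDK expresses this mapping through `Resource.Body` declarations; the following is only an illustrative sketch of the same idea using a plain rename table, with hypothetical helper names:

```python
# Illustrative rename table: wire name -> friendly attribute name.
RENAMES = {
    "hypervisor_hostname": "name",
    "memory_mb": "memory_size",
    "memory_mb_used": "memory_used",
    "free_ram_mb": "memory_free",
    "local_gb": "local_disk_size",
    "free_disk_gb": "local_disk_free",
}


def to_attrs(body):
    # Copy each wire field across, renaming where a friendlier name exists;
    # fields without an entry keep their original key.
    return {RENAMES.get(key, key): value for key, value in body.items()}


attrs = to_attrs({"hypervisor_hostname": "fake-mini",
                  "memory_mb": 7980,
                  "vcpus": 8})
```

Keeping the table in one place means a test like `test_make_it` above can assert every rename by iterating a single mapping rather than hand-writing each pair.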
import testtools

from openstack.compute.v2 import server_group

EXAMPLE = {
    'id': 'IDENTIFIER',
    'name': 'test',
    'members': ['server1', 'server2'],
    'metadata': {'k': 'v'},
    'policies': ['anti-affinity'],
}


class TestServerGroup(testtools.TestCase):

    def test_basic(self):
        sot = server_group.ServerGroup()
        self.assertEqual('server_group', sot.resource_key)
        self.assertEqual('server_groups', sot.resources_key)
        self.assertEqual('/os-server-groups', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"all_projects": "all_projects",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_make_it(self):
        sot = server_group.ServerGroup(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['members'], sot.member_ids)
        self.assertEqual(EXAMPLE['metadata'], sot.metadata)
        self.assertEqual(EXAMPLE['policies'], sot.policies)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_server_interface.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
import testtools

from openstack.compute.v2 import server_interface

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'fixed_ips': [
        {
            'ip_address': '192.168.1.1',
            'subnet_id': 'f8a6e8f8-c2ec-497c-9f23-da9616de54ef'
        }
    ],
    'mac_addr': '2',
    'net_id': '3',
    'port_id': '4',
    'port_state': '5',
    'server_id': '6',
}


class TestServerInterface(testtools.TestCase):

    def test_basic(self):
        sot = server_interface.ServerInterface()
        self.assertEqual('interfaceAttachment', sot.resource_key)
        self.assertEqual('interfaceAttachments', sot.resources_key)
        self.assertEqual('/servers/%(server_id)s/os-interface',
                         sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = server_interface.ServerInterface(**EXAMPLE)
        self.assertEqual(EXAMPLE['fixed_ips'], sot.fixed_ips)
        self.assertEqual(EXAMPLE['mac_addr'], sot.mac_addr)
        self.assertEqual(EXAMPLE['net_id'], sot.net_id)
        self.assertEqual(EXAMPLE['port_id'], sot.port_id)
        self.assertEqual(EXAMPLE['port_state'], sot.port_state)
        self.assertEqual(EXAMPLE['server_id'], sot.server_id)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/__init__.py ----
# (empty file)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_volume_attachment.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.

import testtools

from openstack.compute.v2 import volume_attachment

EXAMPLE = {
    'device': '1',
    'id': '2',
    'volume_id': '3',
}


# Renamed from TestServerInterface: the original name was a copy-paste
# leftover from test_server_interface.py.
class TestVolumeAttachment(testtools.TestCase):

    def test_basic(self):
        sot = volume_attachment.VolumeAttachment()
        self.assertEqual('volumeAttachment', sot.resource_key)
        self.assertEqual('volumeAttachments', sot.resources_key)
        self.assertEqual('/servers/%(server_id)s/os-volume_attachments',
                         sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"limit": "limit",
                              "offset": "offset",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_make_it(self):
        sot = volume_attachment.VolumeAttachment(**EXAMPLE)
        self.assertEqual(EXAMPLE['device'], sot.device)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['volume_id'], sot.volume_id)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_availability_zone.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
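Several of these tests assert on `_query_mapping._mapping`, the table that translates client-side filter names into the wire parameter names (for example `changes_since` becomes `changes-since`, and the volume-attachment mapping above allows only `limit`, `offset`, and `marker`). A self-contained sketch of how such a mapping is applied, with a hypothetical `build_query` helper rather than the SDK's `QueryParameters` class:

```python
# Illustrative mapping: client keyword -> API query parameter name.
QUERY_MAPPING = {"limit": "limit", "offset": "offset", "marker": "marker",
                 "changes_since": "changes-since"}


def build_query(**filters):
    """Translate snake_case filter names into wire parameter names,
    rejecting anything the resource does not declare."""
    params = {}
    for name, value in filters.items():
        if name not in QUERY_MAPPING:
            raise ValueError("unsupported filter: %s" % name)
        params[QUERY_MAPPING[name]] = value
    return params


params = build_query(limit=10, changes_since="2015-03-09T12:14:57")
```

Declaring the allowed filters per resource is why the tests can pin the whole mapping with a single `assertDictEqual`.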
import testtools

from openstack.compute.v2 import availability_zone as az

IDENTIFIER = 'IDENTIFIER'
BASIC_EXAMPLE = {
    'id': IDENTIFIER,
    'zoneState': 'available',
    'hosts': 'host1',
    'zoneName': 'zone1'
}


class TestAvailabilityZone(testtools.TestCase):

    def test_basic(self):
        sot = az.AvailabilityZone()
        self.assertEqual('availabilityZoneInfo', sot.resources_key)
        self.assertEqual('/os-availability-zone', sot.base_path)
        self.assertTrue(sot.allow_list)
        self.assertEqual('compute', sot.service.service_type)

    def test_basic_detail(self):
        sot = az.AvailabilityZoneDetail()
        self.assertEqual('availabilityZoneInfo', sot.resources_key)
        self.assertEqual('/os-availability-zone/detail', sot.base_path)
        self.assertTrue(sot.allow_list)
        self.assertEqual('compute', sot.service.service_type)

    def test_make_basic(self):
        sot = az.AvailabilityZone(**BASIC_EXAMPLE)
        self.assertEqual(BASIC_EXAMPLE['id'], sot.id)
        self.assertEqual(BASIC_EXAMPLE['zoneState'], sot.state)
        self.assertEqual(BASIC_EXAMPLE['hosts'], sot.hosts)
        self.assertEqual(BASIC_EXAMPLE['zoneName'], sot.name)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_image.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
import testtools

from openstack.compute.v2 import image

IDENTIFIER = 'IDENTIFIER'
BASIC_EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'name': '3',
}
DETAILS = {
    'created': '2015-03-09T12:14:57.233772',
    'metadata': {'key': '2'},
    'minDisk': 3,
    'minRam': 4,
    'progress': 5,
    'status': '6',
    'updated': '2015-03-09T12:15:57.233772',
    'OS-EXT-IMG-SIZE:size': 8
}
DETAIL_EXAMPLE = BASIC_EXAMPLE.copy()
DETAIL_EXAMPLE.update(DETAILS)


class TestImage(testtools.TestCase):

    def test_basic(self):
        sot = image.Image()
        self.assertEqual('image', sot.resource_key)
        self.assertEqual('images', sot.resources_key)
        self.assertEqual('/images', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"server": "server",
                              "name": "name",
                              "status": "status",
                              "type": "type",
                              "min_disk": "minDisk",
                              "min_ram": "minRam",
                              "changes_since": "changes-since",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_make_basic(self):
        sot = image.Image(**BASIC_EXAMPLE)
        self.assertEqual(BASIC_EXAMPLE['id'], sot.id)
        self.assertEqual(BASIC_EXAMPLE['links'], sot.links)
        self.assertEqual(BASIC_EXAMPLE['name'], sot.name)

    def test_detail(self):
        sot = image.ImageDetail()
        self.assertEqual('image', sot.resource_key)
        self.assertEqual('images', sot.resources_key)
        self.assertEqual('/images/detail', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_detail(self):
        sot = image.ImageDetail(**DETAIL_EXAMPLE)
        self.assertEqual(DETAIL_EXAMPLE['created'], sot.created_at)
        self.assertEqual(DETAIL_EXAMPLE['id'], sot.id)
        self.assertEqual(DETAIL_EXAMPLE['links'], sot.links)
        self.assertEqual(DETAIL_EXAMPLE['metadata'], sot.metadata)
        self.assertEqual(DETAIL_EXAMPLE['minDisk'], sot.min_disk)
        self.assertEqual(DETAIL_EXAMPLE['minRam'], sot.min_ram)
        self.assertEqual(DETAIL_EXAMPLE['name'], sot.name)
        self.assertEqual(DETAIL_EXAMPLE['progress'], sot.progress)
        self.assertEqual(DETAIL_EXAMPLE['status'], sot.status)
        self.assertEqual(DETAIL_EXAMPLE['updated'], sot.updated_at)
        self.assertEqual(DETAIL_EXAMPLE['OS-EXT-IMG-SIZE:size'], sot.size)


# ---- openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_server.py ----
# Licensed under the Apache License, Version 2.0; see the full notice above.
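The image tests build their detail fixture by layering the detail-only fields on a copy of the basic fixture (`DETAIL_EXAMPLE = BASIC_EXAMPLE.copy(); DETAIL_EXAMPLE.update(DETAILS)`), so the two fixtures share one source of truth. A minimal sketch of that pattern with illustrative data:

```python
BASIC = {'id': 'IDENTIFIER', 'links': '2', 'name': '3'}
DETAILS = {'minDisk': 3, 'minRam': 4, 'status': '6'}

# copy() keeps the basic fixture pristine; update() layers the detail-only
# fields on top.  Editing BASIC later flows into both fixtures.
DETAIL = BASIC.copy()
DETAIL.update(DETAILS)
```

Note that `dict.copy()` is shallow: fine here because the layered values are scalars, but nested dicts (like `metadata`) would be shared between the two fixtures.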
import mock
import testtools

from openstack.compute.v2 import server

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'accessIPv4': '1',
    'accessIPv6': '2',
    'addresses': {'region': '3'},
    'config_drive': True,
    'created': '2015-03-09T12:14:57.233772',
    'flavorRef': '5',
    'flavor': {'id': 'FLAVOR_ID', 'links': {}},
    'hostId': '6',
    'id': IDENTIFIER,
    'imageRef': '8',
    'image': {'id': 'IMAGE_ID', 'links': {}},
    'links': '9',
    'metadata': {'key': '10'},
    'networks': 'auto',
    'name': '11',
    'progress': 12,
    'tenant_id': '13',
    'status': '14',
    'updated': '2015-03-09T12:15:57.233772',
    'user_id': '16',
    'key_name': '17',
    'OS-DCF:diskConfig': '18',
    'OS-EXT-AZ:availability_zone': '19',
    'OS-EXT-STS:power_state': '20',
    'OS-EXT-STS:task_state': '21',
    'OS-EXT-STS:vm_state': '22',
    'os-extended-volumes:volumes_attached': '23',
    'OS-SRV-USG:launched_at': '2015-03-09T12:15:57.233772',
    'OS-SRV-USG:terminated_at': '2015-03-09T12:15:57.233772',
    'security_groups': '26',
    'adminPass': '27',
    'personality': '28',
    'block_device_mapping_v2': {'key': '29'},
    'OS-EXT-SRV-ATTR:hypervisor_hostname': 'hypervisor.example.com',
    'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001',
    'OS-SCH-HNT:scheduler_hints': {'key': '30'},
    'OS-EXT-SRV-ATTR:user_data': '31'
}


class TestServer(testtools.TestCase):

    def setUp(self):
        super(TestServer, self).setUp()
        self.resp = mock.Mock()
        self.resp.body = None
        self.resp.json = mock.Mock(return_value=self.resp.body)
        self.sess = mock.Mock()
        self.sess.post = mock.Mock(return_value=self.resp)

    def test_basic(self):
        sot = server.Server()
        self.assertEqual('server', sot.resource_key)
        self.assertEqual('servers', sot.resources_key)
        self.assertEqual('/servers', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"image": "image",
                              "flavor": "flavor",
                              "name": "name",
                              "status": "status",
                              "host": "host",
                              "all_tenants": "all_tenants",
                              "changes_since": "changes-since",
                              "limit": "limit",
                              "marker": "marker",
                              "sort_key": "sort_key",
                              "sort_dir": "sort_dir",
                              "reservation_id": "reservation_id",
                              "project_id": "project_id",
                              "tags": "tags",
                              "tags_any": "tags-any",
                              "not_tags": "not-tags",
                              "not_tags_any": "not-tags-any",
                              "is_deleted": "deleted",
                              "ipv4_address": "ip",
                              "ipv6_address": "ip6",
                              },
                             sot._query_mapping._mapping)

    def test_make_it(self):
        sot = server.Server(**EXAMPLE)
        self.assertEqual(EXAMPLE['accessIPv4'], sot.access_ipv4)
        self.assertEqual(EXAMPLE['accessIPv6'], sot.access_ipv6)
        self.assertEqual(EXAMPLE['addresses'], sot.addresses)
        self.assertEqual(EXAMPLE['created'], sot.created_at)
        self.assertEqual(EXAMPLE['config_drive'], sot.has_config_drive)
        self.assertEqual(EXAMPLE['flavorRef'], sot.flavor_id)
        self.assertEqual(EXAMPLE['flavor'], sot.flavor)
        self.assertEqual(EXAMPLE['hostId'], sot.host_id)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['imageRef'], sot.image_id)
        self.assertEqual(EXAMPLE['image'], sot.image)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['metadata'], sot.metadata)
        self.assertEqual(EXAMPLE['networks'], sot.networks)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['progress'], sot.progress)
        self.assertEqual(EXAMPLE['tenant_id'], sot.project_id)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['updated'], sot.updated_at)
        self.assertEqual(EXAMPLE['user_id'], sot.user_id)
        self.assertEqual(EXAMPLE['key_name'], sot.key_name)
        self.assertEqual(EXAMPLE['OS-DCF:diskConfig'], sot.disk_config)
        self.assertEqual(EXAMPLE['OS-EXT-AZ:availability_zone'],
                         sot.availability_zone)
        self.assertEqual(EXAMPLE['OS-EXT-STS:power_state'], sot.power_state)
        self.assertEqual(EXAMPLE['OS-EXT-STS:task_state'], sot.task_state)
        self.assertEqual(EXAMPLE['OS-EXT-STS:vm_state'], sot.vm_state)
        self.assertEqual(EXAMPLE['os-extended-volumes:volumes_attached'],
                         sot.attached_volumes)
        self.assertEqual(EXAMPLE['OS-SRV-USG:launched_at'], sot.launched_at)
        self.assertEqual(EXAMPLE['OS-SRV-USG:terminated_at'],
                         sot.terminated_at)
        self.assertEqual(EXAMPLE['security_groups'], sot.security_groups)
        self.assertEqual(EXAMPLE['adminPass'], sot.admin_password)
        self.assertEqual(EXAMPLE['personality'], sot.personality)
        self.assertEqual(EXAMPLE['block_device_mapping_v2'],
                         sot.block_device_mapping)
        self.assertEqual(EXAMPLE['OS-EXT-SRV-ATTR:hypervisor_hostname'],
                         sot.hypervisor_hostname)
        self.assertEqual(EXAMPLE['OS-EXT-SRV-ATTR:instance_name'],
                         sot.instance_name)
        self.assertEqual(EXAMPLE['OS-SCH-HNT:scheduler_hints'],
                         sot.scheduler_hints)
        self.assertEqual(EXAMPLE['OS-EXT-SRV-ATTR:user_data'],
                         sot.user_data)

    def test_detail(self):
        sot = server.ServerDetail()
        self.assertEqual('server', sot.resource_key)
        self.assertEqual('servers', sot.resources_key)
        self.assertEqual('/servers/detail', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test__prepare_server(self):
        zone = 1
        data = 2
        hints = {"hint": 3}

        sot = server.Server(id=1, availability_zone=zone, user_data=data,
                            scheduler_hints=hints)
        request = sot._prepare_request()

        self.assertNotIn("OS-EXT-AZ:availability_zone",
                         request.body[sot.resource_key])
        self.assertEqual(request.body[sot.resource_key]["availability_zone"],
                         zone)
        self.assertNotIn("OS-EXT-SRV-ATTR:user_data",
                         request.body[sot.resource_key])
        self.assertEqual(request.body[sot.resource_key]["user_data"], data)
        self.assertNotIn("OS-SCH-HNT:scheduler_hints",
                         request.body[sot.resource_key])
        self.assertEqual(request.body["OS-SCH-HNT:scheduler_hints"], hints)

    def test_change_password(self):
        sot = server.Server(**EXAMPLE)

        self.assertIsNone(sot.change_password(self.sess, 'a'))

        url = 'servers/IDENTIFIER/action'
        body = {"changePassword": {"adminPass": "a"}}
        headers = {'Accept': ''}
        self.sess.post.assert_called_with(
            url, json=body, headers=headers)

    def test_reboot(self):
        sot = server.Server(**EXAMPLE)

        self.assertIsNone(sot.reboot(self.sess, 'HARD'))

        url = 'servers/IDENTIFIER/action'
        body = {"reboot": {"type": "HARD"}}
        headers = {'Accept': ''}
        self.sess.post.assert_called_with(
            url, json=body, headers=headers)

    def test_force_delete(self):
        sot = server.Server(**EXAMPLE)

        self.assertIsNone(sot.force_delete(self.sess))

        url = 'servers/IDENTIFIER/action'
        body = {'forceDelete': None}
        headers = {'Accept': ''}
        self.sess.post.assert_called_with(
            url, json=body, headers=headers)

    def test_rebuild(self):
        sot = server.Server(**EXAMPLE)
        # Let the translate pass through, that portion is tested elsewhere
        sot._translate_response = lambda arg: arg

        result = sot.rebuild(
            self.sess,
            name='noo',
            admin_password='seekr3t',
            image='http://image/1',
            access_ipv4="12.34.56.78",
            access_ipv6="fe80::100",
            metadata={"meta var": "meta val"},
            personality=[{"path": "/etc/motd", "contents": "foo"}])

        self.assertIsInstance(result, server.Server)

        url = 'servers/IDENTIFIER/action'
        body = {
            "rebuild": {
                "name": "noo",
                "imageRef": "http://image/1",
                "adminPass": "seekr3t",
                "accessIPv4": "12.34.56.78",
                "accessIPv6": "fe80::100",
                "metadata": {"meta var": "meta val"},
                "personality": [{"path": "/etc/motd", "contents": "foo"}],
                "preserve_ephemeral": False
            }
        }
        headers = {'Accept': ''}
        self.sess.post.assert_called_with(
            url, json=body, headers=headers)

    def test_rebuild_minimal(self):
        sot = server.Server(**EXAMPLE)
        # Let the translate pass through, that portion is tested elsewhere
        sot._translate_response = lambda arg: arg

        result = sot.rebuild(
            self.sess,
            name='nootoo',
            admin_password='seekr3two',
            image='http://image/2')

        self.assertIsInstance(result, server.Server)

        url = 'servers/IDENTIFIER/action'
        body = {
            "rebuild": {
                "name": "nootoo",
                "imageRef": "http://image/2",
                "adminPass": "seekr3two",
                "preserve_ephemeral": False
            }
        }
        headers = {'Accept': ''}
        self.sess.post.assert_called_with(
            url, json=body,
headers=headers) def test_resize(self): sot = server.Server(**EXAMPLE) self.assertIsNone(sot.resize(self.sess, '2')) url = 'servers/IDENTIFIER/action' body = {"resize": {"flavorRef": "2"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_confirm_resize(self): sot = server.Server(**EXAMPLE) self.assertIsNone(sot.confirm_resize(self.sess)) url = 'servers/IDENTIFIER/action' body = {"confirmResize": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_revert_resize(self): sot = server.Server(**EXAMPLE) self.assertIsNone(sot.revert_resize(self.sess)) url = 'servers/IDENTIFIER/action' body = {"revertResize": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_create_image(self): sot = server.Server(**EXAMPLE) name = 'noo' metadata = {'nu': 'image', 'created': 'today'} self.assertIsNone(sot.create_image(self.sess, name, metadata)) url = 'servers/IDENTIFIER/action' body = {"createImage": {'name': name, 'metadata': metadata}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_create_image_minimal(self): sot = server.Server(**EXAMPLE) name = 'noo' self.assertIsNone(self.resp.body, sot.create_image(self.sess, name)) url = 'servers/IDENTIFIER/action' body = {"createImage": {'name': name}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_add_security_group(self): sot = server.Server(**EXAMPLE) self.assertIsNone(sot.add_security_group(self.sess, "group")) url = 'servers/IDENTIFIER/action' body = {"addSecurityGroup": {"name": "group"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_remove_security_group(self): sot = server.Server(**EXAMPLE) self.assertIsNone(sot.remove_security_group(self.sess, "group")) url = 'servers/IDENTIFIER/action' body = 
{"removeSecurityGroup": {"name": "group"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_reset_state(self): sot = server.Server(**EXAMPLE) self.assertIsNone(sot.reset_state(self.sess, 'active')) url = 'servers/IDENTIFIER/action' body = {"os-resetState": {"state": 'active'}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_add_fixed_ip(self): sot = server.Server(**EXAMPLE) res = sot.add_fixed_ip(self.sess, "NETWORK-ID") self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"addFixedIp": {"networkId": "NETWORK-ID"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_remove_fixed_ip(self): sot = server.Server(**EXAMPLE) res = sot.remove_fixed_ip(self.sess, "ADDRESS") self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"removeFixedIp": {"address": "ADDRESS"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_add_floating_ip(self): sot = server.Server(**EXAMPLE) res = sot.add_floating_ip(self.sess, "FLOATING-IP") self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"addFloatingIp": {"address": "FLOATING-IP"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_add_floating_ip_with_fixed_addr(self): sot = server.Server(**EXAMPLE) res = sot.add_floating_ip(self.sess, "FLOATING-IP", "FIXED-ADDR") self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"addFloatingIp": {"address": "FLOATING-IP", "fixed_address": "FIXED-ADDR"}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_remove_floating_ip(self): sot = server.Server(**EXAMPLE) res = sot.remove_floating_ip(self.sess, "I-AM-FLOATING") self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"removeFloatingIp": {"address": "I-AM-FLOATING"}} headers = {'Accept': 
''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_backup(self): sot = server.Server(**EXAMPLE) res = sot.backup(self.sess, "name", "daily", 1) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"createBackup": {"name": "name", "backup_type": "daily", "rotation": 1}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_pause(self): sot = server.Server(**EXAMPLE) res = sot.pause(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"pause": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_unpause(self): sot = server.Server(**EXAMPLE) res = sot.unpause(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"unpause": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_suspend(self): sot = server.Server(**EXAMPLE) res = sot.suspend(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"suspend": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_resume(self): sot = server.Server(**EXAMPLE) res = sot.resume(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"resume": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_lock(self): sot = server.Server(**EXAMPLE) res = sot.lock(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"lock": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_unlock(self): sot = server.Server(**EXAMPLE) res = sot.unlock(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"unlock": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_rescue(self): sot = server.Server(**EXAMPLE) res = 
sot.rescue(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"rescue": {}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_rescue_with_options(self): sot = server.Server(**EXAMPLE) res = sot.rescue(self.sess, admin_pass='SECRET', image_ref='IMG-ID') self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"rescue": {'adminPass': 'SECRET', 'rescue_image_ref': 'IMG-ID'}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_unrescue(self): sot = server.Server(**EXAMPLE) res = sot.unrescue(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"unrescue": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_evacuate(self): sot = server.Server(**EXAMPLE) res = sot.evacuate(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"evacuate": {}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_evacuate_with_options(self): sot = server.Server(**EXAMPLE) res = sot.evacuate(self.sess, host='HOST2', admin_pass='NEW_PASS', force=True) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"evacuate": {'host': 'HOST2', 'adminPass': 'NEW_PASS', 'force': True}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_start(self): sot = server.Server(**EXAMPLE) res = sot.start(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"os-start": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_stop(self): sot = server.Server(**EXAMPLE) res = sot.stop(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"os-stop": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_shelve(self): sot = 
server.Server(**EXAMPLE) res = sot.shelve(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"shelve": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_unshelve(self): sot = server.Server(**EXAMPLE) res = sot.unshelve(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"unshelve": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_migrate(self): sot = server.Server(**EXAMPLE) res = sot.migrate(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {"migrate": None} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_get_console_output(self): sot = server.Server(**EXAMPLE) res = sot.get_console_output(self.sess) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {'os-getConsoleOutput': {}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) res = sot.get_console_output(self.sess, length=1) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = {'os-getConsoleOutput': {'length': 1}} headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) def test_live_migrate(self): sot = server.Server(**EXAMPLE) res = sot.live_migrate(self.sess, host='HOST2', force=False) self.assertIsNone(res) url = 'servers/IDENTIFIER/action' body = { "os-migrateLive": { "host": 'HOST2', "block_migration": "auto", "force": False } } headers = {'Accept': ''} self.sess.post.assert_called_with( url, json=body, headers=headers) openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_service.py0000666000175100017510000000671613236151340025507 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack.compute.v2 import service

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'binary': 'nova-compute',
    'host': 'host1',
    'status': 'enabled',
    'state': 'up',
    'zone': 'nova'
}


class TestService(testtools.TestCase):

    def setUp(self):
        super(TestService, self).setUp()
        self.resp = mock.Mock()
        self.resp.body = None
        self.resp.json = mock.Mock(return_value=self.resp.body)
        self.sess = mock.Mock()
        self.sess.put = mock.Mock(return_value=self.resp)

    def test_basic(self):
        sot = service.Service()
        self.assertEqual('service', sot.resource_key)
        self.assertEqual('services', sot.resources_key)
        self.assertEqual('/os-services', sot.base_path)
        self.assertEqual('compute', sot.service.service_type)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_list)
        self.assertFalse(sot.allow_get)

    def test_make_it(self):
        sot = service.Service(**EXAMPLE)
        self.assertEqual(EXAMPLE['host'], sot.host)
        self.assertEqual(EXAMPLE['binary'], sot.binary)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['state'], sot.state)
        self.assertEqual(EXAMPLE['zone'], sot.zone)
        self.assertEqual(EXAMPLE['id'], sot.id)

    def test_force_down(self):
        sot = service.Service(**EXAMPLE)

        res = sot.force_down(self.sess, 'host1', 'nova-compute')

        self.assertIsNone(res.body)

        url = 'os-services/force-down'
        body = {
            'binary': 'nova-compute',
            'host': 'host1',
            'forced_down': True,
        }
        self.sess.put.assert_called_with(
            url, json=body)

    def test_enable(self):
        sot = service.Service(**EXAMPLE)

        res = sot.enable(self.sess, 'host1', 'nova-compute')

        self.assertIsNone(res.body)

        url = 'os-services/enable'
        body = {
            'binary': 'nova-compute',
            'host': 'host1',
        }
        self.sess.put.assert_called_with(
            url, json=body)

    def test_disable(self):
        sot = service.Service(**EXAMPLE)

        res = sot.disable(self.sess, 'host1', 'nova-compute')

        self.assertIsNone(res.body)

        url = 'os-services/disable'
        body = {
            'binary': 'nova-compute',
            'host': 'host1',
        }
        self.sess.put.assert_called_with(
            url, json=body)

    def test_disable_with_reason(self):
        sot = service.Service(**EXAMPLE)
        reason = 'fencing'

        res = sot.disable(self.sess, 'host1', 'nova-compute', reason=reason)

        self.assertIsNone(res.body)

        url = 'os-services/disable-log-reason'
        body = {
            'binary': 'nova-compute',
            'host': 'host1',
            'disabled_reason': reason
        }
        self.sess.put.assert_called_with(
            url, json=body)
openstacksdk-0.11.3/openstack/tests/unit/compute/v2/test_proxy.py0000666000175100017510000005447613236151340025226 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.compute.v2 import _proxy
from openstack.compute.v2 import availability_zone as az
from openstack.compute.v2 import extension
from openstack.compute.v2 import flavor
from openstack.compute.v2 import hypervisor
from openstack.compute.v2 import image
from openstack.compute.v2 import keypair
from openstack.compute.v2 import limits
from openstack.compute.v2 import server
from openstack.compute.v2 import server_group
from openstack.compute.v2 import server_interface
from openstack.compute.v2 import server_ip
from openstack.compute.v2 import service
from openstack.tests.unit import test_proxy_base


class TestComputeProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestComputeProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_extension_find(self):
        self.verify_find(self.proxy.find_extension, extension.Extension)

    def test_extensions(self):
        self.verify_list_no_kwargs(self.proxy.extensions, extension.Extension,
                                   paginated=False)

    def test_flavor_create(self):
        self.verify_create(self.proxy.create_flavor, flavor.Flavor)

    def test_flavor_delete(self):
        self.verify_delete(self.proxy.delete_flavor, flavor.Flavor, False)

    def test_flavor_delete_ignore(self):
        self.verify_delete(self.proxy.delete_flavor, flavor.Flavor, True)

    def test_flavor_find(self):
        self.verify_find(self.proxy.find_flavor, flavor.Flavor)

    def test_flavor_get(self):
        self.verify_get(self.proxy.get_flavor, flavor.Flavor)

    def test_flavors_detailed(self):
        self.verify_list(self.proxy.flavors, flavor.FlavorDetail,
                         paginated=True,
                         method_kwargs={"details": True, "query": 1},
                         expected_kwargs={"query": 1})

    def test_flavors_not_detailed(self):
        self.verify_list(self.proxy.flavors, flavor.Flavor,
                         paginated=True,
                         method_kwargs={"details": False, "query": 1},
                         expected_kwargs={"query": 1})

    def test_image_delete(self):
        self.verify_delete(self.proxy.delete_image, image.Image, False)

    def test_image_delete_ignore(self):
        self.verify_delete(self.proxy.delete_image, image.Image, True)

    def test_image_find(self):
        self.verify_find(self.proxy.find_image, image.Image)

    def test_image_get(self):
        self.verify_get(self.proxy.get_image, image.Image)

    def test_images_detailed(self):
        self.verify_list(self.proxy.images, image.ImageDetail,
                         paginated=True,
                         method_kwargs={"details": True, "query": 1},
                         expected_kwargs={"query": 1})

    def test_images_not_detailed(self):
        self.verify_list(self.proxy.images, image.Image,
                         paginated=True,
                         method_kwargs={"details": False, "query": 1},
                         expected_kwargs={"query": 1})

    def test_keypair_create(self):
        self.verify_create(self.proxy.create_keypair, keypair.Keypair)

    def test_keypair_delete(self):
        self.verify_delete(self.proxy.delete_keypair, keypair.Keypair, False)

    def test_keypair_delete_ignore(self):
        self.verify_delete(self.proxy.delete_keypair, keypair.Keypair, True)

    def test_keypair_find(self):
        self.verify_find(self.proxy.find_keypair, keypair.Keypair)

    def test_keypair_get(self):
        self.verify_get(self.proxy.get_keypair, keypair.Keypair)

    def test_keypairs(self):
        self.verify_list_no_kwargs(self.proxy.keypairs, keypair.Keypair,
                                   paginated=False)

    def test_limits_get(self):
        self.verify_get(self.proxy.get_limits, limits.Limits, value=[])

    def test_server_interface_create(self):
        self.verify_create(self.proxy.create_server_interface,
                           server_interface.ServerInterface,
                           method_kwargs={"server": "test_id"},
                           expected_kwargs={"server_id": "test_id"})

    def test_server_interface_delete(self):
        self.proxy._get_uri_attribute = lambda *args: args[1]
        interface_id = "test_interface_id"
        server_id = "test_server_id"

        test_interface = server_interface.ServerInterface(id=interface_id)
        test_interface.server_id = server_id

        # Case1: ServerInterface instance is provided as value
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.delete_server_interface,
                      method_args=[test_interface],
                      method_kwargs={"server": server_id},
                      expected_args=[server_interface.ServerInterface],
                      expected_kwargs={"server_id": server_id,
                                       "port_id": interface_id,
                                       "ignore_missing": True})

        # Case2: ServerInterface ID is provided as value
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.delete_server_interface,
                      method_args=[interface_id],
                      method_kwargs={"server": server_id},
                      expected_args=[server_interface.ServerInterface],
                      expected_kwargs={"server_id": server_id,
                                       "port_id": interface_id,
                                       "ignore_missing": True})

    def test_server_interface_delete_ignore(self):
        self.proxy._get_uri_attribute = lambda *args: args[1]
        self.verify_delete(self.proxy.delete_server_interface,
                           server_interface.ServerInterface, True,
                           method_kwargs={"server": "test_id"},
                           expected_args=[server_interface.ServerInterface],
                           expected_kwargs={"server_id": "test_id",
                                            "port_id": "resource_or_id"})

    def test_server_interface_get(self):
        self.proxy._get_uri_attribute = lambda *args: args[1]
        interface_id = "test_interface_id"
        server_id = "test_server_id"

        test_interface = server_interface.ServerInterface(id=interface_id)
        test_interface.server_id = server_id

        # Case1: ServerInterface instance is provided as value
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_server_interface,
                      method_args=[test_interface],
                      method_kwargs={"server": server_id},
                      expected_args=[server_interface.ServerInterface],
                      expected_kwargs={"port_id": interface_id,
                                       "server_id": server_id})

        # Case2: ServerInterface ID is provided as value
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_server_interface,
                      method_args=[interface_id],
                      method_kwargs={"server": server_id},
                      expected_args=[server_interface.ServerInterface],
                      expected_kwargs={"port_id": interface_id,
                                       "server_id": server_id})

    def test_server_interfaces(self):
        self.verify_list(self.proxy.server_interfaces,
                         server_interface.ServerInterface,
                         paginated=False, method_args=["test_id"],
                         expected_kwargs={"server_id": "test_id"})

    def test_server_ips_with_network_label(self):
        self.verify_list(self.proxy.server_ips, server_ip.ServerIP,
                         paginated=False, method_args=["test_id"],
                         method_kwargs={"network_label": "test_label"},
                         expected_kwargs={"server_id": "test_id",
                                          "network_label": "test_label"})

    def test_server_ips_without_network_label(self):
        self.verify_list(self.proxy.server_ips, server_ip.ServerIP,
                         paginated=False, method_args=["test_id"],
                         expected_kwargs={"server_id": "test_id",
                                          "network_label": None})

    def test_server_create_attrs(self):
        self.verify_create(self.proxy.create_server, server.Server)

    def test_server_delete(self):
        self.verify_delete(self.proxy.delete_server, server.Server, False)

    def test_server_delete_ignore(self):
        self.verify_delete(self.proxy.delete_server, server.Server, True)

    def test_server_force_delete(self):
        self._verify("openstack.compute.v2.server.Server.force_delete",
                     self.proxy.delete_server,
                     method_args=["value", False, True])

    def test_server_find(self):
        self.verify_find(self.proxy.find_server, server.Server)

    def test_server_get(self):
        self.verify_get(self.proxy.get_server, server.Server)

    def test_servers_detailed(self):
        self.verify_list(self.proxy.servers, server.ServerDetail,
                         paginated=True,
                         method_kwargs={"details": True,
                                        "changes_since": 1, "image": 2},
                         expected_kwargs={"changes_since": 1, "image": 2})

    def test_servers_not_detailed(self):
        self.verify_list(self.proxy.servers, server.Server,
                         paginated=True,
                         method_kwargs={"details": False,
                                        "changes_since": 1, "image": 2},
                         expected_kwargs={"paginated": True,
                                          "changes_since": 1, "image": 2})

    def test_server_update(self):
        self.verify_update(self.proxy.update_server, server.Server)

    def test_server_wait_for(self):
        value = server.Server(id='1234')
        self.verify_wait_for_status(
            self.proxy.wait_for_server,
            method_args=[value],
            expected_args=[value, 'ACTIVE', ['ERROR'], 2, 120])

    def test_server_resize(self):
        self._verify("openstack.compute.v2.server.Server.resize",
                     self.proxy.resize_server,
                     method_args=["value", "test-flavor"],
                     expected_args=["test-flavor"])

    def test_server_confirm_resize(self):
        self._verify("openstack.compute.v2.server.Server.confirm_resize",
                     self.proxy.confirm_server_resize,
                     method_args=["value"])

    def test_server_revert_resize(self):
        self._verify("openstack.compute.v2.server.Server.revert_resize",
                     self.proxy.revert_server_resize,
                     method_args=["value"])

    def test_server_rebuild(self):
        id = 'test_image_id'
        image_obj = image.Image(id='test_image_id')

        # Case1: image object is provided
        # NOTE: Inside of Server.rebuild is where image_obj gets converted
        # to an ID instead of object.
        self._verify('openstack.compute.v2.server.Server.rebuild',
                     self.proxy.rebuild_server,
                     method_args=["value", "test_server", "test_pass"],
                     method_kwargs={"metadata": {"k1": "v1"},
                                    "image": image_obj},
                     expected_args=["test_server", "test_pass"],
                     expected_kwargs={"metadata": {"k1": "v1"},
                                      "image": image_obj})

        # Case2: image name or id is provided
        self._verify('openstack.compute.v2.server.Server.rebuild',
                     self.proxy.rebuild_server,
                     method_args=["value", "test_server", "test_pass"],
                     method_kwargs={"metadata": {"k1": "v1"},
                                    "image": id},
                     expected_args=["test_server", "test_pass"],
                     expected_kwargs={"metadata": {"k1": "v1"},
                                      "image": id})

    def test_add_fixed_ip_to_server(self):
        self._verify("openstack.compute.v2.server.Server.add_fixed_ip",
                     self.proxy.add_fixed_ip_to_server,
                     method_args=["value", "network-id"],
                     expected_args=["network-id"])

    def test_fixed_ip_from_server(self):
        self._verify("openstack.compute.v2.server.Server.remove_fixed_ip",
                     self.proxy.remove_fixed_ip_from_server,
                     method_args=["value", "address"],
                     expected_args=["address"])

    def test_floating_ip_to_server(self):
        self._verify("openstack.compute.v2.server.Server.add_floating_ip",
                     self.proxy.add_floating_ip_to_server,
                     method_args=["value", "floating-ip"],
                     expected_args=["floating-ip"],
                     expected_kwargs={'fixed_address': None})

    def test_add_floating_ip_to_server_with_fixed_addr(self):
        self._verify("openstack.compute.v2.server.Server.add_floating_ip",
                     self.proxy.add_floating_ip_to_server,
                     method_args=["value", "floating-ip", 'fixed-addr'],
                     expected_args=["floating-ip"],
                     expected_kwargs={'fixed_address': 'fixed-addr'})

    def test_remove_floating_ip_from_server(self):
        self._verify("openstack.compute.v2.server.Server.remove_floating_ip",
                     self.proxy.remove_floating_ip_from_server,
                     method_args=["value", "address"],
                     expected_args=["address"])

    def test_server_backup(self):
        self._verify("openstack.compute.v2.server.Server.backup",
                     self.proxy.backup_server,
                     method_args=["value", "name", "daily", 1],
                     expected_args=["name", "daily", 1])

    def test_server_pause(self):
        self._verify("openstack.compute.v2.server.Server.pause",
                     self.proxy.pause_server,
                     method_args=["value"])

    def test_server_unpause(self):
        self._verify("openstack.compute.v2.server.Server.unpause",
                     self.proxy.unpause_server,
                     method_args=["value"])

    def test_server_suspend(self):
        self._verify("openstack.compute.v2.server.Server.suspend",
                     self.proxy.suspend_server,
                     method_args=["value"])

    def test_server_resume(self):
        self._verify("openstack.compute.v2.server.Server.resume",
                     self.proxy.resume_server,
                     method_args=["value"])

    def test_server_lock(self):
        self._verify("openstack.compute.v2.server.Server.lock",
                     self.proxy.lock_server,
                     method_args=["value"])

    def test_server_unlock(self):
        self._verify("openstack.compute.v2.server.Server.unlock",
                     self.proxy.unlock_server,
                     method_args=["value"])

    def test_server_rescue(self):
        self._verify("openstack.compute.v2.server.Server.rescue",
                     self.proxy.rescue_server,
                     method_args=["value"],
                     expected_kwargs={"admin_pass": None, "image_ref": None})

    def test_server_rescue_with_options(self):
        self._verify("openstack.compute.v2.server.Server.rescue",
                     self.proxy.rescue_server,
                     method_args=["value", 'PASS', 'IMG'],
                     expected_kwargs={"admin_pass": 'PASS',
                                      "image_ref": 'IMG'})

    def test_server_unrescue(self):
        self._verify("openstack.compute.v2.server.Server.unrescue",
                     self.proxy.unrescue_server,
                     method_args=["value"])

    def test_server_evacuate(self):
        self._verify("openstack.compute.v2.server.Server.evacuate",
                     self.proxy.evacuate_server,
                     method_args=["value"],
                     expected_kwargs={"host": None, "admin_pass": None,
                                      "force": None})

    def test_server_evacuate_with_options(self):
        self._verify("openstack.compute.v2.server.Server.evacuate",
                     self.proxy.evacuate_server,
                     method_args=["value", 'HOST2', 'NEW_PASS', True],
                     expected_kwargs={"host": "HOST2",
                                      "admin_pass": 'NEW_PASS',
                                      "force": True})

    def test_server_start(self):
        self._verify("openstack.compute.v2.server.Server.start",
                     self.proxy.start_server,
                     method_args=["value"])

    def test_server_stop(self):
        self._verify("openstack.compute.v2.server.Server.stop",
                     self.proxy.stop_server,
                     method_args=["value"])

    def test_server_shelve(self):
        self._verify("openstack.compute.v2.server.Server.shelve",
                     self.proxy.shelve_server,
                     method_args=["value"])

    def test_server_unshelve(self):
        self._verify("openstack.compute.v2.server.Server.unshelve",
                     self.proxy.unshelve_server,
                     method_args=["value"])

    def test_get_server_output(self):
        self._verify("openstack.compute.v2.server.Server.get_console_output",
                     self.proxy.get_server_console_output,
                     method_args=["value"],
                     expected_kwargs={"length": None})

        self._verify("openstack.compute.v2.server.Server.get_console_output",
                     self.proxy.get_server_console_output,
                     method_args=["value", 1],
                     expected_kwargs={"length": 1})

    def test_availability_zones(self):
        self.verify_list_no_kwargs(self.proxy.availability_zones,
                                   az.AvailabilityZone,
                                   paginated=False)

    def test_get_all_server_metadata(self):
        self._verify2("openstack.compute.v2.server.Server.get_metadata",
                      self.proxy.get_server_metadata,
                      method_args=["value"],
                      method_result=server.Server(id="value", metadata={}),
                      expected_args=[self.proxy],
                      expected_result={})

    def test_set_server_metadata(self):
        kwargs = {"a": "1", "b": "2"}
        id = "an_id"
        self._verify2("openstack.compute.v2.server.Server.set_metadata",
                      self.proxy.set_server_metadata,
                      method_args=[id],
                      method_kwargs=kwargs,
                      method_result=server.Server.existing(id=id,
                                                           metadata=kwargs),
                      expected_args=[self.proxy],
                      expected_kwargs=kwargs,
                      expected_result=kwargs)

    def test_delete_server_metadata(self):
        self._verify2("openstack.compute.v2.server.Server.delete_metadata",
                      self.proxy.delete_server_metadata,
                      expected_result=None,
                      method_args=["value", "key"],
                      expected_args=[self.proxy, "key"])

    def test_server_group_create(self):
        self.verify_create(self.proxy.create_server_group,
                           server_group.ServerGroup)

    def test_server_group_delete(self):
        self.verify_delete(self.proxy.delete_server_group,
                           server_group.ServerGroup, False)

    def test_server_group_delete_ignore(self):
        self.verify_delete(self.proxy.delete_server_group,
                           server_group.ServerGroup, True)

    def test_server_group_find(self):
        self.verify_find(self.proxy.find_server_group,
                         server_group.ServerGroup)

    def test_server_group_get(self):
        self.verify_get(self.proxy.get_server_group,
                        server_group.ServerGroup)

    def test_server_groups(self):
        self.verify_list(self.proxy.server_groups, server_group.ServerGroup,
                         paginated=False)

    def test_hypervisors(self):
        self.verify_list_no_kwargs(self.proxy.hypervisors,
                                   hypervisor.Hypervisor,
                                   paginated=False)

    def test_find_hypervisor(self):
        self.verify_find(self.proxy.find_hypervisor,
                         hypervisor.Hypervisor)

    def test_get_hypervisor(self):
        self.verify_get(self.proxy.get_hypervisor,
                        hypervisor.Hypervisor)

    def test_services(self):
        self.verify_list_no_kwargs(self.proxy.services,
                                   service.Service,
                                   paginated=False)

    def test_enable_service(self):
        self._verify('openstack.compute.v2.service.Service.enable',
                     self.proxy.enable_service,
                     method_args=["value", "host1", "nova-compute"],
                     expected_args=["host1", "nova-compute"])

    def test_disable_service(self):
        self._verify('openstack.compute.v2.service.Service.disable',
                     self.proxy.disable_service,
                     method_args=["value", "host1", "nova-compute"],
                     expected_args=["host1", "nova-compute", None])

    def test_force_service_down(self):
        self._verify('openstack.compute.v2.service.Service.force_down',
                     self.proxy.force_service_down,
                     method_args=["value", "host1", "nova-compute"],
                     expected_args=["host1", "nova-compute"])

    def test_live_migrate_server(self):
        self._verify('openstack.compute.v2.server.Server.live_migrate',
                     self.proxy.live_migrate_server,
                     method_args=["value", "host1", "force"],
                     expected_args=["host1", "force"])
openstacksdk-0.11.3/openstack/tests/unit/compute/__init__.py0000666000175100017510000000000013236151340024174 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/compute/test_compute_service.py0000666000175100017510000000210513236151340026700 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.compute import compute_service


class TestComputeService(testtools.TestCase):

    def test_service(self):
        sot = compute_service.ComputeService()
        self.assertEqual('compute', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v2', sot.valid_versions[0].module)
        self.assertEqual('v2', sot.valid_versions[0].path)
openstacksdk-0.11.3/openstack/tests/unit/fakes.py0000666000175100017510000000235313236151340022067 0ustar zuulzuul00000000000000
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock


class FakeTransport(mock.Mock):
    RESPONSE = mock.Mock('200 OK')

    def __init__(self):
        super(FakeTransport, self).__init__()
        self.request = mock.Mock()
        self.request.return_value = self.RESPONSE


class FakeAuthenticator(mock.Mock):
    TOKEN = 'fake_token'
    ENDPOINT = 'http://www.example.com/endpoint'

    def __init__(self):
        super(FakeAuthenticator, self).__init__()
        self.get_token = mock.Mock()
        self.get_token.return_value = self.TOKEN
        self.get_endpoint = mock.Mock()
        self.get_endpoint.return_value = self.ENDPOINT

openstacksdk-0.11.3/openstack/tests/unit/message/test_version.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.message import version

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'status': '3',
}


class TestVersion(testtools.TestCase):

    def test_basic(self):
        sot = version.Version()
        self.assertEqual('version', sot.resource_key)
        self.assertEqual('versions', sot.resources_key)
        self.assertEqual('/', sot.base_path)
        self.assertEqual('messaging', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = version.Version(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['status'], sot.status)

openstacksdk-0.11.3/openstack/tests/unit/message/v2/test_queue.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import testtools
import uuid

from openstack.message.v2 import queue

FAKE1 = {
    'name': 'test_queue',
    'description': 'Queue used for test.',
    '_default_message_ttl': 3600,
    '_max_messages_post_size': 262144
}

FAKE2 = {
    'name': 'test_queue',
    'description': 'Queue used for test.',
    '_default_message_ttl': 3600,
    '_max_messages_post_size': 262144,
    'client_id': 'OLD_CLIENT_ID',
    'project_id': 'OLD_PROJECT_ID'
}


class TestQueue(testtools.TestCase):

    def test_basic(self):
        sot = queue.Queue()
        self.assertEqual('queues', sot.resources_key)
        self.assertEqual('/queues', sot.base_path)
        self.assertEqual('messaging', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = queue.Queue.new(**FAKE2)
        self.assertEqual(FAKE1['description'], sot.description)
        self.assertEqual(FAKE1['name'], sot.name)
        self.assertEqual(FAKE1['name'], sot.id)
        self.assertEqual(FAKE1['_default_message_ttl'],
                         sot.default_message_ttl)
        self.assertEqual(FAKE1['_max_messages_post_size'],
                         sot.max_messages_post_size)
        self.assertEqual(FAKE2['client_id'], sot.client_id)
        self.assertEqual(FAKE2['project_id'], sot.project_id)

    @mock.patch.object(uuid, 'uuid4')
    def test_create(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.put.return_value = resp
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'

        sot = queue.Queue(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.create(sess)

        url = 'queues/%s' % FAKE1['name']
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.put.assert_called_with(url, headers=headers, json=FAKE1)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp, has_body=False)
        self.assertEqual(sot, res)

    def test_create_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.put.return_value = resp

        sot = queue.Queue(**FAKE2)
        sot._translate_response = mock.Mock()
        res = sot.create(sess)

        url = 'queues/%s' % FAKE2['name']
        headers = {'Client-ID': 'OLD_CLIENT_ID',
                   'X-PROJECT-ID': 'OLD_PROJECT_ID'}
        sess.put.assert_called_with(url, headers=headers, json=FAKE1)
        sot._translate_response.assert_called_once_with(resp, has_body=False)
        self.assertEqual(sot, res)

    @mock.patch.object(uuid, 'uuid4')
    def test_get(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'

        sot = queue.Queue(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = 'queues/%s' % FAKE1['name']
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.get.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)

    def test_get_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp

        sot = queue.Queue(**FAKE2)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = 'queues/%s' % FAKE2['name']
        headers = {'Client-ID': 'OLD_CLIENT_ID',
                   'X-PROJECT-ID': 'OLD_PROJECT_ID'}
        sess.get.assert_called_with(url, headers=headers)
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)

    @mock.patch.object(uuid, 'uuid4')
    def test_delete(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'

        sot = queue.Queue(**FAKE1)
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = 'queues/%s' % FAKE1['name']
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.delete.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp, has_body=False)

    def test_delete_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp

        sot = queue.Queue(**FAKE2)
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = 'queues/%s' % FAKE2['name']
        headers = {'Client-ID': 'OLD_CLIENT_ID',
                   'X-PROJECT-ID': 'OLD_PROJECT_ID'}
        sess.delete.assert_called_with(url, headers=headers)
        sot._translate_response.assert_called_once_with(resp, has_body=False)

openstacksdk-0.11.3/openstack/tests/unit/message/v2/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/message/v2/test_message.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import testtools
import uuid

from openstack.message.v2 import message

FAKE1 = {
    'age': 456,
    'body': {
        'current_bytes': '0',
        'event': 'BackupProgress',
        'total_bytes': '99614720'
    },
    'id': '578ee000508f153f256f717d',
    'href': '/v2/queues/queue1/messages/578ee000508f153f256f717d',
    'ttl': 3600,
    'queue_name': 'queue1'
}

FAKE2 = {
    'age': 456,
    'body': {
        'current_bytes': '0',
        'event': 'BackupProgress',
        'total_bytes': '99614720'
    },
    'id': '578ee000508f153f256f717d',
    'href': '/v2/queues/queue1/messages/578ee000508f153f256f717d',
    'ttl': 3600,
    'queue_name': 'queue1',
    'client_id': 'OLD_CLIENT_ID',
    'project_id': 'OLD_PROJECT_ID'
}


class TestMessage(testtools.TestCase):

    def test_basic(self):
        sot = message.Message()
        self.assertEqual('messages', sot.resources_key)
        self.assertEqual('/queues/%(queue_name)s/messages', sot.base_path)
        self.assertEqual('messaging', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = message.Message.new(**FAKE2)
        self.assertEqual(FAKE2['age'], sot.age)
        self.assertEqual(FAKE2['body'], sot.body)
        self.assertEqual(FAKE2['id'], sot.id)
        self.assertEqual(FAKE2['href'], sot.href)
        self.assertEqual(FAKE2['ttl'], sot.ttl)
        self.assertEqual(FAKE2['queue_name'], sot.queue_name)
        self.assertEqual(FAKE2['client_id'], sot.client_id)
        self.assertEqual(FAKE2['project_id'], sot.project_id)

    @mock.patch.object(uuid, 'uuid4')
    def test_post(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp
        resources = [
            '/v2/queues/queue1/messages/578ee000508f153f256f717d',
            '/v2/queues/queue1/messages/579edd6c368cb61de9a7e233'
        ]
        resp.json.return_value = {'resources': resources}
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'
        messages = [
            {
                'body': {'key': 'value1'},
                'ttl': 3600
            },
            {
                'body': {'key': 'value2'},
                'ttl': 1800
            }
        ]

        sot = message.Message(**FAKE1)
        res = sot.post(sess, messages)

        url = '/queues/%(queue)s/messages' % {'queue': FAKE1['queue_name']}
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.post.assert_called_once_with(url, headers=headers,
                                          json={'messages': messages})
        sess.get_project_id.assert_called_once_with()
        resp.json.assert_called_once_with()
        self.assertEqual(resources, res)

    def test_post_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp
        resources = [
            '/v2/queues/queue1/messages/578ee000508f153f256f717d',
            '/v2/queues/queue1/messages/579edd6c368cb61de9a7e233'
        ]
        resp.json.return_value = {'resources': resources}
        messages = [
            {
                'body': {'key': 'value1'},
                'ttl': 3600
            },
            {
                'body': {'key': 'value2'},
                'ttl': 1800
            }
        ]

        sot = message.Message(**FAKE2)
        res = sot.post(sess, messages)

        url = '/queues/%(queue)s/messages' % {'queue': FAKE2['queue_name']}
        headers = {'Client-ID': 'OLD_CLIENT_ID',
                   'X-PROJECT-ID': 'OLD_PROJECT_ID'}
        sess.post.assert_called_once_with(url, headers=headers,
                                          json={'messages': messages})
        resp.json.assert_called_once_with()
        self.assertEqual(resources, res)

    @mock.patch.object(uuid, 'uuid4')
    def test_get(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'

        sot = message.Message(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = 'queues/%(queue)s/messages/%(message)s' % {
            'queue': FAKE1['queue_name'], 'message': FAKE1['id']}
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.get.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)

    def test_get_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp

        sot = message.Message(**FAKE2)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = 'queues/%(queue)s/messages/%(message)s' % {
            'queue': FAKE2['queue_name'], 'message': FAKE2['id']}
        headers = {'Client-ID': 'OLD_CLIENT_ID',
                   'X-PROJECT-ID': 'OLD_PROJECT_ID'}
        sess.get.assert_called_with(url, headers=headers)
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)

    @mock.patch.object(uuid, 'uuid4')
    def test_delete_unclaimed(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'

        sot = message.Message(**FAKE1)
        sot.claim_id = None
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = 'queues/%(queue)s/messages/%(message)s' % {
            'queue': FAKE1['queue_name'], 'message': FAKE1['id']}
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.delete.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp, has_body=False)

    @mock.patch.object(uuid, 'uuid4')
    def test_delete_claimed(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp
        sess.get_project_id.return_value = 'NEW_PROJECT_ID'
        mock_uuid.return_value = 'NEW_CLIENT_ID'

        sot = message.Message(**FAKE1)
        sot.claim_id = 'CLAIM_ID'
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = 'queues/%(queue)s/messages/%(message)s?claim_id=%(cid)s' % {
            'queue': FAKE1['queue_name'],
            'message': FAKE1['id'],
            'cid': 'CLAIM_ID'}
        headers = {'Client-ID': 'NEW_CLIENT_ID',
                   'X-PROJECT-ID': 'NEW_PROJECT_ID'}
        sess.delete.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp, has_body=False)

    def test_delete_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp

        sot = message.Message(**FAKE2)
        sot.claim_id = None
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = 'queues/%(queue)s/messages/%(message)s' % {
            'queue': FAKE2['queue_name'], 'message': FAKE2['id']}
        headers = {'Client-ID': 'OLD_CLIENT_ID',
                   'X-PROJECT-ID': 'OLD_PROJECT_ID'}
        sess.delete.assert_called_with(url, headers=headers)
        sot._translate_response.assert_called_once_with(resp, has_body=False)

openstacksdk-0.11.3/openstack/tests/unit/message/v2/test_subscription.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import mock
import testtools
import uuid

from openstack.message.v2 import subscription

FAKE1 = {
    "age": 1632,
    "id": "576b54963990b48c644bb7e7",
    "subscriber": "http://10.229.49.117:5679",
    "subscription_id": "576b54963990b48c644bb7e7",
    "source": "test",
    "ttl": 3600,
    "options": {
        "name": "test"
    },
    "queue_name": "queue1"
}

FAKE2 = {
    "age": 1632,
    "id": "576b54963990b48c644bb7e7",
    "subscriber": "http://10.229.49.117:5679",
    "subscription_id": "576b54963990b48c644bb7e7",
    "source": "test",
    "ttl": 3600,
    "options": {
        "name": "test"
    },
    "queue_name": "queue1",
    "client_id": "OLD_CLIENT_ID",
    "project_id": "OLD_PROJECT_ID"
}


class TestSubscription(testtools.TestCase):

    def test_basic(self):
        sot = subscription.Subscription()
        self.assertEqual("subscriptions", sot.resources_key)
        self.assertEqual("/queues/%(queue_name)s/subscriptions",
                         sot.base_path)
        self.assertEqual("messaging", sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = subscription.Subscription.new(**FAKE2)
        self.assertEqual(FAKE2["age"], sot.age)
        self.assertEqual(FAKE2["id"], sot.id)
        self.assertEqual(FAKE2["options"], sot.options)
        self.assertEqual(FAKE2["source"], sot.source)
        self.assertEqual(FAKE2["subscriber"], sot.subscriber)
        self.assertEqual(FAKE2["subscription_id"], sot.subscription_id)
        self.assertEqual(FAKE2["ttl"], sot.ttl)
        self.assertEqual(FAKE2["queue_name"], sot.queue_name)
        self.assertEqual(FAKE2["client_id"], sot.client_id)
        self.assertEqual(FAKE2["project_id"], sot.project_id)

    @mock.patch.object(uuid, "uuid4")
    def test_create(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp
        sess.get_project_id.return_value = "NEW_PROJECT_ID"
        mock_uuid.return_value = "NEW_CLIENT_ID"

        FAKE = copy.deepcopy(FAKE1)
        sot = subscription.Subscription(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.create(sess)

        url = "/queues/%(queue)s/subscriptions" % {
            "queue": FAKE.pop("queue_name")}
        headers = {"Client-ID": "NEW_CLIENT_ID",
                   "X-PROJECT-ID": "NEW_PROJECT_ID"}
        sess.post.assert_called_once_with(url, headers=headers, json=FAKE)
        sess.get_project_id.assert_called_once_with()
        self.assertEqual(sot, res)

    def test_create_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp

        FAKE = copy.deepcopy(FAKE2)
        sot = subscription.Subscription(**FAKE2)
        sot._translate_response = mock.Mock()
        res = sot.create(sess)

        url = "/queues/%(queue)s/subscriptions" % {
            "queue": FAKE.pop("queue_name")}
        headers = {"Client-ID": FAKE.pop("client_id"),
                   "X-PROJECT-ID": FAKE.pop("project_id")}
        sess.post.assert_called_once_with(url, headers=headers, json=FAKE)
        self.assertEqual(sot, res)

    @mock.patch.object(uuid, "uuid4")
    def test_get(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp
        sess.get_project_id.return_value = "NEW_PROJECT_ID"
        mock_uuid.return_value = "NEW_CLIENT_ID"

        sot = subscription.Subscription(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = "queues/%(queue)s/subscriptions/%(subscription)s" % {
            "queue": FAKE1["queue_name"], "subscription": FAKE1["id"]}
        headers = {"Client-ID": "NEW_CLIENT_ID",
                   "X-PROJECT-ID": "NEW_PROJECT_ID"}
        sess.get.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)

    def test_get_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp

        sot = subscription.Subscription(**FAKE2)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = "queues/%(queue)s/subscriptions/%(subscription)s" % {
            "queue": FAKE2["queue_name"], "subscription": FAKE2["id"]}
        headers = {"Client-ID": "OLD_CLIENT_ID",
                   "X-PROJECT-ID": "OLD_PROJECT_ID"}
        sess.get.assert_called_with(url, headers=headers)
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)
    @mock.patch.object(uuid, "uuid4")
    def test_delete(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp
        sess.get_project_id.return_value = "NEW_PROJECT_ID"
        mock_uuid.return_value = "NEW_CLIENT_ID"

        sot = subscription.Subscription(**FAKE1)
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = "queues/%(queue)s/subscriptions/%(subscription)s" % {
            "queue": FAKE1["queue_name"], "subscription": FAKE1["id"]}
        headers = {"Client-ID": "NEW_CLIENT_ID",
                   "X-PROJECT-ID": "NEW_PROJECT_ID"}
        sess.delete.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp, has_body=False)

    def test_delete_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.delete.return_value = resp

        sot = subscription.Subscription(**FAKE2)
        sot._translate_response = mock.Mock()
        sot.delete(sess)

        url = "queues/%(queue)s/subscriptions/%(subscription)s" % {
            "queue": FAKE2["queue_name"], "subscription": FAKE2["id"]}
        headers = {"Client-ID": "OLD_CLIENT_ID",
                   "X-PROJECT-ID": "OLD_PROJECT_ID"}
        sess.delete.assert_called_with(url, headers=headers)
        sot._translate_response.assert_called_once_with(resp, has_body=False)

openstacksdk-0.11.3/openstack/tests/unit/message/v2/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock

from openstack.message.v2 import _proxy
from openstack.message.v2 import claim
from openstack.message.v2 import message
from openstack.message.v2 import queue
from openstack.message.v2 import subscription
from openstack import proxy as proxy_base
from openstack.tests.unit import test_proxy_base

QUEUE_NAME = 'test_queue'


class TestMessageProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestMessageProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_queue_create(self):
        self.verify_create(self.proxy.create_queue, queue.Queue)

    def test_queue_get(self):
        self.verify_get(self.proxy.get_queue, queue.Queue)

    def test_queues(self):
        self.verify_list(self.proxy.queues, queue.Queue, paginated=True)

    def test_queue_delete(self):
        self.verify_delete(self.proxy.delete_queue, queue.Queue, False)

    def test_queue_delete_ignore(self):
        self.verify_delete(self.proxy.delete_queue, queue.Queue, True)

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_message_post(self, mock_get_resource):
        message_obj = message.Message(queue_name="test_queue")
        mock_get_resource.return_value = message_obj
        self._verify("openstack.message.v2.message.Message.post",
                     self.proxy.post_message,
                     method_args=["test_queue", ["msg1", "msg2"]],
                     expected_args=[["msg1", "msg2"]])
        mock_get_resource.assert_called_once_with(message.Message, None,
                                                  queue_name="test_queue")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_message_get(self, mock_get_resource):
        mock_get_resource.return_value = "resource_or_id"
        self._verify2("openstack.proxy.BaseProxy._get",
                      self.proxy.get_message,
                      method_args=["test_queue", "resource_or_id"],
                      expected_args=[message.Message, "resource_or_id"])
        mock_get_resource.assert_called_once_with(message.Message,
                                                  "resource_or_id",
                                                  queue_name="test_queue")

    def test_messages(self):
        self.verify_list(self.proxy.messages, message.Message,
                         paginated=True,
                         method_args=["test_queue"],
                         expected_kwargs={"queue_name": "test_queue"})
    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_message_delete(self, mock_get_resource):
        fake_message = mock.Mock()
        fake_message.id = "message_id"
        mock_get_resource.return_value = fake_message
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.delete_message,
                      method_args=["test_queue", "resource_or_id",
                                   None, False],
                      expected_args=[message.Message, fake_message],
                      expected_kwargs={"ignore_missing": False})
        self.assertIsNone(fake_message.claim_id)
        mock_get_resource.assert_called_once_with(message.Message,
                                                  "resource_or_id",
                                                  queue_name="test_queue")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_message_delete_claimed(self, mock_get_resource):
        fake_message = mock.Mock()
        fake_message.id = "message_id"
        mock_get_resource.return_value = fake_message
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.delete_message,
                      method_args=["test_queue", "resource_or_id",
                                   "claim_id", False],
                      expected_args=[message.Message, fake_message],
                      expected_kwargs={"ignore_missing": False})
        self.assertEqual("claim_id", fake_message.claim_id)
        mock_get_resource.assert_called_once_with(message.Message,
                                                  "resource_or_id",
                                                  queue_name="test_queue")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_message_delete_ignore(self, mock_get_resource):
        fake_message = mock.Mock()
        fake_message.id = "message_id"
        mock_get_resource.return_value = fake_message
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.delete_message,
                      method_args=["test_queue", "resource_or_id",
                                   None, True],
                      expected_args=[message.Message, fake_message],
                      expected_kwargs={"ignore_missing": True})
        self.assertIsNone(fake_message.claim_id)
        mock_get_resource.assert_called_once_with(message.Message,
                                                  "resource_or_id",
                                                  queue_name="test_queue")

    def test_subscription_create(self):
        self._verify("openstack.message.v2.subscription.Subscription.create",
                     self.proxy.create_subscription,
                     method_args=["test_queue"])

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_subscription_get(self, mock_get_resource):
        mock_get_resource.return_value = "resource_or_id"
        self._verify2("openstack.proxy.BaseProxy._get",
                      self.proxy.get_subscription,
                      method_args=["test_queue", "resource_or_id"],
                      expected_args=[subscription.Subscription,
                                     "resource_or_id"])
        mock_get_resource.assert_called_once_with(
            subscription.Subscription, "resource_or_id",
            queue_name="test_queue")

    def test_subscriptions(self):
        self.verify_list(self.proxy.subscriptions, subscription.Subscription,
                         paginated=True,
                         method_args=["test_queue"],
                         expected_kwargs={"queue_name": "test_queue"})

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_subscription_delete(self, mock_get_resource):
        mock_get_resource.return_value = "resource_or_id"
        self.verify_delete(self.proxy.delete_subscription,
                           subscription.Subscription, False,
                           ["test_queue", "resource_or_id"])
        mock_get_resource.assert_called_once_with(
            subscription.Subscription, "resource_or_id",
            queue_name="test_queue")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_subscription_delete_ignore(self, mock_get_resource):
        mock_get_resource.return_value = "resource_or_id"
        self.verify_delete(self.proxy.delete_subscription,
                           subscription.Subscription, True,
                           ["test_queue", "resource_or_id"])
        mock_get_resource.assert_called_once_with(
            subscription.Subscription, "resource_or_id",
            queue_name="test_queue")

    def test_claim_create(self):
        self._verify("openstack.message.v2.claim.Claim.create",
                     self.proxy.create_claim,
                     method_args=["test_queue"])

    def test_claim_get(self):
        self._verify2("openstack.proxy.BaseProxy._get",
                      self.proxy.get_claim,
                      method_args=["test_queue", "resource_or_id"],
                      expected_args=[claim.Claim, "resource_or_id"],
                      expected_kwargs={"queue_name": "test_queue"})

    def test_claim_update(self):
        self._verify2("openstack.proxy.BaseProxy._update",
                      self.proxy.update_claim,
                      method_args=["test_queue", "resource_or_id"],
                      method_kwargs={"k1": "v1"},
                      expected_args=[claim.Claim, "resource_or_id"],
                      expected_kwargs={"queue_name": "test_queue",
                                       "k1": "v1"})

    def test_claim_delete(self):
        self.verify_delete(self.proxy.delete_claim, claim.Claim, False,
                           ["test_queue", "resource_or_id"],
                           expected_kwargs={"queue_name": "test_queue"})

    def test_claim_delete_ignore(self):
        self.verify_delete(self.proxy.delete_claim, claim.Claim, True,
                           ["test_queue", "resource_or_id"],
                           expected_kwargs={"queue_name": "test_queue"})

openstacksdk-0.11.3/openstack/tests/unit/message/v2/test_claim.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import mock
import testtools
import uuid

from openstack.message.v2 import claim

FAKE1 = {
    "age": 1632,
    "id": "576b54963990b48c644bb7e7",
    "grace": 3600,
    "limit": 10,
    "messages": [{"id": "1"}, {"id": "2"}],
    "ttl": 3600,
    "queue_name": "queue1"
}

FAKE2 = {
    "age": 1632,
    "id": "576b54963990b48c644bb7e7",
    "grace": 3600,
    "limit": 10,
    "messages": [{"id": "1"}, {"id": "2"}],
    "ttl": 3600,
    "queue_name": "queue1",
    "client_id": "OLD_CLIENT_ID",
    "project_id": "OLD_PROJECT_ID"
}


class TestClaim(testtools.TestCase):

    def test_basic(self):
        sot = claim.Claim()
        self.assertEqual("claims", sot.resources_key)
        self.assertEqual("/queues/%(queue_name)s/claims", sot.base_path)
        self.assertEqual("messaging", sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_update)

    def test_make_it(self):
        sot = claim.Claim.new(**FAKE2)
        self.assertEqual(FAKE2["age"], sot.age)
        self.assertEqual(FAKE2["id"], sot.id)
        self.assertEqual(FAKE2["grace"], sot.grace)
        self.assertEqual(FAKE2["limit"], sot.limit)
        self.assertEqual(FAKE2["messages"], sot.messages)
        self.assertEqual(FAKE2["ttl"], sot.ttl)
        self.assertEqual(FAKE2["queue_name"], sot.queue_name)
        self.assertEqual(FAKE2["client_id"], sot.client_id)
        self.assertEqual(FAKE2["project_id"], sot.project_id)

    @mock.patch.object(uuid, "uuid4")
    def test_create_204_resp(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp
        resp.status_code = 204
        sess.get_project_id.return_value = "NEW_PROJECT_ID"
        mock_uuid.return_value = "NEW_CLIENT_ID"

        FAKE = copy.deepcopy(FAKE1)
        sot = claim.Claim(**FAKE1)
        res = sot.create(sess)

        url = "/queues/%(queue)s/claims" % {"queue": FAKE.pop("queue_name")}
        headers = {"Client-ID": "NEW_CLIENT_ID",
                   "X-PROJECT-ID": "NEW_PROJECT_ID"}
        sess.post.assert_called_once_with(url, headers=headers, json=FAKE)
        sess.get_project_id.assert_called_once_with()
        self.assertEqual(sot, res)

    @mock.patch.object(uuid, "uuid4")
    def test_create_non_204_resp(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp
        resp.status_code = 200
        sess.get_project_id.return_value = "NEW_PROJECT_ID"
        mock_uuid.return_value = "NEW_CLIENT_ID"

        FAKE = copy.deepcopy(FAKE1)
        sot = claim.Claim(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.create(sess)

        url = "/queues/%(queue)s/claims" % {"queue": FAKE.pop("queue_name")}
        headers = {"Client-ID": "NEW_CLIENT_ID",
                   "X-PROJECT-ID": "NEW_PROJECT_ID"}
        sess.post.assert_called_once_with(url, headers=headers, json=FAKE)
        sess.get_project_id.assert_called_once_with()
        self.assertEqual(sot, res)
        sot._translate_response.assert_called_once_with(resp)

    def test_create_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.post.return_value = resp
        resp.status_code = 200

        FAKE = copy.deepcopy(FAKE2)
        sot = claim.Claim(**FAKE2)
        sot._translate_response = mock.Mock()
        res = sot.create(sess)

        url = "/queues/%(queue)s/claims" % {"queue": FAKE.pop("queue_name")}
        headers = {"Client-ID": FAKE.pop("client_id"),
                   "X-PROJECT-ID": FAKE.pop("project_id")}
        sess.post.assert_called_once_with(url, headers=headers, json=FAKE)
        self.assertEqual(sot, res)

    @mock.patch.object(uuid, "uuid4")
    def test_get(self, mock_uuid):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp
        sess.get_project_id.return_value = "NEW_PROJECT_ID"
        mock_uuid.return_value = "NEW_CLIENT_ID"

        sot = claim.Claim(**FAKE1)
        sot._translate_response = mock.Mock()
        res = sot.get(sess)

        url = "queues/%(queue)s/claims/%(claim)s" % {
            "queue": FAKE1["queue_name"], "claim": FAKE1["id"]}
        headers = {"Client-ID": "NEW_CLIENT_ID",
                   "X-PROJECT-ID": "NEW_PROJECT_ID"}
        sess.get.assert_called_with(url, headers=headers)
        sess.get_project_id.assert_called_once_with()
        sot._translate_response.assert_called_once_with(resp)
        self.assertEqual(sot, res)

    def test_get_client_id_project_id_exist(self):
        sess = mock.Mock()
        resp = mock.Mock()
        sess.get.return_value = resp

        sot = claim.Claim(**FAKE2)
sot._translate_response = mock.Mock() res = sot.get(sess) url = "queues/%(queue)s/claims/%(claim)s" % { "queue": FAKE2["queue_name"], "claim": FAKE2["id"]} headers = {"Client-ID": "OLD_CLIENT_ID", "X-PROJECT-ID": "OLD_PROJECT_ID"} sess.get.assert_called_with(url, headers=headers) sot._translate_response.assert_called_once_with(resp) self.assertEqual(sot, res) @mock.patch.object(uuid, "uuid4") def test_update(self, mock_uuid): sess = mock.Mock() resp = mock.Mock() sess.update.return_value = resp sess.get_project_id.return_value = "NEW_PROJECT_ID" mock_uuid.return_value = "NEW_CLIENT_ID" FAKE = copy.deepcopy(FAKE1) sot = claim.Claim(**FAKE1) res = sot.update(sess) url = "queues/%(queue)s/claims/%(claim)s" % { "queue": FAKE.pop("queue_name"), "claim": FAKE["id"]} headers = {"Client-ID": "NEW_CLIENT_ID", "X-PROJECT-ID": "NEW_PROJECT_ID"} sess.patch.assert_called_with(url, headers=headers, json=FAKE) sess.get_project_id.assert_called_once_with() self.assertEqual(sot, res) def test_update_client_id_project_id_exist(self): sess = mock.Mock() resp = mock.Mock() sess.get.return_value = resp FAKE = copy.deepcopy(FAKE2) sot = claim.Claim(**FAKE2) res = sot.update(sess) url = "queues/%(queue)s/claims/%(claim)s" % { "queue": FAKE.pop("queue_name"), "claim": FAKE["id"]} headers = {"Client-ID": FAKE.pop("client_id"), "X-PROJECT-ID": FAKE.pop("project_id")} sess.patch.assert_called_with(url, headers=headers, json=FAKE) self.assertEqual(sot, res) @mock.patch.object(uuid, "uuid4") def test_delete(self, mock_uuid): sess = mock.Mock() resp = mock.Mock() sess.delete.return_value = resp sess.get_project_id.return_value = "NEW_PROJECT_ID" mock_uuid.return_value = "NEW_CLIENT_ID" sot = claim.Claim(**FAKE1) sot._translate_response = mock.Mock() sot.delete(sess) url = "queues/%(queue)s/claims/%(claim)s" % { "queue": FAKE1["queue_name"], "claim": FAKE1["id"]} headers = {"Client-ID": "NEW_CLIENT_ID", "X-PROJECT-ID": "NEW_PROJECT_ID"} sess.delete.assert_called_with(url, headers=headers) 
sess.get_project_id.assert_called_once_with() sot._translate_response.assert_called_once_with(resp, has_body=False) def test_delete_client_id_project_id_exist(self): sess = mock.Mock() resp = mock.Mock() sess.delete.return_value = resp sot = claim.Claim(**FAKE2) sot._translate_response = mock.Mock() sot.delete(sess) url = "queues/%(queue)s/claims/%(claim)s" % { "queue": FAKE2["queue_name"], "claim": FAKE2["id"]} headers = {"Client-ID": "OLD_CLIENT_ID", "X-PROJECT-ID": "OLD_PROJECT_ID"} sess.delete.assert_called_with(url, headers=headers) sot._translate_response.assert_called_once_with(resp, has_body=False) openstacksdk-0.11.3/openstack/tests/unit/message/__init__.py0000666000175100017510000000000013236151340024144 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/message/test_message_service.py0000666000175100017510000000210713236151340026622 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.message import message_service


class TestMessageService(testtools.TestCase):

    def test_service(self):
        sot = message_service.MessageService()
        self.assertEqual('messaging', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v2', sot.valid_versions[0].module)
        self.assertEqual('v2', sot.valid_versions[0].path)
openstacksdk-0.11.3/openstack/tests/unit/object_store/0000775000175100017510000000000013236151501023100 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/object_store/v1/0000775000175100017510000000000013236151501023426 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/object_store/v1/test_obj.py0000666000175100017510000001413313236151340025616 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.object_store.v1 import obj
from openstack.tests.unit.cloud import test_object as base_test_object

# Object can receive both last-modified in headers and last_modified in
# the body. However, originally, only last-modified was handled as an
# expected prop but it was named last_modified. Under Python 3, creating
# an Object with the body value last_modified causes the _attrs dictionary
# size to change while iterating over its values as we have an attribute
# called `last_modified` and we attempt to grow an additional attribute
# called `last-modified`, which is the "name" of `last_modified`.
# The same is true of content_type and content-type, or any prop
# attribute which would follow the same pattern.
# This example should represent the body values returned by a GET, so the keys
# must be underscores.


class TestObject(base_test_object.BaseTestObject):

    def setUp(self):
        super(TestObject, self).setUp()
        self.the_data = b'test body'
        self.the_data_length = len(self.the_data)
        # TODO(mordred) Make the_data be from getUniqueString and then
        # have hash and etag be actual md5 sums of that string
        self.body = {
            "hash": "243f87b91224d85722564a80fd3cb1f1",
            "last_modified": "2014-07-13T18:41:03.319240",
            "bytes": self.the_data_length,
            "name": self.object,
            "content_type": "application/octet-stream"
        }
        self.headers = {
            'Content-Length': str(len(self.the_data)),
            'Content-Type': 'application/octet-stream',
            'Accept-Ranges': 'bytes',
            'Last-Modified': 'Thu, 15 Dec 2016 13:34:14 GMT',
            'Etag': '"b5c454b44fbd5344793e3fb7e3850768"',
            'X-Timestamp': '1481808853.65009',
            'X-Trans-Id': 'tx68c2a2278f0c469bb6de1-005857ed80dfw1',
            'Date': 'Mon, 19 Dec 2016 14:24:00 GMT',
            'X-Static-Large-Object': 'True',
            'X-Object-Meta-Mtime': '1481513709.168512',
            'X-Delete-At': '1453416226.16744',
        }

    def test_basic(self):
        sot = obj.Object.new(**self.body)
        self.assert_no_calls()
        self.assertIsNone(sot.resources_key)
        self.assertEqual('name', sot._alternate_id())
        self.assertEqual('/%(container)s', sot.base_path)
        self.assertEqual('object-store', sot.service.service_type)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertTrue(sot.allow_head)

    def test_new(self):
        sot = obj.Object.new(container=self.container, name=self.object)
        self.assert_no_calls()
        self.assertEqual(self.object, sot.name)
        self.assertEqual(self.container, sot.container)

    def test_from_body(self):
        sot = obj.Object.existing(container=self.container, **self.body)
        self.assert_no_calls()
        # Attributes from header
        self.assertEqual(self.container, sot.container)
        self.assertEqual(int(self.body['bytes']), sot.content_length)
        self.assertEqual(self.body['last_modified'], sot.last_modified_at)
        self.assertEqual(self.body['hash'], sot.etag)
        self.assertEqual(self.body['content_type'], sot.content_type)

    def test_from_headers(self):
        sot = obj.Object.existing(container=self.container, **self.headers)
        self.assert_no_calls()
        # Attributes from header
        self.assertEqual(self.container, sot.container)
        self.assertEqual(
            int(self.headers['Content-Length']), sot.content_length)
        self.assertEqual(self.headers['Accept-Ranges'], sot.accept_ranges)
        self.assertEqual(self.headers['Last-Modified'], sot.last_modified_at)
        self.assertEqual(self.headers['Etag'], sot.etag)
        self.assertEqual(self.headers['X-Timestamp'], sot.timestamp)
        self.assertEqual(self.headers['Content-Type'], sot.content_type)
        self.assertEqual(self.headers['X-Delete-At'], sot.delete_at)

    def test_download(self):
        headers = {
            'X-Newest': 'True',
            'If-Match': self.headers['Etag'],
            'Accept': 'bytes'
        }
        self.register_uris([
            dict(method='GET', uri=self.object_endpoint,
                 headers=self.headers,
                 content=self.the_data,
                 validate=dict(
                     headers=headers
                 ))
        ])
        sot = obj.Object.new(container=self.container, name=self.object)
        sot.is_newest = True
        # if_match is a list type, but we're passing a string. This tests
        # the up-conversion works properly.
        sot.if_match = self.headers['Etag']

        rv = sot.download(self.conn.object_store)

        self.assertEqual(self.the_data, rv)
        self.assert_calls()

    def _test_create(self, method, data):
        sot = obj.Object.new(container=self.container, name=self.object,
                             data=data)
        sot.is_newest = True
        sent_headers = {"x-newest": 'True', "Accept": ""}
        self.register_uris([
            dict(method=method, uri=self.object_endpoint,
                 headers=self.headers,
                 validate=dict(
                     headers=sent_headers))
        ])

        rv = sot.create(self.conn.object_store)
        self.assertEqual(rv.etag, self.headers['Etag'])

        self.assert_calls()

    def test_create_data(self):
        self._test_create('PUT', self.the_data)

    def test_create_no_data(self):
        self._test_create('PUT', None)
openstacksdk-0.11.3/openstack/tests/unit/object_store/v1/test_account.py0000666000175100017510000000404513236151340026501 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.object_store.v1 import account


CONTAINER_NAME = "mycontainer"

ACCOUNT_EXAMPLE = {
    'content-length': '0',
    'accept-ranges': 'bytes',
    'date': 'Sat, 05 Jul 2014 19:17:40 GMT',
    'x-account-bytes-used': '12345',
    'x-account-container-count': '678',
    'content-type': 'text/plain; charset=utf-8',
    'x-account-object-count': '98765',
    'x-timestamp': '1453413555.88937'
}


class TestAccount(testtools.TestCase):

    def test_basic(self):
        sot = account.Account(**ACCOUNT_EXAMPLE)
        self.assertIsNone(sot.resources_key)
        self.assertIsNone(sot.id)
        self.assertEqual('/', sot.base_path)
        self.assertEqual('object-store', sot.service.service_type)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_head)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)
        self.assertFalse(sot.allow_create)

    def test_make_it(self):
        sot = account.Account(**ACCOUNT_EXAMPLE)
        self.assertIsNone(sot.id)
        self.assertEqual(int(ACCOUNT_EXAMPLE['x-account-bytes-used']),
                         sot.account_bytes_used)
        self.assertEqual(int(ACCOUNT_EXAMPLE['x-account-container-count']),
                         sot.account_container_count)
        self.assertEqual(int(ACCOUNT_EXAMPLE['x-account-object-count']),
                         sot.account_object_count)
        self.assertEqual(ACCOUNT_EXAMPLE['x-timestamp'], sot.timestamp)
openstacksdk-0.11.3/openstack/tests/unit/object_store/v1/test_container.py0000666000175100017510000001464113236151340027032 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.object_store.v1 import container
from openstack.tests.unit import base


class TestContainer(base.RequestsMockTestCase):

    def setUp(self):
        super(TestContainer, self).setUp()
        self.container = self.getUniqueString()
        self.endpoint = self.conn.object_store.get_endpoint() + '/'
        self.container_endpoint = '{endpoint}{container}'.format(
            endpoint=self.endpoint, container=self.container)

        self.body = {
            "count": 2,
            "bytes": 630666,
            "name": self.container,
        }
        self.headers = {
            'x-container-object-count': '2',
            'x-container-read': 'read-settings',
            'x-container-write': 'write-settings',
            'x-container-sync-to': 'sync-to',
            'x-container-sync-key': 'sync-key',
            'x-container-bytes-used': '630666',
            'x-versions-location': 'versions-location',
            'content-type': 'application/json; charset=utf-8',
            'x-timestamp': '1453414055.48672'
        }
        self.body_plus_headers = dict(self.body, **self.headers)

    def test_basic(self):
        sot = container.Container.new(**self.body)
        self.assertIsNone(sot.resources_key)
        self.assertEqual('name', sot._alternate_id())
        self.assertEqual('/', sot.base_path)
        self.assertEqual('object-store', sot.service.service_type)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertTrue(sot.allow_head)
        self.assert_no_calls()

    def test_make_it(self):
        sot = container.Container.new(**self.body)
        self.assertEqual(self.body['name'], sot.id)
        self.assertEqual(self.body['name'], sot.name)
        self.assertEqual(self.body['count'], sot.count)
        self.assertEqual(self.body['count'], sot.object_count)
        self.assertEqual(self.body['bytes'], sot.bytes)
        self.assertEqual(self.body['bytes'], sot.bytes_used)
        self.assert_no_calls()

    def test_create_and_head(self):
        sot = container.Container(**self.body_plus_headers)

        # Attributes from create
        self.assertEqual(self.body_plus_headers['name'], sot.id)
        self.assertEqual(self.body_plus_headers['name'], sot.name)
        self.assertEqual(self.body_plus_headers['count'], sot.count)
        self.assertEqual(self.body_plus_headers['bytes'], sot.bytes)

        # Attributes from header
        self.assertEqual(
            int(self.body_plus_headers['x-container-object-count']),
            sot.object_count)
        self.assertEqual(
            int(self.body_plus_headers['x-container-bytes-used']),
            sot.bytes_used)
        self.assertEqual(
            self.body_plus_headers['x-container-read'], sot.read_ACL)
        self.assertEqual(
            self.body_plus_headers['x-container-write'], sot.write_ACL)
        self.assertEqual(
            self.body_plus_headers['x-container-sync-to'], sot.sync_to)
        self.assertEqual(
            self.body_plus_headers['x-container-sync-key'], sot.sync_key)
        self.assertEqual(
            self.body_plus_headers['x-versions-location'],
            sot.versions_location)
        self.assertEqual(self.body_plus_headers['x-timestamp'], sot.timestamp)

    def test_list(self):
        containers = [
            {
                "count": 999,
                "bytes": 12345,
                "name": "container1"
            },
            {
                "count": 888,
                "bytes": 54321,
                "name": "container2"
            }
        ]
        self.register_uris([
            dict(method='GET', uri=self.endpoint, json=containers)
        ])

        response = container.Container.list(self.conn.object_store)

        self.assertEqual(len(containers), len(list(response)))
        for index, item in enumerate(response):
            self.assertEqual(container.Container, type(item))
            self.assertEqual(containers[index]["name"], item.name)
            self.assertEqual(containers[index]["count"], item.count)
            self.assertEqual(containers[index]["bytes"], item.bytes)

        self.assert_calls()

    def _test_create_update(self, sot, sot_call, sess_method):
        sot.read_ACL = "some ACL"
        sot.write_ACL = "another ACL"
        sot.is_content_type_detected = True
        headers = {
            "x-container-read": "some ACL",
            "x-container-write": "another ACL",
            "x-detect-content-type": 'True',
        }
        self.register_uris([
            dict(method=sess_method, uri=self.container_endpoint,
                 json=self.body,
                 validate=dict(headers=headers)),
        ])

        sot_call(self.conn.object_store)

        self.assert_calls()

    def test_create(self):
        sot = container.Container.new(name=self.container)
        self._test_create_update(sot, sot.create, 'PUT')

    def test_update(self):
        sot = container.Container.new(name=self.container)
        self._test_create_update(sot, sot.update, 'POST')

    def _test_no_headers(self, sot, sot_call, sess_method):
        headers = {}
        data = {}
        self.register_uris([
            dict(method=sess_method, uri=self.container_endpoint,
                 json=self.body,
                 validate=dict(
                     headers=headers,
                     json=data))
        ])

        sot_call(self.conn.object_store)

    def test_create_no_headers(self):
        sot = container.Container.new(name=self.container)
        self._test_no_headers(sot, sot.create, 'PUT')
        self.assert_calls()

    def test_update_no_headers(self):
        sot = container.Container.new(name=self.container)
        self._test_no_headers(sot, sot.update, 'POST')
        self.assert_no_calls()
openstacksdk-0.11.3/openstack/tests/unit/object_store/v1/__init__.py0000666000175100017510000000000013236151340025530 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/object_store/v1/test_proxy.py0000666000175100017510000002744113236151340026233 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six

from openstack.object_store.v1 import _proxy
from openstack.object_store.v1 import account
from openstack.object_store.v1 import container
from openstack.object_store.v1 import obj
from openstack.tests.unit.cloud import test_object as base_test_object
from openstack.tests.unit import test_proxy_base2


class TestObjectStoreProxy(test_proxy_base2.TestProxyBase):

    kwargs_to_path_args = False

    def setUp(self):
        super(TestObjectStoreProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_account_metadata_get(self):
        self.verify_head(self.proxy.get_account_metadata, account.Account)

    def test_container_metadata_get(self):
        self.verify_head(self.proxy.get_container_metadata,
                         container.Container, value="container")

    def test_container_delete(self):
        self.verify_delete(self.proxy.delete_container,
                           container.Container, False)

    def test_container_delete_ignore(self):
        self.verify_delete(self.proxy.delete_container,
                           container.Container, True)

    def test_container_create_attrs(self):
        self.verify_create(
            self.proxy.create_container,
            container.Container,
            method_args=['container_name'],
            expected_kwargs={'name': 'container_name', "x": 1, "y": 2, "z": 3})

    def test_object_metadata_get(self):
        self.verify_head(self.proxy.get_object_metadata, obj.Object,
                         value="object", container="container")

    def _test_object_delete(self, ignore):
        expected_kwargs = {
            "ignore_missing": ignore,
            "container": "name",
        }

        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.delete_object,
                      method_args=["resource"],
                      method_kwargs=expected_kwargs,
                      expected_args=[obj.Object, "resource"],
                      expected_kwargs=expected_kwargs)

    def test_object_delete(self):
        self._test_object_delete(False)

    def test_object_delete_ignore(self):
        self._test_object_delete(True)

    def test_object_create_attrs(self):
        kwargs = {"name": "test", "data": "data", "container": "name"}

        self._verify2("openstack.proxy.BaseProxy._create",
                      self.proxy.upload_object,
                      method_kwargs=kwargs,
                      expected_args=[obj.Object],
                      expected_kwargs=kwargs)

    def test_object_create_no_container(self):
        self.assertRaises(TypeError, self.proxy.upload_object)

    def test_object_get(self):
        kwargs = dict(container="container")
        self.verify_get(
            self.proxy.get_object, obj.Object,
            value=["object"], method_kwargs=kwargs,
            expected_kwargs=kwargs)


class Test_containers(TestObjectStoreProxy):

    def setUp(self):
        super(Test_containers, self).setUp()
        self.proxy = _proxy.Proxy(self.session)
        self.containers_body = []
        for i in range(3):
            self.containers_body.append(
                {six.text_type("name"): six.text_type("container%d" % i)})

#     @httpretty.activate
#     def test_all_containers(self):
#         self.stub_url(httpretty.GET,
#                       path=[container.Container.base_path],
#                       responses=[httpretty.Response(
#                           body=json.dumps(self.containers_body),
#                           status=200, content_type="application/json"),
#                           httpretty.Response(body=json.dumps([]),
#                           status=200, content_type="application/json")])
#
#         count = 0
#         for actual, expected in zip(self.proxy.containers(),
#                                     self.containers_body):
#             self.assertEqual(expected, actual)
#             count += 1
#         self.assertEqual(len(self.containers_body), count)

#     @httpretty.activate
#     def test_containers_limited(self):
#         limit = len(self.containers_body) + 1
#         limit_param = "?limit=%d" % limit
#
#         self.stub_url(httpretty.GET,
#                       path=[container.Container.base_path + limit_param],
#                       json=self.containers_body)
#
#         count = 0
#         for actual, expected in zip(self.proxy.containers(limit=limit),
#                                     self.containers_body):
#             self.assertEqual(actual, expected)
#             count += 1
#
#         self.assertEqual(len(self.containers_body), count)
#         # Since we've chosen a limit larger than the body, only one request
#         # should be made, so it should be the last one.
#         self.assertIn(limit_param, httpretty.last_request().path)

#     @httpretty.activate
#     def test_containers_with_marker(self):
#         marker = six.text_type("container2")
#         marker_param = "marker=%s" % marker
#
#         self.stub_url(httpretty.GET,
#                       path=[container.Container.base_path + "?" +
#                             marker_param],
#                       json=self.containers_body)
#
#         count = 0
#         for actual, expected in zip(self.proxy.containers(marker=marker),
#                                     self.containers_body):
#             # Make sure the marker made it into the actual request.
#             self.assertIn(marker_param, httpretty.last_request().path)
#             self.assertEqual(expected, actual)
#             count += 1
#
#         self.assertEqual(len(self.containers_body), count)
#
#         # Since we have to make one request beyond the end, because no
#         # limit was provided, make sure the last container appears as
#         # the marker in this last request.
#         self.assertIn(self.containers_body[-1]["name"],
#                       httpretty.last_request().path)


class Test_objects(TestObjectStoreProxy):

    def setUp(self):
        super(Test_objects, self).setUp()
        self.proxy = _proxy.Proxy(self.session)
        self.container_name = six.text_type("my_container")
        self.objects_body = []
        for i in range(3):
            self.objects_body.append(
                {six.text_type("name"): six.text_type("object%d" % i)})

        # Returned object bodies have their container inserted.
        self.returned_objects = []
        for ob in self.objects_body:
            ob[six.text_type("container")] = self.container_name
            self.returned_objects.append(ob)
        self.assertEqual(len(self.objects_body), len(self.returned_objects))

#     @httpretty.activate
#     def test_all_objects(self):
#         self.stub_url(httpretty.GET,
#                       path=[obj.Object.base_path %
#                             {"container": self.container_name}],
#                       responses=[httpretty.Response(
#                           body=json.dumps(self.objects_body),
#                           status=200, content_type="application/json"),
#                           httpretty.Response(body=json.dumps([]),
#                           status=200, content_type="application/json")])
#
#         count = 0
#         for actual, expected in zip(self.proxy.objects(self.container_name),
#                                     self.returned_objects):
#             self.assertEqual(expected, actual)
#             count += 1
#         self.assertEqual(len(self.returned_objects), count)

#     @httpretty.activate
#     def test_objects_limited(self):
#         limit = len(self.objects_body) + 1
#         limit_param = "?limit=%d" % limit
#
#         self.stub_url(httpretty.GET,
#                       path=[obj.Object.base_path %
#                             {"container": self.container_name} + limit_param],
#                       json=self.objects_body)
#
#         count = 0
#         for actual, expected in zip(self.proxy.objects(self.container_name,
#                                                        limit=limit),
#                                     self.returned_objects):
#             self.assertEqual(expected, actual)
#             count += 1
#
#         self.assertEqual(len(self.returned_objects), count)
#         # Since we've chosen a limit larger than the body, only one request
#         # should be made, so it should be the last one.
#         self.assertIn(limit_param, httpretty.last_request().path)

#     @httpretty.activate
#     def test_objects_with_marker(self):
#         marker = six.text_type("object2")
#
#         marker_param = "marker=%s" % marker
#
#         self.stub_url(httpretty.GET,
#                       path=[obj.Object.base_path %
#                             {"container": self.container_name} + "?" +
#                             marker_param],
#                       json=self.objects_body)
#
#         count = 0
#         for actual, expected in zip(self.proxy.objects(self.container_name,
#                                                        marker=marker),
#                                     self.returned_objects):
#             # Make sure the marker made it into the actual request.
#             self.assertIn(marker_param, httpretty.last_request().path)
#             self.assertEqual(expected, actual)
#             count += 1
#
#         self.assertEqual(len(self.returned_objects), count)
#
#         # Since we have to make one request beyond the end, because no
#         # limit was provided, make sure the last container appears as
#         # the marker in this last request.
#         self.assertIn(self.returned_objects[-1]["name"],
#                       httpretty.last_request().path)


class Test_download_object(base_test_object.BaseTestObject):

    def setUp(self):
        super(Test_download_object, self).setUp()
        self.the_data = b'test body'
        self.register_uris([
            dict(method='GET', uri=self.object_endpoint,
                 headers={
                     'Content-Length': str(len(self.the_data)),
                     'Content-Type': 'application/octet-stream',
                     'Accept-Ranges': 'bytes',
                     'Last-Modified': 'Thu, 15 Dec 2016 13:34:14 GMT',
                     'Etag': '"b5c454b44fbd5344793e3fb7e3850768"',
                     'X-Timestamp': '1481808853.65009',
                     'X-Trans-Id': 'tx68c2a2278f0c469bb6de1-005857ed80dfw1',
                     'Date': 'Mon, 19 Dec 2016 14:24:00 GMT',
                     'X-Static-Large-Object': 'True',
                     'X-Object-Meta-Mtime': '1481513709.168512',
                 },
                 content=self.the_data)])

    def test_download(self):
        data = self.conn.object_store.download_object(
            self.object, container=self.container)
        self.assertEqual(data, self.the_data)
        self.assert_calls()

    def test_stream(self):
        chunk_size = 2
        for index, chunk in enumerate(self.conn.object_store.stream_object(
                self.object, container=self.container,
                chunk_size=chunk_size)):
            chunk_len = len(chunk)
            start = index * chunk_size
            end = start + chunk_len
            self.assertLessEqual(chunk_len, chunk_size)
            self.assertEqual(chunk, self.the_data[start:end])
        self.assert_calls()


class Test_copy_object(TestObjectStoreProxy):

    def test_copy_object(self):
        self.assertRaises(NotImplementedError, self.proxy.copy_object)
openstacksdk-0.11.3/openstack/tests/unit/object_store/test_object_store_service.py0000666000175100017510000000214113236151340030714 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.object_store import object_store_service


class TestObjectStoreService(testtools.TestCase):

    def test_service(self):
        sot = object_store_service.ObjectStoreService()
        self.assertEqual('object-store', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v1', sot.valid_versions[0].module)
        self.assertEqual('v1', sot.valid_versions[0].path)
openstacksdk-0.11.3/openstack/tests/unit/object_store/__init__.py0000666000175100017510000000000013236151340025202 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/image/0000775000175100017510000000000013236151501021500 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/image/v2/0000775000175100017510000000000013236151501022027 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/image/v2/test_member.py0000666000175100017510000000333113236151340024712 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.image.v2 import member

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'created_at': '2015-03-09T12:14:57.233772',
    'image_id': '2',
    'member_id': IDENTIFIER,
    'status': '4',
    'updated_at': '2015-03-09T12:15:57.233772',
}


class TestMember(testtools.TestCase):

    def test_basic(self):
        sot = member.Member()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('members', sot.resources_key)
        self.assertEqual('/images/%(image_id)s/members', sot.base_path)
        self.assertEqual('image', sot.service.service_type)
        self.assertEqual('member', sot._alternate_id())
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = member.Member(**EXAMPLE)
        self.assertEqual(IDENTIFIER, sot.id)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['image_id'], sot.image_id)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)
openstacksdk-0.11.3/openstack/tests/unit/image/v2/__init__.py0000666000175100017510000000000013236151340024131 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/image/v2/test_image.py0000666000175100017510000003053613236151340024534 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json import operator from keystoneauth1 import adapter import mock import requests import testtools from openstack import exceptions from openstack.image.v2 import image IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'id': IDENTIFIER, 'checksum': '1', 'container_format': '2', 'created_at': '2015-03-09T12:14:57.233772', 'data': 'This is not an image', 'disk_format': '4', 'min_disk': 5, 'name': '6', 'owner': '7', 'properties': {'a': 'z', 'b': 'y', }, 'protected': False, 'status': '8', 'tags': ['g', 'h', 'i'], 'updated_at': '2015-03-09T12:15:57.233772', 'virtual_size': '10', 'visibility': '11', 'location': '12', 'size': 13, 'store': '14', 'file': '15', 'locations': ['15', '16'], 'direct_url': '17', 'path': '18', 'value': '19', 'url': '20', 'metadata': {'21': '22'}, 'architecture': '23', 'hypervisor-type': '24', 'instance_type_rxtx_factor': 25.1, 'instance_uuid': '26', 'img_config_drive': '27', 'kernel_id': '28', 'os_distro': '29', 'os_version': '30', 'os_secure_boot': '31', 'ramdisk_id': '32', 'vm_mode': '33', 'hw_cpu_sockets': 34, 'hw_cpu_cores': 35, 'hw_cpu_threads': 36, 'hw_disk_bus': '37', 'hw_rng_model': '38', 'hw_machine_type': '39', 'hw_scsi_model': '40', 'hw_serial_port_count': 41, 'hw_video_model': '42', 'hw_video_ram': 43, 'hw_watchdog_action': '44', 'os_command_line': '45', 'hw_vif_model': '46', 'hw_vif_multiqueue_enabled': True, 'hw_boot_menu': True, 'vmware_adaptertype': '47', 'vmware_ostype': '48', 'auto_disk_config': True, 'os_type': '49', } class FakeResponse(object): def __init__(self, response, status_code=200, headers=None): self.body = response self.content = response self.status_code = status_code headers = headers if headers else {'content-type': 'application/json'} self.headers = requests.structures.CaseInsensitiveDict(headers) def json(self): return self.body class TestImage(testtools.TestCase): def setUp(self): super(TestImage, self).setUp() self.resp = mock.Mock() self.resp.body = None self.resp.json = mock.Mock(return_value=self.resp.body) 
        self.sess = mock.Mock(spec=adapter.Adapter)
        self.sess.post = mock.Mock(return_value=self.resp)

    def test_basic(self):
        sot = image.Image()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('images', sot.resources_key)
        self.assertEqual('/images', sot.base_path)
        self.assertEqual('image', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = image.Image(**EXAMPLE)
        self.assertEqual(IDENTIFIER, sot.id)
        self.assertEqual(EXAMPLE['checksum'], sot.checksum)
        self.assertEqual(EXAMPLE['container_format'], sot.container_format)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['disk_format'], sot.disk_format)
        self.assertEqual(EXAMPLE['min_disk'], sot.min_disk)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['owner'], sot.owner_id)
        self.assertEqual(EXAMPLE['properties'], sot.properties)
        self.assertFalse(sot.is_protected)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['tags'], sot.tags)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)
        self.assertEqual(EXAMPLE['virtual_size'], sot.virtual_size)
        self.assertEqual(EXAMPLE['visibility'], sot.visibility)
        self.assertEqual(EXAMPLE['size'], sot.size)
        self.assertEqual(EXAMPLE['store'], sot.store)
        self.assertEqual(EXAMPLE['file'], sot.file)
        self.assertEqual(EXAMPLE['locations'], sot.locations)
        self.assertEqual(EXAMPLE['direct_url'], sot.direct_url)
        self.assertEqual(EXAMPLE['path'], sot.path)
        self.assertEqual(EXAMPLE['value'], sot.value)
        self.assertEqual(EXAMPLE['url'], sot.url)
        self.assertEqual(EXAMPLE['metadata'], sot.metadata)
        self.assertEqual(EXAMPLE['architecture'], sot.architecture)
        self.assertEqual(EXAMPLE['hypervisor-type'], sot.hypervisor_type)
        self.assertEqual(EXAMPLE['instance_type_rxtx_factor'],
                         sot.instance_type_rxtx_factor)
        self.assertEqual(EXAMPLE['instance_uuid'],
                         sot.instance_uuid)
        self.assertEqual(EXAMPLE['img_config_drive'], sot.needs_config_drive)
        self.assertEqual(EXAMPLE['kernel_id'], sot.kernel_id)
        self.assertEqual(EXAMPLE['os_distro'], sot.os_distro)
        self.assertEqual(EXAMPLE['os_version'], sot.os_version)
        self.assertEqual(EXAMPLE['os_secure_boot'], sot.needs_secure_boot)
        self.assertEqual(EXAMPLE['ramdisk_id'], sot.ramdisk_id)
        self.assertEqual(EXAMPLE['vm_mode'], sot.vm_mode)
        self.assertEqual(EXAMPLE['hw_cpu_sockets'], sot.hw_cpu_sockets)
        self.assertEqual(EXAMPLE['hw_cpu_cores'], sot.hw_cpu_cores)
        self.assertEqual(EXAMPLE['hw_cpu_threads'], sot.hw_cpu_threads)
        self.assertEqual(EXAMPLE['hw_disk_bus'], sot.hw_disk_bus)
        self.assertEqual(EXAMPLE['hw_rng_model'], sot.hw_rng_model)
        self.assertEqual(EXAMPLE['hw_machine_type'], sot.hw_machine_type)
        self.assertEqual(EXAMPLE['hw_scsi_model'], sot.hw_scsi_model)
        self.assertEqual(EXAMPLE['hw_serial_port_count'],
                         sot.hw_serial_port_count)
        self.assertEqual(EXAMPLE['hw_video_model'], sot.hw_video_model)
        self.assertEqual(EXAMPLE['hw_video_ram'], sot.hw_video_ram)
        self.assertEqual(EXAMPLE['hw_watchdog_action'],
                         sot.hw_watchdog_action)
        self.assertEqual(EXAMPLE['os_command_line'], sot.os_command_line)
        self.assertEqual(EXAMPLE['hw_vif_model'], sot.hw_vif_model)
        self.assertEqual(EXAMPLE['hw_vif_multiqueue_enabled'],
                         sot.is_hw_vif_multiqueue_enabled)
        self.assertEqual(EXAMPLE['hw_boot_menu'],
                         sot.is_hw_boot_menu_enabled)
        self.assertEqual(EXAMPLE['vmware_adaptertype'],
                         sot.vmware_adaptertype)
        self.assertEqual(EXAMPLE['vmware_ostype'], sot.vmware_ostype)
        self.assertEqual(EXAMPLE['auto_disk_config'],
                         sot.has_auto_disk_config)
        self.assertEqual(EXAMPLE['os_type'], sot.os_type)

    def test_deactivate(self):
        sot = image.Image(**EXAMPLE)
        self.assertIsNone(sot.deactivate(self.sess))
        self.sess.post.assert_called_with(
            'images/IDENTIFIER/actions/deactivate',
        )

    def test_reactivate(self):
        sot = image.Image(**EXAMPLE)
        self.assertIsNone(sot.reactivate(self.sess))
        self.sess.post.assert_called_with(
            'images/IDENTIFIER/actions/reactivate',
        )

    def test_add_tag(self):
        sot = image.Image(**EXAMPLE)
        tag = "lol"
        self.assertIsNone(sot.add_tag(self.sess, tag))
        self.sess.put.assert_called_with(
            'images/IDENTIFIER/tags/%s' % tag,
        )

    def test_remove_tag(self):
        sot = image.Image(**EXAMPLE)
        tag = "lol"
        self.assertIsNone(sot.remove_tag(self.sess, tag))
        self.sess.delete.assert_called_with(
            'images/IDENTIFIER/tags/%s' % tag,
        )

    def test_upload(self):
        sot = image.Image(**EXAMPLE)
        self.assertIsNone(sot.upload(self.sess))
        self.sess.put.assert_called_with(
            'images/IDENTIFIER/file',
            data=sot.data,
            headers={"Content-Type": "application/octet-stream",
                     "Accept": ""})

    def test_download_checksum_match(self):
        sot = image.Image(**EXAMPLE)
        resp = FakeResponse(
            b"abc",
            headers={"Content-MD5": "900150983cd24fb0d6963f7d28e17f72",
                     "Content-Type": "application/octet-stream"})
        self.sess.get.return_value = resp
        rv = sot.download(self.sess)
        self.sess.get.assert_called_with('images/IDENTIFIER/file',
                                         stream=False)
        self.assertEqual(rv, resp.content)

    def test_download_checksum_mismatch(self):
        sot = image.Image(**EXAMPLE)
        resp = FakeResponse(
            b"abc",
            headers={"Content-MD5": "the wrong checksum",
                     "Content-Type": "application/octet-stream"})
        self.sess.get.return_value = resp
        self.assertRaises(exceptions.InvalidResponse,
                          sot.download, self.sess)

    def test_download_no_checksum_header(self):
        sot = image.Image(**EXAMPLE)
        resp1 = FakeResponse(
            b"abc",
            headers={"Content-Type": "application/octet-stream"})
        resp2 = FakeResponse(
            {"checksum": "900150983cd24fb0d6963f7d28e17f72"})
        self.sess.get.side_effect = [resp1, resp2]
        rv = sot.download(self.sess)
        self.sess.get.assert_has_calls(
            [mock.call('images/IDENTIFIER/file', stream=False),
             mock.call('images/IDENTIFIER')])
        self.assertEqual(rv, resp1.content)

    def test_download_no_checksum_at_all2(self):
        sot = image.Image(**EXAMPLE)
        resp1 = FakeResponse(
            b"abc",
            headers={"Content-Type": "application/octet-stream"})
        resp2 = FakeResponse({"checksum": None})
        self.sess.get.side_effect = [resp1, resp2]
        with self.assertLogs(logger='openstack', level="WARNING") as log:
            rv = sot.download(self.sess)
            self.assertEqual(len(log.records), 1,
                             "Too many warnings were logged")
            self.assertEqual(
                "Unable to verify the integrity of image IDENTIFIER",
                log.records[0].msg)
        self.sess.get.assert_has_calls(
            [mock.call('images/IDENTIFIER/file', stream=False),
             mock.call('images/IDENTIFIER')])
        self.assertEqual(rv, resp1.content)

    def test_download_stream(self):
        sot = image.Image(**EXAMPLE)
        resp = FakeResponse(
            b"abc",
            headers={"Content-MD5": "900150983cd24fb0d6963f7d28e17f72",
                     "Content-Type": "application/octet-stream"})
        self.sess.get.return_value = resp
        rv = sot.download(self.sess, stream=True)
        self.sess.get.assert_called_with('images/IDENTIFIER/file',
                                         stream=True)
        self.assertEqual(rv, resp)

    def test_image_update(self):
        sot = image.Image(**EXAMPLE)
        # Let the translate pass through; that portion is tested elsewhere.
        sot._translate_response = mock.Mock()
        resp = mock.Mock()
        resp.content = b"abc"
        headers = {
            'Content-Type': 'application/openstack-images-v2.1-json-patch',
            'Accept': '',
        }
        resp.headers = headers
        resp.status_code = 200
        self.sess.patch.return_value = resp
        value = ('[{"value": "fake_name", "op": "replace", "path": "/name"}, '
                 '{"value": "fake_value", "op": "add", '
                 '"path": "/new_property"}]')
        fake_img = sot.to_dict()
        fake_img['name'] = 'fake_name'
        fake_img['new_property'] = 'fake_value'
        sot.update(self.sess, **fake_img)
        url = 'images/' + IDENTIFIER
        self.sess.patch.assert_called_once()
        call_args, call_kwargs = self.sess.patch.call_args
        self.assertEqual(url, call_args[0])
        self.assertEqual(
            sorted(json.loads(value), key=operator.itemgetter('value')),
            sorted(json.loads(call_kwargs['data']),
                   key=operator.itemgetter('value')))

openstacksdk-0.11.3/openstack/tests/unit/image/v2/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the
# "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from openstack import exceptions
from openstack.image.v2 import _proxy
from openstack.image.v2 import image
from openstack.image.v2 import member
from openstack.tests.unit.image.v2 import test_image as fake_image
from openstack.tests.unit import test_proxy_base

EXAMPLE = fake_image.EXAMPLE


class TestImageProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestImageProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_image_create_no_args(self):
        # container_format and disk_format are required args
        self.assertRaises(exceptions.InvalidRequest, self.proxy.upload_image)

    def test_image_create(self):
        # NOTE: This doesn't use any of the base class verify methods
        # because it ends up making two separate calls to complete the
        # operation.
        created_image = mock.Mock(spec=image.Image(id="id"))
        self.proxy._create = mock.Mock()
        self.proxy._create.return_value = created_image
        rv = self.proxy.upload_image(data="data", container_format="x",
                                     disk_format="y", name="z")
        self.proxy._create.assert_called_with(image.Image,
                                              container_format="x",
                                              disk_format="y",
                                              name="z")
        created_image.upload.assert_called_with(self.proxy)
        self.assertEqual(rv, created_image)

    def test_image_delete(self):
        self.verify_delete(self.proxy.delete_image, image.Image, False)

    def test_image_delete_ignore(self):
        self.verify_delete(self.proxy.delete_image, image.Image, True)

    @mock.patch("openstack.resource.Resource._translate_response")
    @mock.patch("openstack.proxy.BaseProxy._get")
    @mock.patch("openstack.image.v2.image.Image.update")
    def test_image_update(self, mock_update_image, mock_get_image,
                          mock_transpose):
        original_image = image.Image(**EXAMPLE)
        mock_get_image.return_value = original_image
        EXAMPLE['name'] = 'fake_name'
        updated_image = image.Image(**EXAMPLE)
        mock_update_image.return_value = updated_image.to_dict()
        result = self.proxy.update_image(original_image,
                                         **updated_image.to_dict())
        self.assertEqual('fake_name', result.get('name'))

    def test_image_get(self):
        self.verify_get(self.proxy.get_image, image.Image)

    def test_images(self):
        self.verify_list(self.proxy.images, image.Image, paginated=True)

    def test_add_tag(self):
        self._verify("openstack.image.v2.image.Image.add_tag",
                     self.proxy.add_tag,
                     method_args=["image", "tag"],
                     expected_args=["tag"])

    def test_remove_tag(self):
        self._verify("openstack.image.v2.image.Image.remove_tag",
                     self.proxy.remove_tag,
                     method_args=["image", "tag"],
                     expected_args=["tag"])

    def test_deactivate_image(self):
        self._verify("openstack.image.v2.image.Image.deactivate",
                     self.proxy.deactivate_image,
                     method_args=["image"])

    def test_reactivate_image(self):
        self._verify("openstack.image.v2.image.Image.reactivate",
                     self.proxy.reactivate_image,
                     method_args=["image"])

    def test_member_create(self):
        self.verify_create(self.proxy.add_member, member.Member,
                           method_kwargs={"image": "test_id"},
                           expected_kwargs={"image_id": "test_id"})

    def test_member_delete(self):
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.remove_member,
                      method_args=["member_id"],
                      method_kwargs={"image": "image_id",
                                     "ignore_missing": False},
                      expected_args=[member.Member],
                      expected_kwargs={"member_id": "member_id",
                                       "image_id": "image_id",
                                       "ignore_missing": False})

    def test_member_delete_ignore(self):
        self._verify2("openstack.proxy.BaseProxy._delete",
                      self.proxy.remove_member,
                      method_args=["member_id"],
                      method_kwargs={"image": "image_id"},
                      expected_args=[member.Member],
                      expected_kwargs={"member_id": "member_id",
                                       "image_id": "image_id",
                                       "ignore_missing": True})

    def test_member_update(self):
        self._verify2("openstack.proxy.BaseProxy._update",
                      self.proxy.update_member,
                      method_args=['member_id', 'image_id'],
                      expected_args=[member.Member],
                      expected_kwargs={'member_id': 'member_id',
                                       'image_id': 'image_id'})

    def test_member_get(self):
        self._verify2("openstack.proxy.BaseProxy._get",
                      self.proxy.get_member,
                      method_args=['member_id'],
                      method_kwargs={"image": "image_id"},
                      expected_args=[member.Member],
                      expected_kwargs={'member_id': 'member_id',
                                       'image_id': 'image_id'})

    def test_member_find(self):
        self._verify2("openstack.proxy.BaseProxy._find",
                      self.proxy.find_member,
                      method_args=['member_id'],
                      method_kwargs={"image": "image_id"},
                      expected_args=[member.Member, "member_id"],
                      expected_kwargs={'ignore_missing': True,
                                       'image_id': 'image_id'})

    def test_members(self):
        self.verify_list(self.proxy.members, member.Member, paginated=False,
                         method_args=('image_1',),
                         expected_kwargs={'image_id': 'image_1'})

openstacksdk-0.11.3/openstack/tests/unit/image/v1/

openstacksdk-0.11.3/openstack/tests/unit/image/v1/__init__.py
openstacksdk-0.11.3/openstack/tests/unit/image/v1/test_image.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.image.v1 import image

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'checksum': '1',
    'container_format': '2',
    'copy_from': '3',
    'disk_format': '4',
    'id': IDENTIFIER,
    'is_public': True,
    'location': '6',
    'min_disk': '7',
    'min_ram': '8',
    'name': '9',
    'owner': '10',
    'properties': '11',
    'protected': True,
    'size': '13',
    'status': '14',
    'created_at': '2015-03-09T12:14:57.233772',
    'updated_at': '2015-03-09T12:15:57.233772',
}


class TestImage(testtools.TestCase):

    def test_basic(self):
        sot = image.Image()
        self.assertEqual('image', sot.resource_key)
        self.assertEqual('images', sot.resources_key)
        self.assertEqual('/images', sot.base_path)
        self.assertEqual('image', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = image.Image(**EXAMPLE)
        self.assertEqual(EXAMPLE['checksum'], sot.checksum)
        self.assertEqual(EXAMPLE['container_format'], sot.container_format)
        self.assertEqual(EXAMPLE['copy_from'], sot.copy_from)
        self.assertEqual(EXAMPLE['disk_format'], sot.disk_format)
        self.assertEqual(IDENTIFIER, sot.id)
        self.assertTrue(sot.is_public)
        self.assertEqual(EXAMPLE['location'], sot.location)
        self.assertEqual(EXAMPLE['min_disk'], sot.min_disk)
        self.assertEqual(EXAMPLE['min_ram'], sot.min_ram)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['owner'], sot.owner_id)
        self.assertEqual(EXAMPLE['properties'], sot.properties)
        self.assertTrue(sot.is_protected)
        self.assertEqual(EXAMPLE['size'], sot.size)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['created_at'], sot.created_at)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/image/v1/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.image.v1 import _proxy
from openstack.image.v1 import image
from openstack.tests.unit import test_proxy_base


class TestImageProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestImageProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_image_upload_attrs(self):
        self.verify_create(self.proxy.upload_image, image.Image)

    def test_image_delete(self):
        self.verify_delete(self.proxy.delete_image, image.Image, False)

    def test_image_delete_ignore(self):
        self.verify_delete(self.proxy.delete_image, image.Image, True)

    def test_image_find(self):
        self.verify_find(self.proxy.find_image, image.Image)

    def test_image_get(self):
        self.verify_get(self.proxy.get_image, image.Image)

    def test_images(self):
        self.verify_list(self.proxy.images, image.Image, paginated=True)

    def test_image_update(self):
        self.verify_update(self.proxy.update_image, image.Image)

openstacksdk-0.11.3/openstack/tests/unit/image/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/image/test_image_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.image import image_service


class TestImageService(testtools.TestCase):

    def test_service(self):
        sot = image_service.ImageService()
        self.assertEqual('image', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(2, len(sot.valid_versions))
        self.assertEqual('v2', sot.valid_versions[0].module)
        self.assertEqual('v2', sot.valid_versions[0].path)
        self.assertEqual('v1', sot.valid_versions[1].module)
        self.assertEqual('v1', sot.valid_versions[1].path)

openstacksdk-0.11.3/openstack/tests/unit/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/test_service_filter.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.identity import identity_service
from openstack import service_filter


class TestValidVersion(testtools.TestCase):

    def test_constructor(self):
        sot = service_filter.ValidVersion('v1.0', 'v1')
        self.assertEqual('v1.0', sot.module)
        self.assertEqual('v1', sot.path)


class TestServiceFilter(testtools.TestCase):

    def test_init(self):
        sot = service_filter.ServiceFilter(
            'ServiceType', region='REGION1', service_name='ServiceName',
            version='1', api_version='1.23', requires_project_id=True)
        self.assertEqual('servicetype', sot.service_type)
        self.assertEqual('REGION1', sot.region)
        self.assertEqual('ServiceName', sot.service_name)
        self.assertEqual('1', sot.version)
        self.assertEqual('1.23', sot.api_version)
        self.assertTrue(sot.requires_project_id)

    def test_get_module(self):
        sot = identity_service.IdentityService()
        self.assertEqual('openstack.identity.v3', sot.get_module())
        self.assertEqual('identity', sot.get_service_module())

openstacksdk-0.11.3/openstack/tests/unit/test_format.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
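The TestServiceFilter case above expects the service type to come back lowercased while the region and service name are preserved verbatim. A minimal, illustrative-only sketch of that normalization behavior (this toy class is not the SDK's actual ServiceFilter implementation):

```python
# Toy stand-in demonstrating the normalization the test asserts:
# service_type is case-folded on construction, other fields pass through.
class ToyServiceFilter:
    def __init__(self, service_type, region=None, service_name=None):
        self.service_type = service_type.lower()
        self.region = region
        self.service_name = service_name

sot = ToyServiceFilter('ServiceType', region='REGION1',
                       service_name='ServiceName')
assert sot.service_type == 'servicetype'
assert sot.region == 'REGION1'
assert sot.service_name == 'ServiceName'
```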
import testtools

from openstack import format


class TestBoolStrFormatter(testtools.TestCase):

    def test_deserialize(self):
        self.assertTrue(format.BoolStr.deserialize(True))
        self.assertTrue(format.BoolStr.deserialize('True'))
        self.assertTrue(format.BoolStr.deserialize('TRUE'))
        self.assertTrue(format.BoolStr.deserialize('true'))
        self.assertFalse(format.BoolStr.deserialize(False))
        self.assertFalse(format.BoolStr.deserialize('False'))
        self.assertFalse(format.BoolStr.deserialize('FALSE'))
        self.assertFalse(format.BoolStr.deserialize('false'))
        self.assertRaises(ValueError, format.BoolStr.deserialize, None)
        self.assertRaises(ValueError, format.BoolStr.deserialize, '')
        self.assertRaises(ValueError, format.BoolStr.deserialize, 'INVALID')

    def test_serialize(self):
        self.assertEqual('true', format.BoolStr.serialize(True))
        self.assertEqual('false', format.BoolStr.serialize(False))
        self.assertRaises(ValueError, format.BoolStr.serialize, None)
        self.assertRaises(ValueError, format.BoolStr.serialize, '')
        self.assertRaises(ValueError, format.BoolStr.serialize, 'True')

openstacksdk-0.11.3/openstack/tests/unit/test_proxy_base2.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from openstack.tests.unit import base


class TestProxyBase(base.TestCase):
    # object_store makes calls with container= rather than
    # path_args=dict(container=...) because container needs to wind up
    # in the uri components.
    kwargs_to_path_args = True

    def setUp(self):
        super(TestProxyBase, self).setUp()
        self.session = mock.Mock()

    def _add_path_args_for_verify(self, path_args, method_args,
                                  expected_kwargs, value=None):
        if path_args is not None:
            if value is None:
                for key in path_args:
                    method_args.append(path_args[key])
            expected_kwargs['path_args'] = path_args

    def _verify(self, mock_method, test_method,
                method_args=None, method_kwargs=None,
                expected_args=None, expected_kwargs=None,
                expected_result=None):
        with mock.patch(mock_method) as mocked:
            mocked.return_value = expected_result
            if any([method_args, method_kwargs,
                    expected_args, expected_kwargs]):
                method_args = method_args or ()
                method_kwargs = method_kwargs or {}
                expected_args = expected_args or ()
                expected_kwargs = expected_kwargs or {}

                self.assertEqual(expected_result,
                                 test_method(*method_args, **method_kwargs))
                mocked.assert_called_with(test_method.__self__,
                                          *expected_args, **expected_kwargs)
            else:
                self.assertEqual(expected_result, test_method())
                mocked.assert_called_with(test_method.__self__)

    # NOTE(briancurtin): This is a duplicate version of _verify that is
    # temporarily here while we shift APIs. The difference is that
    # calls from the Proxy classes aren't going to be going directly into
    # the Resource layer anymore, so they don't pass in the session which
    # was tested in assert_called_with.
    # This is being done in lieu of adding logic and complicating
    # the _verify method. It will be removed once there is one API to
    # be verifying.
    def _verify2(self, mock_method, test_method,
                 method_args=None, method_kwargs=None, method_result=None,
                 expected_args=None, expected_kwargs=None,
                 expected_result=None):
        with mock.patch(mock_method) as mocked:
            mocked.return_value = expected_result
            if any([method_args, method_kwargs,
                    expected_args, expected_kwargs]):
                method_args = method_args or ()
                method_kwargs = method_kwargs or {}
                expected_args = expected_args or ()
                expected_kwargs = expected_kwargs or {}

                if method_result:
                    self.assertEqual(method_result,
                                     test_method(*method_args,
                                                 **method_kwargs))
                else:
                    self.assertEqual(expected_result,
                                     test_method(*method_args,
                                                 **method_kwargs))
                mocked.assert_called_with(*expected_args, **expected_kwargs)
            else:
                self.assertEqual(expected_result, test_method())
                mocked.assert_called_with(test_method.__self__)

    def verify_create(self, test_method, resource_type,
                      mock_method="openstack.proxy.BaseProxy._create",
                      expected_result="result", **kwargs):
        the_kwargs = {"x": 1, "y": 2, "z": 3}
        method_kwargs = kwargs.pop("method_kwargs", the_kwargs)
        expected_args = [resource_type]
        expected_kwargs = kwargs.pop("expected_kwargs", the_kwargs)

        self._verify2(mock_method, test_method,
                      expected_result=expected_result,
                      method_kwargs=method_kwargs,
                      expected_args=expected_args,
                      expected_kwargs=expected_kwargs,
                      **kwargs)

    def verify_delete(self, test_method, resource_type, ignore,
                      input_path_args=None, expected_path_args=None,
                      method_kwargs=None, expected_args=None,
                      expected_kwargs=None,
                      mock_method="openstack.proxy.BaseProxy._delete"):
        method_args = ["resource_or_id"]
        method_kwargs = method_kwargs or {}
        method_kwargs["ignore_missing"] = ignore
        if isinstance(input_path_args, dict):
            for key in input_path_args:
                method_kwargs[key] = input_path_args[key]
        elif isinstance(input_path_args, list):
            method_args = input_path_args
        expected_kwargs = expected_kwargs or {}
        expected_kwargs["ignore_missing"] = ignore
        if expected_path_args:
            expected_kwargs.update(expected_path_args)
        expected_args = expected_args or
            [resource_type, "resource_or_id"]
        self._verify2(mock_method, test_method,
                      method_args=method_args,
                      method_kwargs=method_kwargs,
                      expected_args=expected_args,
                      expected_kwargs=expected_kwargs)

    def verify_get(self, test_method, resource_type, value=None, args=None,
                   mock_method="openstack.proxy.BaseProxy._get",
                   ignore_value=False, **kwargs):
        the_value = value
        if value is None:
            the_value = [] if ignore_value else ["value"]
        expected_args = kwargs.pop("expected_args", [])
        expected_kwargs = kwargs.pop("expected_kwargs", {})
        method_kwargs = kwargs.pop("method_kwargs", kwargs)
        if args:
            expected_kwargs["args"] = args
        if kwargs and self.kwargs_to_path_args:
            expected_kwargs["path_args"] = kwargs
        if not expected_args:
            expected_args = [resource_type] + the_value
        self._verify2(mock_method, test_method,
                      method_args=the_value,
                      method_kwargs=method_kwargs or {},
                      expected_args=expected_args,
                      expected_kwargs=expected_kwargs)

    def verify_head(self, test_method, resource_type,
                    mock_method="openstack.proxy.BaseProxy._head",
                    value=None, **kwargs):
        the_value = [value] if value is not None else []
        if self.kwargs_to_path_args:
            expected_kwargs = {"path_args": kwargs} if kwargs else {}
        else:
            expected_kwargs = kwargs or {}
        self._verify2(mock_method, test_method,
                      method_args=the_value,
                      method_kwargs=kwargs,
                      expected_args=[resource_type] + the_value,
                      expected_kwargs=expected_kwargs)

    def verify_find(self, test_method, resource_type, value=None,
                    mock_method="openstack.proxy.BaseProxy._find",
                    path_args=None, **kwargs):
        method_args = value or ["name_or_id"]
        expected_kwargs = {}

        self._add_path_args_for_verify(path_args, method_args,
                                       expected_kwargs, value=value)

        # TODO(briancurtin): if sub-tests worked in this mess of
        # test dependencies, the following would be a lot easier to work with.
        expected_kwargs["ignore_missing"] = False
        self._verify2(mock_method, test_method,
                      method_args=method_args + [False],
                      expected_args=[resource_type, "name_or_id"],
                      expected_kwargs=expected_kwargs,
                      expected_result="result",
                      **kwargs)

        expected_kwargs["ignore_missing"] = True
        self._verify2(mock_method, test_method,
                      method_args=method_args + [True],
                      expected_args=[resource_type, "name_or_id"],
                      expected_kwargs=expected_kwargs,
                      expected_result="result",
                      **kwargs)

    def verify_list(self, test_method, resource_type, paginated=False,
                    mock_method="openstack.proxy.BaseProxy._list",
                    **kwargs):
        expected_kwargs = kwargs.pop("expected_kwargs", {})
        expected_kwargs.update({"paginated": paginated})
        method_kwargs = kwargs.pop("method_kwargs", {})

        self._verify2(mock_method, test_method,
                      method_kwargs=method_kwargs,
                      expected_args=[resource_type],
                      expected_kwargs=expected_kwargs,
                      expected_result=["result"],
                      **kwargs)

    def verify_list_no_kwargs(self, test_method, resource_type,
                              paginated=False,
                              mock_method="openstack.proxy.BaseProxy._list"):
        self._verify2(mock_method, test_method,
                      method_kwargs={},
                      expected_args=[resource_type],
                      expected_kwargs={"paginated": paginated},
                      expected_result=["result"])

    def verify_update(self, test_method, resource_type, value=None,
                      mock_method="openstack.proxy.BaseProxy._update",
                      expected_result="result", path_args=None, **kwargs):
        method_args = value or ["resource_or_id"]
        method_kwargs = {"x": 1, "y": 2, "z": 3}
        expected_args = kwargs.pop("expected_args", ["resource_or_id"])
        expected_kwargs = method_kwargs.copy()

        self._add_path_args_for_verify(path_args, method_args,
                                       expected_kwargs, value=value)

        self._verify2(mock_method, test_method,
                      expected_result=expected_result,
                      method_args=method_args,
                      method_kwargs=method_kwargs,
                      expected_args=[resource_type] + expected_args,
                      expected_kwargs=expected_kwargs,
                      **kwargs)

    def verify_wait_for_status(
            self, test_method,
            mock_method="openstack.resource.wait_for_status",
            **kwargs):
        self._verify(mock_method, test_method, **kwargs)
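The `_verify`/`_verify2` helpers above all follow the same pattern: patch the target method, invoke the proxy-style wrapper, then assert that the patched target received the forwarded arguments and that the wrapper returned the mocked result. A reduced, self-contained sketch of that pattern (the `ToyProxy` class and its method names are illustrative, not SDK code):

```python
from unittest import mock


class ToyProxy:
    """Toy proxy whose public method forwards to an internal one."""

    def _get(self, resource_type, value):
        return 'real'  # replaced by the mock in the demo below

    def get_thing(self, value):
        return self._get('Thing', value)


proxy = ToyProxy()
# Patch the internal method, exercise the wrapper, then assert forwarding,
# mirroring what _verify2 does with mock.patch on a dotted path.
with mock.patch.object(ToyProxy, '_get', return_value='result') as mocked:
    rv = proxy.get_thing('id-1')
mocked.assert_called_with('Thing', 'id-1')
assert rv == 'result'
```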
openstacksdk-0.11.3/openstack/tests/unit/key_manager/

openstacksdk-0.11.3/openstack/tests/unit/key_manager/test_key_management_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.key_manager import key_manager_service


class TestKeyManagerService(testtools.TestCase):

    def test_service(self):
        sot = key_manager_service.KeyManagerService()
        self.assertEqual('key-manager', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v1', sot.valid_versions[0].module)
        self.assertEqual('v1', sot.valid_versions[0].path)

openstacksdk-0.11.3/openstack/tests/unit/key_manager/v1/

openstacksdk-0.11.3/openstack/tests/unit/key_manager/v1/test_order.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.key_manager.v1 import order ID_VAL = "123" SECRET_ID = "5" IDENTIFIER = 'http://localhost/orders/%s' % ID_VAL EXAMPLE = { 'created': '1', 'creator_id': '2', 'meta': {'key': '3'}, 'order_ref': IDENTIFIER, 'secret_ref': 'http://localhost/secrets/%s' % SECRET_ID, 'status': '6', 'sub_status': '7', 'sub_status_message': '8', 'type': '9', 'updated': '10' } class TestOrder(testtools.TestCase): def test_basic(self): sot = order.Order() self.assertIsNone(sot.resource_key) self.assertEqual('orders', sot.resources_key) self.assertEqual('/orders', sot.base_path) self.assertEqual('key-manager', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = order.Order(**EXAMPLE) self.assertEqual(EXAMPLE['created'], sot.created_at) self.assertEqual(EXAMPLE['creator_id'], sot.creator_id) self.assertEqual(EXAMPLE['meta'], sot.meta) self.assertEqual(EXAMPLE['order_ref'], sot.order_ref) self.assertEqual(ID_VAL, sot.order_id) self.assertEqual(EXAMPLE['secret_ref'], sot.secret_ref) self.assertEqual(SECRET_ID, sot.secret_id) self.assertEqual(EXAMPLE['status'], sot.status) self.assertEqual(EXAMPLE['sub_status'], sot.sub_status) self.assertEqual(EXAMPLE['sub_status_message'], sot.sub_status_message) self.assertEqual(EXAMPLE['type'], sot.type) self.assertEqual(EXAMPLE['updated'], sot.updated_at) 
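The assertions above expect `order_id` and `secret_id` to be derived from the trailing segment of the corresponding `*_ref` URLs ("123" from the order ref, "5" from the secret ref). A minimal sketch of that derivation (the `id_from_ref` helper is illustrative only; the SDK exposes this through resource attributes, not this function):

```python
def id_from_ref(ref):
    """Return the trailing path segment of a resource ref URL."""
    return ref.rstrip("/").rsplit("/", 1)[-1]


# The same ref URLs the test's EXAMPLE dict uses:
order_ref = "http://localhost/orders/123"
secret_ref = "http://localhost/secrets/5"

assert id_from_ref(order_ref) == "123"
assert id_from_ref(secret_ref) == "5"
```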
openstacksdk-0.11.3/openstack/tests/unit/key_manager/v1/test_container.py0000666000175100017510000000410713236151340026626 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.key_manager.v1 import container

ID_VAL = "123"
IDENTIFIER = 'http://localhost/containers/%s' % ID_VAL
EXAMPLE = {
    'container_ref': IDENTIFIER,
    'created': '2015-03-09T12:14:57.233772',
    'name': '3',
    'secret_refs': ['4'],
    'status': '5',
    'type': '6',
    'updated': '2015-03-09T12:15:57.233772',
    'consumers': ['7']
}


class TestContainer(testtools.TestCase):

    def test_basic(self):
        sot = container.Container()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('containers', sot.resources_key)
        self.assertEqual('/containers', sot.base_path)
        self.assertEqual('key-manager', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = container.Container(**EXAMPLE)
        self.assertEqual(EXAMPLE['created'], sot.created_at)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['secret_refs'], sot.secret_refs)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['type'], sot.type)
        self.assertEqual(EXAMPLE['updated'], sot.updated_at)
        self.assertEqual(EXAMPLE['container_ref'], sot.id)
        self.assertEqual(EXAMPLE['container_ref'], sot.container_ref)
        self.assertEqual(ID_VAL, sot.container_id)
        self.assertEqual(EXAMPLE['consumers'], sot.consumers)
openstacksdk-0.11.3/openstack/tests/unit/key_manager/v1/test_secret.py0000666000175100017510000001170513236151340026133 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack.key_manager.v1 import secret

ID_VAL = "123"
IDENTIFIER = 'http://localhost:9311/v1/secrets/%s' % ID_VAL
EXAMPLE = {
    'algorithm': '1',
    'bit_length': '2',
    'content_types': {'default': '3'},
    'expiration': '2017-03-09T12:14:57.233772',
    'mode': '5',
    'name': '6',
    'secret_ref': IDENTIFIER,
    'status': '8',
    'updated': '2015-03-09T12:15:57.233773',
    'created': '2015-03-09T12:15:57.233774',
    'secret_type': '9',
    'payload': '10',
    'payload_content_type': '11',
    'payload_content_encoding': '12'
}


class TestSecret(testtools.TestCase):

    def test_basic(self):
        sot = secret.Secret()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('secrets', sot.resources_key)
        self.assertEqual('/secrets', sot.base_path)
        self.assertEqual('key-manager', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

        self.assertDictEqual({"name": "name",
                              "mode": "mode",
                              "bits": "bits",
                              "secret_type": "secret_type",
                              "acl_only": "acl_only",
                              "created": "created",
                              "updated": "updated",
                              "expiration": "expiration",
                              "sort": "sort",
                              "algorithm": "alg",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_make_it(self):
        sot = secret.Secret(**EXAMPLE)
        self.assertEqual(EXAMPLE['algorithm'], sot.algorithm)
        self.assertEqual(EXAMPLE['bit_length'], sot.bit_length)
        self.assertEqual(EXAMPLE['content_types'], sot.content_types)
        self.assertEqual(EXAMPLE['expiration'], sot.expires_at)
        self.assertEqual(EXAMPLE['mode'], sot.mode)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['secret_ref'], sot.secret_ref)
        self.assertEqual(EXAMPLE['secret_ref'], sot.id)
        self.assertEqual(ID_VAL, sot.secret_id)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['updated'], sot.updated_at)
        self.assertEqual(EXAMPLE['secret_type'], sot.secret_type)
        self.assertEqual(EXAMPLE['payload'], sot.payload)
        self.assertEqual(EXAMPLE['payload_content_type'],
                         sot.payload_content_type)
        self.assertEqual(EXAMPLE['payload_content_encoding'],
                         sot.payload_content_encoding)

    def test_get_no_payload(self):
        sot = secret.Secret(id="id")

        sess = mock.Mock()
        rv = mock.Mock()
        return_body = {"status": "cool"}
        rv.json = mock.Mock(return_value=return_body)
        sess.get = mock.Mock(return_value=rv)

        sot.get(sess)

        sess.get.assert_called_once_with("secrets/id")

    def _test_payload(self, sot, metadata, content_type):
        content_type = "some/type"

        metadata_response = mock.Mock()
        # Use copy because the dict gets consumed.
        metadata_response.json = mock.Mock(return_value=metadata.copy())

        payload_response = mock.Mock()
        payload = "secret info"
        payload_response.text = payload

        sess = mock.Mock()
        sess.get = mock.Mock(side_effect=[metadata_response,
                                          payload_response])

        rv = sot.get(sess)

        sess.get.assert_has_calls(
            [mock.call("secrets/id"),
             mock.call("secrets/id/payload",
                       headers={"Accept": content_type})])

        self.assertEqual(rv.payload, payload)
        self.assertEqual(rv.status, metadata["status"])

    def test_get_with_payload_from_argument(self):
        metadata = {"status": "great"}
        content_type = "some/type"
        sot = secret.Secret(id="id", payload_content_type=content_type)
        self._test_payload(sot, metadata, content_type)

    def test_get_with_payload_from_content_types(self):
        content_type = "some/type"
        metadata = {"status": "fine",
                    "content_types": {"default": content_type}}
        sot = secret.Secret(id="id")
        self._test_payload(sot, metadata, content_type)
openstacksdk-0.11.3/openstack/tests/unit/key_manager/v1/__init__.py0000666000175100017510000000000013236151340025330 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/key_manager/v1/test_proxy.py0000666000175100017510000000633013236151364026033 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.key_manager.v1 import _proxy
from openstack.key_manager.v1 import container
from openstack.key_manager.v1 import order
from openstack.key_manager.v1 import secret
from openstack.tests.unit import test_proxy_base


class TestKeyManagerProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestKeyManagerProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_container_create_attrs(self):
        self.verify_create(self.proxy.create_container, container.Container)

    def test_container_delete(self):
        self.verify_delete(self.proxy.delete_container,
                           container.Container, False)

    def test_container_delete_ignore(self):
        self.verify_delete(self.proxy.delete_container,
                           container.Container, True)

    def test_container_find(self):
        self.verify_find(self.proxy.find_container, container.Container)

    def test_container_get(self):
        self.verify_get(self.proxy.get_container, container.Container)

    def test_containers(self):
        self.verify_list(self.proxy.containers, container.Container,
                         paginated=False)

    def test_container_update(self):
        self.verify_update(self.proxy.update_container, container.Container)

    def test_order_create_attrs(self):
        self.verify_create(self.proxy.create_order, order.Order)

    def test_order_delete(self):
        self.verify_delete(self.proxy.delete_order, order.Order, False)

    def test_order_delete_ignore(self):
        self.verify_delete(self.proxy.delete_order, order.Order, True)

    def test_order_find(self):
        self.verify_find(self.proxy.find_order, order.Order)

    def test_order_get(self):
        self.verify_get(self.proxy.get_order, order.Order)

    def test_orders(self):
        self.verify_list(self.proxy.orders, order.Order, paginated=False)

    def test_order_update(self):
        self.verify_update(self.proxy.update_order, order.Order)

    def test_secret_create_attrs(self):
        self.verify_create(self.proxy.create_secret, secret.Secret)

    def test_secret_delete(self):
        self.verify_delete(self.proxy.delete_secret, secret.Secret, False)

    def test_secret_delete_ignore(self):
        self.verify_delete(self.proxy.delete_secret, secret.Secret, True)

    def test_secret_find(self):
        self.verify_find(self.proxy.find_secret, secret.Secret)

    def test_secret_get(self):
        self.verify_get(self.proxy.get_secret, secret.Secret)

    def test_secrets(self):
        self.verify_list(self.proxy.secrets, secret.Secret, paginated=False)

    def test_secret_update(self):
        self.verify_update(self.proxy.update_secret, secret.Secret)
openstacksdk-0.11.3/openstack/tests/unit/key_manager/__init__.py0000666000175100017510000000000013236151340025002 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/config/0000775000175100017510000000000013236151501021663 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/openstack/tests/unit/config/test_config.py0000666000175100017510000013377413236151340024553 0ustar zuulzuul00000000000000
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import copy
import os

import extras
import fixtures
import testtools
import yaml

from openstack import config
from openstack.config import cloud_region
from openstack.config import defaults
from openstack.config import exceptions
from openstack.config import loader
from openstack.tests.unit.config import base


def prompt_for_password(prompt=None):
    """Fake prompt function that just returns a constant string"""
    return 'promptpass'


class TestConfig(base.TestCase):

    def test_get_all(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        clouds = c.get_all()
        # We add one by hand because the regions cloud is going to exist
        # twice since it has two regions in it
        user_clouds = [
            cloud for cloud in base.USER_CONF['clouds'].keys()
        ] + ['_test_cloud_regions']
        configured_clouds = [cloud.name for cloud in clouds]
        self.assertItemsEqual(user_clouds, configured_clouds)

    def test_get_all_clouds(self):
        # Ensure the alias is in place
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        clouds = c.get_all_clouds()
        # We add one by hand because the regions cloud is going to exist
        # twice since it has two regions in it
        user_clouds = [
            cloud for cloud in base.USER_CONF['clouds'].keys()
        ] + ['_test_cloud_regions']
        configured_clouds = [cloud.name for cloud in clouds]
        self.assertItemsEqual(user_clouds, configured_clouds)

    def test_get_one(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cloud = c.get_one(validate=False)
        self.assertIsInstance(cloud, cloud_region.CloudRegion)
        self.assertEqual(cloud.name, '')

    def test_get_one_cloud(self):
        # Ensure the alias is in place
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cloud = c.get_one_cloud(validate=False)
        self.assertIsInstance(cloud, cloud_region.CloudRegion)
        self.assertEqual(cloud.name, '')

    def test_get_one_default_cloud_from_file(self):
        single_conf = base._write_yaml({
            'clouds': {
                'single': {
                    'auth': {
                        'auth_url': 'http://example.com/v2',
                        'username': 'testuser',
                        'password': 'testpass',
                        'project_name': 'testproject',
                    },
                    'region_name': 'test-region',
                }
            }
        })
        c = config.OpenStackConfig(config_files=[single_conf],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one()
        self.assertEqual(cc.name, 'single')

    def test_get_one_auth_defaults(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml])
        cc = c.get_one(cloud='_test-cloud_', auth={'username': 'user'})
        self.assertEqual('user', cc.auth['username'])
        self.assertEqual(
            defaults._defaults['auth_type'],
            cc.auth_type,
        )
        self.assertEqual(
            defaults._defaults['identity_api_version'],
            cc.identity_api_version,
        )

    def test_get_one_auth_override_defaults(self):
        default_options = {'compute_api_version': '4'}
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   override_defaults=default_options)
        cc = c.get_one(cloud='_test-cloud_', auth={'username': 'user'})
        self.assertEqual('user', cc.auth['username'])
        self.assertEqual('4', cc.compute_api_version)
        self.assertEqual(
            defaults._defaults['identity_api_version'],
            cc.identity_api_version,
        )

    def test_get_one_with_config_files(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.secure_yaml])
        self.assertIsInstance(c.cloud_config, dict)
        self.assertIn('cache', c.cloud_config)
        self.assertIsInstance(c.cloud_config['cache'], dict)
        self.assertIn('max_age', c.cloud_config['cache'])
        self.assertIn('path', c.cloud_config['cache'])
        cc = c.get_one('_test-cloud_')
        self._assert_cloud_details(cc)
        cc = c.get_one('_test_cloud_no_vendor')
        self._assert_cloud_details(cc)

    def test_get_one_with_int_project_id(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test-cloud-int-project_')
        self.assertEqual('12345', cc.auth['project_id'])

    def test_get_one_with_domain_id(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test-cloud-domain-id_')
        self.assertEqual('6789', cc.auth['user_domain_id'])
        self.assertEqual('123456789', cc.auth['project_domain_id'])
        self.assertNotIn('domain_id', cc.auth)
        self.assertNotIn('domain-id', cc.auth)
        self.assertNotIn('domain_id', cc)

    def test_get_one_domain_scoped(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test-cloud-domain-scoped_')
        self.assertEqual('12345', cc.auth['domain_id'])
        self.assertNotIn('user_domain_id', cc.auth)
        self.assertNotIn('project_domain_id', cc.auth)

    def test_get_one_infer_user_domain(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test-cloud-int-project_')
        self.assertEqual('awesome-domain', cc.auth['user_domain_id'])
        self.assertEqual('awesome-domain', cc.auth['project_domain_id'])
        self.assertNotIn('domain_id', cc.auth)
        self.assertNotIn('domain_id', cc)

    def test_get_one_with_hyphenated_project_id(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test_cloud_hyphenated')
        self.assertEqual('12345', cc.auth['project_id'])

    def test_get_one_with_hyphenated_kwargs(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        args = {
            'auth': {
                'username': 'testuser',
                'password': 'testpass',
                'project-id': '12345',
                'auth-url': 'http://example.com/v2',
            },
            'region_name': 'test-region',
        }
        cc = c.get_one(**args)
        self.assertEqual('http://example.com/v2', cc.auth['auth_url'])

    def test_no_environ(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        self.assertRaises(
            exceptions.OpenStackConfigException, c.get_one, 'envvars')

    def test_fallthrough(self):
        c = config.OpenStackConfig(config_files=[self.no_yaml],
                                   vendor_files=[self.no_yaml],
                                   secure_files=[self.no_yaml])
        for k in os.environ.keys():
            if k.startswith('OS_'):
                self.useFixture(fixtures.EnvironmentVariable(k))
        c.get_one(cloud='defaults', validate=False)

    def test_prefer_ipv6_true(self):
        c = config.OpenStackConfig(config_files=[self.no_yaml],
                                   vendor_files=[self.no_yaml],
                                   secure_files=[self.no_yaml])
        cc = c.get_one(cloud='defaults', validate=False)
        self.assertTrue(cc.prefer_ipv6)

    def test_prefer_ipv6_false(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one(cloud='_test-cloud_')
        self.assertFalse(cc.prefer_ipv6)

    def test_force_ipv4_true(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one(cloud='_test-cloud_')
        self.assertTrue(cc.force_ipv4)

    def test_force_ipv4_false(self):
        c = config.OpenStackConfig(config_files=[self.no_yaml],
                                   vendor_files=[self.no_yaml],
                                   secure_files=[self.no_yaml])
        cc = c.get_one(cloud='defaults', validate=False)
        self.assertFalse(cc.force_ipv4)

    def test_get_one_auth_merge(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml])
        cc = c.get_one(cloud='_test-cloud_', auth={'username': 'user'})
        self.assertEqual('user', cc.auth['username'])
        self.assertEqual('testpass', cc.auth['password'])

    def test_get_one_networks(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test-cloud-networks_')
        self.assertEqual(
            ['a-public', 'another-public', 'split-default'],
            cc.get_external_networks())
        self.assertEqual(
            ['a-private', 'another-private', 'split-no-default'],
            cc.get_internal_networks())
        self.assertEqual('a-public', cc.get_nat_source())
        self.assertEqual('another-private', cc.get_nat_destination())
        self.assertEqual('another-public', cc.get_default_network())
        self.assertEqual(
            ['a-public', 'another-public', 'split-no-default'],
            cc.get_external_ipv4_networks())
        self.assertEqual(
            ['a-public', 'another-public', 'split-default'],
            cc.get_external_ipv6_networks())

    def test_get_one_no_networks(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        cc = c.get_one('_test-cloud-domain-scoped_')
        self.assertEqual([], cc.get_external_networks())
        self.assertEqual([], cc.get_internal_networks())
        self.assertIsNone(cc.get_nat_source())
        self.assertIsNone(cc.get_nat_destination())
        self.assertIsNone(cc.get_default_network())

    def test_only_secure_yaml(self):
        c = config.OpenStackConfig(config_files=['nonexistent'],
                                   vendor_files=['nonexistent'],
                                   secure_files=[self.secure_yaml])
        cc = c.get_one(cloud='_test_cloud_no_vendor', validate=False)
        self.assertEqual('testpass', cc.auth['password'])

    def test_get_cloud_names(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   secure_files=[self.no_yaml])
        self.assertEqual(
            ['_test-cloud-domain-id_',
             '_test-cloud-domain-scoped_',
             '_test-cloud-int-project_',
             '_test-cloud-networks_',
             '_test-cloud_',
             '_test-cloud_no_region',
             '_test_cloud_hyphenated',
             '_test_cloud_no_vendor',
             '_test_cloud_regions',
             ],
            sorted(c.get_cloud_names()))
        c = config.OpenStackConfig(config_files=[self.no_yaml],
                                   vendor_files=[self.no_yaml],
                                   secure_files=[self.no_yaml])
        for k in os.environ.keys():
            if k.startswith('OS_'):
                self.useFixture(fixtures.EnvironmentVariable(k))
        c.get_one(cloud='defaults', validate=False)
        self.assertEqual(['defaults'], sorted(c.get_cloud_names()))

    def test_set_one_cloud_creates_file(self):
        config_dir = fixtures.TempDir()
        self.useFixture(config_dir)
        config_path = os.path.join(config_dir.path, 'clouds.yaml')
        config.OpenStackConfig.set_one_cloud(config_path, '_test_cloud_')
        self.assertTrue(os.path.isfile(config_path))
        with open(config_path) as fh:
            self.assertEqual({'clouds': {'_test_cloud_': {}}},
                             yaml.safe_load(fh))

    def test_set_one_cloud_updates_cloud(self):
        new_config = {
            'cloud': 'new_cloud',
            'auth': {
                'password': 'newpass'
            }
        }

        resulting_cloud_config = {
            'auth': {
                'password': 'newpass',
                'username': 'testuser',
                'auth_url': 'http://example.com/v2',
            },
            'cloud': 'new_cloud',
            'profile': '_test_cloud_in_our_cloud',
            'region_name': 'test-region'
        }
        resulting_config = copy.deepcopy(base.USER_CONF)
        resulting_config['clouds']['_test-cloud_'] = resulting_cloud_config
        config.OpenStackConfig.set_one_cloud(self.cloud_yaml, '_test-cloud_',
                                             new_config)
        with open(self.cloud_yaml) as fh:
            written_config = yaml.safe_load(fh)
            # We write a cache config for testing
            written_config['cache'].pop('path', None)
            self.assertEqual(written_config, resulting_config)

    def test_get_region_no_region_default(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(cloud='_test-cloud_no_region')
        self.assertEqual(region, {'name': '', 'values': {}})

    def test_get_region_no_region(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(cloud='_test-cloud_no_region',
                               region_name='override-region')
        self.assertEqual(region, {'name': 'override-region', 'values': {}})

    def test_get_region_region_is_none(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(cloud='_test-cloud_no_region',
                               region_name=None)
        self.assertEqual(region, {'name': '', 'values': {}})

    def test_get_region_region_set(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(cloud='_test-cloud_',
                               region_name='test-region')
        self.assertEqual(region, {'name': 'test-region', 'values': {}})

    def test_get_region_many_regions_default(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(cloud='_test_cloud_regions',
                               region_name='')
        self.assertEqual(region, {'name': 'region1',
                                  'values': {'external_network':
                                             'region1-network'}})

    def test_get_region_many_regions(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(cloud='_test_cloud_regions',
                               region_name='region2')
        self.assertEqual(region, {'name': 'region2',
                                  'values': {'external_network':
                                             'my-network'}})

    def test_get_region_invalid_region(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        self.assertRaises(
            exceptions.OpenStackConfigException,
            c._get_region, cloud='_test_cloud_regions',
            region_name='invalid-region')

    def test_get_region_no_cloud(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   secure_files=[self.no_yaml])
        region = c._get_region(region_name='no-cloud-region')
        self.assertEqual(region, {'name': 'no-cloud-region', 'values': {}})


class TestExcludedFormattedConfigValue(base.TestCase):
    # verify LaunchPad bug #1635696
    #
    # get_one_cloud() and get_one_cloud_osc() iterate over config
    # values and try to expand any variables in those values by
    # calling value.format(), however some config values
    # (e.g. password) should never have format() applied to them, not
    # only might that change the password but it will also cause the
    # format() function to raise an exception if it can not parse the
    # format string. Examples would be a single brace (e.g. 'foo{'),
    # which raises a ValueError because it's looking for a matching
    # end brace, or a brace pair with a key value that cannot be found
    # (e.g. 'foo{bar}'), which raises a KeyError.
def setUp(self): super(TestExcludedFormattedConfigValue, self).setUp() self.args = dict( auth_url='http://example.com/v2', username='user', project_name='project', region_name='region2', snack_type='cookie', os_auth_token='no-good-things', ) self.options = argparse.Namespace(**self.args) def test_get_one_cloud_password_brace(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) password = 'foo{' # Would raise ValueError, single brace self.options.password = password cc = c.get_one_cloud( cloud='_test_cloud_regions', argparse=self.options, validate=False) self.assertEqual(cc.password, password) password = 'foo{bar}' # Would raise KeyError, 'bar' not found self.options.password = password cc = c.get_one_cloud( cloud='_test_cloud_regions', argparse=self.options, validate=False) self.assertEqual(cc.password, password) def test_get_one_cloud_osc_password_brace(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) password = 'foo{' # Would raise ValueError, single brace self.options.password = password cc = c.get_one_cloud_osc( cloud='_test_cloud_regions', argparse=self.options, validate=False) self.assertEqual(cc.password, password) password = 'foo{bar}' # Would raise KeyError, 'bar' not found self.options.password = password cc = c.get_one_cloud_osc( cloud='_test_cloud_regions', argparse=self.options, validate=False) self.assertEqual(cc.password, password) class TestConfigArgparse(base.TestCase): def setUp(self): super(TestConfigArgparse, self).setUp() self.args = dict( auth_url='http://example.com/v2', username='user', password='password', project_name='project', region_name='region2', snack_type='cookie', os_auth_token='no-good-things', ) self.options = argparse.Namespace(**self.args) def test_get_one_bad_region_argparse(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) self.assertRaises( exceptions.OpenStackConfigException, 
c.get_one, cloud='_test-cloud_', argparse=self.options) def test_get_one_argparse(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one( cloud='_test_cloud_regions', argparse=self.options, validate=False) self.assertEqual(cc.region_name, 'region2') self.assertEqual(cc.snack_type, 'cookie') def test_get_one_precedence(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) kwargs = { 'auth': { 'username': 'testuser', 'password': 'authpass', 'project-id': 'testproject', 'auth_url': 'http://example.com/v2', }, 'region_name': 'kwarg_region', 'password': 'ansible_password', 'arbitrary': 'value', } args = dict( auth_url='http://example.com/v2', username='user', password='argpass', project_name='project', region_name='region2', snack_type='cookie', ) options = argparse.Namespace(**args) cc = c.get_one( argparse=options, **kwargs) self.assertEqual(cc.region_name, 'region2') self.assertEqual(cc.auth['password'], 'authpass') self.assertEqual(cc.snack_type, 'cookie') def test_get_one_cloud_precedence_osc(self): c = config.OpenStackConfig( config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml], ) kwargs = { 'auth': { 'username': 'testuser', 'password': 'authpass', 'project-id': 'testproject', 'auth_url': 'http://example.com/v2', }, 'region_name': 'kwarg_region', 'password': 'ansible_password', 'arbitrary': 'value', } args = dict( auth_url='http://example.com/v2', username='user', password='argpass', project_name='project', region_name='region2', snack_type='cookie', ) options = argparse.Namespace(**args) cc = c.get_one_cloud_osc( argparse=options, **kwargs ) self.assertEqual(cc.region_name, 'region2') self.assertEqual(cc.auth['password'], 'argpass') self.assertEqual(cc.snack_type, 'cookie') def test_get_one_precedence_no_argparse(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) kwargs = { 'auth': { 'username': 
'testuser', 'password': 'authpass', 'project-id': 'testproject', 'auth_url': 'http://example.com/v2', }, 'region_name': 'kwarg_region', 'password': 'ansible_password', 'arbitrary': 'value', } cc = c.get_one(**kwargs) self.assertEqual(cc.region_name, 'kwarg_region') self.assertEqual(cc.auth['password'], 'authpass') self.assertIsNone(cc.password) def test_get_one_just_argparse(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one(argparse=self.options, validate=False) self.assertIsNone(cc.cloud) self.assertEqual(cc.region_name, 'region2') self.assertEqual(cc.snack_type, 'cookie') def test_get_one_just_kwargs(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one(validate=False, **self.args) self.assertIsNone(cc.cloud) self.assertEqual(cc.region_name, 'region2') self.assertEqual(cc.snack_type, 'cookie') def test_get_one_dash_kwargs(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) args = { 'auth-url': 'http://example.com/v2', 'username': 'user', 'password': 'password', 'project_name': 'project', 'region_name': 'other-test-region', 'snack_type': 'cookie', } cc = c.get_one(**args) self.assertIsNone(cc.cloud) self.assertEqual(cc.region_name, 'other-test-region') self.assertEqual(cc.snack_type, 'cookie') def test_get_one_no_argparse(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one(cloud='_test-cloud_', argparse=None) self._assert_cloud_details(cc) self.assertEqual(cc.region_name, 'test-region') self.assertIsNone(cc.snack_type) def test_get_one_no_argparse_regions(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one(cloud='_test_cloud_regions', argparse=None) self._assert_cloud_details(cc) self.assertEqual(cc.region_name, 'region1') self.assertIsNone(cc.snack_type) def 
test_get_one_bad_region(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) self.assertRaises( exceptions.OpenStackConfigException, c.get_one, cloud='_test_cloud_regions', region_name='bad') def test_get_one_bad_region_no_regions(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) self.assertRaises( exceptions.OpenStackConfigException, c.get_one, cloud='_test-cloud_', region_name='bad_region') def test_get_one_no_argparse_region2(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one( cloud='_test_cloud_regions', region_name='region2', argparse=None) self._assert_cloud_details(cc) self.assertEqual(cc.region_name, 'region2') self.assertIsNone(cc.snack_type) def test_get_one_network(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one( cloud='_test_cloud_regions', region_name='region1', argparse=None) self._assert_cloud_details(cc) self.assertEqual(cc.region_name, 'region1') self.assertEqual('region1-network', cc.config['external_network']) def test_get_one_per_region_network(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one( cloud='_test_cloud_regions', region_name='region2', argparse=None) self._assert_cloud_details(cc) self.assertEqual(cc.region_name, 'region2') self.assertEqual('my-network', cc.config['external_network']) def test_get_one_no_yaml_no_cloud(self): c = config.OpenStackConfig(load_yaml_config=False) self.assertRaises( exceptions.OpenStackConfigException, c.get_one, cloud='_test_cloud_regions', region_name='region2', argparse=None) def test_get_one_no_yaml(self): c = config.OpenStackConfig(load_yaml_config=False) cc = c.get_one( region_name='region2', argparse=None, **base.USER_CONF['clouds']['_test_cloud_regions']) # Not using assert_cloud_details because of cache settings 
which # are not present without the file self.assertIsInstance(cc, cloud_region.CloudRegion) self.assertTrue(extras.safe_hasattr(cc, 'auth')) self.assertIsInstance(cc.auth, dict) self.assertIsNone(cc.cloud) self.assertIn('username', cc.auth) self.assertEqual('testuser', cc.auth['username']) self.assertEqual('testpass', cc.auth['password']) self.assertFalse(cc.config['image_api_use_tasks']) self.assertTrue('project_name' in cc.auth or 'project_id' in cc.auth) if 'project_name' in cc.auth: self.assertEqual('testproject', cc.auth['project_name']) elif 'project_id' in cc.auth: self.assertEqual('testproject', cc.auth['project_id']) self.assertEqual(cc.region_name, 'region2') def test_fix_env_args(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) env_args = {'os-compute-api-version': 1} fixed_args = c._fix_args(env_args) self.assertDictEqual({'compute_api_version': 1}, fixed_args) def test_extra_config(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) defaults = {'use_hostnames': False, 'other-value': 'something'} ansible_options = c.get_extra_config('ansible', defaults) # This should show that the default for use_hostnames above is # overridden by the value in the config file defined in base.py # It should also show that other-value key is normalized and passed # through even though there is no corresponding value in the config # file, and that expand-hostvars key is normalized and the value # from the config comes through even though there is no default. 
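The comment above describes two behaviors of get_extra_config: keys are normalized from dashes to underscores, and values from the config file override caller-supplied defaults. A minimal standalone sketch of that normalize-then-merge step (the `merge_extra_config` helper below is illustrative only, not the SDK's internal implementation):

```python
# Illustrative sketch only -- `merge_extra_config` is a hypothetical
# helper mimicking the normalize-then-merge behavior described above;
# it is not the SDK's actual code.
def merge_extra_config(defaults, file_section):
    def normalize(section):
        # Dashed keys such as 'other-value' become 'other_value'.
        return {key.replace('-', '_'): value
                for key, value in section.items()}

    merged = normalize(defaults)
    # Values from the config file win over the caller's defaults.
    merged.update(normalize(file_section))
    return merged


defaults = {'use_hostnames': False, 'other-value': 'something'}
file_section = {'expand-hostvars': False, 'use_hostnames': True}

result = merge_extra_config(defaults, file_section)
assert result == {
    'use_hostnames': True,       # file value overrides the default
    'other_value': 'something',  # normalized, passed through
    'expand_hostvars': False,    # normalized, no default needed
}
```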
self.assertDictEqual( { 'expand_hostvars': False, 'use_hostnames': True, 'other_value': 'something', }, ansible_options) def test_register_argparse_cloud(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() c.register_argparse_arguments(parser, []) opts, _remain = parser.parse_known_args(['--os-cloud', 'foo']) self.assertEqual(opts.os_cloud, 'foo') def test_env_argparse_precedence(self): self.useFixture(fixtures.EnvironmentVariable( 'OS_TENANT_NAME', 'tenants-are-bad')) c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one( cloud='envvars', argparse=self.options, validate=False) self.assertEqual(cc.auth['project_name'], 'project') def test_argparse_default_no_token(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() c.register_argparse_arguments(parser, []) # novaclient will add this parser.add_argument('--os-auth-token') opts, _remain = parser.parse_known_args() cc = c.get_one( cloud='_test_cloud_regions', argparse=opts) self.assertEqual(cc.config['auth_type'], 'password') self.assertNotIn('token', cc.config['auth']) def test_argparse_token(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() c.register_argparse_arguments(parser, []) # novaclient will add this parser.add_argument('--os-auth-token') opts, _remain = parser.parse_known_args( ['--os-auth-token', 'very-bad-things', '--os-auth-type', 'token']) cc = c.get_one(argparse=opts, validate=False) self.assertEqual(cc.config['auth_type'], 'token') self.assertEqual(cc.config['auth']['token'], 'very-bad-things') def test_argparse_underscores(self): c = config.OpenStackConfig(config_files=[self.no_yaml], vendor_files=[self.no_yaml], secure_files=[self.no_yaml]) parser = argparse.ArgumentParser() 
parser.add_argument('--os_username') argv = [ '--os_username', 'user', '--os_password', 'pass', '--os-auth-url', 'auth-url', '--os-project-name', 'project'] c.register_argparse_arguments(parser, argv=argv) opts, _remain = parser.parse_known_args(argv) cc = c.get_one(argparse=opts) self.assertEqual(cc.config['auth']['username'], 'user') self.assertEqual(cc.config['auth']['password'], 'pass') self.assertEqual(cc.config['auth']['auth_url'], 'auth-url') def test_argparse_action_append_no_underscore(self): c = config.OpenStackConfig(config_files=[self.no_yaml], vendor_files=[self.no_yaml], secure_files=[self.no_yaml]) parser = argparse.ArgumentParser() parser.add_argument('--foo', action='append') argv = ['--foo', '1', '--foo', '2'] c.register_argparse_arguments(parser, argv=argv) opts, _remain = parser.parse_known_args(argv) self.assertEqual(opts.foo, ['1', '2']) def test_argparse_underscores_duplicate(self): c = config.OpenStackConfig(config_files=[self.no_yaml], vendor_files=[self.no_yaml], secure_files=[self.no_yaml]) parser = argparse.ArgumentParser() parser.add_argument('--os_username') argv = [ '--os_username', 'user', '--os_password', 'pass', '--os-username', 'user1', '--os-password', 'pass1', '--os-auth-url', 'auth-url', '--os-project-name', 'project'] self.assertRaises( exceptions.OpenStackConfigException, c.register_argparse_arguments, parser=parser, argv=argv) def test_register_argparse_bad_plugin(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() self.assertRaises( exceptions.OpenStackConfigException, c.register_argparse_arguments, parser, ['--os-auth-type', 'foo']) def test_register_argparse_not_password(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() args = [ '--os-auth-type', 'v3token', '--os-token', 'some-secret', ] c.register_argparse_arguments(parser, args) opts, _remain = 
parser.parse_known_args(args) self.assertEqual(opts.os_token, 'some-secret') def test_register_argparse_password(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() args = [ '--os-password', 'some-secret', ] c.register_argparse_arguments(parser, args) opts, _remain = parser.parse_known_args(args) self.assertEqual(opts.os_password, 'some-secret') with testtools.ExpectedException(AttributeError): opts.os_token def test_register_argparse_service_type(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() args = [ '--os-service-type', 'network', '--os-endpoint-type', 'admin', '--http-timeout', '20', ] c.register_argparse_arguments(parser, args) opts, _remain = parser.parse_known_args(args) self.assertEqual(opts.os_service_type, 'network') self.assertEqual(opts.os_endpoint_type, 'admin') self.assertEqual(opts.http_timeout, '20') with testtools.ExpectedException(AttributeError): opts.os_network_service_type cloud = c.get_one(argparse=opts, validate=False) self.assertEqual(cloud.config['service_type'], 'network') self.assertEqual(cloud.config['interface'], 'admin') self.assertEqual(cloud.config['api_timeout'], '20') self.assertNotIn('http_timeout', cloud.config) def test_register_argparse_network_service_type(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() args = [ '--os-endpoint-type', 'admin', '--network-api-version', '4', ] c.register_argparse_arguments(parser, args, ['network']) opts, _remain = parser.parse_known_args(args) self.assertEqual(opts.os_service_type, 'network') self.assertEqual(opts.os_endpoint_type, 'admin') self.assertIsNone(opts.os_network_service_type) self.assertIsNone(opts.os_network_api_version) self.assertEqual(opts.network_api_version, '4') cloud = c.get_one(argparse=opts, validate=False) 
self.assertEqual(cloud.config['service_type'], 'network') self.assertEqual(cloud.config['interface'], 'admin') self.assertEqual(cloud.config['network_api_version'], '4') self.assertNotIn('http_timeout', cloud.config) def test_register_argparse_network_service_types(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) parser = argparse.ArgumentParser() args = [ '--os-compute-service-name', 'cloudServers', '--os-network-service-type', 'badtype', '--os-endpoint-type', 'admin', '--network-api-version', '4', ] c.register_argparse_arguments( parser, args, ['compute', 'network', 'volume']) opts, _remain = parser.parse_known_args(args) self.assertEqual(opts.os_network_service_type, 'badtype') self.assertIsNone(opts.os_compute_service_type) self.assertIsNone(opts.os_volume_service_type) self.assertEqual(opts.os_service_type, 'compute') self.assertEqual(opts.os_compute_service_name, 'cloudServers') self.assertEqual(opts.os_endpoint_type, 'admin') self.assertIsNone(opts.os_network_api_version) self.assertEqual(opts.network_api_version, '4') cloud = c.get_one(argparse=opts, validate=False) self.assertEqual(cloud.config['service_type'], 'compute') self.assertEqual(cloud.config['network_service_type'], 'badtype') self.assertEqual(cloud.config['interface'], 'admin') self.assertEqual(cloud.config['network_api_version'], '4') self.assertNotIn('volume_service_type', cloud.config) self.assertNotIn('http_timeout', cloud.config) class TestConfigPrompt(base.TestCase): def setUp(self): super(TestConfigPrompt, self).setUp() self.args = dict( auth_url='http://example.com/v2', username='user', project_name='project', # region_name='region2', auth_type='password', ) self.options = argparse.Namespace(**self.args) def test_get_one_prompt(self): c = config.OpenStackConfig( config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml], pw_func=prompt_for_password, ) # This needs a cloud definition without a password. 
# If this starts failing unexpectedly check that the cloud_yaml # and/or vendor_yaml do not have a password in the selected cloud. cc = c.get_one( cloud='_test_cloud_no_vendor', argparse=self.options, ) self.assertEqual('promptpass', cc.auth['password']) class TestConfigDefault(base.TestCase): def setUp(self): super(TestConfigDefault, self).setUp() # Reset defaults after each test so that other tests are # not affected by any changes. self.addCleanup(self._reset_defaults) def _reset_defaults(self): defaults._defaults = None def test_set_no_default(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one(cloud='_test-cloud_', argparse=None) self._assert_cloud_details(cc) self.assertEqual('password', cc.auth_type) def test_set_default_before_init(self): loader.set_default('identity_api_version', '4') c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one(cloud='_test-cloud_', argparse=None) self.assertEqual('4', cc.identity_api_version) class TestBackwardsCompatibility(base.TestCase): def test_set_no_default(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'identity_endpoint_type': 'admin', 'compute_endpoint_type': 'private', 'endpoint_type': 'public', 'auth_type': 'v3password', } result = c._fix_backwards_interface(cloud) expected = { 'identity_interface': 'admin', 'compute_interface': 'private', 'interface': 'public', 'auth_type': 'v3password', } self.assertDictEqual(expected, result) def test_project_v2password(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'auth_type': 'v2password', 'auth': { 'project-name': 'my_project_name', 'project-id': 'my_project_id' } } result = c._fix_backwards_project(cloud) expected = { 'auth_type': 'v2password', 'auth': { 'tenant_name': 'my_project_name', 'tenant_id': 'my_project_id' } } 
self.assertEqual(expected, result) def test_project_password(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'auth_type': 'password', 'auth': { 'project-name': 'my_project_name', 'project-id': 'my_project_id' } } result = c._fix_backwards_project(cloud) expected = { 'auth_type': 'password', 'auth': { 'project_name': 'my_project_name', 'project_id': 'my_project_id' } } self.assertEqual(expected, result) def test_backwards_network_fail(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'external_network': 'public', 'networks': [ {'name': 'private', 'routes_externally': False}, ] } self.assertRaises( exceptions.OpenStackConfigException, c._fix_backwards_networks, cloud) def test_backwards_network(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'external_network': 'public', 'internal_network': 'private', } result = c._fix_backwards_networks(cloud) expected = { 'external_network': 'public', 'internal_network': 'private', 'networks': [ {'name': 'public', 'routes_externally': True, 'nat_destination': False, 'default_interface': True}, {'name': 'private', 'routes_externally': False, 'nat_destination': True, 'default_interface': False}, ] } self.assertEqual(expected, result) def test_normalize_network(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'networks': [ {'name': 'private'} ] } result = c._fix_backwards_networks(cloud) expected = { 'networks': [ {'name': 'private', 'routes_externally': False, 'nat_destination': False, 'default_interface': False, 'nat_source': False, 'routes_ipv4_externally': False, 'routes_ipv6_externally': False}, ] } self.assertEqual(expected, result) def test_single_default_interface(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cloud = { 'networks': [ 
                {'name': 'blue', 'default_interface': True},
                {'name': 'purple', 'default_interface': True},
            ]
        }
        self.assertRaises(
            exceptions.OpenStackConfigException,
            c._fix_backwards_networks, cloud)

openstacksdk-0.11.3/openstack/tests/unit/config/test_from_session.py

# Copyright 2018 Red Hat, Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from testscenarios import load_tests_apply_scenarios as load_tests  # noqa

import uuid

from openstack.config import cloud_region
from openstack import connection
from openstack.tests import fakes
from openstack.tests.unit import base


class TestFromSession(base.RequestsMockTestCase):

    scenarios = [
        ('no_region', dict(test_region=None)),
        ('with_region', dict(test_region='RegionOne')),
    ]

    def test_from_session(self):
        config = cloud_region.from_session(
            self.cloud.keystone_session, region_name=self.test_region)
        self.assertEqual(config.name, 'identity.example.com')
        if not self.test_region:
            self.assertIsNone(config.region_name)
        else:
            self.assertEqual(config.region_name, self.test_region)

        server_id = str(uuid.uuid4())
        server_name = self.getUniqueString('name')
        fake_server = fakes.make_fake_server(server_id, server_name)
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'compute', 'public', append=['servers', 'detail']),
                 json={'servers': [fake_server]}),
        ])

        conn = connection.Connection(config=config)
        s = next(conn.compute.servers())
        self.assertEqual(s.id, server_id)
        self.assertEqual(s.name, server_name)
        self.assert_calls()

openstacksdk-0.11.3/openstack/tests/unit/config/test_environ.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack import config
from openstack.config import cloud_region
from openstack.config import exceptions
from openstack.tests.unit.config import base

import fixtures


class TestEnviron(base.TestCase):

    def setUp(self):
        super(TestEnviron, self).setUp()
        self.useFixture(
            fixtures.EnvironmentVariable('OS_AUTH_URL', 'https://example.com'))
        self.useFixture(
            fixtures.EnvironmentVariable('OS_USERNAME', 'testuser'))
        self.useFixture(
            fixtures.EnvironmentVariable('OS_PASSWORD', 'testpass'))
        self.useFixture(
            fixtures.EnvironmentVariable('OS_PROJECT_NAME', 'testproject'))
        self.useFixture(
            fixtures.EnvironmentVariable('NOVA_PROJECT_ID', 'testnova'))

    def test_get_one(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        self.assertIsInstance(c.get_one(), cloud_region.CloudRegion)

    def test_no_fallthrough(self):
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml])
        self.assertRaises(
            exceptions.OpenStackConfigException,
            c.get_one, 'openstack')

    def test_envvar_name_override(self):
        self.useFixture(
            fixtures.EnvironmentVariable('OS_CLOUD_NAME', 'override'))
        c =
config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = c.get_one('override') self._assert_cloud_details(cc) def test_envvar_prefer_ipv6_override(self): self.useFixture( fixtures.EnvironmentVariable('OS_PREFER_IPV6', 'false')) c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml], secure_files=[self.secure_yaml]) cc = c.get_one('_test-cloud_') self.assertFalse(cc.prefer_ipv6) def test_environ_exists(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml], secure_files=[self.secure_yaml]) cc = c.get_one('envvars') self._assert_cloud_details(cc) self.assertNotIn('auth_url', cc.config) self.assertIn('auth_url', cc.config['auth']) self.assertNotIn('project_id', cc.config['auth']) self.assertNotIn('auth_url', cc.config) cc = c.get_one('_test-cloud_') self._assert_cloud_details(cc) cc = c.get_one('_test_cloud_no_vendor') self._assert_cloud_details(cc) def test_environ_prefix(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml], envvar_prefix='NOVA_', secure_files=[self.secure_yaml]) cc = c.get_one('envvars') self._assert_cloud_details(cc) self.assertNotIn('auth_url', cc.config) self.assertIn('auth_url', cc.config['auth']) self.assertIn('project_id', cc.config['auth']) self.assertNotIn('auth_url', cc.config) cc = c.get_one('_test-cloud_') self._assert_cloud_details(cc) cc = c.get_one('_test_cloud_no_vendor') self._assert_cloud_details(cc) def test_get_one_with_config_files(self): c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml], secure_files=[self.secure_yaml]) self.assertIsInstance(c.cloud_config, dict) self.assertIn('cache', c.cloud_config) self.assertIsInstance(c.cloud_config['cache'], dict) self.assertIn('max_age', c.cloud_config['cache']) self.assertIn('path', c.cloud_config['cache']) cc = c.get_one('_test-cloud_') self._assert_cloud_details(cc) cc = 
c.get_one('_test_cloud_no_vendor') self._assert_cloud_details(cc) def test_config_file_override(self): self.useFixture( fixtures.EnvironmentVariable( 'OS_CLIENT_CONFIG_FILE', self.cloud_yaml)) c = config.OpenStackConfig(config_files=[], vendor_files=[self.vendor_yaml]) cc = c.get_one('_test-cloud_') self._assert_cloud_details(cc) class TestEnvvars(base.TestCase): def test_no_envvars(self): self.useFixture( fixtures.EnvironmentVariable('NOVA_USERNAME', 'nova')) c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) self.assertRaises( exceptions.OpenStackConfigException, c.get_one, 'envvars') def test_test_envvars(self): self.useFixture( fixtures.EnvironmentVariable('NOVA_USERNAME', 'nova')) self.useFixture( fixtures.EnvironmentVariable('OS_STDERR_CAPTURE', 'True')) c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) self.assertRaises( exceptions.OpenStackConfigException, c.get_one, 'envvars') def test_incomplete_envvars(self): self.useFixture( fixtures.EnvironmentVariable('NOVA_USERNAME', 'nova')) self.useFixture( fixtures.EnvironmentVariable('OS_USERNAME', 'user')) config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) # This is broken due to an issue that's fixed in a subsequent patch # commenting it out in this patch to keep the patch size reasonable # self.assertRaises( # keystoneauth1.exceptions.auth_plugins.MissingRequiredOptions, # c.get_one, 'envvars') def test_have_envvars(self): self.useFixture( fixtures.EnvironmentVariable('NOVA_USERNAME', 'nova')) self.useFixture( fixtures.EnvironmentVariable('OS_AUTH_URL', 'http://example.com')) self.useFixture( fixtures.EnvironmentVariable('OS_USERNAME', 'user')) self.useFixture( fixtures.EnvironmentVariable('OS_PASSWORD', 'password')) self.useFixture( fixtures.EnvironmentVariable('OS_PROJECT_NAME', 'project')) c = config.OpenStackConfig(config_files=[self.cloud_yaml], vendor_files=[self.vendor_yaml]) cc = 
c.get_one('envvars')
        self.assertEqual(cc.config['auth']['username'], 'user')

    def test_old_envvars(self):
        self.useFixture(
            fixtures.EnvironmentVariable('NOVA_USERNAME', 'nova'))
        self.useFixture(
            fixtures.EnvironmentVariable(
                'NOVA_AUTH_URL', 'http://example.com'))
        self.useFixture(
            fixtures.EnvironmentVariable('NOVA_PASSWORD', 'password'))
        self.useFixture(
            fixtures.EnvironmentVariable('NOVA_PROJECT_NAME', 'project'))
        c = config.OpenStackConfig(config_files=[self.cloud_yaml],
                                   vendor_files=[self.vendor_yaml],
                                   envvar_prefix='NOVA_')
        cc = c.get_one('envvars')
        self.assertEqual(cc.config['auth']['username'], 'nova')

openstacksdk-0.11.3/openstack/tests/unit/config/test_init.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
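The TestEnvvars cases above exercise how OS_- and NOVA_-prefixed environment variables are gathered into the 'envvars' cloud config. A standalone sketch of that collection step, with a hypothetical collect_envvars helper standing in for the SDK's internals:

```python
# Hypothetical helper mimicking the envvar collection the tests above
# exercise: variables carrying the given prefix are stripped of it and
# lower-cased into config keys. Not the SDK's actual implementation.
def collect_envvars(environ, prefix='OS_'):
    return {
        key[len(prefix):].lower(): value
        for key, value in environ.items()
        if key.startswith(prefix)
    }


env = {
    'OS_AUTH_URL': 'http://example.com',
    'OS_USERNAME': 'user',
    'OS_PASSWORD': 'password',
    'NOVA_USERNAME': 'nova',  # picked up only with envvar_prefix='NOVA_'
}

assert collect_envvars(env) == {
    'auth_url': 'http://example.com',
    'username': 'user',
    'password': 'password',
}
assert collect_envvars(env, prefix='NOVA_') == {'username': 'nova'}
```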
import argparse

import openstack.config
from openstack.tests.unit.config import base


class TestInit(base.TestCase):

    def test_get_cloud_region_without_arg_parser(self):
        cloud_region = openstack.config.get_cloud_region(
            options=None, validate=False)
        self.assertIsInstance(
            cloud_region,
            openstack.config.cloud_region.CloudRegion
        )

    def test_get_cloud_region_with_arg_parser(self):
        cloud_region = openstack.config.get_cloud_region(
            options=argparse.ArgumentParser(),
            validate=False)
        self.assertIsInstance(
            cloud_region,
            openstack.config.cloud_region.CloudRegion
        )

openstacksdk-0.11.3/openstack/tests/unit/config/test_json.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
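TestInit above hands the SDK a plain argparse.ArgumentParser. The register/parse pattern used throughout these tests relies on parse_known_args, so options the SDK did not register pass through to the caller untouched; a minimal stdlib-only sketch of that pattern:

```python
import argparse

# Stdlib-only sketch of the parse_known_args pattern these tests rely
# on: the SDK registers its options on a caller-owned parser, and any
# flags it did not register are returned for the caller to handle.
parser = argparse.ArgumentParser()
parser.add_argument('--os-cloud')  # one of the options the SDK registers

opts, remaining = parser.parse_known_args(
    ['--os-cloud', 'foo', '--unrelated-flag', 'bar'])

assert opts.os_cloud == 'foo'
assert remaining == ['--unrelated-flag', 'bar']
```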
import glob
import json
import os

import jsonschema
from testtools import content

from openstack.config import defaults
from openstack.tests.unit.config import base


class TestConfig(base.TestCase):

    def json_diagnostics(self, exc_info):
        self.addDetail('filename', content.text_content(self.filename))
        for error in sorted(self.validator.iter_errors(self.json_data)):
            self.addDetail('jsonschema', content.text_content(str(error)))

    def test_defaults_valid_json(self):
        _schema_path = os.path.join(
            os.path.dirname(os.path.realpath(defaults.__file__)),
            'schema.json')
        schema = json.load(open(_schema_path, 'r'))
        self.validator = jsonschema.Draft4Validator(schema)
        self.addOnException(self.json_diagnostics)
        self.filename = os.path.join(
            os.path.dirname(os.path.realpath(defaults.__file__)),
            'defaults.json')
        self.json_data = json.load(open(self.filename, 'r'))
        self.assertTrue(self.validator.is_valid(self.json_data))

    def test_vendors_valid_json(self):
        _schema_path = os.path.join(
            os.path.dirname(os.path.realpath(defaults.__file__)),
            'vendor-schema.json')
        schema = json.load(open(_schema_path, 'r'))
        self.validator = jsonschema.Draft4Validator(schema)
        self.addOnException(self.json_diagnostics)
        _vendors_path = os.path.join(
            os.path.dirname(os.path.realpath(defaults.__file__)),
            'vendors')
        for self.filename in glob.glob(os.path.join(_vendors_path, '*.json')):
            self.json_data = json.load(open(self.filename, 'r'))
            self.assertTrue(self.validator.is_valid(self.json_data))

openstacksdk-0.11.3/openstack/tests/unit/config/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/config/base.py

# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(shade) Shift to using new combined base unit test class import copy import os import tempfile from openstack.config import cloud_region import extras import fixtures from oslotest import base import yaml VENDOR_CONF = { 'public-clouds': { '_test_cloud_in_our_cloud': { 'auth': { 'auth_url': 'http://example.com/v2', 'username': 'testotheruser', 'project_name': 'testproject', }, }, } } USER_CONF = { 'cache': { 'max_age': '1', 'expiration': { 'server': 5, 'image': '7', }, }, 'client': { 'force_ipv4': True, }, 'clouds': { '_test-cloud_': { 'profile': '_test_cloud_in_our_cloud', 'auth': { 'auth_url': 'http://example.com/v2', 'username': 'testuser', 'password': 'testpass', }, 'region_name': 'test-region', }, '_test_cloud_no_vendor': { 'profile': '_test_non_existant_cloud', 'auth': { 'auth_url': 'http://example.com/v2', 'username': 'testuser', 'project_name': 'testproject', }, 'region-name': 'test-region', }, '_test-cloud-int-project_': { 'auth': { 'username': 'testuser', 'password': 'testpass', 'domain_id': 'awesome-domain', 'project_id': 12345, 'auth_url': 'http://example.com/v2', }, 'region_name': 'test-region', }, '_test-cloud-domain-id_': { 'auth': { 'username': 'testuser', 'password': 'testpass', 'project_id': 12345, 'auth_url': 'http://example.com/v2', 'domain_id': '6789', 'project_domain_id': '123456789', }, 'region_name': 'test-region', }, '_test-cloud-networks_': { 'auth': { 'username': 'testuser', 'password': 'testpass', 
'project_id': 12345, 'auth_url': 'http://example.com/v2', 'domain_id': '6789', 'project_domain_id': '123456789', }, 'networks': [{ 'name': 'a-public', 'routes_externally': True, 'nat_source': True, }, { 'name': 'another-public', 'routes_externally': True, 'default_interface': True, }, { 'name': 'a-private', 'routes_externally': False, }, { 'name': 'another-private', 'routes_externally': False, 'nat_destination': True, }, { 'name': 'split-default', 'routes_externally': True, 'routes_ipv4_externally': False, }, { 'name': 'split-no-default', 'routes_ipv6_externally': False, 'routes_ipv4_externally': True, }], 'region_name': 'test-region', }, '_test_cloud_regions': { 'auth': { 'username': 'testuser', 'password': 'testpass', 'project-id': 'testproject', 'auth_url': 'http://example.com/v2', }, 'regions': [ { 'name': 'region1', 'values': { 'external_network': 'region1-network', } }, { 'name': 'region2', 'values': { 'external_network': 'my-network', } } ], }, '_test_cloud_hyphenated': { 'auth': { 'username': 'testuser', 'password': 'testpass', 'project-id': '12345', 'auth_url': 'http://example.com/v2', }, 'region_name': 'test-region', }, '_test-cloud_no_region': { 'profile': '_test_cloud_in_our_cloud', 'auth': { 'auth_url': 'http://example.com/v2', 'username': 'testuser', 'password': 'testpass', }, }, '_test-cloud-domain-scoped_': { 'auth': { 'auth_url': 'http://example.com/v2', 'username': 'testuser', 'password': 'testpass', 'domain-id': '12345', }, }, }, 'ansible': { 'expand-hostvars': False, 'use_hostnames': True, }, } SECURE_CONF = { 'clouds': { '_test_cloud_no_vendor': { 'auth': { 'password': 'testpass', }, } } } NO_CONF = { 'cache': {'max_age': 1}, } def _write_yaml(obj): # Assume NestedTempfile so we don't have to cleanup with tempfile.NamedTemporaryFile(delete=False) as obj_yaml: obj_yaml.write(yaml.safe_dump(obj).encode('utf-8')) return obj_yaml.name class TestCase(base.BaseTestCase): """Test case base class for all unit tests.""" def setUp(self): super(TestCase, 
self).setUp()
        self.useFixture(fixtures.NestedTempfile())
        conf = copy.deepcopy(USER_CONF)
        tdir = self.useFixture(fixtures.TempDir())
        conf['cache']['path'] = tdir.path
        self.cloud_yaml = _write_yaml(conf)
        self.secure_yaml = _write_yaml(SECURE_CONF)
        self.vendor_yaml = _write_yaml(VENDOR_CONF)
        self.no_yaml = _write_yaml(NO_CONF)

        # Isolate the test runs from the environment
        # Do this as two loops because you can't modify the dict in a loop
        # over the dict in 3.4
        keys_to_isolate = []
        for env in os.environ.keys():
            if env.startswith('OS_'):
                keys_to_isolate.append(env)
        for env in keys_to_isolate:
            self.useFixture(fixtures.EnvironmentVariable(env))

    def _assert_cloud_details(self, cc):
        self.assertIsInstance(cc, cloud_region.CloudRegion)
        self.assertTrue(extras.safe_hasattr(cc, 'auth'))
        self.assertIsInstance(cc.auth, dict)
        self.assertIsNone(cc.cloud)
        self.assertIn('username', cc.auth)
        self.assertEqual('testuser', cc.auth['username'])
        self.assertEqual('testpass', cc.auth['password'])
        self.assertFalse(cc.config['image_api_use_tasks'])
        self.assertTrue('project_name' in cc.auth or 'project_id' in cc.auth)
        if 'project_name' in cc.auth:
            self.assertEqual('testproject', cc.auth['project_name'])
        elif 'project_id' in cc.auth:
            self.assertEqual('testproject', cc.auth['project_id'])
        self.assertEqual(cc.get_cache_expiration_time(), 1)
        self.assertEqual(cc.get_cache_resource_expiration('server'), 5.0)
        self.assertEqual(cc.get_cache_resource_expiration('image'), 7.0)


openstacksdk-0.11.3/openstack/tests/unit/config/test_cloud_config.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from keystoneauth1 import exceptions as ksa_exceptions
from keystoneauth1 import session as ksa_session
import mock

from openstack import version as openstack_version
from openstack.config import cloud_region
from openstack.config import defaults
from openstack.config import exceptions
from openstack.tests.unit.config import base


fake_config_dict = {'a': 1, 'os_b': 2, 'c': 3, 'os_c': 4}
fake_services_dict = {
    'compute_api_version': '2',
    'compute_endpoint_override': 'http://compute.example.com',
    'telemetry_endpoint': 'http://telemetry.example.com',
    'interface': 'public',
    'image_service_type': 'mage',
    'identity_interface': 'admin',
    'identity_service_name': 'locks',
    'volume_api_version': '1',
    'auth': {'password': 'hunter2', 'username': 'AzureDiamond'},
}


class TestCloudRegion(base.TestCase):

    def test_arbitrary_attributes(self):
        cc = cloud_region.CloudRegion("test1", "region-al", fake_config_dict)
        self.assertEqual("test1", cc.name)
        self.assertEqual("region-al", cc.region_name)

        # Look up straight value
        self.assertEqual(1, cc.a)

        # Look up prefixed attribute, fail - returns None
        self.assertIsNone(cc.os_b)

        # Look up straight value, then prefixed value
        self.assertEqual(3, cc.c)
        self.assertEqual(3, cc.os_c)

        # Lookup mystery attribute
        self.assertIsNone(cc.x)

        # Test default ipv6
        self.assertFalse(cc.force_ipv4)

    def test_iteration(self):
        cc = cloud_region.CloudRegion("test1", "region-al", fake_config_dict)
        self.assertTrue('a' in cc)
        self.assertFalse('x' in cc)

    def test_equality(self):
        cc1 = cloud_region.CloudRegion("test1", "region-al", fake_config_dict)
        cc2 = cloud_region.CloudRegion("test1", "region-al", fake_config_dict)
        self.assertEqual(cc1, cc2)

    def test_inequality(self):
        cc1 = cloud_region.CloudRegion("test1", "region-al", fake_config_dict)

        cc2 = cloud_region.CloudRegion("test2", "region-al", fake_config_dict)
        self.assertNotEqual(cc1, cc2)

        cc2 = cloud_region.CloudRegion("test1", "region-xx", fake_config_dict)
        self.assertNotEqual(cc1, cc2)

        cc2 = cloud_region.CloudRegion("test1", "region-al", {})
        self.assertNotEqual(cc1, cc2)

    def test_verify(self):
        config_dict = copy.deepcopy(fake_config_dict)
        config_dict['cacert'] = None

        config_dict['verify'] = False
        cc = cloud_region.CloudRegion("test1", "region-xx", config_dict)
        (verify, cert) = cc.get_requests_verify_args()
        self.assertFalse(verify)

        config_dict['verify'] = True
        cc = cloud_region.CloudRegion("test1", "region-xx", config_dict)
        (verify, cert) = cc.get_requests_verify_args()
        self.assertTrue(verify)

    def test_verify_cacert(self):
        config_dict = copy.deepcopy(fake_config_dict)
        config_dict['cacert'] = "certfile"

        config_dict['verify'] = False
        cc = cloud_region.CloudRegion("test1", "region-xx", config_dict)
        (verify, cert) = cc.get_requests_verify_args()
        self.assertFalse(verify)

        config_dict['verify'] = True
        cc = cloud_region.CloudRegion("test1", "region-xx", config_dict)
        (verify, cert) = cc.get_requests_verify_args()
        self.assertEqual("certfile", verify)

    def test_cert_with_key(self):
        config_dict = copy.deepcopy(fake_config_dict)
        config_dict['cacert'] = None
        config_dict['verify'] = False

        config_dict['cert'] = 'cert'
        config_dict['key'] = 'key'

        cc = cloud_region.CloudRegion("test1", "region-xx", config_dict)
        (verify, cert) = cc.get_requests_verify_args()
        self.assertEqual(("cert", "key"), cert)

    def test_ipv6(self):
        cc = cloud_region.CloudRegion(
            "test1", "region-al", fake_config_dict, force_ipv4=True)
        self.assertTrue(cc.force_ipv4)

    def test_getters(self):
        cc = cloud_region.CloudRegion("test1", "region-al", fake_services_dict)

        self.assertEqual(['compute', 'identity', 'image', 'volume'],
                         sorted(cc.get_services()))
        self.assertEqual({'password': 'hunter2', 'username': 'AzureDiamond'},
                         cc.get_auth_args())
        self.assertEqual('public', cc.get_interface())
        self.assertEqual('public', cc.get_interface('compute'))
        self.assertEqual('admin', cc.get_interface('identity'))
        self.assertEqual('region-al', cc.region_name)
        self.assertIsNone(cc.get_api_version('image'))
        self.assertEqual('2', cc.get_api_version('compute'))
        self.assertEqual('mage', cc.get_service_type('image'))
        self.assertEqual('compute', cc.get_service_type('compute'))
        self.assertEqual('1', cc.get_api_version('volume'))
        self.assertEqual('volume', cc.get_service_type('volume'))
        self.assertEqual('http://compute.example.com',
                         cc.get_endpoint('compute'))
        self.assertIsNone(cc.get_endpoint('image'))
        self.assertIsNone(cc.get_service_name('compute'))
        self.assertEqual('locks', cc.get_service_name('identity'))

    def test_volume_override(self):
        cc = cloud_region.CloudRegion("test1", "region-al", fake_services_dict)
        cc.config['volume_api_version'] = '2'
        self.assertEqual('volumev2', cc.get_service_type('volume'))

    def test_volume_override_v3(self):
        cc = cloud_region.CloudRegion("test1", "region-al", fake_services_dict)
        cc.config['volume_api_version'] = '3'
        self.assertEqual('volumev3', cc.get_service_type('volume'))

    def test_workflow_override_v2(self):
        cc = cloud_region.CloudRegion("test1", "region-al", fake_services_dict)
        cc.config['workflow_api_version'] = '2'
        self.assertEqual('workflowv2', cc.get_service_type('workflow'))

    def test_no_override(self):
        """Test no override happens when defaults are not configured."""
        cc = cloud_region.CloudRegion("test1", "region-al", fake_services_dict)
        self.assertEqual('volume', cc.get_service_type('volume'))
        self.assertEqual('workflow', cc.get_service_type('workflow'))
        self.assertEqual('not-exist', cc.get_service_type('not-exist'))

    def test_get_session_no_auth(self):
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        cc = cloud_region.CloudRegion("test1", "region-al", config_dict)
        self.assertRaises(
            exceptions.OpenStackConfigException,
            cc.get_session)

    @mock.patch.object(ksa_session, 'Session')
    def test_get_session(self, mock_session):
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        fake_session = mock.Mock()
        fake_session.additional_user_agent = []
        mock_session.return_value = fake_session
        cc = cloud_region.CloudRegion(
            "test1", "region-al", config_dict, auth_plugin=mock.Mock())
        cc.get_session()
        mock_session.assert_called_with(
            auth=mock.ANY, verify=True, cert=None, timeout=None)
        self.assertEqual(
            fake_session.additional_user_agent,
            [('openstacksdk', openstack_version.__version__)])

    @mock.patch.object(ksa_session, 'Session')
    def test_get_session_with_app_name(self, mock_session):
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        fake_session = mock.Mock()
        fake_session.additional_user_agent = []
        fake_session.app_name = None
        fake_session.app_version = None
        mock_session.return_value = fake_session
        cc = cloud_region.CloudRegion(
            "test1", "region-al", config_dict, auth_plugin=mock.Mock(),
            app_name="test_app", app_version="test_version")
        cc.get_session()
        mock_session.assert_called_with(
            auth=mock.ANY, verify=True, cert=None, timeout=None)
        self.assertEqual(fake_session.app_name, "test_app")
        self.assertEqual(fake_session.app_version, "test_version")
        self.assertEqual(
            fake_session.additional_user_agent,
            [('openstacksdk', openstack_version.__version__)])

    @mock.patch.object(ksa_session, 'Session')
    def test_get_session_with_timeout(self, mock_session):
        fake_session = mock.Mock()
        fake_session.additional_user_agent = []
        mock_session.return_value = fake_session
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        config_dict['api_timeout'] = 9
        cc = cloud_region.CloudRegion(
            "test1", "region-al", config_dict, auth_plugin=mock.Mock())
        cc.get_session()
        mock_session.assert_called_with(
            auth=mock.ANY, verify=True, cert=None, timeout=9)
        self.assertEqual(
            fake_session.additional_user_agent,
            [('openstacksdk', openstack_version.__version__)])

    @mock.patch.object(ksa_session, 'Session')
    def test_override_session_endpoint_override(self, mock_session):
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        cc = cloud_region.CloudRegion(
            "test1", "region-al", config_dict, auth_plugin=mock.Mock())
        self.assertEqual(
            cc.get_session_endpoint('compute'),
            fake_services_dict['compute_endpoint_override'])

    @mock.patch.object(ksa_session, 'Session')
    def test_override_session_endpoint(self, mock_session):
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        cc = cloud_region.CloudRegion(
            "test1", "region-al", config_dict, auth_plugin=mock.Mock())
        self.assertEqual(
            cc.get_session_endpoint('telemetry'),
            fake_services_dict['telemetry_endpoint'])

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    def test_session_endpoint(self, mock_get_session):
        mock_session = mock.Mock()
        mock_get_session.return_value = mock_session
        config_dict = defaults.get_defaults()
        config_dict.update(fake_services_dict)
        cc = cloud_region.CloudRegion(
            "test1", "region-al", config_dict, auth_plugin=mock.Mock())
        cc.get_session_endpoint('orchestration')
        mock_session.get_endpoint.assert_called_with(
            interface='public',
            service_name=None,
            region_name='region-al',
            service_type='orchestration')

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    def test_session_endpoint_not_found(self, mock_get_session):
        exc_to_raise = ksa_exceptions.catalog.EndpointNotFound
        mock_get_session.return_value.get_endpoint.side_effect = exc_to_raise
        cc = cloud_region.CloudRegion(
            "test1", "region-al", {}, auth_plugin=mock.Mock())
        self.assertIsNone(cc.get_session_endpoint('notfound'))


openstacksdk-0.11.3/openstack/tests/unit/base.py

# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development
# Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import time
import uuid

import fixtures
import mock
import os
import openstack.config as occ
from requests import structures
from requests_mock.contrib import fixture as rm_fixture
from six.moves import urllib
import tempfile

import openstack.cloud
import openstack.connection
from openstack.tests import base


_ProjectData = collections.namedtuple(
    'ProjectData',
    'project_id, project_name, enabled, domain_id, description, '
    'json_response, json_request')

_UserData = collections.namedtuple(
    'UserData',
    'user_id, password, name, email, description, domain_id, enabled, '
    'json_response, json_request')

_GroupData = collections.namedtuple(
    'GroupData',
    'group_id, group_name, domain_id, description, json_response, '
    'json_request')

_DomainData = collections.namedtuple(
    'DomainData',
    'domain_id, domain_name, description, json_response, '
    'json_request')

_ServiceData = collections.namedtuple(
    'Servicedata',
    'service_id, service_name, service_type, description, enabled, '
    'json_response_v3, json_response_v2, json_request')

_EndpointDataV3 = collections.namedtuple(
    'EndpointData',
    'endpoint_id, service_id, interface, region, url, enabled, '
    'json_response, json_request')

_EndpointDataV2 = collections.namedtuple(
    'EndpointData',
    'endpoint_id, service_id, region, public_url, internal_url, '
    'admin_url, v3_endpoint_list, json_response, '
    'json_request')

# NOTE(notmorgan): Shade does not support domain-specific roles
# This should eventually be fixed if it becomes a main-stream feature.
_RoleData = collections.namedtuple(
    'RoleData',
    'role_id, role_name, json_response, json_request')


class BaseTestCase(base.TestCase):

    def setUp(self, cloud_config_fixture='clouds.yaml'):
        """Run before each test method to initialize test environment."""
        super(BaseTestCase, self).setUp()

        # Sleeps are for real testing, but unit tests shouldn't need them
        realsleep = time.sleep

        def _nosleep(seconds):
            return realsleep(seconds * 0.0001)

        self.sleep_fixture = self.useFixture(fixtures.MonkeyPatch(
            'time.sleep',
            _nosleep))
        self.fixtures_directory = 'openstack/tests/unit/fixtures'

        # Isolate os-client-config from test environment
        config = tempfile.NamedTemporaryFile(delete=False)
        cloud_path = '%s/clouds/%s' % (self.fixtures_directory,
                                       cloud_config_fixture)
        with open(cloud_path, 'rb') as f:
            content = f.read()
            config.write(content)
        config.close()

        vendor = tempfile.NamedTemporaryFile(delete=False)
        vendor.write(b'{}')
        vendor.close()

        test_cloud = os.environ.get('OPENSTACKSDK_OS_CLOUD', '_test_cloud_')
        self.config = occ.OpenStackConfig(
            config_files=[config.name],
            vendor_files=[vendor.name],
            secure_files=['non-existant'])
        self.cloud_config = self.config.get_one(
            cloud=test_cloud, validate=False)
        self.cloud = openstack.cloud.OpenStackCloud(
            cloud_config=self.cloud_config)
        self.strict_cloud = openstack.cloud.OpenStackCloud(
            cloud_config=self.cloud_config,
            strict=True)


# TODO(shade) Remove this and rename RequestsMockTestCase to TestCase.
# There are still a few places, like test_normalize, that assume
# this mocking is in place rather than having the correct
# requests_mock entries set up that need to be converted.
class TestCase(BaseTestCase):

    def setUp(self, cloud_config_fixture='clouds.yaml'):
        super(TestCase, self).setUp(cloud_config_fixture=cloud_config_fixture)
        self.session_fixture = self.useFixture(fixtures.MonkeyPatch(
            'openstack.config.cloud_region.CloudRegion.get_session',
            mock.Mock()))


class RequestsMockTestCase(BaseTestCase):

    def setUp(self, cloud_config_fixture='clouds.yaml'):
        super(RequestsMockTestCase, self).setUp(
            cloud_config_fixture=cloud_config_fixture)

        # FIXME(notmorgan): Convert the uri_registry, discovery.json, and
        # use of keystone_v3/v2 to a proper fixtures.Fixture. For now this
        # is acceptable, but eventually this should become its own fixture
        # that encapsulates the registry, registering the URIs, and
        # assert_calls (and calling assert_calls every test case that uses
        # it on cleanup). Subclassing here could be 100% eliminated in the
        # future allowing any class to simply
        # self.useFixture(openstack.cloud.RequestsMockFixture) and get all
        # the benefits.

        # NOTE(notmorgan): use an ordered dict here to ensure we preserve the
        # order in which items are added to the uri_registry. This makes
        # the behavior more consistent when dealing with ensuring the
        # requests_mock uri/query_string matchers are ordered and parse the
        # request in the correct orders.
self._uri_registry = collections.OrderedDict() self.discovery_json = os.path.join( self.fixtures_directory, 'discovery.json') self.use_keystone_v3() self.__register_uris_called = False # TODO(shade) Update this to handle service type aliases def get_mock_url(self, service_type, interface='public', resource=None, append=None, base_url_append=None, qs_elements=None): endpoint_url = self.cloud.endpoint_for( service_type=service_type, interface=interface) # Strip trailing slashes, so as not to produce double-slashes below if endpoint_url.endswith('/'): endpoint_url = endpoint_url[:-1] to_join = [endpoint_url] qs = '' if base_url_append: to_join.append(base_url_append) if resource: to_join.append(resource) to_join.extend(append or []) if qs_elements is not None: qs = '?%s' % '&'.join(qs_elements) return '%(uri)s%(qs)s' % {'uri': '/'.join(to_join), 'qs': qs} def mock_for_keystone_projects(self, project=None, v3=True, list_get=False, id_get=False, project_list=None, project_count=None): if project: assert not (project_list or project_count) elif project_list: assert not (project or project_count) elif project_count: assert not (project or project_list) else: raise Exception('Must specify a project, project_list, ' 'or project_count') assert list_get or id_get base_url_append = 'v3' if v3 else None if project: project_list = [project] elif project_count: # Generate multiple projects project_list = [self._get_project_data(v3=v3) for c in range(0, project_count)] uri_mock_list = [] if list_get: uri_mock_list.append( dict(method='GET', uri=self.get_mock_url( service_type='identity', interface='admin', resource='projects', base_url_append=base_url_append), status_code=200, json={'projects': [p.json_response['project'] for p in project_list]}) ) if id_get: for p in project_list: uri_mock_list.append( dict(method='GET', uri=self.get_mock_url( service_type='identity', interface='admin', resource='projects', append=[p.project_id], base_url_append=base_url_append), status_code=200, 
json=p.json_response) ) self.__do_register_uris(uri_mock_list) return project_list def _get_project_data(self, project_name=None, enabled=None, domain_id=None, description=None, v3=True, project_id=None): project_name = project_name or self.getUniqueString('projectName') project_id = uuid.UUID(project_id or uuid.uuid4().hex).hex response = {'id': project_id, 'name': project_name} request = {'name': project_name} domain_id = (domain_id or uuid.uuid4().hex) if v3 else None if domain_id: request['domain_id'] = domain_id response['domain_id'] = domain_id if enabled is not None: enabled = bool(enabled) response['enabled'] = enabled request['enabled'] = enabled response.setdefault('enabled', True) request.setdefault('enabled', True) if description: response['description'] = description request['description'] = description request.setdefault('description', None) if v3: project_key = 'project' else: project_key = 'tenant' return _ProjectData(project_id, project_name, enabled, domain_id, description, {project_key: response}, {project_key: request}) def _get_group_data(self, name=None, domain_id=None, description=None): group_id = uuid.uuid4().hex name = name or self.getUniqueString('groupname') domain_id = uuid.UUID(domain_id or uuid.uuid4().hex).hex response = {'id': group_id, 'name': name, 'domain_id': domain_id} request = {'name': name, 'domain_id': domain_id} if description is not None: response['description'] = description request['description'] = description return _GroupData(group_id, name, domain_id, description, {'group': response}, {'group': request}) def _get_user_data(self, name=None, password=None, **kwargs): name = name or self.getUniqueString('username') password = password or self.getUniqueString('user_password') user_id = uuid.uuid4().hex response = {'name': name, 'id': user_id} request = {'name': name, 'password': password} if kwargs.get('domain_id'): kwargs['domain_id'] = uuid.UUID(kwargs['domain_id']).hex response['domain_id'] = kwargs.pop('domain_id') 
request['domain_id'] = response['domain_id'] response['email'] = kwargs.pop('email', None) request['email'] = response['email'] response['enabled'] = kwargs.pop('enabled', True) request['enabled'] = response['enabled'] response['description'] = kwargs.pop('description', None) if response['description']: request['description'] = response['description'] self.assertIs(0, len(kwargs), message='extra key-word args received ' 'on _get_user_data') return _UserData(user_id, password, name, response['email'], response['description'], response.get('domain_id'), response.get('enabled'), {'user': response}, {'user': request}) def _get_domain_data(self, domain_name=None, description=None, enabled=None): domain_id = uuid.uuid4().hex domain_name = domain_name or self.getUniqueString('domainName') response = {'id': domain_id, 'name': domain_name} request = {'name': domain_name} if enabled is not None: request['enabled'] = bool(enabled) response['enabled'] = bool(enabled) if description: response['description'] = description request['description'] = description response.setdefault('enabled', True) return _DomainData(domain_id, domain_name, description, {'domain': response}, {'domain': request}) def _get_service_data(self, type=None, name=None, description=None, enabled=True): service_id = uuid.uuid4().hex name = name or uuid.uuid4().hex type = type or uuid.uuid4().hex response = {'id': service_id, 'name': name, 'type': type, 'enabled': enabled} if description is not None: response['description'] = description request = response.copy() request.pop('id') return _ServiceData(service_id, name, type, description, enabled, {'service': response}, {'OS-KSADM:service': response}, request) def _get_endpoint_v3_data(self, service_id=None, region=None, url=None, interface=None, enabled=True): endpoint_id = uuid.uuid4().hex service_id = service_id or uuid.uuid4().hex region = region or uuid.uuid4().hex url = url or 'https://example.com/' interface = interface or uuid.uuid4().hex response = 
{'id': endpoint_id, 'service_id': service_id, 'region': region, 'interface': interface, 'url': url, 'enabled': enabled} request = response.copy() request.pop('id') response['region_id'] = response['region'] return _EndpointDataV3(endpoint_id, service_id, interface, region, url, enabled, {'endpoint': response}, {'endpoint': request}) def _get_endpoint_v2_data(self, service_id=None, region=None, public_url=None, admin_url=None, internal_url=None): endpoint_id = uuid.uuid4().hex service_id = service_id or uuid.uuid4().hex region = region or uuid.uuid4().hex response = {'id': endpoint_id, 'service_id': service_id, 'region': region} v3_endpoints = {} request = response.copy() request.pop('id') if admin_url: response['adminURL'] = admin_url v3_endpoints['admin'] = self._get_endpoint_v3_data( service_id, region, public_url, interface='admin') if internal_url: response['internalURL'] = internal_url v3_endpoints['internal'] = self._get_endpoint_v3_data( service_id, region, internal_url, interface='internal') if public_url: response['publicURL'] = public_url v3_endpoints['public'] = self._get_endpoint_v3_data( service_id, region, public_url, interface='public') request = response.copy() request.pop('id') for u in ('publicURL', 'internalURL', 'adminURL'): if request.get(u): request[u.lower()] = request.pop(u) return _EndpointDataV2(endpoint_id, service_id, region, public_url, internal_url, admin_url, v3_endpoints, {'endpoint': response}, {'endpoint': request}) def _get_role_data(self, role_name=None): role_id = uuid.uuid4().hex role_name = role_name or uuid.uuid4().hex request = {'name': role_name} response = request.copy() response['id'] = role_id return _RoleData(role_id, role_name, {'role': response}, {'role': request}) def use_broken_keystone(self): self.adapter = self.useFixture(rm_fixture.Fixture()) self.calls = [] self._uri_registry.clear() self.__do_register_uris([ dict(method='GET', uri='https://identity.example.com/', text=open(self.discovery_json, 'r').read()), 
dict(method='POST', uri='https://identity.example.com/v3/auth/tokens', status_code=400), ]) self._make_test_cloud(identity_api_version='3') def use_nothing(self): self.calls = [] self._uri_registry.clear() def use_keystone_v3(self, catalog='catalog-v3.json'): self.adapter = self.useFixture(rm_fixture.Fixture()) self.calls = [] self._uri_registry.clear() self.__do_register_uris([ dict(method='GET', uri='https://identity.example.com/', text=open(self.discovery_json, 'r').read()), dict(method='POST', uri='https://identity.example.com/v3/auth/tokens', headers={ 'X-Subject-Token': self.getUniqueString('KeystoneToken')}, text=open(os.path.join( self.fixtures_directory, catalog), 'r').read() ), ]) self._make_test_cloud(identity_api_version='3') def use_keystone_v2(self): self.adapter = self.useFixture(rm_fixture.Fixture()) self.calls = [] self._uri_registry.clear() self.__do_register_uris([ dict(method='GET', uri='https://identity.example.com/', text=open(self.discovery_json, 'r').read()), dict(method='POST', uri='https://identity.example.com/v2.0/tokens', text=open(os.path.join( self.fixtures_directory, 'catalog-v2.json'), 'r').read() ), ]) self._make_test_cloud(cloud_name='_test_cloud_v2_', identity_api_version='2.0') def _make_test_cloud(self, cloud_name='_test_cloud_', **kwargs): test_cloud = os.environ.get('OPENSTACKSDK_OS_CLOUD', cloud_name) self.cloud_config = self.config.get_one( cloud=test_cloud, validate=True, **kwargs) self.conn = openstack.connection.Connection( config=self.cloud_config) self.cloud = openstack.cloud.OpenStackCloud( cloud_config=self.cloud_config) def get_glance_discovery_mock_dict( self, image_version_json='image-version.json', image_discovery_url='https://image.example.com/'): discovery_fixture = os.path.join( self.fixtures_directory, image_version_json) return dict(method='GET', uri=image_discovery_url, status_code=300, text=open(discovery_fixture, 'r').read()) def get_designate_discovery_mock_dict(self): discovery_fixture = os.path.join( 
self.fixtures_directory, "dns.json") return dict(method='GET', uri="https://dns.example.com/", text=open(discovery_fixture, 'r').read()) def get_ironic_discovery_mock_dict(self): discovery_fixture = os.path.join( self.fixtures_directory, "baremetal.json") return dict(method='GET', uri="https://bare-metal.example.com/", text=open(discovery_fixture, 'r').read()) def use_glance( self, image_version_json='image-version.json', image_discovery_url='https://image.example.com/'): # NOTE(notmorgan): This method is only meant to be used in "setUp" # where the ordering of the url being registered is tightly controlled # if the functionality of .use_glance is meant to be used during an # actual test case, use .get_glance_discovery_mock and apply to the # right location in the mock_uris when calling .register_uris self.__do_register_uris([ self.get_glance_discovery_mock_dict( image_version_json, image_discovery_url)]) def use_designate(self): # NOTE(slaweq): This method is only meant to be used in "setUp" # where the ordering of the url being registered is tightly controlled # if the functionality of .use_designate is meant to be used during an # actual test case, use .get_designate_discovery_mock and apply to the # right location in the mock_uris when calling .register_uris self.__do_register_uris([ self.get_designate_discovery_mock_dict()]) def use_ironic(self): # NOTE(TheJulia): This method is only meant to be used in "setUp" # where the ordering of the url being registered is tightly controlled # if the functionality of .use_ironic is meant to be used during an # actual test case, use .get_ironic_discovery_mock and apply to the # right location in the mock_uris when calling .register_uris self.__do_register_uris([ self.get_ironic_discovery_mock_dict()]) def register_uris(self, uri_mock_list=None): """Mock a list of URIs and responses via requests mock. This method may be called only once per test-case to avoid odd and difficult to debug interactions. 
Discovery and Auth request mocking happens separately from this method. :param uri_mock_list: List of dictionaries that template out what is passed to requests_mock fixture's `register_uri`. Format is: {'method': , 'uri': , ... } Common keys to pass in the dictionary: * json: the json response (dict) * status_code: the HTTP status (int) * validate: The request body (dict) to validate with assert_calls all key-word arguments that are valid to send to requests_mock are supported. This list should be in the order in which calls are made. When `assert_calls` is executed, order here will be validated. Duplicate URIs and Methods are allowed and will be collapsed into a single matcher. Each response will be returned in order as the URI+Method is hit. :type uri_mock_list: list :return: None """ assert not self.__register_uris_called self.__do_register_uris(uri_mock_list or []) self.__register_uris_called = True def __do_register_uris(self, uri_mock_list=None): for to_mock in uri_mock_list: kw_params = {k: to_mock.pop(k) for k in ('request_headers', 'complete_qs', '_real_http') if k in to_mock} method = to_mock.pop('method') uri = to_mock.pop('uri') # NOTE(notmorgan): make sure the delimiter is non-url-safe, in this # case "|" is used so that the split can be a bit easier on # maintainers of this code. 
key = '{method}|{uri}|{params}'.format( method=method, uri=uri, params=kw_params) validate = to_mock.pop('validate', {}) valid_keys = set(['json', 'headers', 'params']) invalid_keys = set(validate.keys()) - valid_keys if invalid_keys: raise TypeError( "Invalid values passed to validate: {keys}".format( keys=invalid_keys)) headers = structures.CaseInsensitiveDict(to_mock.pop('headers', {})) if 'content-type' not in headers: headers[u'content-type'] = 'application/json' to_mock['headers'] = headers self.calls += [ dict( method=method, url=uri, **validate) ] self._uri_registry.setdefault( key, {'response_list': [], 'kw_params': kw_params}) if self._uri_registry[key]['kw_params'] != kw_params: raise AssertionError( 'PROGRAMMING ERROR: key-word-params ' 'should be part of the uri_key and cannot change, ' 'it will affect the matcher in requests_mock. ' '%(old)r != %(new)r' % {'old': self._uri_registry[key]['kw_params'], 'new': kw_params}) self._uri_registry[key]['response_list'].append(to_mock) for mocked, params in self._uri_registry.items(): mock_method, mock_uri, _ignored = mocked.split('|', 2) self.adapter.register_uri( mock_method, mock_uri, params['response_list'], **params['kw_params']) def assert_no_calls(self): # TODO(mordred) For now, creating the adapter for self.conn is # triggering catalog lookups. Make sure no_calls is only 2. # When we can make that on-demand through a descriptor object, # drop this to 0. 
self.assertEqual(2, len(self.adapter.request_history)) def assert_calls(self, stop_after=None, do_count=True): for (x, (call, history)) in enumerate( zip(self.calls, self.adapter.request_history)): if stop_after and x > stop_after: break call_uri_parts = urllib.parse.urlparse(call['url']) history_uri_parts = urllib.parse.urlparse(history.url) self.assertEqual( (call['method'], call_uri_parts.scheme, call_uri_parts.netloc, call_uri_parts.path, call_uri_parts.params, urllib.parse.parse_qs(call_uri_parts.query)), (history.method, history_uri_parts.scheme, history_uri_parts.netloc, history_uri_parts.path, history_uri_parts.params, urllib.parse.parse_qs(history_uri_parts.query)), ('REST mismatch on call %(index)d. Expected %(call)r. ' 'Got %(history)r). ' 'NOTE: query string order differences wont cause mismatch' % { 'index': x, 'call': '{method} {url}'.format(method=call['method'], url=call['url']), 'history': '{method} {url}'.format( method=history.method, url=history.url)}) ) if 'json' in call: self.assertEqual( call['json'], history.json(), 'json content mismatch in call {index}'.format(index=x)) # headers in a call isn't exhaustive - it's checking to make sure # a specific header or headers are there, not that they are the # only headers if 'headers' in call: for key, value in call['headers'].items(): self.assertEqual( value, history.headers[key], 'header mismatch in call {index}'.format(index=x)) if do_count: self.assertEqual( len(self.calls), len(self.adapter.request_history)) class IronicTestCase(RequestsMockTestCase): def setUp(self): super(IronicTestCase, self).setUp() self.use_ironic() self.uuid = str(uuid.uuid4()) self.name = self.getUniqueString('name') def get_mock_url(self, resource=None, append=None, qs_elements=None): return super(IronicTestCase, self).get_mock_url( service_type='baremetal', interface='public', resource=resource, append=append, base_url_append='v1', qs_elements=qs_elements) 
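The `__do_register_uris` helper above collapses every mock registered for the same HTTP method and URI under one registry key (joined with `|`, which is not URL-safe), so requests_mock ends up with a single matcher per method+URI whose responses are replayed in registration order. Below is a minimal standard-library sketch of that keying idea, for illustration only; the real helper additionally folds the matcher keyword arguments into the key and asserts they never change for a given pair:

```python
from collections import OrderedDict


def build_registry(uri_mock_list):
    """Collapse mocks that share method+URI into one ordered response list.

    Simplified illustration of the keying scheme in __do_register_uris;
    it is not the SDK helper itself.
    """
    registry = OrderedDict()
    for to_mock in uri_mock_list:
        # "|" is non-url-safe, so it is an unambiguous delimiter
        key = '{method}|{uri}'.format(
            method=to_mock['method'], uri=to_mock['uri'])
        registry.setdefault(key, []).append(to_mock)
    return registry


registry = build_registry([
    {'method': 'GET', 'uri': 'https://x.example.com/v1', 'status_code': 200},
    {'method': 'GET', 'uri': 'https://x.example.com/v1', 'status_code': 404},
    {'method': 'POST', 'uri': 'https://x.example.com/v1', 'status_code': 201},
])
# The two GETs on the same URI collapse under one key with two responses
print(len(registry))                                   # 2
print(len(registry['GET|https://x.example.com/v1']))   # 2
```

With such a registry, a test can register the same URI twice and have the first call answered with 200 and the retry with 404, which is exactly what `register_uris` relies on when it documents that "duplicate URIs and Methods are allowed and will be collapsed into a single matcher."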
openstacksdk-0.11.3/openstack/tests/unit/baremetal/0000775000175100017510000000000013236151501022352 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/baremetal/test_version.py0000666000175100017510000000322213236151340025452 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.baremetal import version IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'id': IDENTIFIER, 'links': '2', 'status': '3', 'updated': '4', } class TestVersion(testtools.TestCase): def test_basic(self): sot = version.Version() self.assertEqual('version', sot.resource_key) self.assertEqual('versions', sot.resources_key) self.assertEqual('/', sot.base_path) self.assertEqual('baremetal', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) self.assertFalse(sot.allow_head) self.assertEqual('PUT', sot.update_method) self.assertEqual('POST', sot.create_method) def test_make_it(self): sot = version.Version(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['links'], sot.links) self.assertEqual(EXAMPLE['status'], sot.status) self.assertEqual(EXAMPLE['updated'], sot.updated) openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/0000775000175100017510000000000013236151501022700 5ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/test_port_group.py0000666000175100017510000001001713236151340026513 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.baremetal.v1 import port_group FAKE = { "address": "11:11:11:11:11:11", "created_at": "2016-08-18T22:28:48.165105+00:00", "extra": {}, "internal_info": {}, "links": [ { "href": "http://127.0.0.1:6385/v1/portgroups/", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups/", "rel": "bookmark" } ], "name": "test_portgroup", "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d", "ports": [ { "href": "http://127.0.0.1:6385/v1/portgroups//ports", "rel": "self" }, { "href": "http://127.0.0.1:6385/portgroups//ports", "rel": "bookmark" } ], "standalone_ports_supported": True, "updated_at": None, "uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a", } class TestPortGroup(testtools.TestCase): def test_basic(self): sot = port_group.PortGroup() self.assertIsNone(sot.resource_key) self.assertEqual('portgroups', sot.resources_key) self.assertEqual('/portgroups', sot.base_path) self.assertEqual('baremetal', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) self.assertEqual('PATCH', sot.update_method) def test_instantiate(self): sot = port_group.PortGroup(**FAKE) self.assertEqual(FAKE['uuid'], sot.id) 
        self.assertEqual(FAKE['address'], sot.address)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['internal_info'], sot.internal_info)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['node_uuid'], sot.node_id)
        self.assertEqual(FAKE['ports'], sot.ports)
        self.assertEqual(FAKE['standalone_ports_supported'],
                         sot.is_standalone_ports_supported)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)


class TestPortGroupDetail(testtools.TestCase):

    def test_basic(self):
        sot = port_group.PortGroupDetail()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('portgroups', sot.resources_key)
        self.assertEqual('/portgroups/detail', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = port_group.PortGroupDetail(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['address'], sot.address)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['internal_info'], sot.internal_info)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['node_uuid'], sot.node_id)
        self.assertEqual(FAKE['ports'], sot.ports)
        self.assertEqual(FAKE['standalone_ports_supported'],
                         sot.is_standalone_ports_supported)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/test_chassis.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.baremetal.v1 import chassis

FAKE = {
    "created_at": "2016-08-18T22:28:48.165105+00:00",
    "description": "Sample chassis",
    "extra": {},
    "links": [
        {
            "href": "http://127.0.0.1:6385/v1/chassis/ID",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/chassis/ID",
            "rel": "bookmark"
        }
    ],
    "nodes": [
        {
            "href": "http://127.0.0.1:6385/v1/chassis/ID/nodes",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/chassis/ID/nodes",
            "rel": "bookmark"
        }
    ],
    "updated_at": None,
    "uuid": "dff29d23-1ded-43b4-8ae1-5eebb3e30de1"
}


class TestChassis(testtools.TestCase):

    def test_basic(self):
        sot = chassis.Chassis()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('chassis', sot.resources_key)
        self.assertEqual('/chassis', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)

    def test_instantiate(self):
        sot = chassis.Chassis(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['description'], sot.description)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['nodes'], sot.nodes)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)


class TestChassisDetail(testtools.TestCase):

    def test_basic(self):
        sot = chassis.ChassisDetail()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('chassis', sot.resources_key)
        self.assertEqual('/chassis/detail', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = chassis.ChassisDetail(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['description'], sot.description)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['nodes'], sot.nodes)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/test_driver.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.baremetal.v1 import driver

FAKE = {
    "hosts": [
        "897ab1dad809"
    ],
    "links": [
        {
            "href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/drivers/agent_ipmitool",
            "rel": "bookmark"
        }
    ],
    "name": "agent_ipmitool",
    "properties": [
        {
            "href": "http://127.0.0.1:6385/v1/drivers/agent_ipmitool/properties",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/drivers/agent_ipmitool/properties",
            "rel": "bookmark"
        }
    ]
}


class TestDriver(testtools.TestCase):

    def test_basic(self):
        sot = driver.Driver()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('drivers', sot.resources_key)
        self.assertEqual('/drivers', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = driver.Driver(**FAKE)
        self.assertEqual(FAKE['name'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['hosts'], sot.hosts)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['properties'], sot.properties)

openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/test_node.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.baremetal.v1 import node

# NOTE: Sample data from api-ref doc
FAKE = {
    "chassis_uuid": "1",  # NOTE: missed in api-ref sample
    "clean_step": {},
    "console_enabled": False,
    "created_at": "2016-08-18T22:28:48.643434+00:00",
    "driver": "agent_ipmitool",
    "driver_info": {
        "ipmi_password": "******",
        "ipmi_username": "ADMIN"
    },
    "driver_internal_info": {},
    "extra": {},
    "inspection_finished_at": None,
    "inspection_started_at": None,
    "instance_info": {},
    "instance_uuid": None,
    "last_error": None,
    "links": [
        {
            "href": "http://127.0.0.1:6385/v1/nodes/",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/nodes/",
            "rel": "bookmark"
        }
    ],
    "maintenance": False,
    "maintenance_reason": None,
    "name": "test_node",
    "network_interface": "flat",
    "portgroups": [
        {
            "href": "http://127.0.0.1:6385/v1/nodes//portgroups",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/nodes//portgroups",
            "rel": "bookmark"
        }
    ],
    "ports": [
        {
            "href": "http://127.0.0.1:6385/v1/nodes//ports",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/nodes//ports",
            "rel": "bookmark"
        }
    ],
    "power_state": None,
    "properties": {},
    "provision_state": "enroll",
    "provision_updated_at": None,
    "raid_config": {},
    "reservation": None,
    "resource_class": None,
    "states": [
        {
            "href": "http://127.0.0.1:6385/v1/nodes//states",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/nodes//states",
            "rel": "bookmark"
        }
    ],
    "target_power_state": None,
    "target_provision_state": None,
    "target_raid_config": {},
    "updated_at": None,
    "uuid": "6d85703a-565d-469a-96ce-30b6de53079d"
}


class TestNode(testtools.TestCase):

    def test_basic(self):
        sot = node.Node()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('nodes', sot.resources_key)
        self.assertEqual('/nodes', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)

    def test_instantiate(self):
        sot = node.Node(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['chassis_uuid'], sot.chassis_id)
        self.assertEqual(FAKE['clean_step'], sot.clean_step)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['driver'], sot.driver)
        self.assertEqual(FAKE['driver_info'], sot.driver_info)
        self.assertEqual(FAKE['driver_internal_info'],
                         sot.driver_internal_info)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['instance_info'], sot.instance_info)
        self.assertEqual(FAKE['instance_uuid'], sot.instance_id)
        self.assertEqual(FAKE['console_enabled'], sot.is_console_enabled)
        self.assertEqual(FAKE['maintenance'], sot.is_maintenance)
        self.assertEqual(FAKE['last_error'], sot.last_error)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['maintenance_reason'], sot.maintenance_reason)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['network_interface'], sot.network_interface)
        self.assertEqual(FAKE['ports'], sot.ports)
        self.assertEqual(FAKE['portgroups'], sot.port_groups)
        self.assertEqual(FAKE['power_state'], sot.power_state)
        self.assertEqual(FAKE['properties'], sot.properties)
        self.assertEqual(FAKE['provision_state'], sot.provision_state)
        self.assertEqual(FAKE['raid_config'], sot.raid_config)
        self.assertEqual(FAKE['reservation'], sot.reservation)
        self.assertEqual(FAKE['resource_class'], sot.resource_class)
        self.assertEqual(FAKE['states'], sot.states)
        self.assertEqual(FAKE['target_provision_state'],
                         sot.target_provision_state)
        self.assertEqual(FAKE['target_power_state'], sot.target_power_state)
        self.assertEqual(FAKE['target_raid_config'], sot.target_raid_config)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)


class TestNodeDetail(testtools.TestCase):

    def test_basic(self):
        sot = node.NodeDetail()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('nodes', sot.resources_key)
        self.assertEqual('/nodes/detail', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = node.NodeDetail(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['chassis_uuid'], sot.chassis_id)
        self.assertEqual(FAKE['clean_step'], sot.clean_step)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['driver'], sot.driver)
        self.assertEqual(FAKE['driver_info'], sot.driver_info)
        self.assertEqual(FAKE['driver_internal_info'],
                         sot.driver_internal_info)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['instance_info'], sot.instance_info)
        self.assertEqual(FAKE['instance_uuid'], sot.instance_id)
        self.assertEqual(FAKE['console_enabled'], sot.is_console_enabled)
        self.assertEqual(FAKE['maintenance'], sot.is_maintenance)
        self.assertEqual(FAKE['last_error'], sot.last_error)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['maintenance_reason'], sot.maintenance_reason)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['network_interface'], sot.network_interface)
        self.assertEqual(FAKE['ports'], sot.ports)
        self.assertEqual(FAKE['portgroups'], sot.port_groups)
        self.assertEqual(FAKE['power_state'], sot.power_state)
        self.assertEqual(FAKE['properties'], sot.properties)
        self.assertEqual(FAKE['provision_state'], sot.provision_state)
        self.assertEqual(FAKE['raid_config'], sot.raid_config)
        self.assertEqual(FAKE['reservation'], sot.reservation)
        self.assertEqual(FAKE['resource_class'], sot.resource_class)
        self.assertEqual(FAKE['states'], sot.states)
        self.assertEqual(FAKE['target_provision_state'],
                         sot.target_provision_state)
        self.assertEqual(FAKE['target_power_state'], sot.target_power_state)
        self.assertEqual(FAKE['target_raid_config'], sot.target_raid_config)
self.assertEqual(FAKE['updated_at'], sot.updated_at) openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/test_proxy.py0000666000175100017510000001414713236151340025504 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import deprecation from openstack.baremetal.v1 import _proxy from openstack.baremetal.v1 import chassis from openstack.baremetal.v1 import driver from openstack.baremetal.v1 import node from openstack.baremetal.v1 import port from openstack.baremetal.v1 import port_group from openstack.tests.unit import test_proxy_base class TestBaremetalProxy(test_proxy_base.TestProxyBase): def setUp(self): super(TestBaremetalProxy, self).setUp() self.proxy = _proxy.Proxy(self.session) def test_drivers(self): self.verify_list(self.proxy.drivers, driver.Driver, paginated=False) def test_get_driver(self): self.verify_get(self.proxy.get_driver, driver.Driver) def test_chassis_detailed(self): self.verify_list(self.proxy.chassis, chassis.ChassisDetail, paginated=True, method_kwargs={"details": True, "query": 1}, expected_kwargs={"query": 1}) def test_chassis_not_detailed(self): self.verify_list(self.proxy.chassis, chassis.Chassis, paginated=True, method_kwargs={"details": False, "query": 1}, expected_kwargs={"query": 1}) def test_create_chassis(self): self.verify_create(self.proxy.create_chassis, chassis.Chassis) def test_find_chassis(self): self.verify_find(self.proxy.find_chassis, chassis.Chassis) def test_get_chassis(self): 
self.verify_get(self.proxy.get_chassis, chassis.Chassis) def test_update_chassis(self): self.verify_update(self.proxy.update_chassis, chassis.Chassis) def test_delete_chassis(self): self.verify_delete(self.proxy.delete_chassis, chassis.Chassis, False) def test_delete_chassis_ignore(self): self.verify_delete(self.proxy.delete_chassis, chassis.Chassis, True) def test_nodes_detailed(self): self.verify_list(self.proxy.nodes, node.NodeDetail, paginated=True, method_kwargs={"details": True, "query": 1}, expected_kwargs={"query": 1}) def test_nodes_not_detailed(self): self.verify_list(self.proxy.nodes, node.Node, paginated=True, method_kwargs={"details": False, "query": 1}, expected_kwargs={"query": 1}) def test_create_node(self): self.verify_create(self.proxy.create_node, node.Node) def test_find_node(self): self.verify_find(self.proxy.find_node, node.Node) def test_get_node(self): self.verify_get(self.proxy.get_node, node.Node) def test_update_node(self): self.verify_update(self.proxy.update_node, node.Node) def test_delete_node(self): self.verify_delete(self.proxy.delete_node, node.Node, False) def test_delete_node_ignore(self): self.verify_delete(self.proxy.delete_node, node.Node, True) def test_ports_detailed(self): self.verify_list(self.proxy.ports, port.PortDetail, paginated=True, method_kwargs={"details": True, "query": 1}, expected_kwargs={"query": 1}) def test_ports_not_detailed(self): self.verify_list(self.proxy.ports, port.Port, paginated=True, method_kwargs={"details": False, "query": 1}, expected_kwargs={"query": 1}) def test_create_port(self): self.verify_create(self.proxy.create_port, port.Port) def test_find_port(self): self.verify_find(self.proxy.find_port, port.Port) def test_get_port(self): self.verify_get(self.proxy.get_port, port.Port) def test_update_port(self): self.verify_update(self.proxy.update_port, port.Port) def test_delete_port(self): self.verify_delete(self.proxy.delete_port, port.Port, False) def test_delete_port_ignore(self): 
self.verify_delete(self.proxy.delete_port, port.Port, True) @deprecation.fail_if_not_removed def test_portgroups_detailed(self): self.verify_list(self.proxy.portgroups, port_group.PortGroupDetail, paginated=True, method_kwargs={"details": True, "query": 1}, expected_kwargs={"query": 1}) @deprecation.fail_if_not_removed def test_portgroups_not_detailed(self): self.verify_list(self.proxy.portgroups, port_group.PortGroup, paginated=True, method_kwargs={"details": False, "query": 1}, expected_kwargs={"query": 1}) @deprecation.fail_if_not_removed def test_create_portgroup(self): self.verify_create(self.proxy.create_portgroup, port_group.PortGroup) @deprecation.fail_if_not_removed def test_find_portgroup(self): self.verify_find(self.proxy.find_portgroup, port_group.PortGroup) @deprecation.fail_if_not_removed def test_get_portgroup(self): self.verify_get(self.proxy.get_portgroup, port_group.PortGroup) @deprecation.fail_if_not_removed def test_update_portgroup(self): self.verify_update(self.proxy.update_portgroup, port_group.PortGroup) @deprecation.fail_if_not_removed def test_delete_portgroup(self): self.verify_delete(self.proxy.delete_portgroup, port_group.PortGroup, False) @deprecation.fail_if_not_removed def test_delete_portgroup_ignore(self): self.verify_delete(self.proxy.delete_portgroup, port_group.PortGroup, True) openstacksdk-0.11.3/openstack/tests/unit/baremetal/v1/test_port.py0000666000175100017510000000760613236151340025311 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.baremetal.v1 import port

FAKE = {
    "address": "11:11:11:11:11:11",
    "created_at": "2016-08-18T22:28:49.946416+00:00",
    "extra": {},
    "internal_info": {},
    "links": [
        {
            "href": "http://127.0.0.1:6385/v1/ports/",
            "rel": "self"
        },
        {
            "href": "http://127.0.0.1:6385/ports/",
            "rel": "bookmark"
        }
    ],
    "local_link_connection": {
        "port_id": "Ethernet3/1",
        "switch_id": "0a:1b:2c:3d:4e:5f",
        "switch_info": "switch1"
    },
    "node_uuid": "6d85703a-565d-469a-96ce-30b6de53079d",
    "portgroup_uuid": "e43c722c-248e-4c6e-8ce8-0d8ff129387a",
    "pxe_enabled": True,
    "updated_at": None,
    "uuid": "d2b30520-907d-46c8-bfee-c5586e6fb3a1"
}


class TestPort(testtools.TestCase):

    def test_basic(self):
        sot = port.Port()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('ports', sot.resources_key)
        self.assertEqual('/ports', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)

    def test_instantiate(self):
        # Instantiate the class under test, Port (not PortDetail).
        sot = port.Port(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['address'], sot.address)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['internal_info'], sot.internal_info)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['local_link_connection'],
                         sot.local_link_connection)
        self.assertEqual(FAKE['node_uuid'], sot.node_id)
        self.assertEqual(FAKE['portgroup_uuid'], sot.port_group_id)
        self.assertEqual(FAKE['pxe_enabled'], sot.is_pxe_enabled)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)


class TestPortDetail(testtools.TestCase):

    def test_basic(self):
        sot = port.PortDetail()
        self.assertIsNone(sot.resource_key)
        self.assertEqual('ports', sot.resources_key)
        self.assertEqual('/ports/detail', sot.base_path)
        self.assertEqual('baremetal', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = port.PortDetail(**FAKE)
        self.assertEqual(FAKE['uuid'], sot.id)
        self.assertEqual(FAKE['address'], sot.address)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['extra'], sot.extra)
        self.assertEqual(FAKE['internal_info'], sot.internal_info)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['local_link_connection'],
                         sot.local_link_connection)
        self.assertEqual(FAKE['node_uuid'], sot.node_id)
        self.assertEqual(FAKE['portgroup_uuid'], sot.port_group_id)
        self.assertEqual(FAKE['pxe_enabled'], sot.is_pxe_enabled)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/baremetal/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/baremetal/test_baremetal_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools from openstack.baremetal import baremetal_service class TestBaremetalService(testtools.TestCase): def test_service(self): sot = baremetal_service.BaremetalService() self.assertEqual('baremetal', sot.service_type) self.assertEqual('public', sot.interface) self.assertIsNone(sot.region) self.assertIsNone(sot.service_name) self.assertEqual(1, len(sot.valid_versions)) self.assertEqual('v1', sot.valid_versions[0].module) self.assertEqual('v1', sot.valid_versions[0].path) openstacksdk-0.11.3/openstack/tests/unit/cluster/0000775000175100017510000000000013236151501022077 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/cluster/test_version.py0000666000175100017510000000266613236151364025220 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from openstack.clustering import version IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'id': IDENTIFIER, 'links': '2', 'status': '3', } class TestVersion(testtools.TestCase): def test_basic(self): sot = version.Version() self.assertEqual('version', sot.resource_key) self.assertEqual('versions', sot.resources_key) self.assertEqual('/', sot.base_path) self.assertEqual('clustering', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = version.Version(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['links'], sot.links) self.assertEqual(EXAMPLE['status'], sot.status) openstacksdk-0.11.3/openstack/tests/unit/cluster/test_cluster_service.py0000666000175100017510000000212713236151364026724 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from openstack.clustering import clustering_service class TestClusteringService(testtools.TestCase): def test_service(self): sot = clustering_service.ClusteringService() self.assertEqual('clustering', sot.service_type) self.assertEqual('public', sot.interface) self.assertIsNone(sot.region) self.assertIsNone(sot.service_name) self.assertEqual(1, len(sot.valid_versions)) self.assertEqual('v1', sot.valid_versions[0].module) self.assertEqual('v1', sot.valid_versions[0].path) openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/0000775000175100017510000000000013236151501022425 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_cluster_policy.py0000666000175100017510000000452113236151364027111 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.clustering.v1 import cluster_policy

FAKE = {
    'cluster_id': '99e39f4b-1990-4237-a556-1518f0f0c9e7',
    'cluster_name': 'test_cluster',
    'data': {'purpose': 'unknown'},
    'enabled': True,
    'policy_id': 'ac5415bd-f522-4160-8be0-f8853e4bc332',
    'policy_name': 'dp01',
    'policy_type': 'senlin.policy.deletion-1.0',
}


class TestClusterPolicy(testtools.TestCase):

    def setUp(self):
        super(TestClusterPolicy, self).setUp()

    def test_basic(self):
        sot = cluster_policy.ClusterPolicy()
        self.assertEqual('cluster_policy', sot.resource_key)
        self.assertEqual('cluster_policies', sot.resources_key)
        self.assertEqual('/clusters/%(cluster_id)s/policies', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)
        self.assertDictEqual({"policy_name": "policy_name",
                              "policy_type": "policy_type",
                              "is_enabled": "enabled",
                              "sort": "sort",
                              "limit": "limit",
                              "marker": "marker"},
                             sot._query_mapping._mapping)

    def test_instantiate(self):
        sot = cluster_policy.ClusterPolicy(**FAKE)
        self.assertEqual(FAKE['policy_id'], sot.id)
        self.assertEqual(FAKE['cluster_id'], sot.cluster_id)
        self.assertEqual(FAKE['cluster_name'], sot.cluster_name)
        self.assertEqual(FAKE['data'], sot.data)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(FAKE['policy_id'], sot.policy_id)
        self.assertEqual(FAKE['policy_name'], sot.policy_name)
        self.assertEqual(FAKE['policy_type'], sot.policy_type)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_action.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.clustering.v1 import action FAKE_ID = '633bd3c6-520b-420f-8e6a-dc2a47022b53' FAKE_NAME = 'node_create_c3783474' FAKE = { 'id': FAKE_ID, 'name': FAKE_NAME, 'target': 'c378e474-d091-43a3-b083-e19719291358', 'action': 'NODE_CREATE', 'cause': 'RPC Request', 'owner': None, 'user': '3747afc360b64702a53bdd64dc1b8976', 'project': '42d9e9663331431f97b75e25136307ff', 'domain': '204ccccd267b40aea871750116b5b184', 'interval': -1, 'start_time': 1453414055.48672, 'end_time': 1453414055.48672, 'timeout': 3600, 'status': 'SUCCEEDED', 'status_reason': 'Action completed successfully.', 'inputs': {}, 'outputs': {}, 'depends_on': [], 'depended_by': [], 'created_at': '2015-10-10T12:46:36.000000', 'updated_at': '2016-10-10T12:46:36.000000', } class TestAction(testtools.TestCase): def setUp(self): super(TestAction, self).setUp() def test_basic(self): sot = action.Action() self.assertEqual('action', sot.resource_key) self.assertEqual('actions', sot.resources_key) self.assertEqual('/actions', sot.base_path) self.assertEqual('clustering', sot.service.service_type) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_list) def test_instantiate(self): sot = action.Action(**FAKE) self.assertEqual(FAKE['id'], sot.id) self.assertEqual(FAKE['name'], sot.name) self.assertEqual(FAKE['target'], sot.target_id) self.assertEqual(FAKE['action'], sot.action) self.assertEqual(FAKE['cause'], sot.cause) self.assertEqual(FAKE['owner'], sot.owner_id) self.assertEqual(FAKE['user'], sot.user_id) self.assertEqual(FAKE['project'], sot.project_id) self.assertEqual(FAKE['domain'], 
sot.domain_id) self.assertEqual(FAKE['interval'], sot.interval) self.assertEqual(FAKE['start_time'], sot.start_at) self.assertEqual(FAKE['end_time'], sot.end_at) self.assertEqual(FAKE['timeout'], sot.timeout) self.assertEqual(FAKE['status'], sot.status) self.assertEqual(FAKE['status_reason'], sot.status_reason) self.assertEqual(FAKE['inputs'], sot.inputs) self.assertEqual(FAKE['outputs'], sot.outputs) self.assertEqual(FAKE['depends_on'], sot.depends_on) self.assertEqual(FAKE['depended_by'], sot.depended_by) self.assertEqual(FAKE['created_at'], sot.created_at) self.assertEqual(FAKE['updated_at'], sot.updated_at) openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_build_info.py0000666000175100017510000000241013236151364026156 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from openstack.clustering.v1 import build_info FAKE = { 'api': { 'revision': '1.0.0', }, 'engine': { 'revision': '1.0.0', } } class TestBuildInfo(testtools.TestCase): def setUp(self): super(TestBuildInfo, self).setUp() def test_basic(self): sot = build_info.BuildInfo() self.assertEqual('/build-info', sot.base_path) self.assertEqual('build_info', sot.resource_key) self.assertEqual('clustering', sot.service.service_type) self.assertTrue(sot.allow_get) def test_instantiate(self): sot = build_info.BuildInfo(**FAKE) self.assertEqual(FAKE['api'], sot.api) self.assertEqual(FAKE['engine'], sot.engine) openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_receiver.py0000666000175100017510000000474713236151364025667 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools

from openstack.clustering.v1 import receiver


FAKE_ID = 'ae63a10b-4a90-452c-aef1-113a0b255ee3'
FAKE_NAME = 'test_receiver'

FAKE = {
    'id': FAKE_ID,
    'name': FAKE_NAME,
    'type': 'webhook',
    'cluster_id': 'FAKE_CLUSTER',
    'action': 'CLUSTER_RESIZE',
    'created_at': '2015-10-10T12:46:36.000000',
    'updated_at': '2016-10-10T12:46:36.000000',
    'actor': {},
    'params': {
        'adjustment_type': 'CHANGE_IN_CAPACITY',
        'adjustment': 2
    },
    'channel': {
        'alarm_url': 'http://host:port/webhooks/AN_ID/trigger?V=1',
    },
    'user': 'FAKE_USER',
    'project': 'FAKE_PROJECT',
    'domain': '',
}


class TestReceiver(testtools.TestCase):

    def setUp(self):
        super(TestReceiver, self).setUp()

    def test_basic(self):
        sot = receiver.Receiver()
        self.assertEqual('receiver', sot.resource_key)
        self.assertEqual('receivers', sot.resources_key)
        self.assertEqual('/receivers', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = receiver.Receiver(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['type'], sot.type)
        self.assertEqual(FAKE['cluster_id'], sot.cluster_id)
        self.assertEqual(FAKE['action'], sot.action)
        self.assertEqual(FAKE['params'], sot.params)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)
        self.assertEqual(FAKE['user'], sot.user_id)
        self.assertEqual(FAKE['project'], sot.project_id)
        self.assertEqual(FAKE['domain'], sot.domain_id)
        self.assertEqual(FAKE['channel'], sot.channel)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_cluster_attr.py

import testtools

from openstack.clustering.v1 import cluster_attr as ca


FAKE = {
    'cluster_id': '633bd3c6-520b-420f-8e6a-dc2a47022b53',
    'path': 'path.to.attr',
    'id': 'c378e474-d091-43a3-b083-e19719291358',
    'value': 'fake value',
}


class TestClusterAttr(testtools.TestCase):

    def setUp(self):
        super(TestClusterAttr, self).setUp()

    def test_basic(self):
        sot = ca.ClusterAttr()
        self.assertEqual('cluster_attributes', sot.resources_key)
        self.assertEqual('/clusters/%(cluster_id)s/attrs/%(path)s',
                         sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = ca.ClusterAttr(**FAKE)
        self.assertEqual(FAKE['cluster_id'], sot.cluster_id)
        self.assertEqual(FAKE['path'], sot.path)
        self.assertEqual(FAKE['id'], sot.node_id)
        self.assertEqual(FAKE['value'], sot.attr_value)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_profile.py
import testtools

from openstack.clustering.v1 import profile


FAKE_ID = '9b127538-a675-4271-ab9b-f24f54cfe173'
FAKE_NAME = 'test_profile'

FAKE = {
    'metadata': {},
    'name': FAKE_NAME,
    'id': FAKE_ID,
    'spec': {
        'type': 'os.nova.server',
        'version': 1.0,
        'properties': {
            'flavor': 1,
            'image': 'cirros-0.3.2-x86_64-uec',
            'key_name': 'oskey',
            'name': 'cirros_server'
        }
    },
    'project': '42d9e9663331431f97b75e25136307ff',
    'domain': '204ccccd267b40aea871750116b5b184',
    'user': '3747afc360b64702a53bdd64dc1b8976',
    'type': 'os.nova.server',
    'created_at': '2015-10-10T12:46:36.000000',
    'updated_at': '2016-10-10T12:46:36.000000',
}


class TestProfile(testtools.TestCase):

    def setUp(self):
        super(TestProfile, self).setUp()

    def test_basic(self):
        sot = profile.Profile()
        self.assertEqual('profile', sot.resource_key)
        self.assertEqual('profiles', sot.resources_key)
        self.assertEqual('/profiles', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)

    def test_instantiate(self):
        sot = profile.Profile(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['metadata'], sot.metadata)
        self.assertEqual(FAKE['spec'], sot.spec)
        self.assertEqual(FAKE['project'], sot.project_id)
        self.assertEqual(FAKE['domain'], sot.domain_id)
        self.assertEqual(FAKE['user'], sot.user_id)
        self.assertEqual(FAKE['type'], sot.type)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)


class TestProfileValidate(testtools.TestCase):

    def setUp(self):
        super(TestProfileValidate, self).setUp()

    def test_basic(self):
        sot = profile.ProfileValidate()
        self.assertEqual('profile', sot.resource_key)
        self.assertEqual('profiles', sot.resources_key)
        self.assertEqual('/profiles/validate', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)
        self.assertEqual('PUT', sot.update_method)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_cluster.py
import mock
import testtools

from openstack.clustering.v1 import cluster


FAKE_ID = '092d0955-2645-461a-b8fa-6a44655cdb2c'
FAKE_NAME = 'test_cluster'

FAKE = {
    'id': 'IDENTIFIER',
    'config': {'key1': 'value1', 'key2': 'value2'},
    'desired_capacity': 1,
    'max_size': 3,
    'min_size': 0,
    'name': FAKE_NAME,
    'profile_id': 'myserver',
    'profile_only': True,
    'metadata': {},
    'dependents': {},
    'timeout': None,
    'init_at': '2015-10-10T12:46:36.000000',
    'created_at': '2015-10-10T12:46:36.000000',
    'updated_at': '2016-10-10T12:46:36.000000',
}

FAKE_CREATE_RESP = {
    'cluster': {
        'action': 'a679c926-908f-49e7-a822-06ca371e64e1',
        'init_at': '2015-10-10T12:46:36.000000',
        'created_at': '2015-10-10T12:46:36.000000',
        'updated_at': '2016-10-10T12:46:36.000000',
        'data': {},
        'desired_capacity': 1,
        'domain': None,
        'id': FAKE_ID,
        'init_time': None,
        'max_size': 3,
        'metadata': {},
        'min_size': 0,
        'name': 'test_cluster',
        'nodes': [],
        'policies': [],
        'profile_id': '560a8f9d-7596-4a32-85e8-03645fa7be13',
        'profile_name': 'myserver',
        'project': '333acb15a43242f4a609a27cb097a8f2',
        'status': 'INIT',
        'status_reason': 'Initializing',
        'timeout': None,
        'user': '6d600911ff764e54b309ce734c89595e',
        'dependents': {},
    }
}


class TestCluster(testtools.TestCase):

    def setUp(self):
        super(TestCluster, self).setUp()

    def test_basic(self):
        sot = cluster.Cluster()
        self.assertEqual('cluster', sot.resource_key)
        self.assertEqual('clusters', sot.resources_key)
        self.assertEqual('/clusters', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = cluster.Cluster(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['profile_id'], sot.profile_id)
        self.assertEqual(FAKE['min_size'], sot.min_size)
        self.assertEqual(FAKE['max_size'], sot.max_size)
        self.assertEqual(FAKE['desired_capacity'], sot.desired_capacity)
        self.assertEqual(FAKE['config'], sot.config)
        self.assertEqual(FAKE['timeout'], sot.timeout)
        self.assertEqual(FAKE['metadata'], sot.metadata)
        self.assertEqual(FAKE['init_at'], sot.init_at)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)
        self.assertEqual(FAKE['dependents'], sot.dependents)
        self.assertTrue(sot.is_profile_only)
        self.assertDictEqual({"limit": "limit",
                              "marker": "marker",
                              "name": "name",
                              "status": "status",
                              "sort": "sort",
                              "global_project": "global_project"},
                             sot._query_mapping._mapping)

    def test_scale_in(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.scale_in(sess, 3))
        url = 'clusters/%s/actions' % sot.id
        body = {'scale_in': {'count': 3}}
        sess.post.assert_called_once_with(url, json=body)

    def test_scale_out(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.scale_out(sess, 3))
        url = 'clusters/%s/actions' % sot.id
        body = {'scale_out': {'count': 3}}
        sess.post.assert_called_once_with(url, json=body)

    def test_resize(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.resize(sess, foo='bar', zoo=5))
        url = 'clusters/%s/actions' % sot.id
        body = {'resize': {'foo': 'bar', 'zoo': 5}}
        sess.post.assert_called_once_with(url, json=body)

    def test_add_nodes(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.add_nodes(sess, ['node-33']))
        url = 'clusters/%s/actions' % sot.id
        body = {'add_nodes': {'nodes': ['node-33']}}
        sess.post.assert_called_once_with(url, json=body)

    def test_del_nodes(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.del_nodes(sess, ['node-11']))
        url = 'clusters/%s/actions' % sot.id
        body = {'del_nodes': {'nodes': ['node-11']}}
        sess.post.assert_called_once_with(url, json=body)

    def test_del_nodes_with_params(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        params = {
            'destroy_after_deletion': True,
        }
        self.assertEqual('', sot.del_nodes(sess, ['node-11'], **params))
        url = 'clusters/%s/actions' % sot.id
        body = {
            'del_nodes': {
                'nodes': ['node-11'],
                'destroy_after_deletion': True,
            }
        }
        sess.post.assert_called_once_with(url, json=body)

    def test_replace_nodes(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.replace_nodes(sess, {'node-22': 'node-44'}))
        url = 'clusters/%s/actions' % sot.id
        body = {'replace_nodes': {'nodes': {'node-22': 'node-44'}}}
        sess.post.assert_called_once_with(url, json=body)

    def test_policy_attach(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        params = {
            'enabled': True,
        }
        self.assertEqual('', sot.policy_attach(sess, 'POLICY', **params))
        url = 'clusters/%s/actions' % sot.id
        body = {
            'policy_attach': {
                'policy_id': 'POLICY',
                'enabled': True,
            }
        }
        sess.post.assert_called_once_with(url, json=body)

    def test_policy_detach(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.policy_detach(sess, 'POLICY'))
        url = 'clusters/%s/actions' % sot.id
        body = {'policy_detach': {'policy_id': 'POLICY'}}
        sess.post.assert_called_once_with(url, json=body)

    def test_policy_update(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        params = {
            'enabled': False
        }
        self.assertEqual('', sot.policy_update(sess, 'POLICY', **params))
        url = 'clusters/%s/actions' % sot.id
        body = {
            'policy_update': {
                'policy_id': 'POLICY',
                'enabled': False
            }
        }
        sess.post.assert_called_once_with(url, json=body)

    def test_check(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.check(sess))
        url = 'clusters/%s/actions' % sot.id
        body = {'check': {}}
        sess.post.assert_called_once_with(url, json=body)

    def test_recover(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.recover(sess))
        url = 'clusters/%s/actions' % sot.id
        body = {'recover': {}}
        sess.post.assert_called_once_with(url, json=body)

    def test_operation(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.op(sess, 'dance', style='tango'))
        url = 'clusters/%s/ops' % sot.id
        body = {'dance': {'style': 'tango'}}
        sess.post.assert_called_once_with(url, json=body)

    def test_force_delete(self):
        sot = cluster.Cluster(**FAKE)
        resp = mock.Mock()
        resp.headers = {}
        resp.json = mock.Mock(return_value={"foo": "bar"})
        resp.status_code = 200
        sess = mock.Mock()
        sess.delete = mock.Mock(return_value=resp)
        res = sot.force_delete(sess)
        self.assertEqual(sot, res)
        url = 'clusters/%s' % sot.id
        body = {'force': True}
        sess.delete.assert_called_once_with(url, json=body)
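Every action test above repeats the same mocked-session boilerplate: a fake response whose `json()` returns `''`, and a fake session whose `post()` returns that response, so the test can assert on the exact URL and JSON body the resource sends. A minimal standalone sketch of that shared pattern follows; the `fake_action_session` helper is hypothetical (these tests inline the setup rather than factoring it out), and the sketch uses the stdlib `unittest.mock` where the tests import the external `mock` package.

```python
from unittest import mock  # the tests above use the external `mock` package


def fake_action_session(json_value=''):
    """Build a (session, response) pair mirroring the setup in these tests."""
    resp = mock.Mock()
    resp.json = mock.Mock(return_value=json_value)
    sess = mock.Mock()
    sess.post = mock.Mock(return_value=resp)
    return sess, resp


# The cluster action helpers POST a single-key body to
# clusters/<id>/actions; the same assertion style used in the
# tests verifies both the URL and the payload in one call.
sess, resp = fake_action_session()
body = {'scale_in': {'count': 3}}
sess.post('clusters/FAKE_ID/actions', json=body)
sess.post.assert_called_once_with('clusters/FAKE_ID/actions', json=body)
```

Because `mock.Mock` records every call, `assert_called_once_with` fails loudly if the resource under test builds a different URL or body, which is why these tests need no HTTP layer at all.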
openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_node.py

import mock
import testtools

from openstack.clustering.v1 import node


FAKE_ID = '123d0955-0099-aabb-b8fa-6a44655ceeff'
FAKE_NAME = 'test_node'

FAKE = {
    'id': FAKE_ID,
    'cluster_id': 'clusterA',
    'metadata': {'key1': 'value1'},
    'name': FAKE_NAME,
    'profile_id': 'myserver',
    'domain': '204ccccd267b40aea871750116b5b184',
    'user': '3747afc360b64702a53bdd64dc1b8976',
    'project': '42d9e9663331431f97b75e25136307ff',
    'index': 1,
    'role': 'master',
    'dependents': {},
    'created_at': '2015-10-10T12:46:36.000000',
    'updated_at': '2016-10-10T12:46:36.000000',
    'init_at': '2015-10-10T12:46:36.000000',
}


class TestNode(testtools.TestCase):

    def test_basic(self):
        sot = node.Node()
        self.assertEqual('node', sot.resource_key)
        self.assertEqual('nodes', sot.resources_key)
        self.assertEqual('/nodes', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = node.Node(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['profile_id'], sot.profile_id)
        self.assertEqual(FAKE['cluster_id'], sot.cluster_id)
        self.assertEqual(FAKE['user'], sot.user_id)
        self.assertEqual(FAKE['project'], sot.project_id)
        self.assertEqual(FAKE['domain'], sot.domain_id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['index'], sot.index)
        self.assertEqual(FAKE['role'], sot.role)
        self.assertEqual(FAKE['metadata'], sot.metadata)
        self.assertEqual(FAKE['init_at'], sot.init_at)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)
        self.assertEqual(FAKE['dependents'], sot.dependents)

    def test_check(self):
        sot = node.Node(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.check(sess))
        url = 'nodes/%s/actions' % sot.id
        body = {'check': {}}
        sess.post.assert_called_once_with(url, json=body)

    def test_recover(self):
        sot = node.Node(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.recover(sess))
        url = 'nodes/%s/actions' % sot.id
        body = {'recover': {}}
        sess.post.assert_called_once_with(url, json=body)

    def test_operation(self):
        sot = node.Node(**FAKE)
        resp = mock.Mock()
        resp.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        self.assertEqual('', sot.op(sess, 'dance', style='tango'))
        url = 'nodes/%s/ops' % sot.id
        sess.post.assert_called_once_with(url,
                                          json={'dance': {'style': 'tango'}})

    def test_adopt_preview(self):
        sot = node.Node.new()
        resp = mock.Mock()
        resp.headers = {}
        resp.json = mock.Mock(return_value={"foo": "bar"})
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        attrs = {
            'identity': 'fake-resource-id',
            'overrides': {},
            'type': 'os.nova.server-1.0',
            'snapshot': False
        }
        res = sot.adopt(sess, True, **attrs)
        self.assertEqual({"foo": "bar"}, res)
        sess.post.assert_called_once_with("nodes/adopt-preview",
                                          json=attrs)

    def test_adopt(self):
        sot = node.Node.new()
        resp = mock.Mock()
        resp.headers = {}
        resp.json = mock.Mock(return_value={"foo": "bar"})
        resp.status_code = 200
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=resp)
        res = sot.adopt(sess, False, param="value")
        self.assertEqual(sot, res)
        sess.post.assert_called_once_with("nodes/adopt",
                                          json={"param": "value"})

    def test_force_delete(self):
        sot = node.Node(**FAKE)
        resp = mock.Mock()
        resp.headers = {}
        resp.json = mock.Mock(return_value={"foo": "bar"})
        resp.status_code = 200
        sess = mock.Mock()
        sess.delete = mock.Mock(return_value=resp)
        res = sot.force_delete(sess)
        self.assertEqual(sot, res)
        url = 'nodes/%s' % sot.id
        body = {'force': True}
        sess.delete.assert_called_once_with(url, json=body)


class TestNodeDetail(testtools.TestCase):

    def test_basic(self):
        sot = node.NodeDetail()
        self.assertEqual('/nodes/%(node_id)s?show_details=True',
                         sot.base_path)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_event.py
import testtools

from openstack.clustering.v1 import event


FAKE = {
    'action': 'NODE_CREATE',
    'cluster_id': None,
    'id': 'ffaed25e-46f5-4089-8e20-b3b4722fd597',
    'level': '20',
    'oid': 'efff1c11-2ada-47da-bedd-2c9af4fd099a',
    'oname': 'node_create_b4a49016',
    'otype': 'NODEACTION',
    'project': '42d9e9663331431f97b75e25136307ff',
    'status': 'START',
    'status_reason': 'The action was abandoned.',
    'timestamp': '2016-10-10T12:46:36.000000',
    'user': '5e5bf8027826429c96af157f68dc9072'
}


class TestEvent(testtools.TestCase):

    def setUp(self):
        super(TestEvent, self).setUp()

    def test_basic(self):
        sot = event.Event()
        self.assertEqual('event', sot.resource_key)
        self.assertEqual('events', sot.resources_key)
        self.assertEqual('/events', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = event.Event(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['action'], sot.action)
        self.assertEqual(FAKE['cluster_id'], sot.cluster_id)
        self.assertEqual(FAKE['level'], sot.level)
        self.assertEqual(FAKE['oid'], sot.obj_id)
        self.assertEqual(FAKE['oname'], sot.obj_name)
        self.assertEqual(FAKE['otype'], sot.obj_type)
        self.assertEqual(FAKE['project'], sot.project_id)
        self.assertEqual(FAKE['status'], sot.status)
        self.assertEqual(FAKE['status_reason'], sot.status_reason)
        self.assertEqual(FAKE['timestamp'], sot.generated_at)
        self.assertEqual(FAKE['user'], sot.user_id)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_policy.py

import testtools

from openstack.clustering.v1 import policy


FAKE_ID = 'ac5415bd-f522-4160-8be0-f8853e4bc332'
FAKE_NAME = 'test_policy'

FAKE = {
    'id': FAKE_ID,
    'name': FAKE_NAME,
    'spec': {
        'type': 'senlin.policy.deletion',
        'version': '1.0',
        'properties': {
            'criteria': 'OLDEST_FIRST',
            'grace_period': 60,
            'reduce_desired_capacity': False,
            'destroy_after_deletion': True,
        }
    },
    'project': '42d9e9663331431f97b75e25136307ff',
    'domain': '204ccccd267b40aea871750116b5b184',
    'user': '3747afc360b64702a53bdd64dc1b8976',
    'type': 'senlin.policy.deletion-1.0',
    'created_at': '2015-10-10T12:46:36.000000',
    'updated_at': '2016-10-10T12:46:36.000000',
    'data': {},
}


class TestPolicy(testtools.TestCase):

    def setUp(self):
        super(TestPolicy, self).setUp()

    def test_basic(self):
        sot = policy.Policy()
        self.assertEqual('policy', sot.resource_key)
        self.assertEqual('policies', sot.resources_key)
        self.assertEqual('/policies', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = policy.Policy(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['spec'], sot.spec)
        self.assertEqual(FAKE['project'], sot.project_id)
        self.assertEqual(FAKE['domain'], sot.domain_id)
        self.assertEqual(FAKE['user'], sot.user_id)
        self.assertEqual(FAKE['data'], sot.data)
        self.assertEqual(FAKE['created_at'], sot.created_at)
        self.assertEqual(FAKE['updated_at'], sot.updated_at)


class TestPolicyValidate(testtools.TestCase):

    def setUp(self):
        super(TestPolicyValidate, self).setUp()

    def test_basic(self):
        sot = policy.PolicyValidate()
        self.assertEqual('policy', sot.resource_key)
        self.assertEqual('policies', sot.resources_key)
        self.assertEqual('/policies/validate', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_profile_type.py
import testtools

from openstack.clustering.v1 import profile_type


FAKE = {
    'name': 'FAKE_PROFILE_TYPE',
    'schema': {
        'foo': 'bar'
    },
    'support_status': {
        '1.0': [{
            'status': 'supported',
            'since': '2016.10',
        }]
    }
}


class TestProfileType(testtools.TestCase):

    def test_basic(self):
        sot = profile_type.ProfileType()
        self.assertEqual('profile_type', sot.resource_key)
        self.assertEqual('profile_types', sot.resources_key)
        self.assertEqual('/profile-types', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = profile_type.ProfileType(**FAKE)
        self.assertEqual(FAKE['name'], sot._get_id(sot))
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['schema'], sot.schema)
        self.assertEqual(FAKE['support_status'], sot.support_status)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_policy_type.py

import testtools

from openstack.clustering.v1 import policy_type


FAKE = {
    'name': 'FAKE_POLICY_TYPE',
    'schema': {
        'foo': 'bar'
    },
    'support_status': {
        '1.0': [{
            'status': 'supported',
            'since': '2016.10'
        }]
    }
}


class TestPolicyType(testtools.TestCase):

    def test_basic(self):
        sot = policy_type.PolicyType()
        self.assertEqual('policy_type', sot.resource_key)
        self.assertEqual('policy_types', sot.resources_key)
        self.assertEqual('/policy-types', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)

    def test_instantiate(self):
        sot = policy_type.PolicyType(**FAKE)
        self.assertEqual(FAKE['name'], sot._get_id(sot))
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['schema'], sot.schema)
        self.assertEqual(FAKE['support_status'], sot.support_status)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_service.py

import mock
import testtools

from openstack.clustering.v1 import service


IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'binary': 'senlin-engine',
    'host': 'host1',
    'status': 'enabled',
    'state': 'up',
    'disabled_reason': None,
    'updated_at': '2016-10-10T12:46:36.000000',
}


class TestService(testtools.TestCase):

    def setUp(self):
        super(TestService, self).setUp()
        self.resp = mock.Mock()
        self.resp.body = None
        self.resp.json = mock.Mock(return_value=self.resp.body)
        self.sess = mock.Mock()
        self.sess.put = mock.Mock(return_value=self.resp)

    def test_basic(self):
        sot = service.Service()
        self.assertEqual('service', sot.resource_key)
        self.assertEqual('services', sot.resources_key)
        self.assertEqual('/services', sot.base_path)
        self.assertEqual('clustering', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = service.Service(**EXAMPLE)
        self.assertEqual(EXAMPLE['host'], sot.host)
        self.assertEqual(EXAMPLE['binary'], sot.binary)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['state'], sot.state)
        self.assertEqual(EXAMPLE['disabled_reason'], sot.disabled_reason)
        self.assertEqual(EXAMPLE['updated_at'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/cluster/v1/test_proxy.py
import deprecation
import mock

from openstack.clustering.v1 import _proxy
from openstack.clustering.v1 import action
from openstack.clustering.v1 import build_info
from openstack.clustering.v1 import cluster
from openstack.clustering.v1 import cluster_attr
from openstack.clustering.v1 import cluster_policy
from openstack.clustering.v1 import event
from openstack.clustering.v1 import node
from openstack.clustering.v1 import policy
from openstack.clustering.v1 import policy_type
from openstack.clustering.v1 import profile
from openstack.clustering.v1 import profile_type
from openstack.clustering.v1 import receiver
from openstack.clustering.v1 import service
from openstack import proxy as proxy_base
from openstack.tests.unit import test_proxy_base


class TestClusterProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestClusterProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_build_info_get(self):
        self.verify_get(self.proxy.get_build_info, build_info.BuildInfo,
                        ignore_value=True,
                        expected_kwargs={'requires_id': False})

    def test_profile_types(self):
        self.verify_list(self.proxy.profile_types,
                         profile_type.ProfileType,
                         paginated=False)

    def test_profile_type_get(self):
        self.verify_get(self.proxy.get_profile_type,
                        profile_type.ProfileType)

    def test_policy_types(self):
        self.verify_list(self.proxy.policy_types, policy_type.PolicyType,
                         paginated=False)

    def test_policy_type_get(self):
        self.verify_get(self.proxy.get_policy_type, policy_type.PolicyType)

    def test_profile_create(self):
        self.verify_create(self.proxy.create_profile, profile.Profile)

    def test_profile_validate(self):
        self.verify_create(self.proxy.validate_profile,
                           profile.ProfileValidate)

    def test_profile_delete(self):
        self.verify_delete(self.proxy.delete_profile, profile.Profile, False)

    def test_profile_delete_ignore(self):
        self.verify_delete(self.proxy.delete_profile, profile.Profile, True)

    def test_profile_find(self):
        self.verify_find(self.proxy.find_profile, profile.Profile)

    def test_profile_get(self):
        self.verify_get(self.proxy.get_profile, profile.Profile)

    def test_profiles(self):
        self.verify_list(self.proxy.profiles, profile.Profile,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    def test_profile_update(self):
        self.verify_update(self.proxy.update_profile, profile.Profile)

    def test_cluster_create(self):
        self.verify_create(self.proxy.create_cluster, cluster.Cluster)

    def test_cluster_delete(self):
        self.verify_delete(self.proxy.delete_cluster, cluster.Cluster, False)

    def test_cluster_delete_ignore(self):
        self.verify_delete(self.proxy.delete_cluster, cluster.Cluster, True)

    def test_cluster_force_delete(self):
        self._verify("openstack.clustering.v1.cluster.Cluster.force_delete",
                     self.proxy.delete_cluster,
                     method_args=["value", False, True])

    def test_cluster_find(self):
        self.verify_find(self.proxy.find_cluster, cluster.Cluster)

    def test_cluster_get(self):
        self.verify_get(self.proxy.get_cluster, cluster.Cluster)

    def test_clusters(self):
        self.verify_list(self.proxy.clusters, cluster.Cluster,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    def test_cluster_update(self):
        self.verify_update(self.proxy.update_cluster, cluster.Cluster)

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_add_nodes(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.add_nodes",
                     self.proxy.cluster_add_nodes,
                     method_args=["FAKE_CLUSTER", ["node1"]],
                     expected_args=[["node1"]])
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_add_nodes_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.add_nodes",
                     self.proxy.cluster_add_nodes,
                     method_args=[mock_cluster, ["node1"]],
                     expected_args=[["node1"]])
    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_del_nodes(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.del_nodes",
                     self.proxy.cluster_del_nodes,
                     method_args=["FAKE_CLUSTER", ["node1"]],
                     expected_args=[["node1"]])
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_del_nodes_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.del_nodes",
                     self.proxy.cluster_del_nodes,
                     method_args=[mock_cluster, ["node1"]],
                     method_kwargs={"key": "value"},
                     expected_args=[["node1"]],
                     expected_kwargs={"key": "value"})

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_replace_nodes(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.replace_nodes",
                     self.proxy.cluster_replace_nodes,
                     method_args=["FAKE_CLUSTER", {"node1": "node2"}],
                     expected_args=[{"node1": "node2"}])
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_replace_nodes_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.replace_nodes",
                     self.proxy.cluster_replace_nodes,
                     method_args=[mock_cluster, {"node1": "node2"}],
                     expected_args=[{"node1": "node2"}])

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_scale_out(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.scale_out",
                     self.proxy.cluster_scale_out,
                     method_args=["FAKE_CLUSTER", 3],
                     expected_args=[3])
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_scale_out_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.scale_out",
                     self.proxy.cluster_scale_out,
                     method_args=[mock_cluster, 5],
                     expected_args=[5])

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_scale_in(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.scale_in",
                     self.proxy.cluster_scale_in,
                     method_args=["FAKE_CLUSTER", 3],
                     expected_args=[3])
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_scale_in_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.scale_in",
                     self.proxy.cluster_scale_in,
                     method_args=[mock_cluster, 5],
                     expected_args=[5])

    def test_services(self):
        self.verify_list(self.proxy.services, service.Service,
                         paginated=False)

    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_resize(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.resize",
                     self.proxy.cluster_resize,
                     method_args=["FAKE_CLUSTER"],
                     method_kwargs={'k1': 'v1', 'k2': 'v2'},
                     expected_kwargs={'k1': 'v1', 'k2': 'v2'})
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    def test_cluster_resize_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.resize",
                     self.proxy.cluster_resize,
                     method_args=[mock_cluster],
                     method_kwargs={'k1': 'v1', 'k2': 'v2'},
                     expected_kwargs={'k1': 'v1', 'k2': 'v2'})

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_attach_policy(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.policy_attach",
                     self.proxy.cluster_attach_policy,
                     method_args=["FAKE_CLUSTER", "FAKE_POLICY"],
                     method_kwargs={"k1": "v1", "k2": "v2"},
                     expected_args=["FAKE_POLICY"],
                     expected_kwargs={"k1": "v1", 'k2': "v2"})
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_attach_policy_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.policy_attach",
                     self.proxy.cluster_attach_policy,
                     method_args=[mock_cluster, "FAKE_POLICY"],
                     method_kwargs={"k1": "v1", "k2": "v2"},
                     expected_args=["FAKE_POLICY"],
                     expected_kwargs={"k1": "v1", 'k2': "v2"})

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_detach_policy(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.policy_detach",
                     self.proxy.cluster_detach_policy,
                     method_args=["FAKE_CLUSTER", "FAKE_POLICY"],
                     expected_args=["FAKE_POLICY"])
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_detach_policy_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.policy_detach",
                     self.proxy.cluster_detach_policy,
                     method_args=[mock_cluster, "FAKE_POLICY"],
                     expected_args=["FAKE_POLICY"])

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_find')
    def test_cluster_update_policy(self, mock_find):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_find.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.policy_update",
                     self.proxy.cluster_update_policy,
                     method_args=["FAKE_CLUSTER", "FAKE_POLICY"],
                     method_kwargs={"k1": "v1", "k2": "v2"},
                     expected_args=["FAKE_POLICY"],
                     expected_kwargs={"k1": "v1", 'k2': "v2"})
        mock_find.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER",
                                          ignore_missing=False)

    @deprecation.fail_if_not_removed
    def test_cluster_update_policy_with_obj(self):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        self._verify("openstack.clustering.v1.cluster.Cluster.policy_update",
                     self.proxy.cluster_update_policy,
                     method_args=[mock_cluster, "FAKE_POLICY"],
                     method_kwargs={"k1": "v1", "k2": "v2"},
                     expected_args=["FAKE_POLICY"],
                     expected_kwargs={"k1": "v1", 'k2': "v2"})

    def test_collect_cluster_attrs(self):
        self.verify_list(self.proxy.collect_cluster_attrs,
                         cluster_attr.ClusterAttr, paginated=False,
                         method_args=['FAKE_ID', 'path.to.attr'],
                         expected_kwargs={'cluster_id': 'FAKE_ID',
                                          'path': 'path.to.attr'})

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_cluster_check(self, mock_get):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_get.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.check",
                     self.proxy.check_cluster,
                     method_args=["FAKE_CLUSTER"])
        mock_get.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_cluster_recover(self, mock_get):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_get.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.recover",
                     self.proxy.recover_cluster,
                     method_args=["FAKE_CLUSTER"])
        mock_get.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER")

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_cluster_operation(self, mock_get):
        mock_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')
        mock_get.return_value = mock_cluster
        self._verify("openstack.clustering.v1.cluster.Cluster.op",
                     self.proxy.cluster_operation,
                     method_args=["FAKE_CLUSTER", "dance"],
                     expected_args=["dance"])
        mock_get.assert_called_once_with(cluster.Cluster, "FAKE_CLUSTER")

    def test_node_create(self):
        self.verify_create(self.proxy.create_node, node.Node)

    def test_node_delete(self):
        self.verify_delete(self.proxy.delete_node, node.Node, False)

    def test_node_delete_ignore(self):
        self.verify_delete(self.proxy.delete_node, node.Node, True)

    def test_node_force_delete(self):
        self._verify("openstack.clustering.v1.node.Node.force_delete",
                     self.proxy.delete_node,
                     method_args=["value", False, True])

    def test_node_find(self):
        self.verify_find(self.proxy.find_node, node.Node)

    def test_node_get(self):
        self.verify_get(self.proxy.get_node, node.Node)

    def test_node_get_with_details(self):
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_node,
                      method_args=['NODE_ID'],
                      method_kwargs={'details': True},
                      expected_args=[node.NodeDetail],
                      expected_kwargs={'node_id': 'NODE_ID',
                                       'requires_id': False})

    def test_nodes(self):
        self.verify_list(self.proxy.nodes, node.Node,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    def test_node_update(self):
        self.verify_update(self.proxy.update_node, node.Node)

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_node_check(self, mock_get):
        mock_node = node.Node.new(id='FAKE_NODE')
        mock_get.return_value = mock_node
        self._verify("openstack.clustering.v1.node.Node.check",
                     self.proxy.check_node,
                     method_args=["FAKE_NODE"])
        mock_get.assert_called_once_with(node.Node, "FAKE_NODE")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_node_recover(self, mock_get):
        mock_node = node.Node.new(id='FAKE_NODE')
        mock_get.return_value = mock_node
        self._verify("openstack.clustering.v1.node.Node.recover",
                     self.proxy.recover_node,
                     method_args=["FAKE_NODE"])
        mock_get.assert_called_once_with(node.Node, "FAKE_NODE")

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_node_adopt(self, mock_get):
        mock_node = node.Node.new()
        mock_get.return_value = mock_node
        self._verify("openstack.clustering.v1.node.Node.adopt",
                     self.proxy.adopt_node,
                     method_kwargs={"preview": False, "foo": "bar"},
                     expected_kwargs={"preview": False, "foo": "bar"})
        mock_get.assert_called_once_with(node.Node, None)

    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_node_adopt_preview(self, mock_get):
        mock_node = node.Node.new()
        mock_get.return_value = mock_node
        self._verify("openstack.clustering.v1.node.Node.adopt",
                     self.proxy.adopt_node,
                     method_kwargs={"preview": True, "foo": "bar"},
                     expected_kwargs={"preview": True, "foo": "bar"})
        mock_get.assert_called_once_with(node.Node, None)

    @deprecation.fail_if_not_removed
    @mock.patch.object(proxy_base.BaseProxy, '_get_resource')
    def test_node_operation(self, mock_get):
        mock_node = node.Node.new(id='FAKE_CLUSTER')
        mock_get.return_value = mock_node
        self._verify("openstack.clustering.v1.node.Node.op",
                     self.proxy.node_operation,
                     method_args=["FAKE_NODE", "dance"],
                     expected_args=["dance"])
        mock_get.assert_called_once_with(node.Node, "FAKE_NODE")

    def test_policy_create(self):
        self.verify_create(self.proxy.create_policy, policy.Policy)

    def test_policy_validate(self):
        self.verify_create(self.proxy.validate_policy,
                           policy.PolicyValidate)

    def test_policy_delete(self):
        self.verify_delete(self.proxy.delete_policy, policy.Policy, False)

    def test_policy_delete_ignore(self):
        self.verify_delete(self.proxy.delete_policy, policy.Policy, True)

    def test_policy_find(self):
        self.verify_find(self.proxy.find_policy, policy.Policy)

    def test_policy_get(self):
        self.verify_get(self.proxy.get_policy, policy.Policy)

    def test_policies(self):
        self.verify_list(self.proxy.policies, policy.Policy,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    def test_policy_update(self):
        self.verify_update(self.proxy.update_policy, policy.Policy)

    def test_cluster_policies(self):
        self.verify_list(self.proxy.cluster_policies,
                         cluster_policy.ClusterPolicy,
                         paginated=False,
                         method_args=["FAKE_CLUSTER"],
                         expected_kwargs={"cluster_id": "FAKE_CLUSTER"})

    def test_get_cluster_policy(self):
        fake_policy = cluster_policy.ClusterPolicy.new(id="FAKE_POLICY")
        fake_cluster = cluster.Cluster.new(id='FAKE_CLUSTER')

        # ClusterPolicy object as input
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_cluster_policy,
                      method_args=[fake_policy, "FAKE_CLUSTER"],
                      expected_args=[cluster_policy.ClusterPolicy,
                                     fake_policy],
                      expected_kwargs={'cluster_id': 'FAKE_CLUSTER'},
                      expected_result=fake_policy)

        # Policy ID as input
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_cluster_policy,
                      method_args=["FAKE_POLICY", "FAKE_CLUSTER"],
                      expected_args=[cluster_policy.ClusterPolicy,
                                     "FAKE_POLICY"],
                      expected_kwargs={"cluster_id": "FAKE_CLUSTER"})

        # Cluster object as input
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_cluster_policy,
                      method_args=["FAKE_POLICY", fake_cluster],
                      expected_args=[cluster_policy.ClusterPolicy,
                                     "FAKE_POLICY"],
                      expected_kwargs={"cluster_id": fake_cluster})

    def test_receiver_create(self):
        self.verify_create(self.proxy.create_receiver, receiver.Receiver)

    def test_receiver_update(self):
        self.verify_update(self.proxy.update_receiver, receiver.Receiver)

    def test_receiver_delete(self):
        self.verify_delete(self.proxy.delete_receiver, receiver.Receiver,
                           False)

    def test_receiver_delete_ignore(self):
        self.verify_delete(self.proxy.delete_receiver, receiver.Receiver,
                           True)

    def test_receiver_find(self):
        self.verify_find(self.proxy.find_receiver, receiver.Receiver)

    def test_receiver_get(self):
        self.verify_get(self.proxy.get_receiver, receiver.Receiver)

    def test_receivers(self):
        self.verify_list(self.proxy.receivers, receiver.Receiver,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    def test_action_get(self):
        self.verify_get(self.proxy.get_action, action.Action)

    def test_actions(self):
        self.verify_list(self.proxy.actions, action.Action,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    def test_event_get(self):
        self.verify_get(self.proxy.get_event, event.Event)

    def test_events(self):
        self.verify_list(self.proxy.events, event.Event,
                         paginated=True,
                         method_kwargs={'limit': 2},
                         expected_kwargs={'limit': 2})

    @mock.patch("openstack.resource.wait_for_status")
    def test_wait_for(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.proxy.wait_for_status(mock_resource, 'ACTIVE')

        mock_wait.assert_called_once_with(self.proxy, mock_resource,
                                          'ACTIVE', [], 2, 120)

    @mock.patch("openstack.resource.wait_for_status")
    def test_wait_for_params(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.proxy.wait_for_status(mock_resource, 'ACTIVE', ['ERROR'], 1, 2)

        mock_wait.assert_called_once_with(self.proxy, mock_resource,
                                          'ACTIVE', ['ERROR'], 1, 2)

    @mock.patch("openstack.resource.wait_for_delete")
    def test_wait_for_delete(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.proxy.wait_for_delete(mock_resource)

        mock_wait.assert_called_once_with(self.proxy, mock_resource, 2, 120)

    @mock.patch("openstack.resource.wait_for_delete")
    def test_wait_for_delete_params(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.proxy.wait_for_delete(mock_resource, 1, 2)

        mock_wait.assert_called_once_with(self.proxy, mock_resource, 1, 2)

openstacksdk-0.11.3/openstack/tests/unit/cluster/__init__.py
openstacksdk-0.11.3/openstack/tests/unit/workflow/
openstacksdk-0.11.3/openstack/tests/unit/workflow/test_version.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.workflow import version

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'status': '3',
}


class TestVersion(testtools.TestCase):

    def test_basic(self):
        sot = version.Version()
        self.assertEqual('version', sot.resource_key)
        self.assertEqual('versions', sot.resources_key)
        self.assertEqual('/', sot.base_path)
        self.assertEqual('workflowv2', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = version.Version(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['status'], sot.status)

openstacksdk-0.11.3/openstack/tests/unit/workflow/test_workflow.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.workflow.v2 import workflow

FAKE = {
    'scope': 'private',
    'id': 'ffaed25e-46f5-4089-8e20-b3b4722fd597',
    'definition': 'workflow_def',
}


class TestWorkflow(testtools.TestCase):

    def setUp(self):
        super(TestWorkflow, self).setUp()

    def test_basic(self):
        sot = workflow.Workflow()
        self.assertEqual('workflow', sot.resource_key)
        self.assertEqual('workflows', sot.resources_key)
        self.assertEqual('/workflows', sot.base_path)
        self.assertEqual('workflowv2', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_delete)

    def test_instantiate(self):
        sot = workflow.Workflow(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['scope'], sot.scope)
        self.assertEqual(FAKE['definition'], sot.definition)

openstacksdk-0.11.3/openstack/tests/unit/workflow/test_execution.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.workflow.v2 import execution

FAKE_INPUT = {
    'cluster_id': '8c74607c-5a74-4490-9414-a3475b1926c2',
    'node_id': 'fba2cc5d-706f-4631-9577-3956048d13a2',
    'flavor_id': '1'
}

FAKE = {
    'id': 'ffaed25e-46f5-4089-8e20-b3b4722fd597',
    'workflow_name': 'cluster-coldmigration',
    'input': FAKE_INPUT,
}


class TestExecution(testtools.TestCase):

    def setUp(self):
        super(TestExecution, self).setUp()

    def test_basic(self):
        sot = execution.Execution()
        self.assertEqual('execution', sot.resource_key)
        self.assertEqual('executions', sot.resources_key)
        self.assertEqual('/executions', sot.base_path)
        self.assertEqual('workflowv2', sot.service.service_type)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_list)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_delete)

    def test_instantiate(self):
        sot = execution.Execution(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['workflow_name'], sot.workflow_name)
        self.assertEqual(FAKE['input'], sot.input)

openstacksdk-0.11.3/openstack/tests/unit/workflow/__init__.py
openstacksdk-0.11.3/openstack/tests/unit/workflow/test_workflow_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.workflow import workflow_service


class TestWorkflowService(testtools.TestCase):

    def test_service(self):
        sot = workflow_service.WorkflowService()
        self.assertEqual('workflowv2', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v2', sot.valid_versions[0].module)
        self.assertEqual('v2', sot.valid_versions[0].path)

openstacksdk-0.11.3/openstack/tests/unit/workflow/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.tests.unit import test_proxy_base
from openstack.workflow.v2 import _proxy
from openstack.workflow.v2 import execution
from openstack.workflow.v2 import workflow


class TestWorkflowProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestWorkflowProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_workflows(self):
        self.verify_list(self.proxy.workflows,
                         workflow.Workflow,
                         paginated=True)

    def test_executions(self):
        self.verify_list(self.proxy.executions,
                         execution.Execution,
                         paginated=True)

    def test_workflow_get(self):
        self.verify_get(self.proxy.get_workflow, workflow.Workflow)

    def test_execution_get(self):
        self.verify_get(self.proxy.get_execution, execution.Execution)

    def test_workflow_create(self):
        self.verify_create(self.proxy.create_workflow, workflow.Workflow)

    def test_execution_create(self):
        self.verify_create(self.proxy.create_execution, execution.Execution)

    def test_workflow_delete(self):
        self.verify_delete(self.proxy.delete_workflow,
                           workflow.Workflow, True)

    def test_execution_delete(self):
        self.verify_delete(self.proxy.delete_execution,
                           execution.Execution, True)

    def test_workflow_find(self):
        self.verify_find(self.proxy.find_workflow, workflow.Workflow)

    def test_execution_find(self):
        self.verify_find(self.proxy.find_execution, execution.Execution)

openstacksdk-0.11.3/openstack/tests/unit/test_exceptions.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import testtools
import uuid

from openstack import exceptions


class Test_Exception(testtools.TestCase):

    def test_method_not_supported(self):
        exc = exceptions.MethodNotSupported(self.__class__, 'list')
        expected = ('The list method is not supported for ' +
                    'openstack.tests.unit.test_exceptions.Test_Exception')
        self.assertEqual(expected, str(exc))


class Test_HttpException(testtools.TestCase):

    def setUp(self):
        super(Test_HttpException, self).setUp()
        self.message = "mayday"

    def _do_raise(self, *args, **kwargs):
        raise exceptions.HttpException(*args, **kwargs)

    def test_message(self):
        exc = self.assertRaises(exceptions.HttpException,
                                self._do_raise, self.message)
        self.assertEqual(self.message, exc.message)

    def test_details(self):
        details = "some details"
        exc = self.assertRaises(exceptions.HttpException,
                                self._do_raise, self.message,
                                details=details)
        self.assertEqual(self.message, exc.message)
        self.assertEqual(details, exc.details)

    def test_http_status(self):
        http_status = 123
        exc = self.assertRaises(exceptions.HttpException,
                                self._do_raise, self.message,
                                http_status=http_status)
        self.assertEqual(self.message, exc.message)
        self.assertEqual(http_status, exc.status_code)


class TestRaiseFromResponse(testtools.TestCase):

    def setUp(self):
        super(TestRaiseFromResponse, self).setUp()
        self.message = "Where is my kitty?"
    def _do_raise(self, *args, **kwargs):
        return exceptions.raise_from_response(*args, **kwargs)

    def test_raise_no_exception(self):
        response = mock.Mock()
        response.status_code = 200
        self.assertIsNone(self._do_raise(response))

    def test_raise_not_found_exception(self):
        response = mock.Mock()
        response.status_code = 404
        response.headers = {
            'content-type': 'application/json',
            'x-openstack-request-id': uuid.uuid4().hex,
        }
        exc = self.assertRaises(exceptions.NotFoundException,
                                self._do_raise, response,
                                error_message=self.message)
        self.assertEqual(self.message, exc.message)
        self.assertEqual(response.status_code, exc.status_code)
        self.assertEqual(
            response.headers.get('x-openstack-request-id'),
            exc.request_id
        )

    def test_raise_bad_request_exception(self):
        response = mock.Mock()
        response.status_code = 400
        response.headers = {
            'content-type': 'application/json',
            'x-openstack-request-id': uuid.uuid4().hex,
        }
        exc = self.assertRaises(exceptions.BadRequestException,
                                self._do_raise, response,
                                error_message=self.message)
        self.assertEqual(self.message, exc.message)
        self.assertEqual(response.status_code, exc.status_code)
        self.assertEqual(
            response.headers.get('x-openstack-request-id'),
            exc.request_id
        )

    def test_raise_http_exception(self):
        response = mock.Mock()
        response.status_code = 403
        response.headers = {
            'content-type': 'application/json',
            'x-openstack-request-id': uuid.uuid4().hex,
        }
        exc = self.assertRaises(exceptions.HttpException,
                                self._do_raise, response,
                                error_message=self.message)
        self.assertEqual(self.message, exc.message)
        self.assertEqual(response.status_code, exc.status_code)
        self.assertEqual(
            response.headers.get('x-openstack-request-id'),
            exc.request_id
        )

openstacksdk-0.11.3/openstack/tests/unit/orchestration/
openstacksdk-0.11.3/openstack/tests/unit/orchestration/test_version.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.orchestration import version

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'status': '3',
}


class TestVersion(testtools.TestCase):

    def test_basic(self):
        sot = version.Version()
        self.assertEqual('version', sot.resource_key)
        self.assertEqual('versions', sot.resources_key)
        self.assertEqual('/', sot.base_path)
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = version.Version(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['status'], sot.status)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/test_orchestration_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.orchestration import orchestration_service


class TestOrchestrationService(testtools.TestCase):

    def test_service(self):
        sot = orchestration_service.OrchestrationService()
        self.assertEqual('orchestration', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v1', sot.valid_versions[0].module)
        self.assertEqual('v1', sot.valid_versions[0].path)
        self.assertTrue(sot.requires_project_id)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/
openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_stack_template.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy

import testtools

from openstack.orchestration.v1 import stack_template

FAKE = {
    'description': 'template description',
    'heat_template_version': '2014-10-16',
    'parameters': {
        'key_name': {
            'type': 'string'
        }
    },
    'resources': {
        'resource1': {
            'type': 'ResourceType'
        }
    },
    'conditions': {'cd1': True},
    'outputs': {
        'key1': 'value1'
    }
}


class TestStackTemplate(testtools.TestCase):

    def test_basic(self):
        sot = stack_template.StackTemplate()
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

    def test_make_it(self):
        sot = stack_template.StackTemplate(**FAKE)
        self.assertEqual(FAKE['description'], sot.description)
        self.assertEqual(FAKE['heat_template_version'],
                         sot.heat_template_version)
        self.assertEqual(FAKE['outputs'], sot.outputs)
        self.assertEqual(FAKE['parameters'], sot.parameters)
        self.assertEqual(FAKE['resources'], sot.resources)
        self.assertEqual(FAKE['conditions'], sot.conditions)

    def test_to_dict(self):
        fake_sot = copy.deepcopy(FAKE)
        fake_sot['parameter_groups'] = [{
            "description": "server parameters",
            "parameters": ["key_name", "image_id"],
            "label": "server_parameters"}]
        for temp_version in ['2016-10-14', '2017-02-24', '2017-02-24',
                             '2017-09-01', '2018-03-02', 'newton', 'ocata',
                             'pike', 'queens']:
            fake_sot['heat_template_version'] = temp_version
            sot = stack_template.StackTemplate(**fake_sot)
            self.assertEqual(fake_sot, sot.to_dict())

    def test_to_dict_without_conditions(self):
        fake_sot = copy.deepcopy(FAKE)
        fake_sot['parameter_groups'] = [{
            "description": "server parameters",
            "parameters": ["key_name", "image_id"],
            "label": "server_parameters"}]
        fake_sot.pop('conditions')
        for temp_version in ['2013-05-23', '2014-10-16', '2015-04-30',
                             '2015-10-15', '2016-04-08']:
            fake_sot['heat_template_version'] = temp_version
            sot = stack_template.StackTemplate(**fake_sot)
            self.assertEqual(fake_sot, sot.to_dict())

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_software_config.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.orchestration.v1 import software_config

FAKE_ID = 'ce8ae86c-9810-4cb1-8888-7fb53bc523bf'
FAKE_NAME = 'test_software_config'
FAKE = {
    'id': FAKE_ID,
    'name': FAKE_NAME,
    'config': 'fake config',
    'creation_time': '2015-03-09T12:15:57',
    'group': 'fake group',
    'inputs': [{'foo': 'bar'}],
    'outputs': [{'baz': 'zoo'}],
    'options': {'key': 'value'},
}


class TestSoftwareConfig(testtools.TestCase):

    def test_basic(self):
        sot = software_config.SoftwareConfig()
        self.assertEqual('software_config', sot.resource_key)
        self.assertEqual('software_configs', sot.resources_key)
        self.assertEqual('/software_configs', sot.base_path)
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = software_config.SoftwareConfig(**FAKE)
        self.assertEqual(FAKE_ID, sot.id)
        self.assertEqual(FAKE_NAME, sot.name)
        self.assertEqual(FAKE['config'], sot.config)
        self.assertEqual(FAKE['creation_time'], sot.created_at)
        self.assertEqual(FAKE['group'], sot.group)
        self.assertEqual(FAKE['inputs'], sot.inputs)
        self.assertEqual(FAKE['outputs'], sot.outputs)
        self.assertEqual(FAKE['options'], sot.options)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_resource.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.orchestration.v1 import resource

FAKE_ID = '32e39358-2422-4ad0-a1b5-dd60696bf564'
FAKE_NAME = 'test_stack'
FAKE = {
    'links': [{
        'href': 'http://res_link',
        'rel': 'self'
    }, {
        'href': 'http://stack_link',
        'rel': 'stack'
    }],
    'logical_resource_id': 'the_resource',
    'name': 'the_resource',
    'physical_resource_id': '9f38ab5a-37c8-4e40-9702-ce27fc5f6954',
    'required_by': [],
    'resource_type': 'OS::Heat::FakeResource',
    'status': 'CREATE_COMPLETE',
    'status_reason': 'state changed',
    'updated_time': '2015-03-09T12:15:57.233772',
}


class TestResource(testtools.TestCase):

    def test_basic(self):
        sot = resource.Resource()
        self.assertEqual('resource', sot.resource_key)
        self.assertEqual('resources', sot.resources_key)
        self.assertEqual('/stacks/%(stack_name)s/%(stack_id)s/resources',
                         sot.base_path)
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_retrieve)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = resource.Resource(**FAKE)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['logical_resource_id'], sot.logical_resource_id)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['physical_resource_id'],
                         sot.physical_resource_id)
        self.assertEqual(FAKE['required_by'], sot.required_by)
        self.assertEqual(FAKE['resource_type'], sot.resource_type)
        self.assertEqual(FAKE['status'], sot.status)
        self.assertEqual(FAKE['status_reason'], sot.status_reason)
        self.assertEqual(FAKE['updated_time'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_stack.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import six
import testtools

from openstack import exceptions
from openstack.orchestration.v1 import stack
from openstack import resource

FAKE_ID = 'ce8ae86c-9810-4cb1-8888-7fb53bc523bf'
FAKE_NAME = 'test_stack'
FAKE = {
    'capabilities': '1',
    'creation_time': '2015-03-09T12:15:57.233772',
    'description': '3',
    'disable_rollback': True,
    'id': FAKE_ID,
    'links': [{
        'href': 'stacks/%s/%s' % (FAKE_NAME, FAKE_ID),
        'rel': 'self'}],
    'notification_topics': '7',
    'outputs': '8',
    'parameters': {'OS::stack_id': '9'},
    'name': FAKE_NAME,
    'status': '11',
    'status_reason': '12',
    'tags': ['FOO', 'bar:1'],
    'template_description': '13',
    'template_url': 'http://www.example.com/wordpress.yaml',
    'timeout_mins': '14',
    'updated_time': '2015-03-09T12:30:00.000000',
}
FAKE_CREATE_RESPONSE = {
    'stack': {
        'id': FAKE_ID,
        'links': [{
            'href': 'stacks/%s/%s' % (FAKE_NAME, FAKE_ID),
            'rel': 'self'}]}
}


class TestStack(testtools.TestCase):

    def test_basic(self):
        sot = stack.Stack()
        self.assertEqual('stack', sot.resource_key)
        self.assertEqual('stacks', sot.resources_key)
        self.assertEqual('/stacks', sot.base_path)
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = stack.Stack(**FAKE)
        self.assertEqual(FAKE['capabilities'], sot.capabilities)
        self.assertEqual(FAKE['creation_time'], sot.created_at)
        self.assertEqual(FAKE['description'], sot.description)
        self.assertTrue(sot.is_rollback_disabled)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['links'], sot.links)
        self.assertEqual(FAKE['notification_topics'], sot.notification_topics)
        self.assertEqual(FAKE['outputs'], sot.outputs)
        self.assertEqual(FAKE['parameters'], sot.parameters)
        self.assertEqual(FAKE['name'], sot.name)
        self.assertEqual(FAKE['status'], sot.status)
        self.assertEqual(FAKE['status_reason'], sot.status_reason)
        self.assertEqual(FAKE['tags'], sot.tags)
        self.assertEqual(FAKE['template_description'],
                         sot.template_description)
        self.assertEqual(FAKE['template_url'], sot.template_url)
        self.assertEqual(FAKE['timeout_mins'], sot.timeout_mins)
        self.assertEqual(FAKE['updated_time'], sot.updated_at)

    @mock.patch.object(resource.Resource, 'create')
    def test_create(self, mock_create):
        sess = mock.Mock()
        sot = stack.Stack(FAKE)
        res = sot.create(sess)
        mock_create.assert_called_once_with(sess, prepend_key=False)
        self.assertEqual(mock_create.return_value, res)

    @mock.patch.object(resource.Resource, 'update')
    def test_update(self, mock_update):
        sess = mock.Mock()
        sot = stack.Stack(FAKE)
        res = sot.update(sess)
        mock_update.assert_called_once_with(sess, prepend_key=False,
                                            has_body=False)
        self.assertEqual(mock_update.return_value, res)

    def test_check(self):
        sess = mock.Mock()
        sot = stack.Stack(**FAKE)
        sot._action = mock.Mock()
        body = {'check': ''}
        sot.check(sess)
        sot._action.assert_called_with(sess, body)

    @mock.patch.object(resource.Resource, 'get')
    def test_get(self, mock_get):
        sess = mock.Mock()
        sot = stack.Stack(**FAKE)
        deleted_stack = mock.Mock(id=FAKE_ID, status='DELETE_COMPLETE')
        normal_stack = mock.Mock(status='CREATE_COMPLETE')
        mock_get.side_effect = [
            normal_stack,
            exceptions.NotFoundException(message='oops'),
            deleted_stack,
        ]
        self.assertEqual(normal_stack, sot.get(sess))
        ex = self.assertRaises(exceptions.NotFoundException, sot.get, sess)
        self.assertEqual('oops', six.text_type(ex))
        ex = self.assertRaises(exceptions.NotFoundException, sot.get, sess)
        self.assertEqual('No stack found for %s' % FAKE_ID, six.text_type(ex))


class TestStackPreview(testtools.TestCase):

    def test_basic(self):
        sot = stack.StackPreview()
        self.assertEqual('/stacks/preview', sot.base_path)
        self.assertTrue(sot.allow_create)
        self.assertFalse(sot.allow_list)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_stack_environment.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.orchestration.v1 import stack_environment as se

FAKE = {
    'encrypted_param_names': ['n1', 'n2'],
    'event_sinks': {
        's1': 'v1'
    },
    'parameters': {
        'key_name': {
            'type': 'string'
        }
    },
    'parameter_defaults': {
        'p1': 'def1'
    },
    'resource_registry': {
        'resources': {
            'type1': 'type2'
        }
    },
}


class TestStackEnvironment(testtools.TestCase):

    def test_basic(self):
        sot = se.StackEnvironment()
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

    def test_make_it(self):
        sot = se.StackEnvironment(**FAKE)
        self.assertEqual(FAKE['encrypted_param_names'],
                         sot.encrypted_param_names)
        self.assertEqual(FAKE['event_sinks'], sot.event_sinks)
        self.assertEqual(FAKE['parameters'], sot.parameters)
        self.assertEqual(FAKE['parameter_defaults'], sot.parameter_defaults)
        self.assertEqual(FAKE['resource_registry'], sot.resource_registry)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_software_deployment.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.orchestration.v1 import software_deployment

FAKE = {
    'id': 'ce8ae86c-9810-4cb1-8888-7fb53bc523bf',
    'action': 'CREATE',
    'config_id': 'CONFIG ID',
    'creation_time': '2015-03-09T12:15:57',
    'server_id': 'FAKE_SERVER',
    'stack_user_project_id': 'ANOTHER PROJECT',
    'status': 'IN_PROGRESS',
    'status_reason': 'Why are we here?',
    'input_values': {'foo': 'bar'},
    'output_values': {'baz': 'zoo'},
    'updated_time': '2015-03-09T12:15:57',
}


class TestSoftwareDeployment(testtools.TestCase):

    def test_basic(self):
        sot = software_deployment.SoftwareDeployment()
        self.assertEqual('software_deployment', sot.resource_key)
        self.assertEqual('software_deployments', sot.resources_key)
        self.assertEqual('/software_deployments', sot.base_path)
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = software_deployment.SoftwareDeployment(**FAKE)
        self.assertEqual(FAKE['id'], sot.id)
        self.assertEqual(FAKE['action'], sot.action)
        self.assertEqual(FAKE['config_id'], sot.config_id)
        self.assertEqual(FAKE['creation_time'], sot.created_at)
        self.assertEqual(FAKE['server_id'], sot.server_id)
        self.assertEqual(FAKE['stack_user_project_id'],
                         sot.stack_user_project_id)
        self.assertEqual(FAKE['input_values'], sot.input_values)
        self.assertEqual(FAKE['output_values'], sot.output_values)
        self.assertEqual(FAKE['status'], sot.status)
        self.assertEqual(FAKE['status_reason'], sot.status_reason)
        self.assertEqual(FAKE['updated_time'], sot.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_template.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack.orchestration.v1 import template
from openstack import resource

FAKE = {
    'Description': 'Blah blah',
    'Parameters': {
        'key_name': {
            'type': 'string'
        }
    },
    'ParameterGroups': [{
        'label': 'Group 1',
        'parameters': ['key_name']
    }]
}


class TestTemplate(testtools.TestCase):

    def test_basic(self):
        sot = template.Template()
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

    def test_make_it(self):
        sot = template.Template(**FAKE)
        self.assertEqual(FAKE['Description'], sot.description)
        self.assertEqual(FAKE['Parameters'], sot.parameters)
        self.assertEqual(FAKE['ParameterGroups'], sot.parameter_groups)

    @mock.patch.object(resource.Resource, '_translate_response')
    def test_validate(self, mock_translate):
        sess = mock.Mock()
        sot = template.Template()
        tmpl = mock.Mock()
        body = {'template': tmpl}
        sot.validate(sess, tmpl)
        sess.post.assert_called_once_with('/validate', json=body)
        mock_translate.assert_called_once_with(sess.post.return_value)

    @mock.patch.object(resource.Resource, '_translate_response')
    def test_validate_with_env(self, mock_translate):
        sess = mock.Mock()
        sot = template.Template()
        tmpl = mock.Mock()
        env = mock.Mock()
        body = {'template': tmpl, 'environment': env}
        sot.validate(sess, tmpl, environment=env)
        sess.post.assert_called_once_with('/validate', json=body)
        mock_translate.assert_called_once_with(sess.post.return_value)

    @mock.patch.object(resource.Resource, '_translate_response')
    def test_validate_with_template_url(self, mock_translate):
        sess = mock.Mock()
        sot = template.Template()
        template_url = 'http://host1'
        body = {'template': None, 'template_url': template_url}
        sot.validate(sess, None, template_url=template_url)
        sess.post.assert_called_once_with('/validate', json=body)
        mock_translate.assert_called_once_with(sess.post.return_value)

    @mock.patch.object(resource.Resource, '_translate_response')
    def test_validate_with_ignore_errors(self, mock_translate):
        sess = mock.Mock()
        sot = template.Template()
        tmpl = mock.Mock()
        body = {'template': tmpl}
        sot.validate(sess, tmpl, ignore_errors='123,456')
        sess.post.assert_called_once_with(
            '/validate?ignore_errors=123%2C456', json=body)
        mock_translate.assert_called_once_with(sess.post.return_value)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_stack_files.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack.orchestration.v1 import stack_files as sf
from openstack import resource

FAKE = {
    'stack_id': 'ID',
    'stack_name': 'NAME'
}


class TestStackFiles(testtools.TestCase):

    def test_basic(self):
        sot = sf.StackFiles()
        self.assertEqual('orchestration', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertFalse(sot.allow_list)

    def test_make_it(self):
        sot = sf.StackFiles(**FAKE)
        self.assertEqual(FAKE['stack_id'], sot.stack_id)
        self.assertEqual(FAKE['stack_name'], sot.stack_name)

    @mock.patch.object(resource.Resource, '_prepare_request')
    def test_get(self, mock_prepare_request):
        resp = mock.Mock()
        resp.json = mock.Mock(return_value={'file': 'file-content'})
        sess = mock.Mock()
        sess.get = mock.Mock(return_value=resp)
        sot = sf.StackFiles(**FAKE)
        sot.service = mock.Mock()
        req = mock.MagicMock()
        req.url = ('/stacks/%(stack_name)s/%(stack_id)s/files' %
                   {'stack_name': FAKE['stack_name'],
                    'stack_id': FAKE['stack_id']})
        mock_prepare_request.return_value = req
        files = sot.get(sess)
        sess.get.assert_called_once_with(req.url)
        self.assertEqual({'file': 'file-content'}, files)

openstacksdk-0.11.3/openstack/tests/unit/orchestration/v1/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six

from openstack import exceptions
from openstack.orchestration.v1 import _proxy
from openstack.orchestration.v1 import resource
from openstack.orchestration.v1 import software_config as sc
from openstack.orchestration.v1 import software_deployment as sd
from openstack.orchestration.v1 import stack
from openstack.orchestration.v1 import stack_environment
from openstack.orchestration.v1 import stack_files
from openstack.orchestration.v1 import stack_template
from openstack.orchestration.v1 import template
from openstack.tests.unit import test_proxy_base


class TestOrchestrationProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestOrchestrationProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_create_stack(self):
        self.verify_create(self.proxy.create_stack, stack.Stack)

    def test_create_stack_preview(self):
        method_kwargs = {"preview": True, "x": 1, "y": 2, "z": 3}
        self.verify_create(self.proxy.create_stack, stack.StackPreview,
                           method_kwargs=method_kwargs)

    def test_find_stack(self):
        self.verify_find(self.proxy.find_stack, stack.Stack)

    def test_stacks(self):
        self.verify_list(self.proxy.stacks, stack.Stack, paginated=False)

    def test_get_stack(self):
        self.verify_get(self.proxy.get_stack, stack.Stack)

    def test_update_stack(self):
        self.verify_update(self.proxy.update_stack, stack.Stack)

    def test_delete_stack(self):
        self.verify_delete(self.proxy.delete_stack, stack.Stack, False)

    def test_delete_stack_ignore(self):
        self.verify_delete(self.proxy.delete_stack, stack.Stack, True)

    @mock.patch.object(stack.Stack, 'check')
    def test_check_stack_with_stack_object(self, mock_check):
        stk = stack.Stack(id='FAKE_ID')
        res = self.proxy.check_stack(stk)
        self.assertIsNone(res)
        mock_check.assert_called_once_with(self.proxy)

    @mock.patch.object(stack.Stack, 'existing')
    def test_check_stack_with_stack_ID(self, mock_stack):
        stk = mock.Mock()
        mock_stack.return_value = stk
        res = self.proxy.check_stack('FAKE_ID')
        self.assertIsNone(res)
        mock_stack.assert_called_once_with(id='FAKE_ID')
        stk.check.assert_called_once_with(self.proxy)

    @mock.patch.object(stack.Stack, 'find')
    def test_get_stack_environment_with_stack_identity(self, mock_find):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        mock_find.return_value = stk
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_stack_environment,
                      method_args=['IDENTITY'],
                      expected_args=[stack_environment.StackEnvironment],
                      expected_kwargs={'requires_id': False,
                                       'stack_name': stack_name,
                                       'stack_id': stack_id})
        mock_find.assert_called_once_with(mock.ANY, 'IDENTITY',
                                          ignore_missing=False)

    def test_get_stack_environment_with_stack_object(self):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_stack_environment,
                      method_args=[stk],
                      expected_args=[stack_environment.StackEnvironment],
                      expected_kwargs={'requires_id': False,
                                       'stack_name': stack_name,
                                       'stack_id': stack_id})

    @mock.patch.object(stack_files.StackFiles, 'get')
    @mock.patch.object(stack.Stack, 'find')
    def test_get_stack_files_with_stack_identity(self, mock_find, mock_get):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        mock_find.return_value = stk
        mock_get.return_value = {'file': 'content'}
        res = self.proxy.get_stack_files('IDENTITY')
        self.assertEqual({'file': 'content'}, res)
        mock_find.assert_called_once_with(mock.ANY, 'IDENTITY',
                                          ignore_missing=False)
        mock_get.assert_called_once_with(self.proxy)

    @mock.patch.object(stack_files.StackFiles, 'get')
    def test_get_stack_files_with_stack_object(self, mock_get):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        mock_get.return_value = {'file': 'content'}
        res = self.proxy.get_stack_files(stk)
        self.assertEqual({'file': 'content'}, res)
        mock_get.assert_called_once_with(self.proxy)

    @mock.patch.object(stack.Stack, 'find')
    def test_get_stack_template_with_stack_identity(self, mock_find):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        mock_find.return_value = stk
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_stack_template,
                      method_args=['IDENTITY'],
                      expected_args=[stack_template.StackTemplate],
                      expected_kwargs={'requires_id': False,
                                       'stack_name': stack_name,
                                       'stack_id': stack_id})
        mock_find.assert_called_once_with(mock.ANY, 'IDENTITY',
                                          ignore_missing=False)

    def test_get_stack_template_with_stack_object(self):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        self._verify2('openstack.proxy.BaseProxy._get',
                      self.proxy.get_stack_template,
                      method_args=[stk],
                      expected_args=[stack_template.StackTemplate],
                      expected_kwargs={'requires_id': False,
                                       'stack_name': stack_name,
                                       'stack_id': stack_id})

    @mock.patch.object(stack.Stack, 'find')
    def test_resources_with_stack_object(self, mock_find):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        self.verify_list(self.proxy.resources, resource.Resource,
                         paginated=False, method_args=[stk],
                         expected_kwargs={'stack_name': stack_name,
                                          'stack_id': stack_id})
        self.assertEqual(0, mock_find.call_count)

    @mock.patch.object(stack.Stack, 'find')
    def test_resources_with_stack_name(self, mock_find):
        stack_id = '1234'
        stack_name = 'test_stack'
        stk = stack.Stack(id=stack_id, name=stack_name)
        mock_find.return_value = stk
        self.verify_list(self.proxy.resources, resource.Resource,
                         paginated=False, method_args=[stack_id],
                         expected_kwargs={'stack_name': stack_name,
                                          'stack_id': stack_id})
        mock_find.assert_called_once_with(mock.ANY, stack_id,
                                          ignore_missing=False)

    @mock.patch.object(stack.Stack, 'find')
    @mock.patch.object(resource.Resource, 'list')
    def test_resources_stack_not_found(self, mock_list, mock_find):
        stack_name = 'test_stack'
        mock_find.side_effect = exceptions.ResourceNotFound(
            'No stack found for test_stack')
        ex = self.assertRaises(exceptions.ResourceNotFound,
                               self.proxy.resources, stack_name)
        self.assertEqual('No stack found for test_stack', six.text_type(ex))

    def test_create_software_config(self):
        self.verify_create(self.proxy.create_software_config,
                           sc.SoftwareConfig)

    def test_software_configs(self):
        self.verify_list(self.proxy.software_configs, sc.SoftwareConfig,
                         paginated=True)

    def test_get_software_config(self):
        self.verify_get(self.proxy.get_software_config, sc.SoftwareConfig)

    def test_delete_software_config(self):
        self.verify_delete(self.proxy.delete_software_config,
                           sc.SoftwareConfig, True)
        self.verify_delete(self.proxy.delete_software_config,
                           sc.SoftwareConfig, False)

    def test_create_software_deployment(self):
        self.verify_create(self.proxy.create_software_deployment,
                           sd.SoftwareDeployment)

    def test_software_deployments(self):
        self.verify_list(self.proxy.software_deployments,
                         sd.SoftwareDeployment, paginated=False)

    def test_get_software_deployment(self):
        self.verify_get(self.proxy.get_software_deployment,
                        sd.SoftwareDeployment)

    def test_update_software_deployment(self):
        self.verify_update(self.proxy.update_software_deployment,
                           sd.SoftwareDeployment)

    def test_delete_software_deployment(self):
        self.verify_delete(self.proxy.delete_software_deployment,
                           sd.SoftwareDeployment, True)
        self.verify_delete(self.proxy.delete_software_deployment,
                           sd.SoftwareDeployment, False)

    @mock.patch.object(template.Template, 'validate')
    def test_validate_template(self, mock_validate):
        tmpl = mock.Mock()
        env = mock.Mock()
        tmpl_url = 'A_URI'
        ignore_errors = 'a_string'
        res = self.proxy.validate_template(tmpl, env, tmpl_url, ignore_errors)
        mock_validate.assert_called_once_with(
            self.proxy, tmpl, environment=env, template_url=tmpl_url,
            ignore_errors=ignore_errors)
        self.assertEqual(mock_validate.return_value, res)

    def test_validate_template_invalid_request(self):
        err = self.assertRaises(exceptions.InvalidRequest,
                                self.proxy.validate_template,
                                None, template_url=None)
        self.assertEqual("'template_url' must be specified when template is "
                         "None", six.text_type(err))

openstacksdk-0.11.3/openstack/tests/unit/orchestration/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_version.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools

from openstack.load_balancer import version

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': '2',
    'status': '3',
}


class TestVersion(testtools.TestCase):

    def test_basic(self):
        sot = version.Version()
        self.assertEqual('version', sot.resource_key)
        self.assertEqual('versions', sot.resources_key)
        self.assertEqual('/', sot.base_path)
        self.assertEqual('load-balancer', sot.service.service_type)
        self.assertFalse(sot.allow_create)
        self.assertFalse(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = version.Version(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['status'], sot.status)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_member.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
import uuid

from openstack.load_balancer.v2 import member

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'address': '192.0.2.16',
    'admin_state_up': True,
    'id': IDENTIFIER,
    'monitor_address': '192.0.2.17',
    'monitor_port': 9,
    'name': 'test_member',
    'pool_id': uuid.uuid4(),
    'project_id': uuid.uuid4(),
    'protocol_port': 5,
    'subnet_id': uuid.uuid4(),
    'weight': 7,
}


class TestPoolMember(testtools.TestCase):

    def test_basic(self):
        test_member = member.Member()
        self.assertEqual('member', test_member.resource_key)
        self.assertEqual('members', test_member.resources_key)
        self.assertEqual('/v2.0/lbaas/pools/%(pool_id)s/members',
                         test_member.base_path)
        self.assertEqual('load-balancer', test_member.service.service_type)
        self.assertTrue(test_member.allow_create)
        self.assertTrue(test_member.allow_get)
        self.assertTrue(test_member.allow_update)
        self.assertTrue(test_member.allow_delete)
        self.assertTrue(test_member.allow_list)

    def test_make_it(self):
        test_member = member.Member(**EXAMPLE)
        self.assertEqual(EXAMPLE['address'], test_member.address)
        self.assertTrue(test_member.is_admin_state_up)
        self.assertEqual(EXAMPLE['id'], test_member.id)
        self.assertEqual(EXAMPLE['monitor_address'],
                         test_member.monitor_address)
        self.assertEqual(EXAMPLE['monitor_port'], test_member.monitor_port)
        self.assertEqual(EXAMPLE['name'], test_member.name)
        self.assertEqual(EXAMPLE['pool_id'], test_member.pool_id)
        self.assertEqual(EXAMPLE['project_id'], test_member.project_id)
        self.assertEqual(EXAMPLE['protocol_port'], test_member.protocol_port)
        self.assertEqual(EXAMPLE['subnet_id'], test_member.subnet_id)
        self.assertEqual(EXAMPLE['weight'], test_member.weight)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_pool.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools
import uuid

from openstack.load_balancer.v2 import pool

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'name': 'test_pool',
    'description': 'fake_description',
    'admin_state_up': True,
    'provisioning_status': 'ACTIVE',
    'operating_status': 'ONLINE',
    'protocol': 'HTTP',
    'listener_id': uuid.uuid4(),
    'loadbalancer_id': uuid.uuid4(),
    'lb_algorithm': 'ROUND_ROBIN',
    'session_persistence': {"type": "SOURCE_IP"},
    'project_id': uuid.uuid4(),
    'loadbalancers': [{'id': uuid.uuid4()}],
    'listeners': [{'id': uuid.uuid4()}],
    'created_at': '2017-07-17T12:14:57.233772',
    'updated_at': '2017-07-17T12:16:57.233772',
    'health_monitor': 'healthmonitor',
    'health_monitor_id': uuid.uuid4(),
    'members': [{'id': uuid.uuid4()}]
}


class TestPool(testtools.TestCase):

    def test_basic(self):
        test_pool = pool.Pool()
        self.assertEqual('pool', test_pool.resource_key)
        self.assertEqual('pools', test_pool.resources_key)
        self.assertEqual('/v2.0/lbaas/pools', test_pool.base_path)
        self.assertEqual('load-balancer', test_pool.service.service_type)
        self.assertTrue(test_pool.allow_create)
        self.assertTrue(test_pool.allow_get)
        self.assertTrue(test_pool.allow_delete)
        self.assertTrue(test_pool.allow_list)
        self.assertTrue(test_pool.allow_update)

    def test_make_it(self):
        test_pool = pool.Pool(**EXAMPLE)
        self.assertEqual(EXAMPLE['name'], test_pool.name)
        self.assertEqual(EXAMPLE['description'], test_pool.description)
        self.assertEqual(EXAMPLE['admin_state_up'],
                         test_pool.is_admin_state_up)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         test_pool.provisioning_status)
        self.assertEqual(EXAMPLE['protocol'], test_pool.protocol)
        self.assertEqual(EXAMPLE['operating_status'],
                         test_pool.operating_status)
        self.assertEqual(EXAMPLE['listener_id'], test_pool.listener_id)
        self.assertEqual(EXAMPLE['loadbalancer_id'],
                         test_pool.loadbalancer_id)
        self.assertEqual(EXAMPLE['lb_algorithm'], test_pool.lb_algorithm)
        self.assertEqual(EXAMPLE['session_persistence'],
                         test_pool.session_persistence)
        self.assertEqual(EXAMPLE['project_id'], test_pool.project_id)
        self.assertEqual(EXAMPLE['loadbalancers'], test_pool.loadbalancers)
        self.assertEqual(EXAMPLE['listeners'], test_pool.listeners)
        self.assertEqual(EXAMPLE['created_at'], test_pool.created_at)
        self.assertEqual(EXAMPLE['updated_at'], test_pool.updated_at)
        self.assertEqual(EXAMPLE['health_monitor_id'],
                         test_pool.health_monitor_id)
        self.assertEqual(EXAMPLE['members'], test_pool.members)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_l7rule.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
import uuid

from openstack.load_balancer.v2 import l7_rule

EXAMPLE = {
    'admin_state_up': True,
    'compare_type': 'REGEX',
    'created_at': '2017-08-17T12:14:57.233772',
    'id': uuid.uuid4(),
    'invert': False,
    'key': 'my_cookie',
    'l7_policy_id': uuid.uuid4(),
    'operating_status': 'ONLINE',
    'project_id': uuid.uuid4(),
    'provisioning_status': 'ACTIVE',
    'type': 'COOKIE',
    'updated_at': '2017-08-17T12:16:57.233772',
    'value': 'chocolate'
}


class TestL7Rule(testtools.TestCase):

    def test_basic(self):
        test_l7rule = l7_rule.L7Rule()
        self.assertEqual('rule', test_l7rule.resource_key)
        self.assertEqual('rules', test_l7rule.resources_key)
        self.assertEqual('/v2.0/lbaas/l7policies/%(l7policy_id)s/rules',
                         test_l7rule.base_path)
        self.assertEqual('load-balancer', test_l7rule.service.service_type)
        self.assertTrue(test_l7rule.allow_create)
        self.assertTrue(test_l7rule.allow_get)
        self.assertTrue(test_l7rule.allow_update)
        self.assertTrue(test_l7rule.allow_delete)
        self.assertTrue(test_l7rule.allow_list)

    def test_make_it(self):
        test_l7rule = l7_rule.L7Rule(**EXAMPLE)
        self.assertTrue(test_l7rule.is_admin_state_up)
        self.assertEqual(EXAMPLE['compare_type'], test_l7rule.compare_type)
        self.assertEqual(EXAMPLE['created_at'], test_l7rule.created_at)
        self.assertEqual(EXAMPLE['id'], test_l7rule.id)
        self.assertEqual(EXAMPLE['invert'], test_l7rule.invert)
        self.assertEqual(EXAMPLE['key'], test_l7rule.key)
        self.assertEqual(EXAMPLE['l7_policy_id'], test_l7rule.l7_policy_id)
        self.assertEqual(EXAMPLE['operating_status'],
                         test_l7rule.operating_status)
        self.assertEqual(EXAMPLE['project_id'], test_l7rule.project_id)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         test_l7rule.provisioning_status)
        self.assertEqual(EXAMPLE['type'], test_l7rule.type)
        self.assertEqual(EXAMPLE['updated_at'], test_l7rule.updated_at)
        self.assertEqual(EXAMPLE['value'], test_l7rule.rule_value)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_load_balancer_service.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.load_balancer import load_balancer_service as lb_service


class TestLoadBalancingService(testtools.TestCase):

    def test_service(self):
        sot = lb_service.LoadBalancerService()
        self.assertEqual('load-balancer', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(1, len(sot.valid_versions))
        self.assertEqual('v2', sot.valid_versions[0].module)
        self.assertEqual('v2.0', sot.valid_versions[0].path)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_load_balancer.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
import uuid

from openstack.load_balancer.v2 import load_balancer

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'created_at': '2017-07-17T12:14:57.233772',
    'description': 'fake_description',
    'flavor': uuid.uuid4(),
    'id': IDENTIFIER,
    'listeners': [{'id': uuid.uuid4()}],
    'name': 'test_load_balancer',
    'operating_status': 'ONLINE',
    'pools': [{'id': uuid.uuid4()}],
    'project_id': uuid.uuid4(),
    'provider': 'fake_provider',
    'provisioning_status': 'ACTIVE',
    'updated_at': '2017-07-17T12:16:57.233772',
    'vip_address': '192.0.2.5',
    'vip_network_id': uuid.uuid4(),
    'vip_port_id': uuid.uuid4(),
    'vip_subnet_id': uuid.uuid4(),
}


class TestLoadBalancer(testtools.TestCase):

    def test_basic(self):
        test_load_balancer = load_balancer.LoadBalancer()
        self.assertEqual('loadbalancer', test_load_balancer.resource_key)
        self.assertEqual('loadbalancers', test_load_balancer.resources_key)
        self.assertEqual('/v2.0/lbaas/loadbalancers',
                         test_load_balancer.base_path)
        self.assertEqual('load-balancer',
                         test_load_balancer.service.service_type)
        self.assertTrue(test_load_balancer.allow_create)
        self.assertTrue(test_load_balancer.allow_get)
        self.assertTrue(test_load_balancer.allow_delete)
        self.assertTrue(test_load_balancer.allow_list)
        self.assertTrue(test_load_balancer.allow_update)

    def test_make_it(self):
        test_load_balancer = load_balancer.LoadBalancer(**EXAMPLE)
        self.assertTrue(test_load_balancer.is_admin_state_up)
        self.assertEqual(EXAMPLE['created_at'], test_load_balancer.created_at)
        self.assertEqual(EXAMPLE['description'],
                         test_load_balancer.description)
        self.assertEqual(EXAMPLE['flavor'], test_load_balancer.flavor)
        self.assertEqual(EXAMPLE['id'], test_load_balancer.id)
        self.assertEqual(EXAMPLE['listeners'], test_load_balancer.listeners)
        self.assertEqual(EXAMPLE['name'], test_load_balancer.name)
        self.assertEqual(EXAMPLE['operating_status'],
                         test_load_balancer.operating_status)
        self.assertEqual(EXAMPLE['pools'], test_load_balancer.pools)
        self.assertEqual(EXAMPLE['project_id'], test_load_balancer.project_id)
        self.assertEqual(EXAMPLE['provider'], test_load_balancer.provider)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         test_load_balancer.provisioning_status)
        self.assertEqual(EXAMPLE['updated_at'], test_load_balancer.updated_at)
        self.assertEqual(EXAMPLE['vip_address'],
                         test_load_balancer.vip_address)
        self.assertEqual(EXAMPLE['vip_network_id'],
                         test_load_balancer.vip_network_id)
        self.assertEqual(EXAMPLE['vip_port_id'],
                         test_load_balancer.vip_port_id)
        self.assertEqual(EXAMPLE['vip_subnet_id'],
                         test_load_balancer.vip_subnet_id)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_health_monitor.py

# Copyright 2017 Rackspace, US Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
import uuid

from openstack.load_balancer.v2 import health_monitor

EXAMPLE = {
    'admin_state_up': True,
    'created_at': '2017-07-17T12:14:57.233772',
    'delay': 10,
    'expected_codes': '200, 202',
    'http_method': 'HEAD',
    'id': uuid.uuid4(),
    'max_retries': 2,
    'max_retries_down': 3,
    'name': 'test_health_monitor',
    'operating_status': 'ONLINE',
    'pools': [{'id': uuid.uuid4()}],
    'pool_id': uuid.uuid4(),
    'project_id': uuid.uuid4(),
    'provisioning_status': 'ACTIVE',
    'timeout': 4,
    'type': 'HTTP',
    'updated_at': '2017-07-17T12:16:57.233772',
    'url_path': '/health_page.html'
}


class TestPoolHealthMonitor(testtools.TestCase):

    def test_basic(self):
        test_hm = health_monitor.HealthMonitor()
        self.assertEqual('healthmonitor', test_hm.resource_key)
        self.assertEqual('healthmonitors', test_hm.resources_key)
        self.assertEqual('/v2.0/lbaas/healthmonitors', test_hm.base_path)
        self.assertEqual('load-balancer', test_hm.service.service_type)
        self.assertTrue(test_hm.allow_create)
        self.assertTrue(test_hm.allow_get)
        self.assertTrue(test_hm.allow_update)
        self.assertTrue(test_hm.allow_delete)
        self.assertTrue(test_hm.allow_list)

    def test_make_it(self):
        test_hm = health_monitor.HealthMonitor(**EXAMPLE)
        self.assertTrue(test_hm.is_admin_state_up)
        self.assertEqual(EXAMPLE['created_at'], test_hm.created_at)
        self.assertEqual(EXAMPLE['delay'], test_hm.delay)
        self.assertEqual(EXAMPLE['expected_codes'], test_hm.expected_codes)
        self.assertEqual(EXAMPLE['http_method'], test_hm.http_method)
        self.assertEqual(EXAMPLE['id'], test_hm.id)
        self.assertEqual(EXAMPLE['max_retries'], test_hm.max_retries)
        self.assertEqual(EXAMPLE['max_retries_down'],
                         test_hm.max_retries_down)
        self.assertEqual(EXAMPLE['name'], test_hm.name)
        self.assertEqual(EXAMPLE['operating_status'],
                         test_hm.operating_status)
        self.assertEqual(EXAMPLE['pools'], test_hm.pools)
        self.assertEqual(EXAMPLE['pool_id'], test_hm.pool_id)
        self.assertEqual(EXAMPLE['project_id'], test_hm.project_id)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         test_hm.provisioning_status)
        self.assertEqual(EXAMPLE['timeout'], test_hm.timeout)
        self.assertEqual(EXAMPLE['type'], test_hm.type)
        self.assertEqual(EXAMPLE['updated_at'], test_hm.updated_at)
        self.assertEqual(EXAMPLE['url_path'], test_hm.url_path)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_l7policy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools
import uuid

from openstack.load_balancer.v2 import l7_policy

EXAMPLE = {
    'action': 'REJECT',
    'admin_state_up': True,
    'created_at': '2017-07-17T12:14:57.233772',
    'description': 'test_description',
    'id': uuid.uuid4(),
    'listener_id': uuid.uuid4(),
    'name': 'test_l7_policy',
    'operating_status': 'ONLINE',
    'position': 7,
    'project_id': uuid.uuid4(),
    'provisioning_status': 'ACTIVE',
    'redirect_pool_id': uuid.uuid4(),
    'redirect_url': '/test_url',
    'rules': [{'id': uuid.uuid4()}],
    'updated_at': '2017-07-17T12:16:57.233772',
}


class TestL7Policy(testtools.TestCase):

    def test_basic(self):
        test_l7_policy = l7_policy.L7Policy()
        self.assertEqual('l7policy', test_l7_policy.resource_key)
        self.assertEqual('l7policies', test_l7_policy.resources_key)
        self.assertEqual('/v2.0/lbaas/l7policies', test_l7_policy.base_path)
        self.assertEqual('load-balancer',
                         test_l7_policy.service.service_type)
        self.assertTrue(test_l7_policy.allow_create)
        self.assertTrue(test_l7_policy.allow_get)
        self.assertTrue(test_l7_policy.allow_update)
        self.assertTrue(test_l7_policy.allow_delete)
        self.assertTrue(test_l7_policy.allow_list)

    def test_make_it(self):
        test_l7_policy = l7_policy.L7Policy(**EXAMPLE)
        self.assertTrue(test_l7_policy.is_admin_state_up)
        self.assertEqual(EXAMPLE['action'], test_l7_policy.action)
        self.assertEqual(EXAMPLE['created_at'], test_l7_policy.created_at)
        self.assertEqual(EXAMPLE['description'], test_l7_policy.description)
        self.assertEqual(EXAMPLE['id'], test_l7_policy.id)
        self.assertEqual(EXAMPLE['listener_id'], test_l7_policy.listener_id)
        self.assertEqual(EXAMPLE['name'], test_l7_policy.name)
        self.assertEqual(EXAMPLE['operating_status'],
                         test_l7_policy.operating_status)
        self.assertEqual(EXAMPLE['position'], test_l7_policy.position)
        self.assertEqual(EXAMPLE['project_id'], test_l7_policy.project_id)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         test_l7_policy.provisioning_status)
        self.assertEqual(EXAMPLE['redirect_pool_id'],
                         test_l7_policy.redirect_pool_id)
        self.assertEqual(EXAMPLE['redirect_url'],
                         test_l7_policy.redirect_url)
        self.assertEqual(EXAMPLE['rules'], test_l7_policy.rules)
        self.assertEqual(EXAMPLE['updated_at'], test_l7_policy.updated_at)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_listener.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
import uuid

from openstack.load_balancer.v2 import listener

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'admin_state_up': True,
    'connection_limit': '2',
    'default_pool_id': uuid.uuid4(),
    'description': 'test description',
    'id': IDENTIFIER,
    'insert_headers': {"X-Forwarded-For": "true"},
    'l7policies': [{'id': uuid.uuid4()}],
    'loadbalancers': [{'id': uuid.uuid4()}],
    'name': 'test_listener',
    'project_id': uuid.uuid4(),
    'protocol': 'TEST_PROTOCOL',
    'protocol_port': 10,
    'default_tls_container_ref': ('http://198.51.100.10:9311/v1/containers/'
                                  'a570068c-d295-4780-91d4-3046a325db51'),
    'sni_container_refs': [],
    'created_at': '2017-07-17T12:14:57.233772',
    'updated_at': '2017-07-17T12:16:57.233772',
    'operating_status': 'ONLINE',
    'provisioning_status': 'ACTIVE',
}


class TestListener(testtools.TestCase):

    def test_basic(self):
        test_listener = listener.Listener()
        self.assertEqual('listener', test_listener.resource_key)
        self.assertEqual('listeners', test_listener.resources_key)
        self.assertEqual('/v2.0/lbaas/listeners', test_listener.base_path)
        self.assertEqual('load-balancer',
                         test_listener.service.service_type)
        self.assertTrue(test_listener.allow_create)
        self.assertTrue(test_listener.allow_get)
        self.assertTrue(test_listener.allow_update)
        self.assertTrue(test_listener.allow_delete)
        self.assertTrue(test_listener.allow_list)

    def test_make_it(self):
        test_listener = listener.Listener(**EXAMPLE)
        self.assertTrue(test_listener.is_admin_state_up)
        self.assertEqual(EXAMPLE['connection_limit'],
                         test_listener.connection_limit)
        self.assertEqual(EXAMPLE['default_pool_id'],
                         test_listener.default_pool_id)
        self.assertEqual(EXAMPLE['description'], test_listener.description)
        self.assertEqual(EXAMPLE['id'], test_listener.id)
        self.assertEqual(EXAMPLE['insert_headers'],
                         test_listener.insert_headers)
        self.assertEqual(EXAMPLE['l7policies'], test_listener.l7_policies)
        self.assertEqual(EXAMPLE['loadbalancers'],
                         test_listener.load_balancers)
        self.assertEqual(EXAMPLE['name'], test_listener.name)
        self.assertEqual(EXAMPLE['project_id'], test_listener.project_id)
        self.assertEqual(EXAMPLE['protocol'], test_listener.protocol)
        self.assertEqual(EXAMPLE['protocol_port'],
                         test_listener.protocol_port)
        self.assertEqual(EXAMPLE['default_tls_container_ref'],
                         test_listener.default_tls_container_ref)
        self.assertEqual(EXAMPLE['sni_container_refs'],
                         test_listener.sni_container_refs)
        self.assertEqual(EXAMPLE['created_at'], test_listener.created_at)
        self.assertEqual(EXAMPLE['updated_at'], test_listener.updated_at)
        self.assertEqual(EXAMPLE['provisioning_status'],
                         test_listener.provisioning_status)
        self.assertEqual(EXAMPLE['operating_status'],
                         test_listener.operating_status)

openstacksdk-0.11.3/openstack/tests/unit/load_balancer/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid

from openstack.load_balancer.v2 import _proxy
from openstack.load_balancer.v2 import health_monitor
from openstack.load_balancer.v2 import l7_policy
from openstack.load_balancer.v2 import l7_rule
from openstack.load_balancer.v2 import listener
from openstack.load_balancer.v2 import load_balancer as lb
from openstack.load_balancer.v2 import member
from openstack.load_balancer.v2 import pool
from openstack.tests.unit import test_proxy_base


class TestLoadBalancerProxy(test_proxy_base.TestProxyBase):

    POOL_ID = uuid.uuid4()
    L7_POLICY_ID = uuid.uuid4()

    def setUp(self):
        super(TestLoadBalancerProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_load_balancers(self):
        self.verify_list(self.proxy.load_balancers,
                         lb.LoadBalancer,
                         paginated=True)

    def test_load_balancer_get(self):
        self.verify_get(self.proxy.get_load_balancer, lb.LoadBalancer)

    def test_load_balancer_create(self):
        self.verify_create(self.proxy.create_load_balancer, lb.LoadBalancer)

    def test_load_balancer_delete(self):
        self.verify_delete(self.proxy.delete_load_balancer,
                           lb.LoadBalancer, True)

    def test_load_balancer_find(self):
        self.verify_find(self.proxy.find_load_balancer, lb.LoadBalancer)

    def test_load_balancer_update(self):
        self.verify_update(self.proxy.update_load_balancer, lb.LoadBalancer)

    def test_listeners(self):
        self.verify_list(self.proxy.listeners,
                         listener.Listener,
                         paginated=True)

    def test_listener_get(self):
        self.verify_get(self.proxy.get_listener, listener.Listener)

    def test_listener_create(self):
        self.verify_create(self.proxy.create_listener, listener.Listener)

    def test_listener_delete(self):
        self.verify_delete(self.proxy.delete_listener,
                           listener.Listener, True)

    def test_listener_find(self):
        self.verify_find(self.proxy.find_listener, listener.Listener)

    def test_listener_update(self):
        self.verify_update(self.proxy.update_listener, listener.Listener)

    def test_pools(self):
        self.verify_list(self.proxy.pools, pool.Pool, paginated=True)

    def test_pool_get(self):
        self.verify_get(self.proxy.get_pool, pool.Pool)

    def test_pool_create(self):
        self.verify_create(self.proxy.create_pool, pool.Pool)

    def test_pool_delete(self):
        self.verify_delete(self.proxy.delete_pool, pool.Pool, True)

    def test_pool_find(self):
        self.verify_find(self.proxy.find_pool, pool.Pool)

    def test_pool_update(self):
        self.verify_update(self.proxy.update_pool, pool.Pool)

    def test_members(self):
        self.verify_list(self.proxy.members, member.Member,
                         paginated=True,
                         method_kwargs={'pool': self.POOL_ID},
                         expected_kwargs={'pool_id': self.POOL_ID})

    def test_member_get(self):
        self.verify_get(self.proxy.get_member, member.Member,
                        method_kwargs={'pool': self.POOL_ID},
                        expected_kwargs={'pool_id': self.POOL_ID})

    def test_member_create(self):
        self.verify_create(self.proxy.create_member, member.Member,
                           method_kwargs={'pool': self.POOL_ID},
                           expected_kwargs={'pool_id': self.POOL_ID})

    def test_member_delete(self):
        self.verify_delete(self.proxy.delete_member, member.Member, True,
                           method_kwargs={'pool': self.POOL_ID},
                           expected_kwargs={'pool_id': self.POOL_ID})

    def test_member_find(self):
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_member,
                      method_args=["MEMBER", self.POOL_ID],
                      expected_args=[member.Member, "MEMBER"],
                      expected_kwargs={"pool_id": self.POOL_ID,
                                       "ignore_missing": True})

    def test_member_update(self):
        self._verify2('openstack.proxy.BaseProxy._update',
                      self.proxy.update_member,
                      method_args=["MEMBER", self.POOL_ID],
                      expected_args=[member.Member, "MEMBER"],
                      expected_kwargs={"pool_id": self.POOL_ID})

    def test_health_monitors(self):
        self.verify_list(self.proxy.health_monitors,
                         health_monitor.HealthMonitor,
                         paginated=True)

    def test_health_monitor_get(self):
        self.verify_get(self.proxy.get_health_monitor,
                        health_monitor.HealthMonitor)

    def test_health_monitor_create(self):
        self.verify_create(self.proxy.create_health_monitor,
                           health_monitor.HealthMonitor)

    def test_health_monitor_delete(self):
        self.verify_delete(self.proxy.delete_health_monitor,
                           health_monitor.HealthMonitor, True)

    def test_health_monitor_find(self):
        self.verify_find(self.proxy.find_health_monitor,
                         health_monitor.HealthMonitor)

    def test_health_monitor_update(self):
        self.verify_update(self.proxy.update_health_monitor,
                           health_monitor.HealthMonitor)

    def test_l7_policies(self):
        self.verify_list(self.proxy.l7_policies, l7_policy.L7Policy,
                         paginated=True)

    def test_l7_policy_get(self):
        self.verify_get(self.proxy.get_l7_policy, l7_policy.L7Policy)

    def test_l7_policy_create(self):
        self.verify_create(self.proxy.create_l7_policy, l7_policy.L7Policy)

    def test_l7_policy_delete(self):
        self.verify_delete(self.proxy.delete_l7_policy,
                           l7_policy.L7Policy, True)

    def test_l7_policy_find(self):
        self.verify_find(self.proxy.find_l7_policy, l7_policy.L7Policy)

    def test_l7_policy_update(self):
        self.verify_update(self.proxy.update_l7_policy, l7_policy.L7Policy)

    def test_l7_rules(self):
        self.verify_list(self.proxy.l7_rules, l7_rule.L7Rule,
                         paginated=True,
                         method_kwargs={'l7_policy': self.L7_POLICY_ID},
                         expected_kwargs={'l7policy_id': self.L7_POLICY_ID})

    def test_l7_rule_get(self):
        self.verify_get(self.proxy.get_l7_rule, l7_rule.L7Rule,
                        method_kwargs={'l7_policy': self.L7_POLICY_ID},
                        expected_kwargs={'l7policy_id': self.L7_POLICY_ID})

    def test_l7_rule_create(self):
        self.verify_create(self.proxy.create_l7_rule, l7_rule.L7Rule,
                           method_kwargs={'l7_policy': self.L7_POLICY_ID},
                           expected_kwargs={'l7policy_id': self.L7_POLICY_ID})

    def test_l7_rule_delete(self):
        self.verify_delete(self.proxy.delete_l7_rule, l7_rule.L7Rule, True,
                           method_kwargs={'l7_policy': self.L7_POLICY_ID},
                           expected_kwargs={'l7policy_id': self.L7_POLICY_ID})

    def test_l7_rule_find(self):
        self._verify2('openstack.proxy.BaseProxy._find',
                      self.proxy.find_l7_rule,
                      method_args=["RULE", self.L7_POLICY_ID],
                      expected_args=[l7_rule.L7Rule, "RULE"],
                      expected_kwargs={"l7policy_id": self.L7_POLICY_ID,
                                       "ignore_missing": True})

    def test_l7_rule_update(self):
        self._verify2('openstack.proxy.BaseProxy._update',
                      self.proxy.update_l7_rule,
                      method_args=["RULE", self.L7_POLICY_ID],
                      expected_args=[l7_rule.L7Rule, "RULE"],
                      expected_kwargs={"l7policy_id": self.L7_POLICY_ID})

openstacksdk-0.11.3/openstack/tests/unit/test__adapter.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from testscenarios import load_tests_apply_scenarios as load_tests  # noqa

from openstack import _adapter
from openstack.tests.unit import base


class TestExtractName(base.TestCase):

    scenarios = [
        ('slash_servers_bare', dict(url='/servers', parts=['servers'])),
        ('slash_servers_arg', dict(url='/servers/1', parts=['servers'])),
        ('servers_bare', dict(url='servers', parts=['servers'])),
        ('servers_arg', dict(url='servers/1', parts=['servers'])),
        ('networks_bare', dict(url='/v2.0/networks', parts=['networks'])),
        ('networks_arg', dict(url='/v2.0/networks/1', parts=['networks'])),
        ('tokens', dict(url='/v3/tokens', parts=['tokens'])),
        ('discovery', dict(url='/', parts=['discovery'])),
        ('secgroups', dict(
            url='/servers/1/os-security-groups',
            parts=['servers', 'os-security-groups'])),
    ]

    def test_extract_name(self):
        results = _adapter._extract_name(self.url)
        self.assertEqual(self.parts, results)

openstacksdk-0.11.3/openstack/tests/unit/test_connection.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the
# License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import fixtures
from keystoneauth1 import session
import mock

from openstack import connection
import openstack.config
from openstack import profile
from openstack.tests.unit import base

CONFIG_AUTH_URL = "http://127.0.0.1:5000/v2.0"
CONFIG_USERNAME = "BozoTheClown"
CONFIG_PASSWORD = "TopSecret"
CONFIG_PROJECT = "TheGrandPrizeGame"
CONFIG_CACERT = "TrustMe"

CLOUD_CONFIG = """
clouds:
  sample:
    region_name: RegionOne
    auth:
      auth_url: {auth_url}
      username: {username}
      password: {password}
      project_name: {project}
  insecure:
    auth:
      auth_url: {auth_url}
      username: {username}
      password: {password}
      project_name: {project}
    cacert: {cacert}
    verify: False
  cacert:
    auth:
      auth_url: {auth_url}
      username: {username}
      password: {password}
      project_name: {project}
    cacert: {cacert}
""".format(auth_url=CONFIG_AUTH_URL,
           username=CONFIG_USERNAME,
           password=CONFIG_PASSWORD,
           project=CONFIG_PROJECT,
           cacert=CONFIG_CACERT)


class TestConnection(base.RequestsMockTestCase):

    def setUp(self):
        super(TestConnection, self).setUp()
        # Create a temporary directory where our test config will live
        # and insert it into the search path via OS_CLIENT_CONFIG_FILE.
        config_dir = self.useFixture(fixtures.TempDir()).path
        config_path = os.path.join(config_dir, "clouds.yaml")
        with open(config_path, "w") as conf:
            conf.write(CLOUD_CONFIG)
        self.useFixture(fixtures.EnvironmentVariable(
            "OS_CLIENT_CONFIG_FILE", config_path))

    def test_other_parameters(self):
        conn = connection.Connection(cloud='sample', cert='cert')
        self.assertEqual(conn.session.cert, 'cert')

    def test_session_provided(self):
        mock_session = mock.Mock(spec=session.Session)
        mock_session.auth = mock.Mock()
        mock_session.auth.auth_url = 'https://auth.example.com'
        conn = connection.Connection(session=mock_session, cert='cert')
        self.assertEqual(mock_session, conn.session)
        self.assertEqual('auth.example.com', conn.config.name)

    def test_create_session(self):
        conn = connection.Connection(cloud='sample')
        self.assertEqual('openstack.proxy', conn.alarm.__class__.__module__)
        self.assertEqual('openstack.clustering.v1._proxy',
                         conn.clustering.__class__.__module__)
        self.assertEqual('openstack.compute.v2._proxy',
                         conn.compute.__class__.__module__)
        self.assertEqual('openstack.database.v1._proxy',
                         conn.database.__class__.__module__)
        self.assertEqual('openstack.identity.v2._proxy',
                         conn.identity.__class__.__module__)
        self.assertEqual('openstack.image.v2._proxy',
                         conn.image.__class__.__module__)
        self.assertEqual('openstack.network.v2._proxy',
                         conn.network.__class__.__module__)
        self.assertEqual('openstack.object_store.v1._proxy',
                         conn.object_store.__class__.__module__)
        self.assertEqual('openstack.load_balancer.v2._proxy',
                         conn.load_balancer.__class__.__module__)
        self.assertEqual('openstack.orchestration.v1._proxy',
                         conn.orchestration.__class__.__module__)
        self.assertEqual('openstack.workflow.v2._proxy',
                         conn.workflow.__class__.__module__)

    def test_from_config_given_config(self):
        cloud_region = openstack.config.OpenStackConfig().get_one("sample")
        sot = connection.from_config(config=cloud_region)
        self.assertEqual(CONFIG_USERNAME,
                         sot.config.config['auth']['username'])
        self.assertEqual(CONFIG_PASSWORD,
                         sot.config.config['auth']['password'])
        self.assertEqual(CONFIG_AUTH_URL,
                         sot.config.config['auth']['auth_url'])
        self.assertEqual(CONFIG_PROJECT,
                         sot.config.config['auth']['project_name'])

    def test_from_config_given_cloud(self):
        sot = connection.from_config(cloud="sample")
        self.assertEqual(CONFIG_USERNAME,
                         sot.config.config['auth']['username'])
        self.assertEqual(CONFIG_PASSWORD,
                         sot.config.config['auth']['password'])
        self.assertEqual(CONFIG_AUTH_URL,
                         sot.config.config['auth']['auth_url'])
        self.assertEqual(CONFIG_PROJECT,
                         sot.config.config['auth']['project_name'])

    def test_from_config_given_cloud_config(self):
        cloud_region = openstack.config.OpenStackConfig().get_one("sample")
        sot = connection.from_config(cloud_config=cloud_region)
        self.assertEqual(CONFIG_USERNAME,
                         sot.config.config['auth']['username'])
        self.assertEqual(CONFIG_PASSWORD,
                         sot.config.config['auth']['password'])
        self.assertEqual(CONFIG_AUTH_URL,
                         sot.config.config['auth']['auth_url'])
        self.assertEqual(CONFIG_PROJECT,
                         sot.config.config['auth']['project_name'])

    def test_from_config_given_cloud_name(self):
        sot = connection.from_config(cloud_name="sample")
        self.assertEqual(CONFIG_USERNAME,
                         sot.config.config['auth']['username'])
        self.assertEqual(CONFIG_PASSWORD,
                         sot.config.config['auth']['password'])
        self.assertEqual(CONFIG_AUTH_URL,
                         sot.config.config['auth']['auth_url'])
        self.assertEqual(CONFIG_PROJECT,
                         sot.config.config['auth']['project_name'])

    def test_from_config_given_options(self):
        version = "100"

        class Opts(object):
            compute_api_version = version

        sot = connection.from_config(cloud="sample", options=Opts)
        self.assertEqual(version, sot.compute.version)

    def test_from_config_verify(self):
        sot = connection.from_config(cloud="insecure")
        self.assertFalse(sot.session.verify)

        sot = connection.from_config(cloud="cacert")
        self.assertEqual(CONFIG_CACERT, sot.session.verify)

    def test_from_profile(self):
        """Copied from openstackclient/network/client.py make_client."""
        API_NAME = "network"
        instance = self.cloud_config
        prof = profile.Profile()
        prof.set_region(API_NAME, instance.region_name)
        prof.set_version(API_NAME, instance.get_api_version(API_NAME))
        prof.set_interface(API_NAME, instance.get_interface(API_NAME))
        connection.Connection(
            authenticator=instance.get_session().auth,
            verify=instance.get_session().verify,
            cert=instance.get_session().cert,
            profile=prof)


class TestAuthorize(base.RequestsMockTestCase):

    def test_authorize_works(self):
        res = self.conn.authorize()
        self.assertEqual('KeystoneToken-1', res)

    def test_authorize_failure(self):
        self.use_broken_keystone()
        self.assertRaises(openstack.exceptions.HttpException,
                          self.conn.authorize)

openstacksdk-0.11.3/openstack/tests/unit/fixtures/clouds/clouds.yaml

clouds:
  _test_cloud_:
    auth:
      auth_url: https://identity.example.com
      password: password
      project_name: admin
      username: admin
      user_domain_name: default
      project_domain_name: default
    region_name: RegionOne
  _test_cloud_v2_:
    auth:
      auth_url: https://identity.example.com
      password: password
      project_name: admin
      username: admin
    identity_api_version: '2.0'
    region_name: RegionOne
  _bogus_test_:
    auth_type: bogus
    auth:
      auth_url: https://identity.example.com/v2.0
      username: _test_user_
      password: _test_pass_
      project_name: _test_project_
    region_name: _test_region_

openstacksdk-0.11.3/openstack/tests/unit/fixtures/clouds/clouds_cache.yaml

cache:
  max_age: 90
  class: dogpile.cache.memory
  expiration:
    server: 1
clouds:
  _test_cloud_:
    auth:
      auth_url: https://identity.example.com
      password: password
      project_name: admin
      username: admin
      user_domain_name: default
      project_domain_name: default
    region_name: RegionOne
  _test_cloud_v2_:
    auth:
      auth_url: https://identity.example.com
      password: password
      project_name: admin
      username: admin
    identity_api_version: '2.0'
    region_name: RegionOne
  _bogus_test_:
    auth_type: bogus
    auth:
      auth_url: http://identity.example.com/v2.0
      username: _test_user_
      password: _test_pass_
      project_name: _test_project_
    region_name: _test_region_

openstacksdk-0.11.3/openstack/tests/unit/fixtures/image-version-v2.json

{
    "versions": [
        {
            "status": "CURRENT",
            "id": "v2.3",
            "links": [{"href": "http://image.example.com/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v2.2",
            "links": [{"href": "http://image.example.com/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v2.1",
            "links": [{"href": "http://image.example.com/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v2.0",
            "links": [{"href": "http://image.example.com/v2/", "rel": "self"}]
        }
    ]
}

openstacksdk-0.11.3/openstack/tests/unit/fixtures/image-version-broken.json

{
    "versions": [
        {
            "status": "CURRENT",
            "id": "v2.3",
            "links": [{"href": "http://localhost/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v2.2",
            "links": [{"href": "http://localhost/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v2.1",
            "links": [{"href": "http://localhost/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v2.0",
            "links": [{"href": "http://localhost/v2/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v1.1",
            "links": [{"href": "http://localhost/v1/", "rel": "self"}]
        },
        {
            "status": "SUPPORTED",
            "id": "v1.0",
            "links": [{"href": "http://localhost/v1/", "rel": "self"}]
        }
    ]
}

openstacksdk-0.11.3/openstack/tests/unit/fixtures/image-version-v1.json

{
    "versions": [
        {
"status": "CURRENT", "id": "v1.1", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] } ] } openstacksdk-0.11.3/openstack/tests/unit/fixtures/image-version.json0000666000175100017510000000212313236151340025730 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v2.3", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.2", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.1", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.0", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.1", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] } ] } openstacksdk-0.11.3/openstack/tests/unit/fixtures/catalog-v2.json0000666000175100017510000001074613236151340025134 0ustar zuulzuul00000000000000{ "access": { "token": { "issued_at": "2016-04-14T10:09:58.014014Z", "expires": "9999-12-31T23:59:59Z", "id": "7fa3037ae2fe48ada8c626a51dc01ffd", "tenant": { "enabled": true, "description": "Bootstrap project for initializing the cloud.", "name": "admin", "id": "1c36b64c840a42cd9e9b931a369337f0" }, "audit_ids": [ "FgG3Q8T3Sh21r_7HyjHP8A" ] }, "serviceCatalog": [ { "endpoints_links": [], "endpoints": [ { "adminURL": "https://compute.example.com/v2.1/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://compute.example.com/v2.1/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://compute.example.com/v2.1/1c36b64c840a42cd9e9b931a369337f0", "id": "32466f357f3545248c47471ca51b0d3a" } ], "type": "compute", "name": "nova" }, { "endpoints_links": [], 
"endpoints": [ { "adminURL": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0", "id": "1e875ca2225b408bbf3520a1b8e1a537" } ], "type": "volumev2", "name": "cinderv2" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://image.example.com/v2", "region": "RegionOne", "publicURL": "https://image.example.com/v2", "internalURL": "https://image.example.com/v2", "id": "5a64de3c4a614d8d8f8d1ba3dee5f45f" } ], "type": "image", "name": "glance" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "id": "3d15fdfc7d424f3c8923324417e1a3d1" } ], "type": "volume", "name": "cinder" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://identity.example.com/v2.0", "region": "RegionOne", "publicURL": "https://identity.example.com/v2.0", "internalURL": "https://identity.example.com/v2.0", "id": "4deb4d0504a044a395d4480741ba628c" } ], "type": "identity", "name": "keystone" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://network.example.com", "region": "RegionOne", "publicURL": "https://network.example.com", "internalURL": "https://network.example.com", "id": "4deb4d0504a044a395d4480741ba628d" } ], "type": "network", "name": "neutron" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "id": "4deb4d0504a044a395d4480741ba628c" } ], "type": 
"object-store", "name": "swift" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://dns.example.com", "region": "RegionOne", "publicURL": "https://dns.example.com", "internalURL": "https://dns.example.com", "id": "652f0612744042bfbb8a8bb2c777a16d" } ], "type": "dns", "name": "designate" } ], "user": { "username": "dummy", "roles_links": [], "id": "71675f719c3343e8ac441cc28f396474", "roles": [ { "name": "admin" } ], "name": "admin" }, "metadata": { "is_admin": 0, "roles": [ "6d813db50b6e4a1ababdbbb5a83c7de5" ] } } } openstacksdk-0.11.3/openstack/tests/unit/fixtures/dns.json0000666000175100017510000000101113236151340023742 0ustar zuulzuul00000000000000{ "versions": { "values": [{ "id": "v1", "links": [ { "href": "https://dns.example.com/v1", "rel": "self" } ], "status": "DEPRECATED" }, { "id": "v2", "links": [ { "href": "https://dns.example.com/v2", "rel": "self" } ], "status": "CURRENT" }] } } openstacksdk-0.11.3/openstack/tests/unit/fixtures/catalog-v3.json0000666000175100017510000001113513236151340025126 0ustar zuulzuul00000000000000{ "token": { "audit_ids": [ "Rvn7eHkiSeOwucBIPaKdYA" ], "catalog": [ { "endpoints": [ { "id": "32466f357f3545248c47471ca51b0d3a", "interface": "public", "region": "RegionOne", "url": "https://compute.example.com/v2.1/" } ], "name": "nova", "type": "compute" }, { "endpoints": [ { "id": "1e875ca2225b408bbf3520a1b8e1a537", "interface": "public", "region": "RegionOne", "url": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinderv2", "type": "volumev2" }, { "endpoints": [ { "id": "5a64de3c4a614d8d8f8d1ba3dee5f45f", "interface": "public", "region": "RegionOne", "url": "https://image.example.com" } ], "name": "glance", "type": "image" }, { "endpoints": [ { "id": "3d15fdfc7d424f3c8923324417e1a3d1", "interface": "public", "region": "RegionOne", "url": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinder", "type": "volume" }, { "endpoints": [ { "id": 
"4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://identity.example.com" }, { "id": "012322eeedcd459edabb4933021112bc", "interface": "admin", "region": "RegionOne", "url": "https://identity.example.com" } ], "endpoints_links": [], "name": "keystone", "type": "identity" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628d", "interface": "public", "region": "RegionOne", "url": "https://network.example.com" } ], "endpoints_links": [], "name": "neutron", "type": "network" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628e", "interface": "public", "region": "RegionOne", "url": "https://container-infra.example.com/v1" } ], "endpoints_links": [], "name": "magnum", "type": "container-infra" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "swift", "type": "object-store" }, { "endpoints": [ { "id": "652f0612744042bfbb8a8bb2c777a16d", "interface": "public", "region": "RegionOne", "url": "https://bare-metal.example.com/" } ], "endpoints_links": [], "name": "ironic", "type": "baremetal" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://orchestration.example.com/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "heat", "type": "orchestration" }, { "endpoints": [ { "id": "10c76ffd2b744a67950ed1365190d352", "interface": "public", "region": "RegionOne", "url": "https://dns.example.com" } ], "endpoints_links": [], "name": "designate", "type": "dns" } ], "expires_at": "9999-12-31T23:59:59Z", "issued_at": "2016-12-17T14:25:05.000000Z", "methods": [ "password" ], "project": { "domain": { "id": "default", "name": "default" }, "id": "1c36b64c840a42cd9e9b931a369337f0", "name": "Default Project" }, "roles": [ { "id": "9fe2ff9ee4384b1894a90878d3e92bab", 
"name": "_member_" }, { "id": "37071fc082e14c2284c32a2761f71c63", "name": "swiftoperator" } ], "user": { "domain": { "id": "default", "name": "default" }, "id": "c17534835f8f42bf98fc367e0bf35e09", "name": "mordred" } } } openstacksdk-0.11.3/openstack/tests/unit/fixtures/discovery.json0000666000175100017510000000176213236151340025202 0ustar zuulzuul00000000000000{ "versions": { "values": [ { "status": "stable", "updated": "2016-04-04T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v3+json" } ], "id": "v3.6", "links": [ { "href": "https://identity.example.com/v3/", "rel": "self" } ] }, { "status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json" } ], "id": "v2.0", "links": [ { "href": "https://identity.example.com/v2.0/", "rel": "self" }, { "href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby" } ] } ] } } openstacksdk-0.11.3/openstack/tests/unit/fixtures/baremetal.json0000666000175100017510000000115313236151340025121 0ustar zuulzuul00000000000000{ "default_version": { "id": "v1", "links": [ { "href": "https://bare-metal.example.com/v1/", "rel": "self" } ], "min_version": "1.1", "status": "CURRENT", "version": "1.33" }, "description": "Ironic is an OpenStack project which aims to provision baremetal machines.", "name": "OpenStack Ironic API", "versions": [ { "id": "v1", "links": [ { "href": "https://bare-metal.example.com/v1/", "rel": "self" } ], "min_version": "1.1", "status": "CURRENT", "version": "1.33" } ] } openstacksdk-0.11.3/openstack/tests/unit/fixtures/catalog-v3-suburl.json0000666000175100017510000001113613236151340026441 0ustar zuulzuul00000000000000{ "token": { "audit_ids": [ "Rvn7eHkiSeOwucBIPaKdYA" ], "catalog": [ { "endpoints": [ { "id": "32466f357f3545248c47471ca51b0d3a", "interface": "public", "region": "RegionOne", "url": "https://example.com/compute/v2.1/" } ], "name": 
"nova", "type": "compute" }, { "endpoints": [ { "id": "1e875ca2225b408bbf3520a1b8e1a537", "interface": "public", "region": "RegionOne", "url": "https://example.com/volumev2/v2/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinderv2", "type": "volumev2" }, { "endpoints": [ { "id": "5a64de3c4a614d8d8f8d1ba3dee5f45f", "interface": "public", "region": "RegionOne", "url": "https://example.com/image" } ], "name": "glance", "type": "image" }, { "endpoints": [ { "id": "3d15fdfc7d424f3c8923324417e1a3d1", "interface": "public", "region": "RegionOne", "url": "https://example.com/volume/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinder", "type": "volume" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://identity.example.com" }, { "id": "012322eeedcd459edabb4933021112bc", "interface": "admin", "region": "RegionOne", "url": "https://example.com/identity" } ], "endpoints_links": [], "name": "keystone", "type": "identity" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628d", "interface": "public", "region": "RegionOne", "url": "https://example.com/example" } ], "endpoints_links": [], "name": "neutron", "type": "network" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628e", "interface": "public", "region": "RegionOne", "url": "https://example.com/container-infra/v1" } ], "endpoints_links": [], "name": "magnum", "type": "container-infra" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://example.com/object-store/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "swift", "type": "object-store" }, { "endpoints": [ { "id": "652f0612744042bfbb8a8bb2c777a16d", "interface": "public", "region": "RegionOne", "url": "https://example.com/bare-metal" } ], "endpoints_links": [], "name": "ironic", "type": "baremetal" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": 
"public", "region": "RegionOne", "url": "https://example.com/orchestration/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "heat", "type": "orchestration" }, { "endpoints": [ { "id": "10c76ffd2b744a67950ed1365190d352", "interface": "public", "region": "RegionOne", "url": "https://example.com/dns" } ], "endpoints_links": [], "name": "designate", "type": "dns" } ], "expires_at": "9999-12-31T23:59:59Z", "issued_at": "2016-12-17T14:25:05.000000Z", "methods": [ "password" ], "project": { "domain": { "id": "default", "name": "default" }, "id": "1c36b64c840a42cd9e9b931a369337f0", "name": "Default Project" }, "roles": [ { "id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_" }, { "id": "37071fc082e14c2284c32a2761f71c63", "name": "swiftoperator" } ], "user": { "domain": { "id": "default", "name": "default" }, "id": "c17534835f8f42bf98fc367e0bf35e09", "name": "mordred" } } } openstacksdk-0.11.3/openstack/tests/unit/fixtures/image-version-suburl.json0000666000175100017510000000212313236151340027242 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v2.3", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.2", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.1", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.0", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.1", "links": [ { "href": "http://example.com/image/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://example.com/image/v1/", "rel": "self" } ] } ] } openstacksdk-0.11.3/openstack/tests/unit/test_proxy_base.py0000666000175100017510000002453313236151364024222 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except 
in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from openstack.tests.unit import base class TestProxyBase(base.TestCase): def setUp(self): super(TestProxyBase, self).setUp() self.session = mock.Mock() def _add_path_args_for_verify(self, path_args, method_args, expected_kwargs, value=None): if path_args is not None: if value is None: for key in path_args: method_args.append(path_args[key]) expected_kwargs['path_args'] = path_args def _verify(self, mock_method, test_method, method_args=None, method_kwargs=None, expected_args=None, expected_kwargs=None, expected_result=None): with mock.patch(mock_method) as mocked: mocked.return_value = expected_result if any([method_args, method_kwargs, expected_args, expected_kwargs]): method_args = method_args or () method_kwargs = method_kwargs or {} expected_args = expected_args or () expected_kwargs = expected_kwargs or {} self.assertEqual(expected_result, test_method(*method_args, **method_kwargs)) mocked.assert_called_with(test_method.__self__, *expected_args, **expected_kwargs) else: self.assertEqual(expected_result, test_method()) mocked.assert_called_with(test_method.__self__) # NOTE(briancurtin): This is a duplicate version of _verify that is # temporarily here while we shift APIs. The difference is that # calls from the Proxy classes aren't going to be going directly into # the Resource layer anymore, so they don't pass in the session which # was tested in assert_called_with. # This is being done in lieu of adding logic and complicating # the _verify method. It will be removed once there is one API to # be verifying. 
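The `_verify` helper above patches a target method and then asserts that the proxy method under test forwarded the expected arguments. A minimal, self-contained sketch of that `mock.patch` pattern, using only the standard library (the `Proxy` class here is illustrative, not the SDK's actual `BaseProxy`):

```python
from unittest import mock


class Proxy(object):
    """Stand-in for a proxy class (illustrative only)."""

    def _create(self, resource_type, **attrs):
        raise NotImplementedError  # patched out in the test

    def create_role(self, **attrs):
        return self._create("Role", **attrs)


# Patch _create and verify that the public proxy method forwards its
# kwargs unchanged -- the same shape as TestProxyBase._verify2.
with mock.patch.object(Proxy, "_create") as mocked:
    mocked.return_value = "result"
    proxy = Proxy()
    assert proxy.create_role(name="admin") == "result"
    mocked.assert_called_with("Role", name="admin")
```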
def _verify2(self, mock_method, test_method, method_args=None, method_kwargs=None, method_result=None, expected_args=None, expected_kwargs=None, expected_result=None): with mock.patch(mock_method) as mocked: mocked.return_value = expected_result if any([method_args, method_kwargs, expected_args, expected_kwargs]): method_args = method_args or () method_kwargs = method_kwargs or {} expected_args = expected_args or () expected_kwargs = expected_kwargs or {} if method_result: self.assertEqual(method_result, test_method(*method_args, **method_kwargs)) else: self.assertEqual(expected_result, test_method(*method_args, **method_kwargs)) mocked.assert_called_with(*expected_args, **expected_kwargs) else: self.assertEqual(expected_result, test_method()) mocked.assert_called_with(test_method.__self__) def verify_create(self, test_method, resource_type, mock_method="openstack.proxy.BaseProxy._create", expected_result="result", **kwargs): the_kwargs = {"x": 1, "y": 2, "z": 3} method_kwargs = kwargs.pop("method_kwargs", the_kwargs) expected_args = [resource_type] expected_kwargs = kwargs.pop("expected_kwargs", the_kwargs) self._verify2(mock_method, test_method, expected_result=expected_result, method_kwargs=method_kwargs, expected_args=expected_args, expected_kwargs=expected_kwargs, **kwargs) def verify_delete(self, test_method, resource_type, ignore, input_path_args=None, expected_path_args=None, method_kwargs=None, expected_args=None, expected_kwargs=None, mock_method="openstack.proxy.BaseProxy._delete"): method_args = ["resource_or_id"] method_kwargs = method_kwargs or {} method_kwargs["ignore_missing"] = ignore if isinstance(input_path_args, dict): for key in input_path_args: method_kwargs[key] = input_path_args[key] elif isinstance(input_path_args, list): method_args = input_path_args expected_kwargs = expected_kwargs or {} expected_kwargs["ignore_missing"] = ignore if expected_path_args: expected_kwargs.update(expected_path_args) expected_args = expected_args or 
[resource_type, "resource_or_id"] self._verify2(mock_method, test_method, method_args=method_args, method_kwargs=method_kwargs, expected_args=expected_args, expected_kwargs=expected_kwargs) def verify_get(self, test_method, resource_type, value=None, args=None, mock_method="openstack.proxy.BaseProxy._get", ignore_value=False, **kwargs): the_value = value if value is None: the_value = [] if ignore_value else ["value"] expected_args = kwargs.pop("expected_args", []) expected_kwargs = kwargs.pop("expected_kwargs", {}) method_kwargs = kwargs.pop("method_kwargs", kwargs) if args: expected_kwargs["args"] = args if kwargs: expected_kwargs["path_args"] = kwargs if not expected_args: expected_args = [resource_type] + the_value self._verify2(mock_method, test_method, method_args=the_value, method_kwargs=method_kwargs or {}, expected_args=expected_args, expected_kwargs=expected_kwargs) def verify_head(self, test_method, resource_type, mock_method="openstack.proxy.BaseProxy._head", value=None, **kwargs): the_value = [value] if value is not None else [] expected_kwargs = {"path_args": kwargs} if kwargs else {} self._verify2(mock_method, test_method, method_args=the_value, method_kwargs=kwargs, expected_args=[resource_type] + the_value, expected_kwargs=expected_kwargs) def verify_find(self, test_method, resource_type, value=None, mock_method="openstack.proxy.BaseProxy._find", path_args=None, **kwargs): method_args = value or ["name_or_id"] expected_kwargs = {} self._add_path_args_for_verify(path_args, method_args, expected_kwargs, value=value) # TODO(briancurtin): if sub-tests worked in this mess of # test dependencies, the following would be a lot easier to work with. 
expected_kwargs["ignore_missing"] = False self._verify2(mock_method, test_method, method_args=method_args + [False], expected_args=[resource_type, "name_or_id"], expected_kwargs=expected_kwargs, expected_result="result", **kwargs) expected_kwargs["ignore_missing"] = True self._verify2(mock_method, test_method, method_args=method_args + [True], expected_args=[resource_type, "name_or_id"], expected_kwargs=expected_kwargs, expected_result="result", **kwargs) def verify_list(self, test_method, resource_type, paginated=False, mock_method="openstack.proxy.BaseProxy._list", **kwargs): expected_kwargs = kwargs.pop("expected_kwargs", {}) expected_kwargs.update({"paginated": paginated}) method_kwargs = kwargs.pop("method_kwargs", {}) self._verify2(mock_method, test_method, method_kwargs=method_kwargs, expected_args=[resource_type], expected_kwargs=expected_kwargs, expected_result=["result"], **kwargs) def verify_list_no_kwargs(self, test_method, resource_type, paginated=False, mock_method="openstack.proxy.BaseProxy._list"): self._verify2(mock_method, test_method, method_kwargs={}, expected_args=[resource_type], expected_kwargs={"paginated": paginated}, expected_result=["result"]) def verify_update(self, test_method, resource_type, value=None, mock_method="openstack.proxy.BaseProxy._update", expected_result="result", path_args=None, **kwargs): method_args = value or ["resource_or_id"] method_kwargs = {"x": 1, "y": 2, "z": 3} expected_args = kwargs.pop("expected_args", ["resource_or_id"]) expected_kwargs = method_kwargs.copy() self._add_path_args_for_verify(path_args, method_args, expected_kwargs, value=value) self._verify2(mock_method, test_method, expected_result=expected_result, method_args=method_args, method_kwargs=method_kwargs, expected_args=[resource_type] + expected_args, expected_kwargs=expected_kwargs, **kwargs) def verify_wait_for_status( self, test_method, mock_method="openstack.resource.wait_for_status", **kwargs): self._verify(mock_method, test_method, **kwargs) 
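`verify_update` above copies `method_kwargs` before merging path arguments into the expected call, so the caller's dict is never mutated. A small standalone sketch of that copy-then-merge convention (the helper name is illustrative, not part of the SDK):

```python
def build_expected_kwargs(method_kwargs, path_args=None):
    # Copy first so the caller's dict is not mutated when path args are
    # merged in, mirroring verify_update's use of method_kwargs.copy().
    expected = method_kwargs.copy()
    if path_args:
        expected['path_args'] = path_args
    return expected


method_kwargs = {"x": 1, "y": 2}
expected = build_expected_kwargs(method_kwargs, {"parent_id": "abc"})
assert expected == {"x": 1, "y": 2, "path_args": {"parent_id": "abc"}}
assert method_kwargs == {"x": 1, "y": 2}  # original left untouched
```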
openstacksdk-0.11.3/openstack/tests/unit/identity/0000775000175100017510000000000013236151501022247 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/identity/test_version.py0000666000175100017510000000430713236151340025354 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import testtools from openstack.identity import version IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'id': IDENTIFIER, 'media-types': '2', 'status': '3', 'updated': '4', } class TestVersion(testtools.TestCase): def test_basic(self): sot = version.Version() self.assertEqual('version', sot.resource_key) self.assertEqual('versions', sot.resources_key) self.assertEqual('/', sot.base_path) self.assertEqual('identity', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = version.Version(**EXAMPLE) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['media-types'], sot.media_types) self.assertEqual(EXAMPLE['status'], sot.status) self.assertEqual(EXAMPLE['updated'], sot.updated) def test_list(self): resp = mock.Mock() resp.body = { "versions": { "values": [ {"status": "stable", "updated": "a", "id": "v1.0"}, {"status": "stable", "updated": "b", "id": "v1.1"}, ] } } resp.json = mock.Mock(return_value=resp.body) session = mock.Mock() session.get = mock.Mock(return_value=resp) sot = 
version.Version(**EXAMPLE) result = sot.list(session) self.assertEqual(next(result).id, 'v1.0') self.assertEqual(next(result).id, 'v1.1') self.assertRaises(StopIteration, next, result) openstacksdk-0.11.3/openstack/tests/unit/identity/v2/0000775000175100017510000000000013236151501022576 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/identity/v2/test_tenant.py0000666000175100017510000000276213236151340025512 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
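The `test_list` case above drives a generator: each `next()` pulls one version out of the mocked response, and exhaustion raises `StopIteration`. A self-contained sketch of the same lazy-iteration pattern with `unittest.mock` (`list_versions` is an illustrative stand-in for `Version.list`):

```python
from unittest import mock


def list_versions(session):
    """Yield one version id per entry in the discovery document."""
    resp = session.get('/')
    for value in resp.json()["versions"]["values"]:
        yield value["id"]


# Build a mocked session whose GET returns a canned discovery body.
resp = mock.Mock()
resp.json = mock.Mock(return_value={
    "versions": {"values": [{"id": "v1.0"}, {"id": "v1.1"}]}})
session = mock.Mock()
session.get = mock.Mock(return_value=resp)

result = list_versions(session)
assert next(result) == "v1.0"
assert next(result) == "v1.1"
# The generator is now exhausted; a further next() raises StopIteration.
```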
import testtools

from openstack.identity.v2 import tenant

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'description': '1',
    'enabled': True,
    'id': '3',
    'name': '4',
}


class TestTenant(testtools.TestCase):

    def test_basic(self):
        sot = tenant.Tenant()
        self.assertEqual('tenant', sot.resource_key)
        self.assertEqual('tenants', sot.resources_key)
        self.assertEqual('/tenants', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = tenant.Tenant(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)

openstacksdk-0.11.3/openstack/tests/unit/identity/v2/test_extension.py0000666000175100017510000000452113236151340026230 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock import testtools from openstack.identity.v2 import extension IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'alias': '1', 'description': '2', 'links': '3', 'name': '4', 'namespace': '5', 'updated': '2015-03-09T12:14:57.233772', } class TestExtension(testtools.TestCase): def test_basic(self): sot = extension.Extension() self.assertEqual('extension', sot.resource_key) self.assertEqual('extensions', sot.resources_key) self.assertEqual('/extensions', sot.base_path) self.assertEqual('identity', sot.service.service_type) self.assertFalse(sot.allow_create) self.assertTrue(sot.allow_get) self.assertFalse(sot.allow_update) self.assertFalse(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = extension.Extension(**EXAMPLE) self.assertEqual(EXAMPLE['alias'], sot.alias) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['links'], sot.links) self.assertEqual(EXAMPLE['name'], sot.name) self.assertEqual(EXAMPLE['namespace'], sot.namespace) self.assertEqual(EXAMPLE['updated'], sot.updated_at) def test_list(self): resp = mock.Mock() resp.body = { "extensions": { "values": [ {"name": "a"}, {"name": "b"}, ] } } resp.json = mock.Mock(return_value=resp.body) session = mock.Mock() session.get = mock.Mock(return_value=resp) sot = extension.Extension(**EXAMPLE) result = sot.list(session) self.assertEqual(next(result).name, 'a') self.assertEqual(next(result).name, 'b') self.assertRaises(StopIteration, next, result) openstacksdk-0.11.3/openstack/tests/unit/identity/v2/test_role.py0000666000175100017510000000276213236151340025162 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.identity.v2 import role IDENTIFIER = 'IDENTIFIER' EXAMPLE = { 'enabled': 'True', 'description': '1', 'id': IDENTIFIER, 'name': '3', } class TestRole(testtools.TestCase): def test_basic(self): sot = role.Role() self.assertEqual('role', sot.resource_key) self.assertEqual('roles', sot.resources_key) self.assertEqual('/OS-KSADM/roles', sot.base_path) self.assertEqual('identity', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertTrue(sot.allow_get) self.assertTrue(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make_it(self): sot = role.Role(**EXAMPLE) self.assertEqual(EXAMPLE['description'], sot.description) self.assertEqual(EXAMPLE['id'], sot.id) self.assertEqual(EXAMPLE['name'], sot.name) self.assertTrue(sot.is_enabled) openstacksdk-0.11.3/openstack/tests/unit/identity/v2/__init__.py0000666000175100017510000000000013236151340024700 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/identity/v2/test_user.py0000666000175100017510000000271613236151340025176 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
import testtools

from openstack.identity.v2 import user

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'email': '1',
    'enabled': True,
    'id': '3',
    'name': '4',
}


class TestUser(testtools.TestCase):

    def test_basic(self):
        sot = user.User()
        self.assertEqual('user', sot.resource_key)
        self.assertEqual('users', sot.resources_key)
        self.assertEqual('/users', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = user.User(**EXAMPLE)
        self.assertEqual(EXAMPLE['email'], sot.email)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)

openstacksdk-0.11.3/openstack/tests/unit/identity/v2/test_proxy.py
from openstack.identity.v2 import _proxy
from openstack.identity.v2 import role
from openstack.identity.v2 import tenant
from openstack.identity.v2 import user
from openstack.tests.unit import test_proxy_base as test_proxy_base


class TestIdentityProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestIdentityProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_role_create_attrs(self):
        self.verify_create(self.proxy.create_role, role.Role)

    def test_role_delete(self):
        self.verify_delete(self.proxy.delete_role, role.Role, False)

    def test_role_delete_ignore(self):
        self.verify_delete(self.proxy.delete_role, role.Role, True)

    def test_role_find(self):
        self.verify_find(self.proxy.find_role, role.Role)

    def test_role_get(self):
        self.verify_get(self.proxy.get_role, role.Role)

    def test_roles(self):
        self.verify_list(self.proxy.roles, role.Role)

    def test_role_update(self):
        self.verify_update(self.proxy.update_role, role.Role)

    def test_tenant_create_attrs(self):
        self.verify_create(self.proxy.create_tenant, tenant.Tenant)

    def test_tenant_delete(self):
        self.verify_delete(self.proxy.delete_tenant, tenant.Tenant, False)

    def test_tenant_delete_ignore(self):
        self.verify_delete(self.proxy.delete_tenant, tenant.Tenant, True)

    def test_tenant_find(self):
        self.verify_find(self.proxy.find_tenant, tenant.Tenant)

    def test_tenant_get(self):
        self.verify_get(self.proxy.get_tenant, tenant.Tenant)

    def test_tenants(self):
        self.verify_list(self.proxy.tenants, tenant.Tenant, paginated=True)

    def test_tenant_update(self):
        self.verify_update(self.proxy.update_tenant, tenant.Tenant)

    def test_user_create_attrs(self):
        self.verify_create(self.proxy.create_user, user.User)

    def test_user_delete(self):
        self.verify_delete(self.proxy.delete_user, user.User, False)

    def test_user_delete_ignore(self):
        self.verify_delete(self.proxy.delete_user, user.User, True)

    def test_user_find(self):
        self.verify_find(self.proxy.find_user, user.User)

    def test_user_get(self):
        self.verify_get(self.proxy.get_user, user.User)

    def test_users(self):
        self.verify_list(self.proxy.users, user.User)

    def test_user_update(self):
        self.verify_update(self.proxy.update_user, user.User)

openstacksdk-0.11.3/openstack/tests/unit/identity/test_identity_service.py

import testtools

from openstack.identity import identity_service


class TestIdentityService(testtools.TestCase):

    def test_regular_service(self):
        sot = identity_service.IdentityService()
        self.assertEqual('identity', sot.service_type)
        self.assertEqual('public', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)
        self.assertEqual(2, len(sot.valid_versions))
        self.assertEqual('v3', sot.valid_versions[0].module)
        self.assertEqual('v3', sot.valid_versions[0].path)
        self.assertEqual('v2', sot.valid_versions[1].module)
        self.assertEqual('v2', sot.valid_versions[1].path)

    def test_admin_service(self):
        sot = identity_service.AdminService()
        self.assertEqual('identity', sot.service_type)
        self.assertEqual('admin', sot.interface)
        self.assertIsNone(sot.region)
        self.assertIsNone(sot.service_name)

openstacksdk-0.11.3/openstack/tests/unit/identity/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/
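The identity unit tests above all rely on one pattern: a `mock.Mock()` stands in for the authenticated session, so resource behaviour can be asserted without talking to a real cloud. The sketch below reproduces the canned-response idea from `TestExtension.test_list` with a stand-alone class; `FakeExtension` is hypothetical and exists only to illustrate the pattern — it is not part of the SDK.

```python
from unittest import mock


class FakeExtension:
    """Stand-in for an SDK resource class (illustrative only)."""

    resources_key = 'extensions'

    def __init__(self, **attrs):
        self.name = attrs.get('name')

    @classmethod
    def list(cls, session):
        # Identity v2 wraps the extension list in an extra 'values' key,
        # which is why the canned payload in test_list nests two levels deep.
        body = session.get('/extensions').json()
        for raw in body[cls.resources_key]['values']:
            yield cls(**raw)


# Build the canned response the same way TestExtension.test_list does.
resp = mock.Mock()
resp.json.return_value = {
    'extensions': {'values': [{'name': 'a'}, {'name': 'b'}]}
}
session = mock.Mock()
session.get.return_value = resp

names = [ext.name for ext in FakeExtension.list(session)]
session.get.assert_called_once_with('/extensions')
```

Because the mock records its calls, the test can assert both the parsed results and the exact request path, all without any network I/O.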
openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_project.py

import testtools

from openstack.identity.v3 import project

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'description': '1',
    'domain_id': '2',
    'enabled': True,
    'id': IDENTIFIER,
    'is_domain': False,
    'name': '5',
    'parent_id': '6',
}


class TestProject(testtools.TestCase):

    def test_basic(self):
        sot = project.Project()
        self.assertEqual('project', sot.resource_key)
        self.assertEqual('projects', sot.resources_key)
        self.assertEqual('/projects', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'domain_id': 'domain_id',
                'is_domain': 'is_domain',
                'name': 'name',
                'parent_id': 'parent_id',
                'is_enabled': 'enabled',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = project.Project(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['domain_id'], sot.domain_id)
        self.assertFalse(sot.is_domain)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['parent_id'],
                         sot.parent_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_role_domain_group_assignment.py

import testtools

from openstack.identity.v3 import role_domain_group_assignment

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'name': '2',
    'domain_id': '3',
    'group_id': '4'
}


class TestRoleDomainGroupAssignment(testtools.TestCase):

    def test_basic(self):
        sot = role_domain_group_assignment.RoleDomainGroupAssignment()
        self.assertEqual('role', sot.resource_key)
        self.assertEqual('roles', sot.resources_key)
        self.assertEqual('/domains/%(domain_id)s/groups/%(group_id)s/roles',
                         sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = \
            role_domain_group_assignment.RoleDomainGroupAssignment(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['domain_id'], sot.domain_id)
        self.assertEqual(EXAMPLE['group_id'], sot.group_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_role_project_group_assignment.py
import testtools

from openstack.identity.v3 import role_project_group_assignment

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'name': '2',
    'project_id': '3',
    'group_id': '4'
}


class TestRoleProjectGroupAssignment(testtools.TestCase):

    def test_basic(self):
        sot = role_project_group_assignment.RoleProjectGroupAssignment()
        self.assertEqual('role', sot.resource_key)
        self.assertEqual('roles', sot.resources_key)
        self.assertEqual('/projects/%(project_id)s/groups/%(group_id)s/roles',
                         sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = \
            role_project_group_assignment.RoleProjectGroupAssignment(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['group_id'], sot.group_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_trust.py
import testtools

from openstack.identity.v3 import trust

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'allow_redelegation': False,
    'expires_at': '2016-03-09T12:14:57.233772',
    'id': IDENTIFIER,
    'impersonation': True,
    'links': {'self': 'fake_link'},
    'project_id': '1',
    'redelegated_trust_id': None,
    'redelegation_count': '0',
    'remaining_uses': 10,
    'role_links': {'self': 'other_fake_link'},
    'trustee_user_id': '2',
    'trustor_user_id': '3',
    'roles': [{'name': 'test-role'}],
}


class TestTrust(testtools.TestCase):

    def test_basic(self):
        sot = trust.Trust()
        self.assertEqual('trust', sot.resource_key)
        self.assertEqual('trusts', sot.resources_key)
        self.assertEqual('/OS-TRUST/trusts', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = trust.Trust(**EXAMPLE)
        self.assertEqual(EXAMPLE['allow_redelegation'],
                         sot.allow_redelegation)
        self.assertEqual(EXAMPLE['expires_at'], sot.expires_at)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertTrue(sot.is_impersonation)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['role_links'], sot.role_links)
        self.assertEqual(EXAMPLE['redelegated_trust_id'],
                         sot.redelegated_trust_id)
        self.assertEqual(EXAMPLE['remaining_uses'], sot.remaining_uses)
        self.assertEqual(EXAMPLE['trustee_user_id'], sot.trustee_user_id)
        self.assertEqual(EXAMPLE['trustor_user_id'], sot.trustor_user_id)
        self.assertEqual(EXAMPLE['roles'], sot.roles)
        self.assertEqual(EXAMPLE['redelegation_count'],
                         sot.redelegation_count)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_role_project_user_assignment.py
import testtools

from openstack.identity.v3 import role_project_user_assignment

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'name': '2',
    'project_id': '3',
    'user_id': '4'
}


class TestRoleProjectUserAssignment(testtools.TestCase):

    def test_basic(self):
        sot = role_project_user_assignment.RoleProjectUserAssignment()
        self.assertEqual('role', sot.resource_key)
        self.assertEqual('roles', sot.resources_key)
        self.assertEqual('/projects/%(project_id)s/users/%(user_id)s/roles',
                         sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = \
            role_project_user_assignment.RoleProjectUserAssignment(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['user_id'], sot.user_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_role.py
import testtools

from openstack.identity.v3 import role

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'name': '2',
}


class TestRole(testtools.TestCase):

    def test_basic(self):
        sot = role.Role()
        self.assertEqual('role', sot.resource_key)
        self.assertEqual('roles', sot.resources_key)
        self.assertEqual('/roles', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertDictEqual(
            {
                'domain_id': 'domain_id',
                'name': 'name',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = role.Role(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_group.py
import testtools

from openstack.identity.v3 import group

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'description': '1',
    'domain_id': '2',
    'id': IDENTIFIER,
    'name': '4',
}


class TestGroup(testtools.TestCase):

    def test_basic(self):
        sot = group.Group()
        self.assertEqual('group', sot.resource_key)
        self.assertEqual('groups', sot.resources_key)
        self.assertEqual('/groups', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'domain_id': 'domain_id',
                'name': 'name',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = group.Group(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['domain_id'], sot.domain_id)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['name'], sot.name)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_endpoint.py
import testtools

from openstack.identity.v3 import endpoint

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'enabled': True,
    'id': IDENTIFIER,
    'interface': '3',
    'links': {'self': 'http://example.com/endpoint1'},
    'region_id': '4',
    'service_id': '5',
    'url': '6',
}


class TestEndpoint(testtools.TestCase):

    def test_basic(self):
        sot = endpoint.Endpoint()
        self.assertEqual('endpoint', sot.resource_key)
        self.assertEqual('endpoints', sot.resources_key)
        self.assertEqual('/endpoints', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'interface': 'interface',
                'service_id': 'service_id',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = endpoint.Endpoint(**EXAMPLE)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['interface'], sot.interface)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['region_id'], sot.region_id)
        self.assertEqual(EXAMPLE['service_id'], sot.service_id)
        self.assertEqual(EXAMPLE['url'], sot.url)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_role_domain_user_assignment.py
import testtools

from openstack.identity.v3 import role_domain_user_assignment

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'name': '2',
    'domain_id': '3',
    'user_id': '4'
}


class TestRoleDomainUserAssignment(testtools.TestCase):

    def test_basic(self):
        sot = role_domain_user_assignment.RoleDomainUserAssignment()
        self.assertEqual('role', sot.resource_key)
        self.assertEqual('roles', sot.resources_key)
        self.assertEqual('/domains/%(domain_id)s/users/%(user_id)s/roles',
                         sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = \
            role_domain_user_assignment.RoleDomainUserAssignment(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['domain_id'], sot.domain_id)
        self.assertEqual(EXAMPLE['user_id'], sot.user_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/__init__.py

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_credential.py
import testtools

from openstack.identity.v3 import credential

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'blob': '1',
    'id': IDENTIFIER,
    'project_id': '3',
    'type': '4',
    'user_id': '5',
}


class TestCredential(testtools.TestCase):

    def test_basic(self):
        sot = credential.Credential()
        self.assertEqual('credential', sot.resource_key)
        self.assertEqual('credentials', sot.resources_key)
        self.assertEqual('/credentials', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'type': 'type',
                'user_id': 'user_id',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = credential.Credential(**EXAMPLE)
        self.assertEqual(EXAMPLE['blob'], sot.blob)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['type'], sot.type)
        self.assertEqual(EXAMPLE['user_id'], sot.user_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_domain.py
import testtools

from openstack.identity.v3 import domain

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'description': '1',
    'enabled': True,
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/identity/v3/domains/id'},
    'name': '4',
}


class TestDomain(testtools.TestCase):

    def test_basic(self):
        sot = domain.Domain()
        self.assertEqual('domain', sot.resource_key)
        self.assertEqual('domains', sot.resources_key)
        self.assertEqual('/domains', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'name': 'name',
                'is_enabled': 'enabled',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = domain.Domain(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_user.py
import testtools

from openstack.identity.v3 import user

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'default_project_id': '1',
    'description': '2',
    'domain_id': '3',
    'email': '4',
    'enabled': True,
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'name': '6',
    'password': '7',
    'password_expires_at': '8',
}


class TestUser(testtools.TestCase):

    def test_basic(self):
        sot = user.User()
        self.assertEqual('user', sot.resource_key)
        self.assertEqual('users', sot.resources_key)
        self.assertEqual('/users', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'domain_id': 'domain_id',
                'name': 'name',
                'password_expires_at': 'password_expires_at',
                'is_enabled': 'enabled',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = user.User(**EXAMPLE)
        self.assertEqual(EXAMPLE['default_project_id'],
                         sot.default_project_id)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['domain_id'], sot.domain_id)
        self.assertEqual(EXAMPLE['email'], sot.email)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['password'], sot.password)
        self.assertEqual(EXAMPLE['password_expires_at'],
                         sot.password_expires_at)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_role_assignment.py
import testtools

from openstack.identity.v3 import role_assignment

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/user1'},
    'scope': {'domain': {'id': '2'}},
    'user': {'id': '3'},
    'group': {'id': '4'}
}


class TestRoleAssignment(testtools.TestCase):

    def test_basic(self):
        sot = role_assignment.RoleAssignment()
        self.assertEqual('role_assignment', sot.resource_key)
        self.assertEqual('role_assignments', sot.resources_key)
        self.assertEqual('/role_assignments', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = role_assignment.RoleAssignment(**EXAMPLE)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['scope'], sot.scope)
        self.assertEqual(EXAMPLE['user'], sot.user)
        self.assertEqual(EXAMPLE['group'], sot.group)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_policy.py
import testtools

from openstack.identity.v3 import policy

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'blob': '1',
    'id': IDENTIFIER,
    'links': {'self': 'a-pointer'},
    'project_id': '2',
    'type': '3',
    'user_id': '4',
}


class TestPolicy(testtools.TestCase):

    def test_basic(self):
        sot = policy.Policy()
        self.assertEqual('policy', sot.resource_key)
        self.assertEqual('policies', sot.resources_key)
        self.assertEqual('/policies', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)

    def test_make_it(self):
        sot = policy.Policy(**EXAMPLE)
        self.assertEqual(EXAMPLE['blob'], sot.blob)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['project_id'], sot.project_id)
        self.assertEqual(EXAMPLE['type'], sot.type)
        self.assertEqual(EXAMPLE['user_id'], sot.user_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_region.py
import testtools

from openstack.identity.v3 import region

IDENTIFIER = 'RegionOne'
EXAMPLE = {
    'description': '1',
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/region1'},
    'parent_region_id': 'FAKE_PARENT',
}


class TestRegion(testtools.TestCase):

    def test_basic(self):
        sot = region.Region()
        self.assertEqual('region', sot.resource_key)
        self.assertEqual('regions', sot.resources_key)
        self.assertEqual('/regions', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'parent_region_id': 'parent_region_id',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = region.Region(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['parent_region_id'], sot.parent_region_id)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_service.py
import testtools

from openstack.identity.v3 import service

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'description': '1',
    'enabled': True,
    'id': IDENTIFIER,
    'links': {'self': 'http://example.com/service1'},
    'name': '4',
    'type': '5',
}


class TestService(testtools.TestCase):

    def test_basic(self):
        sot = service.Service()
        self.assertEqual('service', sot.resource_key)
        self.assertEqual('services', sot.resources_key)
        self.assertEqual('/services', sot.base_path)
        self.assertEqual('identity', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)
        self.assertEqual('PATCH', sot.update_method)
        self.assertDictEqual(
            {
                'type': 'type',
                'limit': 'limit',
                'marker': 'marker',
            },
            sot._query_mapping._mapping)

    def test_make_it(self):
        sot = service.Service(**EXAMPLE)
        self.assertEqual(EXAMPLE['description'], sot.description)
        self.assertTrue(sot.is_enabled)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['type'], sot.type)

openstacksdk-0.11.3/openstack/tests/unit/identity/v3/test_proxy.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from openstack.identity.v3 import _proxy
from openstack.identity.v3 import credential
from openstack.identity.v3 import domain
from openstack.identity.v3 import endpoint
from openstack.identity.v3 import group
from openstack.identity.v3 import policy
from openstack.identity.v3 import project
from openstack.identity.v3 import region
from openstack.identity.v3 import role
from openstack.identity.v3 import service
from openstack.identity.v3 import trust
from openstack.identity.v3 import user
from openstack.tests.unit import test_proxy_base


class TestIdentityProxy(test_proxy_base.TestProxyBase):

    def setUp(self):
        super(TestIdentityProxy, self).setUp()
        self.proxy = _proxy.Proxy(self.session)

    def test_credential_create_attrs(self):
        self.verify_create(self.proxy.create_credential,
                           credential.Credential)

    def test_credential_delete(self):
        self.verify_delete(self.proxy.delete_credential,
                           credential.Credential, False)

    def test_credential_delete_ignore(self):
        self.verify_delete(self.proxy.delete_credential,
                           credential.Credential, True)

    def test_credential_find(self):
        self.verify_find(self.proxy.find_credential, credential.Credential)

    def test_credential_get(self):
        self.verify_get(self.proxy.get_credential, credential.Credential)

    def test_credentials(self):
        self.verify_list(self.proxy.credentials, credential.Credential,
                         paginated=False)

    def test_credential_update(self):
        self.verify_update(self.proxy.update_credential,
                           credential.Credential)

    def test_domain_create_attrs(self):
        self.verify_create(self.proxy.create_domain, domain.Domain)

    def test_domain_delete(self):
        self.verify_delete(self.proxy.delete_domain, domain.Domain, False)

    def test_domain_delete_ignore(self):
        self.verify_delete(self.proxy.delete_domain, domain.Domain, True)

    def test_domain_find(self):
        self.verify_find(self.proxy.find_domain, domain.Domain)

    def test_domain_get(self):
        self.verify_get(self.proxy.get_domain, domain.Domain)

    def test_domains(self):
        self.verify_list(self.proxy.domains, domain.Domain,
                         paginated=False)

    def test_domain_update(self):
        self.verify_update(self.proxy.update_domain, domain.Domain)

    def test_endpoint_create_attrs(self):
        self.verify_create(self.proxy.create_endpoint, endpoint.Endpoint)

    def test_endpoint_delete(self):
        self.verify_delete(self.proxy.delete_endpoint,
                           endpoint.Endpoint, False)

    def test_endpoint_delete_ignore(self):
        self.verify_delete(self.proxy.delete_endpoint,
                           endpoint.Endpoint, True)

    def test_endpoint_find(self):
        self.verify_find(self.proxy.find_endpoint, endpoint.Endpoint)

    def test_endpoint_get(self):
        self.verify_get(self.proxy.get_endpoint, endpoint.Endpoint)

    def test_endpoints(self):
        self.verify_list(self.proxy.endpoints, endpoint.Endpoint,
                         paginated=False)

    def test_endpoint_update(self):
        self.verify_update(self.proxy.update_endpoint, endpoint.Endpoint)

    def test_group_create_attrs(self):
        self.verify_create(self.proxy.create_group, group.Group)

    def test_group_delete(self):
        self.verify_delete(self.proxy.delete_group, group.Group, False)

    def test_group_delete_ignore(self):
        self.verify_delete(self.proxy.delete_group, group.Group, True)

    def test_group_find(self):
        self.verify_find(self.proxy.find_group, group.Group)

    def test_group_get(self):
        self.verify_get(self.proxy.get_group, group.Group)

    def test_groups(self):
        self.verify_list(self.proxy.groups, group.Group, paginated=False)

    def test_group_update(self):
        self.verify_update(self.proxy.update_group, group.Group)

    def test_policy_create_attrs(self):
        self.verify_create(self.proxy.create_policy, policy.Policy)

    def test_policy_delete(self):
        self.verify_delete(self.proxy.delete_policy, policy.Policy, False)

    def test_policy_delete_ignore(self):
        self.verify_delete(self.proxy.delete_policy, policy.Policy, True)

    def test_policy_find(self):
        self.verify_find(self.proxy.find_policy, policy.Policy)

    def test_policy_get(self):
        self.verify_get(self.proxy.get_policy, policy.Policy)

    def test_policies(self):
        self.verify_list(self.proxy.policies, policy.Policy,
                         paginated=False)

    def test_policy_update(self):
        self.verify_update(self.proxy.update_policy, policy.Policy)

    def test_project_create_attrs(self):
        self.verify_create(self.proxy.create_project, project.Project)

    def test_project_delete(self):
        self.verify_delete(self.proxy.delete_project, project.Project, False)

    def test_project_delete_ignore(self):
        self.verify_delete(self.proxy.delete_project, project.Project, True)

    def test_project_find(self):
        self.verify_find(self.proxy.find_project, project.Project)

    def test_project_get(self):
        self.verify_get(self.proxy.get_project, project.Project)

    def test_projects(self):
        self.verify_list(self.proxy.projects, project.Project,
                         paginated=False)

    def test_project_update(self):
        self.verify_update(self.proxy.update_project, project.Project)

    def test_service_create_attrs(self):
        self.verify_create(self.proxy.create_service, service.Service)

    def test_service_delete(self):
        self.verify_delete(self.proxy.delete_service, service.Service, False)

    def test_service_delete_ignore(self):
        self.verify_delete(self.proxy.delete_service, service.Service, True)

    def test_service_find(self):
        self.verify_find(self.proxy.find_service, service.Service)

    def test_service_get(self):
        self.verify_get(self.proxy.get_service, service.Service)

    def test_services(self):
        self.verify_list(self.proxy.services, service.Service,
                         paginated=False)

    def test_service_update(self):
        self.verify_update(self.proxy.update_service, service.Service)

    def test_user_create_attrs(self):
        self.verify_create(self.proxy.create_user, user.User)

    def test_user_delete(self):
        self.verify_delete(self.proxy.delete_user, user.User, False)

    def test_user_delete_ignore(self):
        self.verify_delete(self.proxy.delete_user, user.User, True)

    def test_user_find(self):
        self.verify_find(self.proxy.find_user, user.User)

    def test_user_get(self):
        self.verify_get(self.proxy.get_user, user.User)

    def test_users(self):
        self.verify_list(self.proxy.users, user.User, paginated=False)

    def test_user_update(self):
        self.verify_update(self.proxy.update_user, user.User)

    def test_trust_create_attrs(self):
        self.verify_create(self.proxy.create_trust, trust.Trust)

    def test_trust_delete(self):
        self.verify_delete(self.proxy.delete_trust, trust.Trust, False)

    def test_trust_delete_ignore(self):
        self.verify_delete(self.proxy.delete_trust, trust.Trust, True)

    def test_trust_find(self):
        self.verify_find(self.proxy.find_trust, trust.Trust)

    def test_trust_get(self):
        self.verify_get(self.proxy.get_trust, trust.Trust)

    def test_trusts(self):
        self.verify_list(self.proxy.trusts, trust.Trust, paginated=False)

    def test_region_create_attrs(self):
        self.verify_create(self.proxy.create_region, region.Region)

    def test_region_delete(self):
        self.verify_delete(self.proxy.delete_region, region.Region, False)

    def test_region_delete_ignore(self):
        self.verify_delete(self.proxy.delete_region, region.Region, True)

    def test_region_find(self):
        self.verify_find(self.proxy.find_region, region.Region)

    def test_region_get(self):
        self.verify_get(self.proxy.get_region, region.Region)

    def test_regions(self):
        self.verify_list(self.proxy.regions, region.Region, paginated=False)

    def test_region_update(self):
        self.verify_update(self.proxy.update_region, region.Region)

    def test_role_create_attrs(self):
        self.verify_create(self.proxy.create_role, role.Role)

    def test_role_delete(self):
        self.verify_delete(self.proxy.delete_role, role.Role, False)

    def test_role_delete_ignore(self):
        self.verify_delete(self.proxy.delete_role, role.Role, True)

    def test_role_find(self):
        self.verify_find(self.proxy.find_role, role.Role)

    def test_role_get(self):
        self.verify_get(self.proxy.get_role, role.Role)

    def test_roles(self):
        self.verify_list(self.proxy.roles, role.Role, paginated=False)

    def test_role_update(self):
        self.verify_update(self.proxy.update_role, role.Role)

openstacksdk-0.11.3/openstack/tests/unit/test_proxy.py

# Licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack import exceptions
from openstack import proxy
from openstack import resource


class DeleteableResource(resource.Resource):
    allow_delete = True


class UpdateableResource(resource.Resource):
    allow_update = True


class CreateableResource(resource.Resource):
    allow_create = True


class RetrieveableResource(resource.Resource):
    allow_retrieve = True


class ListableResource(resource.Resource):
    allow_list = True


class HeadableResource(resource.Resource):
    allow_head = True


class TestProxyPrivate(testtools.TestCase):

    def setUp(self):
        super(TestProxyPrivate, self).setUp()

        def method(self, expected_type, value):
            return value

        self.sot = mock.Mock()
        self.sot.method = method

        self.fake_proxy = proxy.BaseProxy("session")

    def _test_correct(self, value):
        decorated = proxy._check_resource(strict=False)(self.sot.method)
        rv = decorated(self.sot, resource.Resource, value)
        self.assertEqual(value, rv)

    def test__check_resource_correct_resource(self):
        res = resource.Resource()
        self._test_correct(res)

    def test__check_resource_notstrict_id(self):
        self._test_correct("abc123-id")

    def test__check_resource_strict_id(self):
        decorated = proxy._check_resource(strict=True)(self.sot.method)
        self.assertRaisesRegex(ValueError, "A Resource must be passed",
                               decorated, self.sot, resource.Resource,
                               "this-is-not-a-resource")

    def test__check_resource_incorrect_resource(self):
        class OneType(resource.Resource):
            pass

        class AnotherType(resource.Resource):
            pass

        value = AnotherType()
        decorated = proxy._check_resource(strict=False)(self.sot.method)
        self.assertRaisesRegex(ValueError,
                               "Expected OneType but received AnotherType",
                               decorated, self.sot, OneType, value)

    def test__get_uri_attribute_no_parent(self):
        class Child(resource.Resource):
            something = resource.Body("something")

        attr = "something"
        value = "nothing"
        child = Child(something=value)

        result = self.fake_proxy._get_uri_attribute(child, None, attr)

        self.assertEqual(value, result)

    def test__get_uri_attribute_with_parent(self):
        class Parent(resource.Resource):
            pass

        value = "nothing"
        parent = Parent(id=value)

        result = self.fake_proxy._get_uri_attribute("child", parent, "attr")

        self.assertEqual(value, result)

    def test__get_resource_new(self):
        value = "hello"
        fake_type = mock.Mock(spec=resource.Resource)
        fake_type.new = mock.Mock(return_value=value)
        attrs = {"first": "Brian", "last": "Curtin"}

        result = self.fake_proxy._get_resource(fake_type, None, **attrs)

        fake_type.new.assert_called_with(**attrs)
        self.assertEqual(value, result)

    def test__get_resource_from_id(self):
        id = "eye dee"
        value = "hello"
        attrs = {"first": "Brian", "last": "Curtin"}

        # The isinstance check needs to take a type, not an instance,
        # so the mock.assert_called_with method isn't helpful here since
        # we can't pass in a mocked object. This class is a crude version
        # of that same behavior to let us check that `new` gets
        # called with the expected arguments.
        class Fake(object):
            call = {}

            @classmethod
            def new(cls, **kwargs):
                cls.call = kwargs
                return value

        result = self.fake_proxy._get_resource(Fake, id, **attrs)

        self.assertDictEqual(dict(id=id, **attrs), Fake.call)
        self.assertEqual(value, result)

    def test__get_resource_from_resource(self):
        res = mock.Mock(spec=resource.Resource)
        res._update = mock.Mock()

        attrs = {"first": "Brian", "last": "Curtin"}

        result = self.fake_proxy._get_resource(resource.Resource,
                                               res, **attrs)

        res._update.assert_called_once_with(**attrs)
        self.assertEqual(result, res)


class TestProxyDelete(testtools.TestCase):

    def setUp(self):
        super(TestProxyDelete, self).setUp()

        self.session = mock.Mock()

        self.fake_id = 1
        self.res = mock.Mock(spec=DeleteableResource)
        self.res.id = self.fake_id
        self.res.delete = mock.Mock()

        self.sot = proxy.BaseProxy(self.session)
        DeleteableResource.new = mock.Mock(return_value=self.res)

    def test_delete(self):
        self.sot._delete(DeleteableResource, self.res)
        self.res.delete.assert_called_with(self.sot, error_message=mock.ANY)

        self.sot._delete(DeleteableResource, self.fake_id)
        DeleteableResource.new.assert_called_with(id=self.fake_id)
        self.res.delete.assert_called_with(self.sot, error_message=mock.ANY)

        # Delete generally doesn't return anything, so we will normally
        # swallow any return from within a service's proxy, but make sure
        # we can still return for any cases where values are returned.
        self.res.delete.return_value = self.fake_id
        rv = self.sot._delete(DeleteableResource, self.fake_id)
        self.assertEqual(rv, self.fake_id)

    def test_delete_ignore_missing(self):
        self.res.delete.side_effect = exceptions.NotFoundException(
            message="test", http_status=404)

        rv = self.sot._delete(DeleteableResource, self.fake_id)
        self.assertIsNone(rv)

    def test_delete_NotFound(self):
        self.res.delete.side_effect = exceptions.NotFoundException(
            message="test", http_status=404)

        self.assertRaisesRegex(
            exceptions.NotFoundException,
            # TODO(shade) The mocks here are hiding the thing we want to test.
            "test",
            self.sot._delete, DeleteableResource, self.res,
            ignore_missing=False)

    def test_delete_HttpException(self):
        self.res.delete.side_effect = exceptions.HttpException(
            message="test", http_status=500)

        self.assertRaises(exceptions.HttpException, self.sot._delete,
                          DeleteableResource, self.res, ignore_missing=False)


class TestProxyUpdate(testtools.TestCase):

    def setUp(self):
        super(TestProxyUpdate, self).setUp()

        self.session = mock.Mock()

        self.fake_id = 1
        self.fake_result = "fake_result"

        self.res = mock.Mock(spec=UpdateableResource)
        self.res.update = mock.Mock(return_value=self.fake_result)

        self.sot = proxy.BaseProxy(self.session)

        self.attrs = {"x": 1, "y": 2, "z": 3}

        UpdateableResource.new = mock.Mock(return_value=self.res)

    def test_update_resource(self):
        rv = self.sot._update(UpdateableResource, self.res, **self.attrs)

        self.assertEqual(rv, self.fake_result)
        self.res._update.assert_called_once_with(**self.attrs)
        self.res.update.assert_called_once_with(self.sot)

    def test_update_id(self):
        rv = self.sot._update(UpdateableResource, self.fake_id, **self.attrs)

        self.assertEqual(rv, self.fake_result)
        self.res.update.assert_called_once_with(self.sot)


class TestProxyCreate(testtools.TestCase):

    def setUp(self):
        super(TestProxyCreate, self).setUp()

        self.session = mock.Mock()

        self.fake_result = "fake_result"
        self.res = mock.Mock(spec=CreateableResource)
        self.res.create = mock.Mock(return_value=self.fake_result)

        self.sot = proxy.BaseProxy(self.session)

    def test_create_attributes(self):
        CreateableResource.new = mock.Mock(return_value=self.res)

        attrs = {"x": 1, "y": 2, "z": 3}
        rv = self.sot._create(CreateableResource, **attrs)

        self.assertEqual(rv, self.fake_result)
        CreateableResource.new.assert_called_once_with(**attrs)
        self.res.create.assert_called_once_with(self.sot)


class TestProxyGet(testtools.TestCase):

    def setUp(self):
        super(TestProxyGet, self).setUp()

        self.session = mock.Mock()

        self.fake_id = 1
        self.fake_name = "fake_name"
        self.fake_result = "fake_result"
        self.res = mock.Mock(spec=RetrieveableResource)
        self.res.id = self.fake_id
        self.res.get = mock.Mock(return_value=self.fake_result)

        self.sot = proxy.BaseProxy(self.session)
        RetrieveableResource.new = mock.Mock(return_value=self.res)

    def test_get_resource(self):
        rv = self.sot._get(RetrieveableResource, self.res)

        self.res.get.assert_called_with(self.sot, requires_id=True,
                                        error_message=mock.ANY)
        self.assertEqual(rv, self.fake_result)

    def test_get_resource_with_args(self):
        args = {"key": "value"}
        rv = self.sot._get(RetrieveableResource, self.res, **args)

        self.res._update.assert_called_once_with(**args)
        self.res.get.assert_called_with(self.sot, requires_id=True,
                                        error_message=mock.ANY)
        self.assertEqual(rv, self.fake_result)

    def test_get_id(self):
        rv = self.sot._get(RetrieveableResource, self.fake_id)

        RetrieveableResource.new.assert_called_with(id=self.fake_id)
        self.res.get.assert_called_with(self.sot, requires_id=True,
                                        error_message=mock.ANY)
        self.assertEqual(rv, self.fake_result)

    def test_get_not_found(self):
        self.res.get.side_effect = exceptions.NotFoundException(
            message="test", http_status=404)

        self.assertRaisesRegex(
            exceptions.NotFoundException, "test",
            self.sot._get, RetrieveableResource, self.res)


class TestProxyList(testtools.TestCase):

    def setUp(self):
        super(TestProxyList, self).setUp()

        self.session = mock.Mock()

        self.args = {"a": "A", "b": "B", "c": "C"}
        self.fake_response = [resource.Resource()]

        self.sot = proxy.BaseProxy(self.session)
        ListableResource.list = mock.Mock()
        ListableResource.list.return_value = self.fake_response

    def _test_list(self, paginated):
        rv = self.sot._list(ListableResource, paginated=paginated,
                            **self.args)

        self.assertEqual(self.fake_response, rv)
        ListableResource.list.assert_called_once_with(
            self.sot, paginated=paginated, **self.args)

    def test_list_paginated(self):
        self._test_list(True)

    def test_list_non_paginated(self):
        self._test_list(False)


class TestProxyHead(testtools.TestCase):

    def setUp(self):
        super(TestProxyHead, self).setUp()

        self.session = mock.Mock()

        self.fake_id = 1
        self.fake_name = "fake_name"
        self.fake_result = "fake_result"
        self.res = mock.Mock(spec=HeadableResource)
        self.res.id = self.fake_id
        self.res.head = mock.Mock(return_value=self.fake_result)

        self.sot = proxy.BaseProxy(self.session)
        HeadableResource.new = mock.Mock(return_value=self.res)

    def test_head_resource(self):
        rv = self.sot._head(HeadableResource, self.res)

        self.res.head.assert_called_with(self.sot)
        self.assertEqual(rv, self.fake_result)

    def test_head_id(self):
        rv = self.sot._head(HeadableResource, self.fake_id)

        HeadableResource.new.assert_called_with(id=self.fake_id)
        self.res.head.assert_called_with(self.sot)
        self.assertEqual(rv, self.fake_result)


class TestProxyWaits(testtools.TestCase):

    def setUp(self):
        super(TestProxyWaits, self).setUp()

        self.session = mock.Mock()
        self.sot = proxy.BaseProxy(self.session)

    @mock.patch("openstack.resource.wait_for_status")
    def test_wait_for(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.sot.wait_for_status(mock_resource, 'ACTIVE')

        mock_wait.assert_called_once_with(
            self.sot, mock_resource, 'ACTIVE', [], 2, 120)

    @mock.patch("openstack.resource.wait_for_status")
    def test_wait_for_params(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.sot.wait_for_status(mock_resource, 'ACTIVE', ['ERROR'], 1, 2)

        mock_wait.assert_called_once_with(
            self.sot, mock_resource, 'ACTIVE', ['ERROR'], 1, 2)

    @mock.patch("openstack.resource.wait_for_delete")
    def test_wait_for_delete(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.sot.wait_for_delete(mock_resource)

        mock_wait.assert_called_once_with(self.sot, mock_resource, 2, 120)

    @mock.patch("openstack.resource.wait_for_delete")
    def test_wait_for_delete_params(self, mock_wait):
        mock_resource = mock.Mock()
        mock_wait.return_value = mock_resource

        self.sot.wait_for_delete(mock_resource, 1, 2)

        mock_wait.assert_called_once_with(self.sot, mock_resource, 1, 2)

openstacksdk-0.11.3/openstack/tests/unit/database/
openstacksdk-0.11.3/openstack/tests/unit/database/v1/
openstacksdk-0.11.3/openstack/tests/unit/database/v1/test_flavor.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import testtools

from openstack.database.v1 import flavor

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'id': IDENTIFIER,
    'links': '1',
    'name': '2',
    'ram': '3',
}


class TestFlavor(testtools.TestCase):

    def test_basic(self):
        sot = flavor.Flavor()
        self.assertEqual('flavor', sot.resource_key)
        self.assertEqual('flavors', sot.resources_key)
        self.assertEqual('/flavors', sot.base_path)
        self.assertEqual('database', sot.service.service_type)
        self.assertTrue(sot.allow_list)
        self.assertFalse(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)

    def test_make_it(self):
        sot = flavor.Flavor(**EXAMPLE)
        self.assertEqual(IDENTIFIER, sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['ram'], sot.ram)

openstacksdk-0.11.3/openstack/tests/unit/database/v1/test_instance.py

# Licensed under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance with the
# License. You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack.database.v1 import instance

IDENTIFIER = 'IDENTIFIER'
EXAMPLE = {
    'flavor': '1',
    'id': IDENTIFIER,
    'links': '3',
    'name': '4',
    'status': '5',
    'volume': '6',
    'datastore': {'7': 'seven'},
    'region': '8',
    'hostname': '9',
    'created': '10',
    'updated': '11',
}


class TestInstance(testtools.TestCase):

    def test_basic(self):
        sot = instance.Instance()
        self.assertEqual('instance', sot.resource_key)
        self.assertEqual('instances', sot.resources_key)
        self.assertEqual('/instances', sot.base_path)
        self.assertEqual('database', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertTrue(sot.allow_get)
        self.assertTrue(sot.allow_update)
        self.assertTrue(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_it(self):
        sot = instance.Instance(**EXAMPLE)
        self.assertEqual(EXAMPLE['flavor'], sot.flavor)
        self.assertEqual(EXAMPLE['id'], sot.id)
        self.assertEqual(EXAMPLE['links'], sot.links)
        self.assertEqual(EXAMPLE['name'], sot.name)
        self.assertEqual(EXAMPLE['status'], sot.status)
        self.assertEqual(EXAMPLE['volume'], sot.volume)
        self.assertEqual(EXAMPLE['datastore'], sot.datastore)
        self.assertEqual(EXAMPLE['region'], sot.region)
        self.assertEqual(EXAMPLE['hostname'], sot.hostname)
        self.assertEqual(EXAMPLE['created'], sot.created_at)
        self.assertEqual(EXAMPLE['updated'], sot.updated_at)

    def test_enable_root_user(self):
        sot = instance.Instance(**EXAMPLE)
        response = mock.Mock()
        response.body = {'user': {'name': 'root', 'password': 'foo'}}
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)

        self.assertEqual(response.body['user'], sot.enable_root_user(sess))

        url = ("instances/%s/root" % IDENTIFIER)
        sess.post.assert_called_with(url,)

    def test_is_root_enabled(self):
        sot = instance.Instance(**EXAMPLE)
        response = mock.Mock()
        response.body = {'rootEnabled': True}
        response.json = mock.Mock(return_value=response.body)
        sess = mock.Mock()
        sess.get = mock.Mock(return_value=response)

        self.assertTrue(sot.is_root_enabled(sess))

        url = ("instances/%s/root" % IDENTIFIER)
        sess.get.assert_called_with(url,)

    def test_action_restart(self):
        sot = instance.Instance(**EXAMPLE)
        response = mock.Mock()
        response.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)

        self.assertIsNone(sot.restart(sess))

        url = ("instances/%s/action" % IDENTIFIER)
        body = {'restart': {}}
        sess.post.assert_called_with(url, json=body)

    def test_action_resize(self):
        sot = instance.Instance(**EXAMPLE)
        response = mock.Mock()
        response.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)
        flavor = 'http://flavor/flav'

        self.assertIsNone(sot.resize(sess, flavor))

        url = ("instances/%s/action" % IDENTIFIER)
        body = {'resize': {'flavorRef': flavor}}
        sess.post.assert_called_with(url, json=body)

    def test_action_resize_volume(self):
        sot = instance.Instance(**EXAMPLE)
        response = mock.Mock()
        response.json = mock.Mock(return_value='')
        sess = mock.Mock()
        sess.post = mock.Mock(return_value=response)
        size = 4

        self.assertIsNone(sot.resize_volume(sess, size))

        url = ("instances/%s/action" % IDENTIFIER)
        body = {'resize': {'volume': size}}
        sess.post.assert_called_with(url, json=body)

openstacksdk-0.11.3/openstack/tests/unit/database/v1/__init__.py
openstacksdk-0.11.3/openstack/tests/unit/database/v1/test_user.py
zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.database.v1 import user INSTANCE_ID = 'INSTANCE_ID' CREATING = { 'databases': '1', 'name': '2', 'password': '3', } class TestUser(testtools.TestCase): def test_basic(self): sot = user.User() self.assertEqual('user', sot.resource_key) self.assertEqual('users', sot.resources_key) self.assertEqual('/instances/%(instance_id)s/users', sot.base_path) self.assertEqual('database', sot.service.service_type) self.assertTrue(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertTrue(sot.allow_delete) self.assertTrue(sot.allow_list) def test_make(self): sot = user.User(**CREATING) self.assertEqual(CREATING['name'], sot.id) self.assertEqual(CREATING['databases'], sot.databases) self.assertEqual(CREATING['name'], sot.name) self.assertEqual(CREATING['name'], sot.id) self.assertEqual(CREATING['password'], sot.password) def test_create(self): sot = user.User(instance_id=INSTANCE_ID, **CREATING) result = sot._prepare_request() self.assertEqual(result.body, {sot.resources_key: CREATING}) openstacksdk-0.11.3/openstack/tests/unit/database/v1/test_database.py0000666000175100017510000000334113236151340025671 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.database.v1 import database IDENTIFIER = 'NAME' INSTANCE_ID = 'INSTANCE_ID' EXAMPLE = { 'character_set': '1', 'collate': '2', 'instance_id': INSTANCE_ID, 'name': IDENTIFIER, } class TestDatabase(testtools.TestCase): def test_basic(self): sot = database.Database() self.assertEqual('database', sot.resource_key) self.assertEqual('databases', sot.resources_key) path = '/instances/%(instance_id)s/databases' self.assertEqual(path, sot.base_path) self.assertEqual('database', sot.service.service_type) self.assertTrue(sot.allow_list) self.assertTrue(sot.allow_create) self.assertFalse(sot.allow_get) self.assertFalse(sot.allow_update) self.assertTrue(sot.allow_delete) def test_make_it(self): sot = database.Database(**EXAMPLE) self.assertEqual(IDENTIFIER, sot.id) self.assertEqual(EXAMPLE['character_set'], sot.character_set) self.assertEqual(EXAMPLE['collate'], sot.collate) self.assertEqual(EXAMPLE['instance_id'], sot.instance_id) self.assertEqual(IDENTIFIER, sot.name) self.assertEqual(IDENTIFIER, sot.id) openstacksdk-0.11.3/openstack/tests/unit/database/v1/test_proxy.py0000666000175100017510000001175013236151340025311 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.database.v1 import _proxy from openstack.database.v1 import database from openstack.database.v1 import flavor from openstack.database.v1 import instance from openstack.database.v1 import user from openstack.tests.unit import test_proxy_base class TestDatabaseProxy(test_proxy_base.TestProxyBase): def setUp(self): super(TestDatabaseProxy, self).setUp() self.proxy = _proxy.Proxy(self.session) def test_database_create_attrs(self): self.verify_create(self.proxy.create_database, database.Database, method_kwargs={"instance": "id"}, expected_kwargs={"instance_id": "id"}) def test_database_delete(self): self.verify_delete(self.proxy.delete_database, database.Database, False, input_path_args={"instance": "test_id"}, expected_path_args={"instance_id": "test_id"}) def test_database_delete_ignore(self): self.verify_delete(self.proxy.delete_database, database.Database, True, input_path_args={"instance": "test_id"}, expected_path_args={"instance_id": "test_id"}) def test_database_find(self): self._verify2('openstack.proxy.BaseProxy._find', self.proxy.find_database, method_args=["db", "instance"], expected_args=[database.Database, "db"], expected_kwargs={"instance_id": "instance", "ignore_missing": True}) def test_databases(self): self.verify_list(self.proxy.databases, database.Database, paginated=False, method_args=["id"], expected_kwargs={"instance_id": "id"}) def test_database_get(self): self.verify_get(self.proxy.get_database, database.Database) def test_flavor_find(self): self.verify_find(self.proxy.find_flavor, flavor.Flavor) def test_flavor_get(self): 
self.verify_get(self.proxy.get_flavor, flavor.Flavor) def test_flavors(self): self.verify_list(self.proxy.flavors, flavor.Flavor, paginated=False) def test_instance_create_attrs(self): self.verify_create(self.proxy.create_instance, instance.Instance) def test_instance_delete(self): self.verify_delete(self.proxy.delete_instance, instance.Instance, False) def test_instance_delete_ignore(self): self.verify_delete(self.proxy.delete_instance, instance.Instance, True) def test_instance_find(self): self.verify_find(self.proxy.find_instance, instance.Instance) def test_instance_get(self): self.verify_get(self.proxy.get_instance, instance.Instance) def test_instances(self): self.verify_list(self.proxy.instances, instance.Instance, paginated=False) def test_instance_update(self): self.verify_update(self.proxy.update_instance, instance.Instance) def test_user_create_attrs(self): self.verify_create(self.proxy.create_user, user.User, method_kwargs={"instance": "id"}, expected_kwargs={"instance_id": "id"}) def test_user_delete(self): self.verify_delete(self.proxy.delete_user, user.User, False, input_path_args={"instance": "id"}, expected_path_args={"instance_id": "id"}) def test_user_delete_ignore(self): self.verify_delete(self.proxy.delete_user, user.User, True, input_path_args={"instance": "id"}, expected_path_args={"instance_id": "id"}) def test_user_find(self): self._verify2('openstack.proxy.BaseProxy._find', self.proxy.find_user, method_args=["user", "instance"], expected_args=[user.User, "user"], expected_kwargs={"instance_id": "instance", "ignore_missing": True}) def test_users(self): self.verify_list(self.proxy.users, user.User, paginated=False, method_args=["test_instance"], expected_kwargs={"instance_id": "test_instance"}) def test_user_get(self): self.verify_get(self.proxy.get_user, user.User) openstacksdk-0.11.3/openstack/tests/unit/database/__init__.py0000666000175100017510000000000013236151340024264 0ustar 
zuulzuul00000000000000openstacksdk-0.11.3/openstack/tests/unit/database/test_database_service.py0000666000175100017510000000211313236151340027057 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from openstack.database import database_service class TestDatabaseService(testtools.TestCase): def test_service(self): sot = database_service.DatabaseService() self.assertEqual('database', sot.service_type) self.assertEqual('public', sot.interface) self.assertIsNone(sot.region) self.assertIsNone(sot.service_name) self.assertEqual(1, len(sot.valid_versions)) self.assertEqual('v1', sot.valid_versions[0].module) self.assertEqual('v1', sot.valid_versions[0].path) openstacksdk-0.11.3/openstack/proxy2.py0000666000175100017510000000157513236151340020125 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
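The service test above asserts the shape of a service description: a service_type, an interface, and a list of valid versions. A minimal self-contained sketch of that pattern (the `ValidVersion` namedtuple and `ServiceFilter` base here are hypothetical stand-ins — this does not import openstacksdk):

```python
from collections import namedtuple

# Hypothetical stand-in for openstack.service_filter.ValidVersion.
ValidVersion = namedtuple('ValidVersion', ['module', 'path'])


class ServiceFilter:
    """Sketch: describes how to locate one service in a cloud's catalog."""
    valid_versions = []

    def __init__(self, service_type, interface='public',
                 region=None, service_name=None):
        self.service_type = service_type
        self.interface = interface
        self.region = region
        self.service_name = service_name


class DatabaseService(ServiceFilter):
    """Each concrete service pins its service_type and supported versions."""
    valid_versions = [ValidVersion('v1', 'v1')]

    def __init__(self, **kwargs):
        kwargs['service_type'] = 'database'
        super().__init__(**kwargs)


sot = DatabaseService()
print(sot.service_type, sot.valid_versions[0].module)  # database v1
```

The unit test in `test_database_service.py` checks exactly these defaults: public interface, no region, no service name, a single `v1` version.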
from openstack import proxy from openstack import utils class Proxy(proxy.Proxy): @utils.deprecated(deprecated_in="0.10.0", removed_in="1.0", details="openstack.proxy2 is now openstack.proxy") def __init__(self, *args, **kwargs): super(Proxy, self).__init__(*args, **kwargs) openstacksdk-0.11.3/openstack/identity/0000775000175100017510000000000013236151501020126 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/identity/version.py0000666000175100017510000000244513236151340022175 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
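The `proxy2` shim above keeps the old import path alive while warning callers via `utils.deprecated`. A self-contained sketch of such a deprecation decorator (this `deprecated` is a hypothetical reimplementation for illustration, not the SDK's own):

```python
import functools
import warnings


def deprecated(deprecated_in=None, removed_in=None, details=''):
    """Sketch of a deprecation decorator: warn once per call, then
    delegate to the wrapped callable unchanged."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                '%s is deprecated as of %s and will be removed in %s. %s'
                % (func.__qualname__, deprecated_in, removed_in, details),
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


class OldProxy:
    @deprecated(deprecated_in='0.10.0', removed_in='1.0',
                details='openstack.proxy2 is now openstack.proxy')
    def __init__(self):
        self.ready = True


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    proxy = OldProxy()

print(proxy.ready, len(caught))  # True 1
```

The decorated `__init__` still runs normally, so existing code keeps working; only the warning signals the rename.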
from openstack.identity import identity_service from openstack import resource class Version(resource.Resource): resource_key = 'version' resources_key = 'versions' base_path = '/' service = identity_service.IdentityService( version=identity_service.IdentityService.UNVERSIONED ) # capabilities allow_list = True # Properties media_types = resource.Body('media-types') status = resource.Body('status') updated = resource.Body('updated') @classmethod def list(cls, session, paginated=False, **params): resp = session.get(cls.base_path, params=params) resp = resp.json() for data in resp[cls.resources_key]['values']: yield cls.existing(**data) openstacksdk-0.11.3/openstack/identity/v2/0000775000175100017510000000000013236151501020455 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/identity/v2/tenant.py0000666000175100017510000000266013236151340022327 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Tenant(resource.Resource): resource_key = 'tenant' resources_key = 'tenants' base_path = '/tenants' service = identity_service.AdminService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: The description of the tenant. *Type: string* description = resource.Body('description') #: Setting this attribute to ``False`` prevents users from authorizing #: against this tenant. 
Additionally, all pre-existing tokens authorized #: for the tenant are immediately invalidated. Re-enabling a tenant #: does not re-enable pre-existing tokens. *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: Unique tenant name. *Type: string* name = resource.Body('name') openstacksdk-0.11.3/openstack/identity/v2/extension.py0000666000175100017510000000411513236151340023047 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Extension(resource.Resource): resource_key = 'extension' resources_key = 'extensions' base_path = '/extensions' service = identity_service.IdentityService() # capabilities allow_list = True allow_get = True # Properties #: A unique identifier, which will be used for accessing the extension #: through a dedicated url ``/extensions/*alias*``. The extension #: alias uniquely identifies an extension and is prefixed by a vendor #: identifier. *Type: string* alias = resource.Body('alias', alternate_id=True) #: A description of the extension. *Type: string* description = resource.Body('description') #: Links to the documentation in various format. *Type: string* links = resource.Body('links') #: The name of the extension. *Type: string* name = resource.Body('name') #: The second unique identifier of the extension after the alias. #: It is usually a URL which will be used. 
Example: #: "http://docs.openstack.org/identity/api/ext/s3tokens/v1.0" #: *Type: string* namespace = resource.Body('namespace') #: The last time the extension has been modified (update date). updated_at = resource.Body('updated') @classmethod def list(cls, session, paginated=False, **params): resp = session.get(cls.base_path, params=params) resp = resp.json() for data in resp[cls.resources_key]['values']: yield cls.existing(**data) openstacksdk-0.11.3/openstack/identity/v2/user.py0000666000175100017510000000262613236151340022016 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class User(resource.Resource): resource_key = 'user' resources_key = 'users' base_path = '/users' service = identity_service.AdminService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: The email of this user. *Type: string* email = resource.Body('email') #: Setting this value to ``False`` prevents the user from authenticating or #: receiving authorization. Additionally, all pre-existing tokens held by #: the user are immediately invalidated. Re-enabling a user does not #: re-enable pre-existing tokens. *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: The name of this user. 
*Type: string* name = resource.Body('name') openstacksdk-0.11.3/openstack/identity/v2/__init__.py0000666000175100017510000000000013236151340022557 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/identity/v2/role.py0000666000175100017510000000246413236151340022001 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import format from openstack.identity import identity_service from openstack import resource class Role(resource.Resource): resource_key = 'role' resources_key = 'roles' base_path = '/OS-KSADM/roles' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True # Properties #: The description of the role. *Type: string* description = resource.Body('description') #: Setting this attribute to ``False`` prevents this role from being #: available in the role list. *Type: bool* is_enabled = resource.Body('enabled', type=format.BoolStr) #: Unique role name. *Type: string* name = resource.Body('name') openstacksdk-0.11.3/openstack/identity/v2/_proxy.py0000666000175100017510000002531313236151340022356 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity.v2 import extension as _extension from openstack.identity.v2 import role as _role from openstack.identity.v2 import tenant as _tenant from openstack.identity.v2 import user as _user from openstack import proxy class Proxy(proxy.BaseProxy): def extensions(self): """Retrieve a generator of extensions :returns: A generator of extension instances. :rtype: :class:`~openstack.identity.v2.extension.Extension` """ return self._list(_extension.Extension, paginated=False) def get_extension(self, extension): """Get a single extension :param extension: The value can be the ID of an extension or a :class:`~openstack.identity.v2.extension.Extension` instance. :returns: One :class:`~openstack.identity.v2.extension.Extension` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no extension can be found. """ return self._get(_extension.Extension, extension) def create_role(self, **attrs): """Create a new role from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v2.role.Role`, comprised of the properties on the Role class. :returns: The results of role creation :rtype: :class:`~openstack.identity.v2.role.Role` """ return self._create(_role.Role, **attrs) def delete_role(self, role, ignore_missing=True): """Delete a role :param role: The value can be either the ID of a role or a :class:`~openstack.identity.v2.role.Role` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the role does not exist. 
When set to ``True``, no exception will be set when attempting to delete a nonexistent role. :returns: ``None`` """ self._delete(_role.Role, role, ignore_missing=ignore_missing) def find_role(self, name_or_id, ignore_missing=True): """Find a single role :param name_or_id: The name or ID of a role. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v2.role.Role` or None """ return self._find(_role.Role, name_or_id, ignore_missing=ignore_missing) def get_role(self, role): """Get a single role :param role: The value can be the ID of a role or a :class:`~openstack.identity.v2.role.Role` instance. :returns: One :class:`~openstack.identity.v2.role.Role` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_role.Role, role) def roles(self, **query): """Retrieve a generator of roles :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of role instances. :rtype: :class:`~openstack.identity.v2.role.Role` """ return self._list(_role.Role, paginated=False, **query) def update_role(self, role, **attrs): """Update a role :param role: Either the ID of a role or a :class:`~openstack.identity.v2.role.Role` instance. :attrs kwargs: The attributes to update on the role represented by ``value``. :returns: The updated role :rtype: :class:`~openstack.identity.v2.role.Role` """ return self._update(_role.Role, role, **attrs) def create_tenant(self, **attrs): """Create a new tenant from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v2.tenant.Tenant`, comprised of the properties on the Tenant class. 
:returns: The results of tenant creation :rtype: :class:`~openstack.identity.v2.tenant.Tenant` """ return self._create(_tenant.Tenant, **attrs) def delete_tenant(self, tenant, ignore_missing=True): """Delete a tenant :param tenant: The value can be either the ID of a tenant or a :class:`~openstack.identity.v2.tenant.Tenant` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the tenant does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent tenant. :returns: ``None`` """ self._delete(_tenant.Tenant, tenant, ignore_missing=ignore_missing) def find_tenant(self, name_or_id, ignore_missing=True): """Find a single tenant :param name_or_id: The name or ID of a tenant. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v2.tenant.Tenant` or None """ return self._find(_tenant.Tenant, name_or_id, ignore_missing=ignore_missing) def get_tenant(self, tenant): """Get a single tenant :param tenant: The value can be the ID of a tenant or a :class:`~openstack.identity.v2.tenant.Tenant` instance. :returns: One :class:`~openstack.identity.v2.tenant.Tenant` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_tenant.Tenant, tenant) def tenants(self, **query): """Retrieve a generator of tenants :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of tenant instances. 
:rtype: :class:`~openstack.identity.v2.tenant.Tenant` """ return self._list(_tenant.Tenant, paginated=True, **query) def update_tenant(self, tenant, **attrs): """Update a tenant :param tenant: Either the ID of a tenant or a :class:`~openstack.identity.v2.tenant.Tenant` instance. :attrs kwargs: The attributes to update on the tenant represented by ``value``. :returns: The updated tenant :rtype: :class:`~openstack.identity.v2.tenant.Tenant` """ return self._update(_tenant.Tenant, tenant, **attrs) def create_user(self, **attrs): """Create a new user from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v2.user.User`, comprised of the properties on the User class. :returns: The results of user creation :rtype: :class:`~openstack.identity.v2.user.User` """ return self._create(_user.User, **attrs) def delete_user(self, user, ignore_missing=True): """Delete a user :param user: The value can be either the ID of a user or a :class:`~openstack.identity.v2.user.User` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the user does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent user. :returns: ``None`` """ self._delete(_user.User, user, ignore_missing=ignore_missing) def find_user(self, name_or_id, ignore_missing=True): """Find a single user :param name_or_id: The name or ID of a user. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. 
:returns: One :class:`~openstack.identity.v2.user.User` or None """ return self._find(_user.User, name_or_id, ignore_missing=ignore_missing) def get_user(self, user): """Get a single user :param user: The value can be the ID of a user or a :class:`~openstack.identity.v2.user.User` instance. :returns: One :class:`~openstack.identity.v2.user.User` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_user.User, user) def users(self, **query): """Retrieve a generator of users :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of user instances. :rtype: :class:`~openstack.identity.v2.user.User` """ return self._list(_user.User, paginated=False, **query) def update_user(self, user, **attrs): """Update a user :param user: Either the ID of a user or a :class:`~openstack.identity.v2.user.User` instance. :attrs kwargs: The attributes to update on the user represented by ``value``. :returns: The updated user :rtype: :class:`~openstack.identity.v2.user.User` """ return self._update(_user.User, user, **attrs) openstacksdk-0.11.3/openstack/identity/identity_service.py0000666000175100017510000000220313236151340024051 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
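Every method in the v2 proxy above follows the same shape: a thin public method that forwards a resource class plus either an ID or a resource instance to a shared `_create`/`_get`/`_list` helper. A minimal in-memory sketch of that dispatch pattern (`FakeResource` and the dict-backed store are illustrative inventions, not SDK classes):

```python
class FakeResource:
    """Stand-in resource; real SDK resources carry far more metadata."""
    store = {}  # class-level fake backend, keyed by id

    def __init__(self, id, **attrs):
        self.id = id
        self.attrs = attrs


class BaseProxy:
    """Sketch of the generic helpers that concrete proxies delegate to."""

    def _create(self, resource_cls, **attrs):
        obj = resource_cls(id=len(resource_cls.store) + 1, **attrs)
        resource_cls.store[obj.id] = obj
        return obj

    def _get(self, resource_cls, res):
        # Accept either an ID or a resource instance, as SDK proxies do.
        res_id = res.id if isinstance(res, resource_cls) else res
        return resource_cls.store[res_id]


class Proxy(BaseProxy):
    def create_user(self, **attrs):
        return self._create(FakeResource, **attrs)

    def get_user(self, user):
        return self._get(FakeResource, user)


proxy = Proxy()
alice = proxy.create_user(name='alice')
print(proxy.get_user(alice.id).attrs['name'])  # alice
```

Because each public method only names the resource class, adding a new resource to a proxy is a handful of one-line methods — which is why the docstrings above are so uniform.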
from openstack import service_filter class IdentityService(service_filter.ServiceFilter): """The identity service.""" valid_versions = [ service_filter.ValidVersion('v3'), service_filter.ValidVersion('v2'), ] def __init__(self, **kwargs): """Create an identity service.""" kwargs['service_type'] = 'identity' super(IdentityService, self).__init__(**kwargs) class AdminService(IdentityService): def __init__(self, **kwargs): kwargs['interface'] = service_filter.ServiceFilter.ADMIN super(AdminService, self).__init__(**kwargs) openstacksdk-0.11.3/openstack/identity/__init__.py0000666000175100017510000000000013236151340022230 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/identity/v3/0000775000175100017510000000000013236151501020456 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/identity/v3/credential.py0000666000175100017510000000332313236151340023146 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Credential(resource.Resource): resource_key = 'credential' resources_key = 'credentials' base_path = '/credentials' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'type', 'user_id', ) # Properties #: Arbitrary blob of the credential data, to be parsed according to the #: ``type``. 
*Type: string* blob = resource.Body('blob') #: References a project ID which limits the scope the credential applies #: to. This attribute is **mandatory** if the credential type is ``ec2``. #: *Type: string* project_id = resource.Body('project_id') #: Representing the credential type, such as ``ec2`` or ``cert``. #: A specific implementation may determine the list of supported types. #: *Type: string* type = resource.Body('type') #: References the user ID which owns the credential. *Type: string* user_id = resource.Body('user_id') openstacksdk-0.11.3/openstack/identity/v3/role_assignment.py0000666000175100017510000000307213236151340024226 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class RoleAssignment(resource.Resource): resource_key = 'role_assignment' resources_key = 'role_assignments' base_path = '/role_assignments' service = identity_service.IdentityService() # capabilities allow_list = True _query_mapping = resource.QueryParameters( 'group_id', 'role_id', 'scope_domain_id', 'scope_project_id', 'user_id', 'effective', 'include_names', 'include_subtree' ) # Properties #: The links for the service resource. 
links = resource.Body('links') #: The role (dictionary contains only id) *Type: dict* role = resource.Body('role', type=dict) #: The scope (either domain or group dictionary contains id) *Type: dict* scope = resource.Body('scope', type=dict) #: The user (dictionary contains only id) *Type: dict* user = resource.Body('user', type=dict) #: The group (dictionary contains only id) *Type: dict* group = resource.Body('group', type=dict) openstacksdk-0.11.3/openstack/identity/v3/region.py0000666000175100017510000000250613236151340022321 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Region(resource.Resource): resource_key = 'region' resources_key = 'regions' base_path = '/regions' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'parent_region_id', ) # Properties #: User-facing description of the region. *Type: string* description = resource.Body('description') #: The links for the region resource. links = resource.Body('links') #: ID of parent region, if any. 
*Type: string* parent_region_id = resource.Body('parent_region_id') openstacksdk-0.11.3/openstack/identity/v3/trust.py0000666000175100017510000000705213236151340022220 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Trust(resource.Resource): resource_key = 'trust' resources_key = 'trusts' base_path = '/OS-TRUST/trusts' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'trustor_user_id', 'trustee_user_id') # Properties #: A boolean indicating whether the trust can be issued by the trustee as #: a regular trust. Default is ``False``. allow_redelegation = resource.Body('allow_redelegation', type=bool) #: If ``impersonation`` is set to ``False``, then the token's ``user`` #: attribute will represent that of the trustee. *Type: bool* is_impersonation = resource.Body('impersonation', type=bool) #: Specifies the expiration time of the trust. A trust may be revoked #: ahead of expiration. If the value represents a time in the past, #: the trust is deactivated. expires_at = resource.Body('expires_at') #: If ``impersonation`` is set to true, then the ``user`` attribute #: of tokens that are generated based on the trust will represent #: that of the trustor rather than the trustee, thus allowing the trustee #: to impersonate the trustor.
#: If ``impersonation`` is set to ``False``, then the token's ``user`` #: attribute will represent that of the trustee. *Type: bool* is_impersonation = resource.Body('impersonation', type=bool) #: Links for the trust resource. links = resource.Body('links') #: ID of the project upon which the trustor is #: delegating authorization. *Type: string* project_id = resource.Body('project_id') #: A role links object that includes 'next', 'previous', and self links #: for roles. role_links = resource.Body('role_links') #: Specifies the subset of the trustor's roles on the ``project_id`` #: to be granted to the trustee when the token is consumed. The #: trustor must already be granted these roles in the project referenced #: by the ``project_id`` attribute. *Type: list* roles = resource.Body('roles') #: Returned with a redelegated trust, provides information about the #: predecessor in the trust chain. redelegated_trust_id = resource.Body('redelegated_trust_id') #: Redelegation count redelegation_count = resource.Body('redelegation_count') #: How many times the trust can be used to obtain a token. The value is #: decreased each time a token is issued through the trust. Once it #: reaches zero, no further tokens will be issued through the trust. remaining_uses = resource.Body('remaining_uses') #: Represents the user ID who is capable of consuming the trust. #: *Type: string* trustee_user_id = resource.Body('trustee_user_id') #: Represents the user ID who created the trust, and whose authorization is #: being delegated. *Type: string* trustor_user_id = resource.Body('trustor_user_id') openstacksdk-0.11.3/openstack/identity/v3/role_domain_group_assignment.py0000666000175100017510000000237413236151340026775 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class RoleDomainGroupAssignment(resource.Resource): resource_key = 'role' resources_key = 'roles' base_path = '/domains/%(domain_id)s/groups/%(group_id)s/roles' service = identity_service.IdentityService() # capabilities allow_list = True # Properties #: name of the role *Type: string* name = resource.Body('name') #: The links for the service resource. links = resource.Body('links') #: The ID of the domain to list assignment from. *Type: string* domain_id = resource.URI('domain_id') #: The ID of the group to list assignment from. *Type: string* group_id = resource.URI('group_id') openstacksdk-0.11.3/openstack/identity/v3/project.py0000666000175100017510000001052113236151364022506 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
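`RoleDomainGroupAssignment` above uses a templated `base_path` whose `%(domain_id)s`/`%(group_id)s` placeholders are filled from `resource.URI` attributes at request time. A tiny sketch of that expansion (the `expand` helper is an illustration of the mechanism, not an SDK function):

```python
# Templated base_path, as on RoleDomainGroupAssignment.
base_path = '/domains/%(domain_id)s/groups/%(group_id)s/roles'


def expand(path, **uri_attrs):
    """Fill %(name)s placeholders with per-request URI attributes."""
    return path % uri_attrs


url = expand(base_path, domain_id='d1', group_id='g2')
print(url)  # /domains/d1/groups/g2/roles
```

This is the same printf-style substitution seen earlier in `database.Database.base_path` (`/instances/%(instance_id)s/databases`), which is why the proxy tests pass `expected_path_args={"instance_id": ...}`.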
from openstack.identity import identity_service from openstack import resource from openstack import utils class Project(resource.Resource): resource_key = 'project' resources_key = 'projects' base_path = '/projects' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'domain_id', 'is_domain', 'name', 'parent_id', is_enabled='enabled', ) # Properties #: The description of the project. *Type: string* description = resource.Body('description') #: References the domain ID which owns the project; if a domain ID is not #: specified by the client, the Identity service implementation will #: default it to the domain ID to which the client's token is scoped. #: *Type: string* domain_id = resource.Body('domain_id') #: Indicates whether the project also acts as a domain. If set to True, #: the project acts as both a project and a domain. Default is False. #: New in version 3.6 is_domain = resource.Body('is_domain', type=bool) #: Setting this attribute to ``False`` prevents users from authorizing #: against this project. Additionally, all pre-existing tokens authorized #: for the project are immediately invalidated. Re-enabling a project #: does not re-enable pre-existing tokens. *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: Unique project name, within the owning domain. *Type: string* name = resource.Body('name') #: The ID of the parent of the project. 
#: New in version 3.4 parent_id = resource.Body('parent_id') def assign_role_to_user(self, session, user, role): """Assign role to user on project""" url = utils.urljoin(self.base_path, self.id, 'users', user.id, 'roles', role.id) resp = session.put(url) if resp.status_code == 204: return True return False def validate_user_has_role(self, session, user, role): """Validates that a user has a role on a project""" url = utils.urljoin(self.base_path, self.id, 'users', user.id, 'roles', role.id) resp = session.head(url) if resp.status_code == 204: return True return False def unassign_role_from_user(self, session, user, role): """Unassigns a role from a user on a project""" url = utils.urljoin(self.base_path, self.id, 'users', user.id, 'roles', role.id) resp = session.delete(url) if resp.status_code == 204: return True return False def assign_role_to_group(self, session, group, role): """Assign role to group on project""" url = utils.urljoin(self.base_path, self.id, 'groups', group.id, 'roles', role.id) resp = session.put(url) if resp.status_code == 204: return True return False def validate_group_has_role(self, session, group, role): """Validates that a group has a role on a project""" url = utils.urljoin(self.base_path, self.id, 'groups', group.id, 'roles', role.id) resp = session.head(url) if resp.status_code == 204: return True return False def unassign_role_from_group(self, session, group, role): """Unassigns a role from a group on a project""" url = utils.urljoin(self.base_path, self.id, 'groups', group.id, 'roles', role.id) resp = session.delete(url) if resp.status_code == 204: return True return False openstacksdk-0.11.3/openstack/identity/v3/policy.py0000666000175100017510000000262213236151340022334 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Policy(resource.Resource): resource_key = 'policy' resources_key = 'policies' base_path = '/policies' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' # Properties #: The policy rule set itself, as a serialized blob. *Type: string* blob = resource.Body('blob') #: The links for the policy resource. links = resource.Body('links') #: The ID for the project. project_id = resource.Body('project_id') #: The MIME Media Type of the serialized policy blob. *Type: string* type = resource.Body('type') #: The ID of the user who owns the policy user_id = resource.Body('user_id') openstacksdk-0.11.3/openstack/identity/v3/user.py0000666000175100017510000000611713236151340022016 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
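The ``blob`` attribute of the ``Policy`` resource above carries the rule set as an opaque serialized string, with ``type`` recording its MIME media type. A minimal sketch of building such a body with JSON serialization; the rule name used here is purely illustrative, not taken from any real policy file:

```python
import json

# Hypothetical rule set; real policy rules depend on the consuming service.
rules = {"identity:list_users": "role:admin"}

# The policy body carries the rules as an opaque string blob, with the
# serialization format recorded in the MIME media type field.
policy_body = {
    "blob": json.dumps(rules),
    "type": "application/json",
}

# Round-tripping the blob recovers the original rule set.
assert json.loads(policy_body["blob"]) == rules
```

Because the blob is opaque to Keystone, any serialization can be used as long as ``type`` describes it.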
from openstack.identity import identity_service from openstack import resource class User(resource.Resource): resource_key = 'user' resources_key = 'users' base_path = '/users' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'domain_id', 'name', 'password_expires_at', is_enabled='enabled', ) # Properties #: References the user's default project ID against which to authorize, #: if the API user does not explicitly specify one when creating a token. #: Setting this attribute does not grant any actual authorization on the #: project, and is merely provided for the user's convenience. #: Therefore, the referenced project does not need to exist within the #: user's domain. #: #: *New in version 3.1* If the user does not have authorization to #: their default project, the default project will be ignored at token #: creation. *Type: string* default_project_id = resource.Body('default_project_id') #: The description of this user. *Type: string* description = resource.Body('description') #: References the domain ID which owns the user; if a domain ID is not #: specified by the client, the Identity service implementation will #: default it to the domain ID to which the client's token is scoped. #: *Type: string* domain_id = resource.Body('domain_id') #: The email of this user. *Type: string* email = resource.Body('email') #: Setting this value to ``False`` prevents the user from authenticating or #: receiving authorization. Additionally, all pre-existing tokens held by #: the user are immediately invalidated. Re-enabling a user does not #: re-enable pre-existing tokens. *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: The links for the user resource. links = resource.Body('links') #: Unique user name, within the owning domain. 
*Type: string* name = resource.Body('name') #: The default form of credential used during authentication. #: *Type: string* password = resource.Body('password') #: The date and time when the password expires. The time zone is UTC. #: A None value means the password never expires. #: This is a response object attribute, not valid for requests. #: *New in version 3.7* password_expires_at = resource.Body('password_expires_at') openstacksdk-0.11.3/openstack/identity/v3/domain.py0000666000175100017510000000757513236151340022310 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource from openstack import utils class Domain(resource.Resource): resource_key = 'domain' resources_key = 'domains' base_path = '/domains' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'name', is_enabled='enabled', ) # Properties #: The description of this domain. *Type: string* description = resource.Body('description') #: Setting this attribute to ``False`` prevents users from authorizing #: against this domain or any projects owned by this domain, and prevents #: users owned by this domain from authenticating or receiving any other #: authorization.
Additionally, all pre-existing tokens applicable #: to the above entities are immediately invalidated. #: Re-enabling a domain does not re-enable pre-existing tokens. #: *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: The globally unique name of this domain. *Type: string* name = resource.Body('name') #: The links related to the domain resource. links = resource.Body('links') def assign_role_to_user(self, session, user, role): """Assign role to user on domain""" url = utils.urljoin(self.base_path, self.id, 'users', user.id, 'roles', role.id) resp = session.put(url) if resp.status_code == 204: return True return False def validate_user_has_role(self, session, user, role): """Validates that a user has a role on a domain""" url = utils.urljoin(self.base_path, self.id, 'users', user.id, 'roles', role.id) resp = session.head(url) if resp.status_code == 204: return True return False def unassign_role_from_user(self, session, user, role): """Unassigns a role from a user on a domain""" url = utils.urljoin(self.base_path, self.id, 'users', user.id, 'roles', role.id) resp = session.delete(url) if resp.status_code == 204: return True return False def assign_role_to_group(self, session, group, role): """Assign role to group on domain""" url = utils.urljoin(self.base_path, self.id, 'groups', group.id, 'roles', role.id) resp = session.put(url) if resp.status_code == 204: return True return False def validate_group_has_role(self, session, group, role): """Validates that a group has a role on a domain""" url = utils.urljoin(self.base_path, self.id, 'groups', group.id, 'roles', role.id) resp = session.head(url) if resp.status_code == 204: return True return False def unassign_role_from_group(self, session, group, role): """Unassigns a role from a group on a domain""" url = utils.urljoin(self.base_path, self.id, 'groups', group.id, 'roles', role.id) resp = session.delete(url) if resp.status_code == 204: return True return False
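The ``Domain`` role-assignment helpers above all target URLs of the shape ``/domains/{domain_id}/users|groups/{member_id}/roles/{role_id}``. A stand-alone sketch of that path construction — ``role_assignment_url`` is a hypothetical helper mirroring what the ``utils.urljoin`` calls produce, not part of the SDK:

```python
def role_assignment_url(base_path, domain_id, kind, member_id, role_id):
    # kind is 'users' or 'groups', matching the helper methods above.
    parts = [base_path.strip('/'), domain_id, kind, member_id, 'roles', role_id]
    return '/' + '/'.join(parts)

# The PUT (grant), HEAD (check), and DELETE (revoke) helpers all hit
# the same resource URL; only the HTTP verb differs.
url = role_assignment_url('/domains', 'default', 'users', 'u123', 'r456')
# -> '/domains/default/users/u123/roles/r456'
```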
openstacksdk-0.11.3/openstack/identity/v3/service.py0000666000175100017510000000352513236151340022500 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Service(resource.Resource): resource_key = 'service' resources_key = 'services' base_path = '/services' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'type', ) # Properties #: User-facing description of the service. *Type: string* description = resource.Body('description') #: Setting this value to ``False`` prevents the service and #: its endpoints from appearing in the service catalog. *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: The links for the service resource. links = resource.Body('links') #: User-facing name of the service. *Type: string* name = resource.Body('name') #: Describes the API implemented by the service. The following values are #: recognized within the OpenStack ecosystem: ``compute``, ``image``, #: ``ec2``, ``identity``, ``volume``, ``network``. To support non-core and #: future projects, the value should not be validated against this list. 
#: *Type: string* type = resource.Body('type') openstacksdk-0.11.3/openstack/identity/v3/__init__.py0000666000175100017510000000000013236151340022560 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/identity/v3/role.py0000666000175100017510000000225113236151340021774 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class Role(resource.Resource): resource_key = 'role' resources_key = 'roles' base_path = '/roles' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True _query_mapping = resource.QueryParameters( 'name', 'domain_id') # Properties #: Unique role name, within the owning domain. *Type: string* name = resource.Body('name') #: The links for the service resource. links = resource.Body('links') openstacksdk-0.11.3/openstack/identity/v3/role_domain_user_assignment.py0000666000175100017510000000236613236151340026620 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class RoleDomainUserAssignment(resource.Resource): resource_key = 'role' resources_key = 'roles' base_path = '/domains/%(domain_id)s/users/%(user_id)s/roles' service = identity_service.IdentityService() # capabilities allow_list = True # Properties #: name of the role *Type: string* name = resource.Body('name') #: The links for the service resource. links = resource.Body('links') #: The ID of the domain to list assignment from. *Type: string* domain_id = resource.URI('domain_id') #: The ID of the user to list assignment from. *Type: string* user_id = resource.URI('user_id') openstacksdk-0.11.3/openstack/identity/v3/group.py0000666000175100017510000000300613236151340022166 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.identity import identity_service from openstack import resource class Group(resource.Resource): resource_key = 'group' resources_key = 'groups' base_path = '/groups' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'domain_id', 'name', ) # Properties #: The description of this group. *Type: string* description = resource.Body('description') #: References the domain ID which owns the group; if a domain ID is not #: specified by the client, the Identity service implementation will #: default it to the domain ID to which the client's token is scoped. #: *Type: string* domain_id = resource.Body('domain_id') #: Unique group name, within the owning domain. *Type: string* name = resource.Body('name') openstacksdk-0.11.3/openstack/identity/v3/endpoint.py0000666000175100017510000000433513236151340022660 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.identity import identity_service from openstack import resource class Endpoint(resource.Resource): resource_key = 'endpoint' resources_key = 'endpoints' base_path = '/endpoints' service = identity_service.IdentityService() # capabilities allow_create = True allow_get = True allow_update = True allow_delete = True allow_list = True update_method = 'PATCH' _query_mapping = resource.QueryParameters( 'interface', 'service_id', ) # Properties #: Describes the interface of the endpoint according to one of the #: following values: #: #: - `public`: intended for consumption by end users, generally on a #: publicly available network interface #: - `internal`: not intended for consumption by end users, generally on an #: unmetered internal network interface #: - `admin`: intended only for consumption by those needing administrative #: access to the service, generally on a secure network interface #: #: *Type: string* interface = resource.Body('interface') #: Setting this value to ``False`` prevents the endpoint from appearing #: in the service catalog. *Type: bool* is_enabled = resource.Body('enabled', type=bool) #: The links for the region resource. links = resource.Body('links') #: Represents the containing region ID of the service endpoint. #: *New in v3.2* *Type: string* region_id = resource.Body('region_id') #: References the service ID to which the endpoint belongs. *Type: string* service_id = resource.Body('service_id') #: Fully qualified URL of the service endpoint. *Type: string* url = resource.Body('url') openstacksdk-0.11.3/openstack/identity/v3/role_project_group_assignment.py0000666000175100017510000000240213236151340027164 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack.identity import identity_service from openstack import resource class RoleProjectGroupAssignment(resource.Resource): resource_key = 'role' resources_key = 'roles' base_path = '/projects/%(project_id)s/groups/%(group_id)s/roles' service = identity_service.IdentityService() # capabilities allow_list = True # Properties #: name of the role *Type: string* name = resource.Body('name') #: The links for the service resource. links = resource.Body('links') #: The ID of the project to list assignment from. *Type: string* project_id = resource.URI('project_id') #: The ID of the group to list assignment from. *Type: string* group_id = resource.URI('group_id') openstacksdk-0.11.3/openstack/identity/v3/role_project_user_assignment.py0000666000175100017510000000237413236151340027016 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack.identity import identity_service from openstack import resource class RoleProjectUserAssignment(resource.Resource): resource_key = 'role' resources_key = 'roles' base_path = '/projects/%(project_id)s/users/%(user_id)s/roles' service = identity_service.IdentityService() # capabilities allow_list = True # Properties #: name of the role *Type: string* name = resource.Body('name') #: The links for the service resource. links = resource.Body('links') #: The ID of the project to list assignment from. *Type: string* project_id = resource.URI('project_id') #: The ID of the user to list assignment from. *Type: string* user_id = resource.URI('user_id') openstacksdk-0.11.3/openstack/identity/v3/_proxy.py0000666000175100017510000011761413236151364022373 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
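The four role-assignment resource classes in this module differ only in their ``base_path`` templates; the ``resource.URI`` properties supply the values that fill the ``%(...)s`` placeholders, effectively via ordinary Python %-formatting. A sketch of that interpolation using the template declared on ``RoleProjectUserAssignment`` (the IDs are illustrative):

```python
# base_path template as declared on RoleProjectUserAssignment.
base_path = '/projects/%(project_id)s/users/%(user_id)s/roles'

# URI property values are substituted with plain %-formatting.
uri_values = {'project_id': 'p1', 'user_id': 'u2'}
path = base_path % uri_values
assert path == '/projects/p1/users/u2/roles'
```

Listing against the resulting path yields the subset of roles the user holds on that project, which is why these classes only set ``allow_list``.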
import openstack.exceptions as exception from openstack.identity.v3 import credential as _credential from openstack.identity.v3 import domain as _domain from openstack.identity.v3 import endpoint as _endpoint from openstack.identity.v3 import group as _group from openstack.identity.v3 import policy as _policy from openstack.identity.v3 import project as _project from openstack.identity.v3 import region as _region from openstack.identity.v3 import role as _role from openstack.identity.v3 import role_assignment as _role_assignment from openstack.identity.v3 import role_domain_group_assignment \ as _role_domain_group_assignment from openstack.identity.v3 import role_domain_user_assignment \ as _role_domain_user_assignment from openstack.identity.v3 import role_project_group_assignment \ as _role_project_group_assignment from openstack.identity.v3 import role_project_user_assignment \ as _role_project_user_assignment from openstack.identity.v3 import service as _service from openstack.identity.v3 import trust as _trust from openstack.identity.v3 import user as _user from openstack import proxy class Proxy(proxy.BaseProxy): def create_credential(self, **attrs): """Create a new credential from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v3.credential.Credential`, comprised of the properties on the Credential class. :returns: The results of credential creation :rtype: :class:`~openstack.identity.v3.credential.Credential` """ return self._create(_credential.Credential, **attrs) def delete_credential(self, credential, ignore_missing=True): """Delete a credential :param credential: The value can be either the ID of a credential or a :class:`~openstack.identity.v3.credential.Credential` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the credential does not exist. 
When set to ``True``, no exception will be set when attempting to delete a nonexistent credential. :returns: ``None`` """ self._delete(_credential.Credential, credential, ignore_missing=ignore_missing) def find_credential(self, name_or_id, ignore_missing=True): """Find a single credential :param name_or_id: The name or ID of a credential. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v3.credential.Credential` or None """ return self._find(_credential.Credential, name_or_id, ignore_missing=ignore_missing) def get_credential(self, credential): """Get a single credential :param credential: The value can be the ID of a credential or a :class:`~openstack.identity.v3.credential.Credential` instance. :returns: One :class:`~openstack.identity.v3.credential.Credential` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_credential.Credential, credential) def credentials(self, **query): """Retrieve a generator of credentials :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of credentials instances. :rtype: :class:`~openstack.identity.v3.credential.Credential` """ # TODO(briancurtin): This is paginated but requires base list changes. return self._list(_credential.Credential, paginated=False, **query) def update_credential(self, credential, **attrs): """Update a credential :param credential: Either the ID of a credential or a :class:`~openstack.identity.v3.credential.Credential` instance. :attrs kwargs: The attributes to update on the credential represented by ``value``. 
:returns: The updated credential :rtype: :class:`~openstack.identity.v3.credential.Credential` """ return self._update(_credential.Credential, credential, **attrs) def create_domain(self, **attrs): """Create a new domain from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v3.domain.Domain`, comprised of the properties on the Domain class. :returns: The results of domain creation :rtype: :class:`~openstack.identity.v3.domain.Domain` """ return self._create(_domain.Domain, **attrs) def delete_domain(self, domain, ignore_missing=True): """Delete a domain :param domain: The value can be either the ID of a domain or a :class:`~openstack.identity.v3.domain.Domain` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the domain does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent domain. :returns: ``None`` """ self._delete(_domain.Domain, domain, ignore_missing=ignore_missing) def find_domain(self, name_or_id, ignore_missing=True): """Find a single domain :param name_or_id: The name or ID of a domain. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v3.domain.Domain` or None """ return self._find(_domain.Domain, name_or_id, ignore_missing=ignore_missing) def get_domain(self, domain): """Get a single domain :param domain: The value can be the ID of a domain or a :class:`~openstack.identity.v3.domain.Domain` instance. :returns: One :class:`~openstack.identity.v3.domain.Domain` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. 
""" return self._get(_domain.Domain, domain) def domains(self, **query): """Retrieve a generator of domains :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of domain instances. :rtype: :class:`~openstack.identity.v3.domain.Domain` """ # TODO(briancurtin): This is paginated but requires base list changes. return self._list(_domain.Domain, paginated=False, **query) def update_domain(self, domain, **attrs): """Update a domain :param domain: Either the ID of a domain or a :class:`~openstack.identity.v3.domain.Domain` instance. :attrs kwargs: The attributes to update on the domain represented by ``value``. :returns: The updated domain :rtype: :class:`~openstack.identity.v3.domain.Domain` """ return self._update(_domain.Domain, domain, **attrs) def create_endpoint(self, **attrs): """Create a new endpoint from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v3.endpoint.Endpoint`, comprised of the properties on the Endpoint class. :returns: The results of endpoint creation :rtype: :class:`~openstack.identity.v3.endpoint.Endpoint` """ return self._create(_endpoint.Endpoint, **attrs) def delete_endpoint(self, endpoint, ignore_missing=True): """Delete an endpoint :param endpoint: The value can be either the ID of an endpoint or a :class:`~openstack.identity.v3.endpoint.Endpoint` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the endpoint does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent endpoint. :returns: ``None`` """ self._delete(_endpoint.Endpoint, endpoint, ignore_missing=ignore_missing) def find_endpoint(self, name_or_id, ignore_missing=True): """Find a single endpoint :param name_or_id: The name or ID of a endpoint. 
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v3.endpoint.Endpoint` or None """ return self._find(_endpoint.Endpoint, name_or_id, ignore_missing=ignore_missing) def get_endpoint(self, endpoint): """Get a single endpoint :param endpoint: The value can be the ID of an endpoint or a :class:`~openstack.identity.v3.endpoint.Endpoint` instance. :returns: One :class:`~openstack.identity.v3.endpoint.Endpoint` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_endpoint.Endpoint, endpoint) def endpoints(self, **query): """Retrieve a generator of endpoints :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of endpoint instances. :rtype: :class:`~openstack.identity.v3.endpoint.Endpoint` """ # TODO(briancurtin): This is paginated but requires base list changes. return self._list(_endpoint.Endpoint, paginated=False, **query) def update_endpoint(self, endpoint, **attrs): """Update an endpoint :param endpoint: Either the ID of an endpoint or a :class:`~openstack.identity.v3.endpoint.Endpoint` instance. :attrs kwargs: The attributes to update on the endpoint represented by ``value``. :returns: The updated endpoint :rtype: :class:`~openstack.identity.v3.endpoint.Endpoint` """ return self._update(_endpoint.Endpoint, endpoint, **attrs) def create_group(self, **attrs): """Create a new group from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v3.group.Group`, comprised of the properties on the Group class.
:returns: The results of group creation :rtype: :class:`~openstack.identity.v3.group.Group` """ return self._create(_group.Group, **attrs) def delete_group(self, group, ignore_missing=True): """Delete a group :param group: The value can be either the ID of a group or a :class:`~openstack.identity.v3.group.Group` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the group does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent group. :returns: ``None`` """ self._delete(_group.Group, group, ignore_missing=ignore_missing) def find_group(self, name_or_id, ignore_missing=True): """Find a single group :param name_or_id: The name or ID of a group. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v3.group.Group` or None """ return self._find(_group.Group, name_or_id, ignore_missing=ignore_missing) def get_group(self, group): """Get a single group :param group: The value can be the ID of a group or a :class:`~openstack.identity.v3.group.Group` instance. :returns: One :class:`~openstack.identity.v3.group.Group` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_group.Group, group) def groups(self, **query): """Retrieve a generator of groups :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of group instances. :rtype: :class:`~openstack.identity.v3.group.Group` """ # TODO(briancurtin): This is paginated but requires base list changes. 
        return self._list(_group.Group, paginated=False, **query)

    def update_group(self, group, **attrs):
        """Update a group

        :param group: Either the ID of a group or a
            :class:`~openstack.identity.v3.group.Group` instance.
        :param dict attrs: The attributes to update on the group
            represented by ``group``.

        :returns: The updated group
        :rtype: :class:`~openstack.identity.v3.group.Group`
        """
        return self._update(_group.Group, group, **attrs)

    def create_policy(self, **attrs):
        """Create a new policy from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.identity.v3.policy.Policy`, comprised of the
            properties on the Policy class.

        :returns: The results of policy creation
        :rtype: :class:`~openstack.identity.v3.policy.Policy`
        """
        return self._create(_policy.Policy, **attrs)

    def delete_policy(self, policy, ignore_missing=True):
        """Delete a policy

        :param policy: The value can be either the ID of a policy or a
            :class:`~openstack.identity.v3.policy.Policy` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the policy does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            policy.

        :returns: ``None``
        """
        self._delete(_policy.Policy, policy, ignore_missing=ignore_missing)

    def find_policy(self, name_or_id, ignore_missing=True):
        """Find a single policy

        :param name_or_id: The name or ID of a policy.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.

        :returns: One :class:`~openstack.identity.v3.policy.Policy` or None
        """
        return self._find(_policy.Policy, name_or_id,
                          ignore_missing=ignore_missing)

    def get_policy(self, policy):
        """Get a single policy

        :param policy: The value can be the ID of a policy or a
            :class:`~openstack.identity.v3.policy.Policy` instance.

        :returns: One :class:`~openstack.identity.v3.policy.Policy`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_policy.Policy, policy)

    def policies(self, **query):
        """Retrieve a generator of policies

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned.

        :returns: A generator of policy instances.
        :rtype: :class:`~openstack.identity.v3.policy.Policy`
        """
        # TODO(briancurtin): This is paginated but requires base list changes.
        return self._list(_policy.Policy, paginated=False, **query)

    def update_policy(self, policy, **attrs):
        """Update a policy

        :param policy: Either the ID of a policy or a
            :class:`~openstack.identity.v3.policy.Policy` instance.
        :param dict attrs: The attributes to update on the policy
            represented by ``policy``.

        :returns: The updated policy
        :rtype: :class:`~openstack.identity.v3.policy.Policy`
        """
        return self._update(_policy.Policy, policy, **attrs)

    def create_project(self, **attrs):
        """Create a new project from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.identity.v3.project.Project`, comprised of
            the properties on the Project class.

        :returns: The results of project creation
        :rtype: :class:`~openstack.identity.v3.project.Project`
        """
        return self._create(_project.Project, **attrs)

    def delete_project(self, project, ignore_missing=True):
        """Delete a project

        :param project: The value can be either the ID of a project or a
            :class:`~openstack.identity.v3.project.Project` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the project does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            project.

        :returns: ``None``
        """
        self._delete(_project.Project, project,
                     ignore_missing=ignore_missing)

    def find_project(self, name_or_id, ignore_missing=True, **attrs):
        """Find a single project

        :param name_or_id: The name or ID of a project.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.

        :returns: One :class:`~openstack.identity.v3.project.Project`
            or None
        """
        return self._find(_project.Project, name_or_id,
                          ignore_missing=ignore_missing, **attrs)

    def get_project(self, project):
        """Get a single project

        :param project: The value can be the ID of a project or a
            :class:`~openstack.identity.v3.project.Project` instance.

        :returns: One :class:`~openstack.identity.v3.project.Project`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_project.Project, project)

    def projects(self, **query):
        """Retrieve a generator of projects

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned.

        :returns: A generator of project instances.
        :rtype: :class:`~openstack.identity.v3.project.Project`
        """
        # TODO(briancurtin): This is paginated but requires base list changes.
        return self._list(_project.Project, paginated=False, **query)

    def update_project(self, project, **attrs):
        """Update a project

        :param project: Either the ID of a project or a
            :class:`~openstack.identity.v3.project.Project` instance.
        :param dict attrs: The attributes to update on the project
            represented by ``project``.

        :returns: The updated project
        :rtype: :class:`~openstack.identity.v3.project.Project`
        """
        return self._update(_project.Project, project, **attrs)

    def create_service(self, **attrs):
        """Create a new service from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.identity.v3.service.Service`, comprised of
            the properties on the Service class.

        :returns: The results of service creation
        :rtype: :class:`~openstack.identity.v3.service.Service`
        """
        return self._create(_service.Service, **attrs)

    def delete_service(self, service, ignore_missing=True):
        """Delete a service

        :param service: The value can be either the ID of a service or a
            :class:`~openstack.identity.v3.service.Service` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the service does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            service.

        :returns: ``None``
        """
        self._delete(_service.Service, service,
                     ignore_missing=ignore_missing)

    def find_service(self, name_or_id, ignore_missing=True):
        """Find a single service

        :param name_or_id: The name or ID of a service.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.

        :returns: One :class:`~openstack.identity.v3.service.Service`
            or None
        """
        return self._find(_service.Service, name_or_id,
                          ignore_missing=ignore_missing)

    def get_service(self, service):
        """Get a single service

        :param service: The value can be the ID of a service or a
            :class:`~openstack.identity.v3.service.Service` instance.

        :returns: One :class:`~openstack.identity.v3.service.Service`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
""" return self._get(_service.Service, service) def services(self, **query): """Retrieve a generator of services :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of service instances. :rtype: :class:`~openstack.identity.v3.service.Service` """ # TODO(briancurtin): This is paginated but requires base list changes. return self._list(_service.Service, paginated=False, **query) def update_service(self, service, **attrs): """Update a service :param service: Either the ID of a service or a :class:`~openstack.identity.v3.service.Service` instance. :attrs kwargs: The attributes to update on the service represented by ``value``. :returns: The updated service :rtype: :class:`~openstack.identity.v3.service.Service` """ return self._update(_service.Service, service, **attrs) def create_user(self, **attrs): """Create a new user from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v3.user.User`, comprised of the properties on the User class. :returns: The results of user creation :rtype: :class:`~openstack.identity.v3.user.User` """ return self._create(_user.User, **attrs) def delete_user(self, user, ignore_missing=True): """Delete a user :param user: The value can be either the ID of a user or a :class:`~openstack.identity.v3.user.User` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the user does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent user. :returns: ``None`` """ self._delete(_user.User, user, ignore_missing=ignore_missing) def find_user(self, name_or_id, ignore_missing=True, **attrs): """Find a single user :param name_or_id: The name or ID of a user. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. 
When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.identity.v3.user.User` or None """ return self._find(_user.User, name_or_id, ignore_missing=ignore_missing, **attrs) def get_user(self, user): """Get a single user :param user: The value can be the ID of a user or a :class:`~openstack.identity.v3.user.User` instance. :returns: One :class:`~openstack.identity.v3.user.User` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_user.User, user) def users(self, **query): """Retrieve a generator of users :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of user instances. :rtype: :class:`~openstack.identity.v3.user.User` """ # TODO(briancurtin): This is paginated but requires base list changes. return self._list(_user.User, paginated=False, **query) def update_user(self, user, **attrs): """Update a user :param user: Either the ID of a user or a :class:`~openstack.identity.v3.user.User` instance. :attrs kwargs: The attributes to update on the user represented by ``value``. :returns: The updated user :rtype: :class:`~openstack.identity.v3.user.User` """ return self._update(_user.User, user, **attrs) def create_trust(self, **attrs): """Create a new trust from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.identity.v3.trust.Trust`, comprised of the properties on the Trust class. :returns: The results of trust creation :rtype: :class:`~openstack.identity.v3.trust.Trust` """ return self._create(_trust.Trust, **attrs) def delete_trust(self, trust, ignore_missing=True): """Delete a trust :param trust: The value can be either the ID of a trust or a :class:`~openstack.identity.v3.trust.Trust` instance. 
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the trust does not exist. When set to ``True``, no exception
            will be raised when attempting to delete a nonexistent trust.

        :returns: ``None``
        """
        self._delete(_trust.Trust, trust, ignore_missing=ignore_missing)

    def find_trust(self, name_or_id, ignore_missing=True):
        """Find a single trust

        :param name_or_id: The name or ID of a trust.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the resource does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent resource.

        :returns: One :class:`~openstack.identity.v3.trust.Trust` or None
        """
        return self._find(_trust.Trust, name_or_id,
                          ignore_missing=ignore_missing)

    def get_trust(self, trust):
        """Get a single trust

        :param trust: The value can be the ID of a trust or a
            :class:`~openstack.identity.v3.trust.Trust` instance.

        :returns: One :class:`~openstack.identity.v3.trust.Trust`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no resource can be found.
        """
        return self._get(_trust.Trust, trust)

    def trusts(self, **query):
        """Retrieve a generator of trusts

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned.

        :returns: A generator of trust instances.
        :rtype: :class:`~openstack.identity.v3.trust.Trust`
        """
        # TODO(briancurtin): This is paginated but requires base list changes.
        return self._list(_trust.Trust, paginated=False, **query)

    def create_region(self, **attrs):
        """Create a new region from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.identity.v3.region.Region`, comprised of the
            properties on the Region class.

        :returns: The results of region creation.
        :rtype: :class:`~openstack.identity.v3.region.Region`
        """
        return self._create(_region.Region, **attrs)

    def delete_region(self, region, ignore_missing=True):
        """Delete a region

        :param region: The value can be either the ID of a region or a
            :class:`~openstack.identity.v3.region.Region` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the region does not exist. When set to ``True``, no
            exception will be raised when attempting to delete a nonexistent
            region.

        :returns: ``None``
        """
        self._delete(_region.Region, region, ignore_missing=ignore_missing)

    def find_region(self, name_or_id, ignore_missing=True):
        """Find a single region

        :param name_or_id: The name or ID of a region.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the region does not exist. When set to ``True``, None will
            be returned when attempting to find a nonexistent region.

        :returns: One :class:`~openstack.identity.v3.region.Region` or None
        """
        return self._find(_region.Region, name_or_id,
                          ignore_missing=ignore_missing)

    def get_region(self, region):
        """Get a single region

        :param region: The value can be the ID of a region or a
            :class:`~openstack.identity.v3.region.Region` instance.

        :returns: One :class:`~openstack.identity.v3.region.Region`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no matching region can be found.
        """
        return self._get(_region.Region, region)

    def regions(self, **query):
        """Retrieve a generator of regions

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the regions being returned.

        :returns: A generator of region instances.
        :rtype: :class:`~openstack.identity.v3.region.Region`
        """
        # TODO(briancurtin): This is paginated but requires base list changes.
        return self._list(_region.Region, paginated=False, **query)

    def update_region(self, region, **attrs):
        """Update a region

        :param region: Either the ID of a region or a
            :class:`~openstack.identity.v3.region.Region` instance.
        :param dict attrs: The attributes to update on the region
            represented by ``region``.

        :returns: The updated region.
        :rtype: :class:`~openstack.identity.v3.region.Region`
        """
        return self._update(_region.Region, region, **attrs)

    def create_role(self, **attrs):
        """Create a new role from attributes

        :param dict attrs: Keyword arguments which will be used to create a
            :class:`~openstack.identity.v3.role.Role`, comprised of the
            properties on the Role class.

        :returns: The results of role creation.
        :rtype: :class:`~openstack.identity.v3.role.Role`
        """
        return self._create(_role.Role, **attrs)

    def delete_role(self, role, ignore_missing=True):
        """Delete a role

        :param role: The value can be either the ID of a role or a
            :class:`~openstack.identity.v3.role.Role` instance.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the role does not exist. When set to ``True``, no exception
            will be raised when attempting to delete a nonexistent role.

        :returns: ``None``
        """
        self._delete(_role.Role, role, ignore_missing=ignore_missing)

    def find_role(self, name_or_id, ignore_missing=True):
        """Find a single role

        :param name_or_id: The name or ID of a role.
        :param bool ignore_missing: When set to ``False``
            :class:`~openstack.exceptions.ResourceNotFound` will be raised
            when the role does not exist. When set to ``True``, None will be
            returned when attempting to find a nonexistent role.

        :returns: One :class:`~openstack.identity.v3.role.Role` or None
        """
        return self._find(_role.Role, name_or_id,
                          ignore_missing=ignore_missing)

    def get_role(self, role):
        """Get a single role

        :param role: The value can be the ID of a role or a
            :class:`~openstack.identity.v3.role.Role` instance.
        :returns: One :class:`~openstack.identity.v3.role.Role`
        :raises: :class:`~openstack.exceptions.ResourceNotFound`
            when no matching role can be found.
        """
        return self._get(_role.Role, role)

    def roles(self, **query):
        """Retrieve a generator of roles

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned. The options are: domain_id,
            name.

        :return: A generator of role instances.
        :rtype: :class:`~openstack.identity.v3.role.Role`
        """
        return self._list(_role.Role, paginated=False, **query)

    def update_role(self, role, **attrs):
        """Update a role

        :param role: Either the ID of a role or a
            :class:`~openstack.identity.v3.role.Role` instance.
        :param dict attrs: The attributes to update on the role represented
            by ``role``. Only the name can be updated.

        :returns: The updated role.
        :rtype: :class:`~openstack.identity.v3.role.Role`
        """
        return self._update(_role.Role, role, **attrs)

    def role_assignments_filter(self, domain=None, project=None, group=None,
                                user=None):
        """Retrieve a generator of roles assigned to a user or group

        :param domain: Either the ID of a domain or a
            :class:`~openstack.identity.v3.domain.Domain` instance.
        :param project: Either the ID of a project or a
            :class:`~openstack.identity.v3.project.Project` instance.
        :param group: Either the ID of a group or a
            :class:`~openstack.identity.v3.group.Group` instance.
        :param user: Either the ID of a user or a
            :class:`~openstack.identity.v3.user.User` instance.

        :return: A generator of role instances.
        :rtype: :class:`~openstack.identity.v3.role.Role`
        """
        if domain and project:
            raise exception.InvalidRequest(
                'Only one of domain or project can be specified')
        if domain is None and project is None:
            raise exception.InvalidRequest(
                'Either domain or project should be specified')
        if group and user:
            raise exception.InvalidRequest(
                'Only one of group or user can be specified')
        if group is None and user is None:
            raise exception.InvalidRequest(
                'Either group or user should be specified')

        if domain:
            domain = self._get_resource(_domain.Domain, domain)
            if group:
                group = self._get_resource(_group.Group, group)
                return self._list(
                    _role_domain_group_assignment.RoleDomainGroupAssignment,
                    paginated=False, domain_id=domain.id, group_id=group.id)
            else:
                user = self._get_resource(_user.User, user)
                return self._list(
                    _role_domain_user_assignment.RoleDomainUserAssignment,
                    paginated=False, domain_id=domain.id, user_id=user.id)
        else:
            project = self._get_resource(_project.Project, project)
            if group:
                group = self._get_resource(_group.Group, group)
                return self._list(
                    _role_project_group_assignment.RoleProjectGroupAssignment,
                    paginated=False, project_id=project.id,
                    group_id=group.id)
            else:
                user = self._get_resource(_user.User, user)
                return self._list(
                    _role_project_user_assignment.RoleProjectUserAssignment,
                    paginated=False, project_id=project.id, user_id=user.id)

    def role_assignments(self, **query):
        """Retrieve a generator of role assignments

        :param kwargs \*\*query: Optional query parameters to be sent to
            limit the resources being returned. The options are: group_id,
            role_id, scope_domain_id, scope_project_id, user_id,
            include_names, include_subtree.

        :return:
            :class:`~openstack.identity.v3.role_assignment.RoleAssignment`
        """
        return self._list(_role_assignment.RoleAssignment, paginated=False,
                          **query)

openstacksdk-0.11.3/openstack/database/v1/flavor.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from openstack.database import database_service
from openstack import resource


class Flavor(resource.Resource):
    resource_key = 'flavor'
    resources_key = 'flavors'
    base_path = '/flavors'
    service = database_service.DatabaseService()

    # capabilities
    allow_list = True
    allow_get = True

    # Properties
    #: Links associated with the flavor
    links = resource.Body('links')
    #: The name of the flavor
    name = resource.Body('name')
    #: The size in MB of RAM the flavor has
    ram = resource.Body('ram')

openstacksdk-0.11.3/openstack/database/v1/database.py

from openstack.database import database_service
from openstack import resource


class Database(resource.Resource):
    resource_key = 'database'
    resources_key = 'databases'
    base_path = '/instances/%(instance_id)s/databases'
    service = database_service.DatabaseService()

    # capabilities
    allow_create = True
    allow_delete = True
    allow_list = True

    # Properties
    #: Set of symbols and encodings. The default character set is ``utf8``.
    character_set = resource.Body('character_set')
    #: Set of rules for comparing characters in a character set.
    #: The default value for collate is ``utf8_general_ci``.
    collate = resource.Body('collate')
    #: The ID of the instance
    instance_id = resource.URI('instance_id')
    #: The name of the database
    name = resource.Body('name', alternate_id=True)

openstacksdk-0.11.3/openstack/database/v1/user.py
from openstack.database import database_service
from openstack import resource
from openstack import utils


class User(resource.Resource):
    resource_key = 'user'
    resources_key = 'users'
    base_path = '/instances/%(instance_id)s/users'
    service = database_service.DatabaseService()

    # capabilities
    allow_create = True
    allow_delete = True
    allow_list = True

    instance_id = resource.URI('instance_id')

    # Properties
    #: Databases the user has access to
    databases = resource.Body('databases')
    #: The name of the user
    name = resource.Body('name', alternate_id=True)
    #: The password of the user
    password = resource.Body('password')

    def _prepare_request(self, requires_id=True, prepend_key=True):
        """Prepare a request for the database service's create call

        User.create calls require the resources_key (plural), whereas the
        base _prepare_request would insert the resource_key (singular).
        """
        body = {self.resources_key: self._body.dirty}

        uri = self.base_path % self._uri.attributes
        uri = utils.urljoin(uri, self.id)

        return resource._Request(uri, body, None)

openstacksdk-0.11.3/openstack/database/v1/__init__.py

openstacksdk-0.11.3/openstack/database/v1/instance.py
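The ``_prepare_request`` override in ``User`` above exists because the database service's create-user call expects attributes wrapped under the plural ``users`` key, while the base resource machinery would wrap them under the singular ``user`` key. A toy sketch of the difference (the helper and its names are hypothetical, not the SDK's real internals):

```python
def prepare_body(resource_key, resources_key, attrs, use_plural):
    # The base resource code wraps attrs under the singular key;
    # User._prepare_request wraps them under the plural key instead.
    key = resources_key if use_plural else resource_key
    return {key: attrs}

attrs = {'name': 'dbuser', 'password': 'secret'}
print(prepare_body('user', 'users', attrs, use_plural=False))
# {'user': {'name': 'dbuser', 'password': 'secret'}}
print(prepare_body('user', 'users', attrs, use_plural=True))
# {'users': {'name': 'dbuser', 'password': 'secret'}}
```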
from openstack.database import database_service
from openstack import resource
from openstack import utils


class Instance(resource.Resource):
    resource_key = 'instance'
    resources_key = 'instances'
    base_path = '/instances'
    service = database_service.DatabaseService()

    # capabilities
    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True

    # Properties
    #: The flavor of the instance
    flavor = resource.Body('flavor')
    #: Links associated with the instance
    links = resource.Body('links')
    #: The name of the instance
    name = resource.Body('name')
    #: The status of the instance
    status = resource.Body('status')
    #: The size of the volume
    volume = resource.Body('volume')
    #: A dictionary of datastore details, often including 'type' and
    #: 'version' keys
    datastore = resource.Body('datastore', type=dict)
    #: The ID of this instance
    id = resource.Body('id')
    #: The region this instance resides in
    region = resource.Body('region')
    #: The name of the host
    hostname = resource.Body('hostname')
    #: The timestamp when this instance was created
    created_at = resource.Body('created')
    #: The timestamp when this instance was updated
    updated_at = resource.Body('updated')

    def enable_root_user(self, session):
        """Enable login for the root user.

        This operation enables login from any host for the root user
        and provides the user with a generated root password.

        :param session: The session to use for making this request.
        :type session: :class:`~keystoneauth1.adapter.Adapter`

        :returns: A dictionary with keys ``name`` and ``password`` specifying
            the login credentials.
        """
        url = utils.urljoin(self.base_path, self.id, 'root')
        resp = session.post(url)
        return resp.json()['user']

    def is_root_enabled(self, session):
        """Determine if root is enabled on this particular instance.

        :param session: The session to use for making this request.
        :type session: :class:`~keystoneauth1.adapter.Adapter`

        :returns: ``True`` if the root user is enabled for this database
            instance, ``False`` otherwise.
        """
        url = utils.urljoin(self.base_path, self.id, 'root')
        resp = session.get(url)
        return resp.json()['rootEnabled']

    def restart(self, session):
        """Restart the database instance

        :returns: ``None``
        """
        body = {'restart': {}}
        url = utils.urljoin(self.base_path, self.id, 'action')
        session.post(url, json=body)

    def resize(self, session, flavor_reference):
        """Resize the database instance

        :returns: ``None``
        """
        body = {'resize': {'flavorRef': flavor_reference}}
        url = utils.urljoin(self.base_path, self.id, 'action')
        session.post(url, json=body)

    def resize_volume(self, session, volume_size):
        """Resize the volume attached to the instance

        :returns: ``None``
        """
        body = {'resize': {'volume': volume_size}}
        url = utils.urljoin(self.base_path, self.id, 'action')
        session.post(url, json=body)

openstacksdk-0.11.3/openstack/database/v1/_proxy.py
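The ``restart``, ``resize``, and ``resize_volume`` methods above each POST a small single-key JSON action body to ``/instances/{id}/action``. A minimal sketch of how those bodies are shaped (the ``action_body`` helper is hypothetical, added here only to illustrate the payloads):

```python
def action_body(action, **params):
    # Build the JSON payload posted to /instances/{id}/action.
    # No params yields an empty action dict, e.g. {'restart': {}}.
    return {action: params}

print(action_body('restart'))                  # {'restart': {}}
print(action_body('resize', flavorRef='f-2'))  # {'resize': {'flavorRef': 'f-2'}}
print(action_body('resize', volume=20))        # {'resize': {'volume': 20}}
```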
from openstack.database.v1 import database as _database from openstack.database.v1 import flavor as _flavor from openstack.database.v1 import instance as _instance from openstack.database.v1 import user as _user from openstack import proxy class Proxy(proxy.BaseProxy): def create_database(self, instance, **attrs): """Create a new database from attributes :param instance: This can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.database.v1.database.Database`, comprised of the properties on the Database class. :returns: The results of server creation :rtype: :class:`~openstack.database.v1.database.Database` """ instance = self._get_resource(_instance.Instance, instance) return self._create(_database.Database, instance_id=instance.id, **attrs) def delete_database(self, database, instance=None, ignore_missing=True): """Delete a database :param database: The value can be either the ID of a database or a :class:`~openstack.database.v1.database.Database` instance. :param instance: This parameter needs to be specified when an ID is given as `database`. It can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the database does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent database. :returns: ``None`` """ instance_id = self._get_uri_attribute(database, instance, "instance_id") self._delete(_database.Database, database, instance_id=instance_id, ignore_missing=ignore_missing) def find_database(self, name_or_id, instance, ignore_missing=True): """Find a single database :param name_or_id: The name or ID of a database. 
:param instance: This can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.database.v1.database.Database` or None """ instance = self._get_resource(_instance.Instance, instance) return self._find(_database.Database, name_or_id, instance_id=instance.id, ignore_missing=ignore_missing) def databases(self, instance, **query): """Return a generator of databases :param instance: This can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` instance that the databases belong to. :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of database objects :rtype: :class:`~openstack.database.v1.database.Database` """ instance = self._get_resource(_instance.Instance, instance) return self._list(_database.Database, paginated=False, instance_id=instance.id, **query) def get_database(self, database, instance=None): """Get a single database :param instance: This parameter needs to be specified when an ID is given as `database`. It can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param database: The value can be the ID of a database or a :class:`~openstack.database.v1.database.Database` instance. :returns: One :class:`~openstack.database.v1.database.Database` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_database.Database, database) def find_flavor(self, name_or_id, ignore_missing=True): """Find a single flavor :param name_or_id: The name or ID of a flavor.
:param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.database.v1.flavor.Flavor` or None """ return self._find(_flavor.Flavor, name_or_id, ignore_missing=ignore_missing) def get_flavor(self, flavor): """Get a single flavor :param flavor: The value can be the ID of a flavor or a :class:`~openstack.database.v1.flavor.Flavor` instance. :returns: One :class:`~openstack.database.v1.flavor.Flavor` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_flavor.Flavor, flavor) def flavors(self, **query): """Return a generator of flavors :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of flavor objects :rtype: :class:`~openstack.database.v1.flavor.Flavor` """ return self._list(_flavor.Flavor, paginated=False, **query) def create_instance(self, **attrs): """Create a new instance from attributes :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.database.v1.instance.Instance`, comprised of the properties on the Instance class. :returns: The results of instance creation :rtype: :class:`~openstack.database.v1.instance.Instance` """ return self._create(_instance.Instance, **attrs) def delete_instance(self, instance, ignore_missing=True): """Delete an instance :param instance: The value can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the instance does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent instance.
:returns: ``None`` """ self._delete(_instance.Instance, instance, ignore_missing=ignore_missing) def find_instance(self, name_or_id, ignore_missing=True): """Find a single instance :param name_or_id: The name or ID of a instance. :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource. :returns: One :class:`~openstack.database.v1.instance.Instance` or None """ return self._find(_instance.Instance, name_or_id, ignore_missing=ignore_missing) def get_instance(self, instance): """Get a single instance :param instance: The value can be the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` instance. :returns: One :class:`~openstack.database.v1.instance.Instance` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ return self._get(_instance.Instance, instance) def instances(self, **query): """Return a generator of instances :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of instance objects :rtype: :class:`~openstack.database.v1.instance.Instance` """ return self._list(_instance.Instance, paginated=False, **query) def update_instance(self, instance, **attrs): """Update a instance :param instance: Either the id of a instance or a :class:`~openstack.database.v1.instance.Instance` instance. :attrs kwargs: The attributes to update on the instance represented by ``value``. 
:returns: The updated instance :rtype: :class:`~openstack.database.v1.instance.Instance` """ return self._update(_instance.Instance, instance, **attrs) def create_user(self, instance, **attrs): """Create a new user from attributes :param instance: This can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param dict attrs: Keyword arguments which will be used to create a :class:`~openstack.database.v1.user.User`, comprised of the properties on the User class. :returns: The results of user creation :rtype: :class:`~openstack.database.v1.user.User` """ instance = self._get_resource(_instance.Instance, instance) return self._create(_user.User, instance_id=instance.id, **attrs) def delete_user(self, user, instance=None, ignore_missing=True): """Delete a user :param user: The value can be either the ID of a user or a :class:`~openstack.database.v1.user.User` instance. :param instance: This parameter needs to be specified when an ID is given as `user`. It can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the user does not exist. When set to ``True``, no exception will be set when attempting to delete a nonexistent user. :returns: ``None`` """ instance = self._get_resource(_instance.Instance, instance) self._delete(_user.User, user, ignore_missing=ignore_missing, instance_id=instance.id) def find_user(self, name_or_id, instance, ignore_missing=True): """Find a single user :param name_or_id: The name or ID of a user. :param instance: This can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param bool ignore_missing: When set to ``False`` :class:`~openstack.exceptions.ResourceNotFound` will be raised when the resource does not exist. When set to ``True``, None will be returned when attempting to find a nonexistent resource.
:returns: One :class:`~openstack.database.v1.user.User` or None """ instance = self._get_resource(_instance.Instance, instance) return self._find(_user.User, name_or_id, instance_id=instance.id, ignore_missing=ignore_missing) def users(self, instance, **query): """Return a generator of users :param instance: This can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :param kwargs \*\*query: Optional query parameters to be sent to limit the resources being returned. :returns: A generator of user objects :rtype: :class:`~openstack.database.v1.user.User` """ instance = self._get_resource(_instance.Instance, instance) return self._list(_user.User, instance_id=instance.id, paginated=False, **query) def get_user(self, user, instance=None): """Get a single user :param user: The value can be the ID of a user or a :class:`~openstack.database.v1.user.User` instance. :param instance: This parameter needs to be specified when an ID is given as `user`. It can be either the ID of an instance or a :class:`~openstack.database.v1.instance.Instance` :returns: One :class:`~openstack.database.v1.user.User` :raises: :class:`~openstack.exceptions.ResourceNotFound` when no resource can be found. """ instance = self._get_resource(_instance.Instance, instance) return self._get(_user.User, user) openstacksdk-0.11.3/openstack/database/__init__.py0000666000175100017510000000000013236151340022143 0ustar zuulzuul00000000000000openstacksdk-0.11.3/openstack/database/database_service.py0000666000175100017510000000165613236151340023712 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import service_filter class DatabaseService(service_filter.ServiceFilter): """The database service.""" valid_versions = [service_filter.ValidVersion('v1')] def __init__(self, version=None): """Create a database service.""" super(DatabaseService, self).__init__(service_type='database', version=version) openstacksdk-0.11.3/openstacksdk.egg-info/0000775000175100017510000000000013236151501020471 5ustar zuulzuul00000000000000openstacksdk-0.11.3/openstacksdk.egg-info/PKG-INFO0000664000175100017510000002143213236151501021570 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: openstacksdk Version: 0.11.3 Summary: An SDK for building applications to work with OpenStack Home-page: http://developer.openstack.org/sdks/python/openstacksdk/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: openstacksdk ============ openstacksdk is a client library for building applications to work with OpenStack clouds. The project aims to provide a consistent and complete set of interactions with OpenStack's many services, along with complete documentation, examples, and tools. It also contains an abstraction interface layer. Clouds can do many things, but there are probably only about 10 of them that most people care about with any regularity. If you want to do complicated things, the per-service oriented portions of the SDK are for you.
However, if what you want is to be able to write an application that talks to clouds no matter what crazy choices the deployer has made in an attempt to be more hipster than their self-entitled narcissist peers, then the Cloud Abstraction layer is for you. A Brief History --------------- .. TODO(shade) This history section should move to the docs. We can put a link to the published URL here in the README, but it's too long. openstacksdk started its life as three different libraries: shade, os-client-config and python-openstacksdk. ``shade`` started its life as some code inside of OpenStack Infra's `nodepool`_ project, and as some code inside of the `Ansible OpenStack Modules`_. Ansible had a bunch of different OpenStack related modules, and there was a ton of duplicated code. Eventually, between refactoring that duplication into an internal library, and adding the logic and features that the OpenStack Infra team had developed to run client applications at scale, it turned out that we'd written nine-tenths of what we'd need to have a standalone library. Because of its background from nodepool, shade contained abstractions to work around deployment differences and is resource oriented rather than service oriented. This allows a user to think about Security Groups without having to know whether Security Groups are provided by Nova or Neutron on a given cloud. On the other hand, as an interface that provides an abstraction, it deviates from the published OpenStack REST API and adds its own opinions, which may get in the way of more advanced users with specific needs. ``os-client-config`` was a library for collecting client configuration for using an OpenStack cloud in a consistent and comprehensive manner, which introduced the ``clouds.yaml`` file for expressing named cloud configurations. ``python-openstacksdk`` was a library that exposed the OpenStack APIs to developers in a consistent and predictable manner.
After a while it became clear that there was value in both the high-level layer that contains additional business logic and the lower-level SDK that exposes services and their resources faithfully and consistently as Python objects. Even with both of those layers, it is still beneficial at times to be able to make direct REST calls and to do so with the same properly configured `Session`_ from `python-requests`_. This led to the merge of the three projects. The original contents of the shade library have been moved into ``openstack.cloud`` and os-client-config has been moved into ``openstack.config``. Future releases of shade will provide a thin compatibility layer that subclasses the objects from ``openstack.cloud`` and provides different argument defaults where needed for compatibility. Similarly, future releases of os-client-config will provide a compatibility layer shim around ``openstack.config``. .. note:: The ``openstack.cloud.OpenStackCloud`` object and the ``openstack.connection.Connection`` object are going to be merged. It is recommended to not write any new code which consumes objects from the ``openstack.cloud`` namespace until that merge is complete. .. _nodepool: https://docs.openstack.org/infra/nodepool/ .. _Ansible OpenStack Modules: http://docs.ansible.com/ansible/latest/list_of_cloud_modules.html#openstack .. _Session: http://docs.python-requests.org/en/master/user/advanced/#session-objects .. _python-requests: http://docs.python-requests.org/en/master/ openstack ========= List servers using objects configured with the ``clouds.yaml`` file: .. code-block:: python import openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) # Initialize cloud conn = openstack.connect(cloud='mordred') for server in conn.compute.servers(): print(server.to_dict()) openstack.config ================ ``openstack.config`` will find cloud configuration for as few as one cloud and as many as you want to put in a config file.
It will read environment variables and config files, and it also contains some vendor-specific default values so that you don't have to know extra info to use OpenStack. * If you have a config file, you will get the clouds listed in it * If you have environment variables, you will get a cloud named `envvars` * If you have neither, you will get a cloud named `defaults` with base defaults Sometimes an example is nice. Create a ``clouds.yaml`` file: .. code-block:: yaml clouds: mordred: region_name: Dallas auth: username: 'mordred' password: XXXXXXX project_name: 'shade' auth_url: 'https://identity.example.com' Please note: ``openstack.config`` will look for a file called ``clouds.yaml`` in the following locations: * Current Directory * ``~/.config/openstack`` * ``/etc/openstack`` More information at https://developer.openstack.org/sdks/python/openstacksdk/users/config openstack.cloud =============== Create a server using objects configured with the ``clouds.yaml`` file: .. code-block:: python import openstack.cloud # Initialize and turn on debug logging openstack.enable_logging(debug=True) # Initialize cloud # Cloud configs are read with openstack.config cloud = openstack.cloud.openstack_cloud(cloud='mordred') # Upload an image to the cloud image = cloud.create_image( 'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True) # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it.
cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Links ===== * `Issue Tracker `_ * `Code Review `_ * `Documentation `_ * `PyPI `_ * `Mailing list `_ * `Bugs `_ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 openstacksdk-0.11.3/openstacksdk.egg-info/requires.txt0000664000175100017510000000046013236151501023071 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 PyYAML>=3.10 appdirs>=1.3.0 requestsexceptions>=1.2.0 jsonpatch!=1.20,>=1.16 six>=1.10.0 os-service-types>=1.1.0 keystoneauth1>=3.3.0 deprecation>=1.0 munch>=2.1.0 decorator>=3.4.0 jmespath>=0.9.0 ipaddress>=1.0.16 futures>=3.0.0 iso8601>=0.1.11 netifaces>=0.10.4 dogpile.cache>=0.6.2 openstacksdk-0.11.3/openstacksdk.egg-info/entry_points.txt0000664000175100017510000000011413236151501023763 0ustar zuulzuul00000000000000[console_scripts] openstack-inventory = openstack.cloud.cmd.inventory:main openstacksdk-0.11.3/openstacksdk.egg-info/SOURCES.txt0000664000175100017510000014074113236151501022364 0ustar zuulzuul00000000000000.coveragerc .mailmap .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE MANIFEST.in README.rst SHADE-MERGE-TODO.rst babel.cfg bindep.txt create_yaml.sh docs-requirements.txt post_test_hook.sh requirements.txt setup.cfg setup.py test-requirements.txt tox.ini devstack/plugin.sh doc/requirements.txt doc/source/conf.py doc/source/enforcer.py doc/source/glossary.rst doc/source/index.rst doc/source/releasenotes.rst doc/source/contributor/clouds.yaml 
doc/source/contributor/coding.rst doc/source/contributor/contributing.rst doc/source/contributor/index.rst doc/source/contributor/layout.rst doc/source/contributor/layout.txt doc/source/contributor/local.conf doc/source/contributor/setup.rst doc/source/contributor/testing.rst doc/source/contributor/create/resource.rst doc/source/contributor/create/examples/resource/fake.py doc/source/contributor/create/examples/resource/fake_service.py doc/source/install/index.rst doc/source/user/connection.rst doc/source/user/examples doc/source/user/index.rst doc/source/user/microversions.rst doc/source/user/model.rst doc/source/user/multi-cloud-demo.rst doc/source/user/resource.rst doc/source/user/service_filter.rst doc/source/user/transition_from_profile.rst doc/source/user/usage.rst doc/source/user/utils.rst doc/source/user/config/configuration.rst doc/source/user/config/index.rst doc/source/user/config/network-config.rst doc/source/user/config/reference.rst doc/source/user/config/using.rst doc/source/user/config/vendor-support.rst doc/source/user/guides/baremetal.rst doc/source/user/guides/block_storage.rst doc/source/user/guides/clustering.rst doc/source/user/guides/compute.rst doc/source/user/guides/connect.rst doc/source/user/guides/connect_from_config.rst doc/source/user/guides/database.rst doc/source/user/guides/identity.rst doc/source/user/guides/image.rst doc/source/user/guides/key_manager.rst doc/source/user/guides/logging.rst doc/source/user/guides/message.rst doc/source/user/guides/network.rst doc/source/user/guides/object_store.rst doc/source/user/guides/orchestration.rst doc/source/user/guides/clustering/action.rst doc/source/user/guides/clustering/cluster.rst doc/source/user/guides/clustering/event.rst doc/source/user/guides/clustering/node.rst doc/source/user/guides/clustering/policy.rst doc/source/user/guides/clustering/policy_type.rst doc/source/user/guides/clustering/profile.rst doc/source/user/guides/clustering/profile_type.rst 
doc/source/user/guides/clustering/receiver.rst doc/source/user/proxies/baremetal.rst doc/source/user/proxies/block_storage.rst doc/source/user/proxies/clustering.rst doc/source/user/proxies/compute.rst doc/source/user/proxies/database.rst doc/source/user/proxies/identity_v2.rst doc/source/user/proxies/identity_v3.rst doc/source/user/proxies/image_v1.rst doc/source/user/proxies/image_v2.rst doc/source/user/proxies/key_manager.rst doc/source/user/proxies/load_balancer_v2.rst doc/source/user/proxies/message_v2.rst doc/source/user/proxies/network.rst doc/source/user/proxies/object_store.rst doc/source/user/proxies/orchestration.rst doc/source/user/proxies/workflow.rst doc/source/user/resources/baremetal/index.rst doc/source/user/resources/baremetal/v1/chassis.rst doc/source/user/resources/baremetal/v1/driver.rst doc/source/user/resources/baremetal/v1/node.rst doc/source/user/resources/baremetal/v1/port.rst doc/source/user/resources/baremetal/v1/port_group.rst doc/source/user/resources/block_storage/index.rst doc/source/user/resources/block_storage/v2/snapshot.rst doc/source/user/resources/block_storage/v2/type.rst doc/source/user/resources/block_storage/v2/volume.rst doc/source/user/resources/clustering/index.rst doc/source/user/resources/clustering/v1/action.rst doc/source/user/resources/clustering/v1/build_info.rst doc/source/user/resources/clustering/v1/cluster.rst doc/source/user/resources/clustering/v1/cluster_policy.rst doc/source/user/resources/clustering/v1/event.rst doc/source/user/resources/clustering/v1/node.rst doc/source/user/resources/clustering/v1/policy.rst doc/source/user/resources/clustering/v1/policy_type.rst doc/source/user/resources/clustering/v1/profile.rst doc/source/user/resources/clustering/v1/profile_type.rst doc/source/user/resources/clustering/v1/receiver.rst doc/source/user/resources/compute/index.rst doc/source/user/resources/compute/v2/extension.rst doc/source/user/resources/compute/v2/flavor.rst 
doc/source/user/resources/compute/v2/image.rst doc/source/user/resources/compute/v2/keypair.rst doc/source/user/resources/compute/v2/limits.rst doc/source/user/resources/compute/v2/server.rst doc/source/user/resources/compute/v2/server_interface.rst doc/source/user/resources/compute/v2/server_ip.rst doc/source/user/resources/database/index.rst doc/source/user/resources/database/v1/database.rst doc/source/user/resources/database/v1/flavor.rst doc/source/user/resources/database/v1/instance.rst doc/source/user/resources/database/v1/user.rst doc/source/user/resources/identity/index.rst doc/source/user/resources/identity/v2/extension.rst doc/source/user/resources/identity/v2/role.rst doc/source/user/resources/identity/v2/tenant.rst doc/source/user/resources/identity/v2/user.rst doc/source/user/resources/identity/v3/credential.rst doc/source/user/resources/identity/v3/domain.rst doc/source/user/resources/identity/v3/endpoint.rst doc/source/user/resources/identity/v3/group.rst doc/source/user/resources/identity/v3/policy.rst doc/source/user/resources/identity/v3/project.rst doc/source/user/resources/identity/v3/service.rst doc/source/user/resources/identity/v3/trust.rst doc/source/user/resources/identity/v3/user.rst doc/source/user/resources/image/index.rst doc/source/user/resources/image/v1/image.rst doc/source/user/resources/image/v2/image.rst doc/source/user/resources/image/v2/member.rst doc/source/user/resources/key_manager/index.rst doc/source/user/resources/key_manager/v1/container.rst doc/source/user/resources/key_manager/v1/order.rst doc/source/user/resources/key_manager/v1/secret.rst doc/source/user/resources/load_balancer/index.rst doc/source/user/resources/load_balancer/v2/health_monitor.rst doc/source/user/resources/load_balancer/v2/l7_policy.rst doc/source/user/resources/load_balancer/v2/l7_rule.rst doc/source/user/resources/load_balancer/v2/listener.rst doc/source/user/resources/load_balancer/v2/load_balancer.rst 
doc/source/user/resources/load_balancer/v2/member.rst doc/source/user/resources/load_balancer/v2/pool.rst doc/source/user/resources/network/index.rst doc/source/user/resources/network/v2/address_scope.rst doc/source/user/resources/network/v2/agent.rst doc/source/user/resources/network/v2/auto_allocated_topology.rst doc/source/user/resources/network/v2/availability_zone.rst doc/source/user/resources/network/v2/extension.rst doc/source/user/resources/network/v2/flavor.rst doc/source/user/resources/network/v2/floating_ip.rst doc/source/user/resources/network/v2/health_monitor.rst doc/source/user/resources/network/v2/listener.rst doc/source/user/resources/network/v2/load_balancer.rst doc/source/user/resources/network/v2/metering_label.rst doc/source/user/resources/network/v2/metering_label_rule.rst doc/source/user/resources/network/v2/network.rst doc/source/user/resources/network/v2/network_ip_availability.rst doc/source/user/resources/network/v2/pool.rst doc/source/user/resources/network/v2/pool_member.rst doc/source/user/resources/network/v2/port.rst doc/source/user/resources/network/v2/qos_bandwidth_limit_rule.rst doc/source/user/resources/network/v2/qos_dscp_marking_rule.rst doc/source/user/resources/network/v2/qos_minimum_bandwidth_rule.rst doc/source/user/resources/network/v2/qos_policy.rst doc/source/user/resources/network/v2/qos_rule_type.rst doc/source/user/resources/network/v2/quota.rst doc/source/user/resources/network/v2/rbac_policy.rst doc/source/user/resources/network/v2/router.rst doc/source/user/resources/network/v2/security_group.rst doc/source/user/resources/network/v2/security_group_rule.rst doc/source/user/resources/network/v2/segment.rst doc/source/user/resources/network/v2/service_profile.rst doc/source/user/resources/network/v2/service_provider.rst doc/source/user/resources/network/v2/subnet.rst doc/source/user/resources/network/v2/subnet_pool.rst doc/source/user/resources/object_store/index.rst 
doc/source/user/resources/object_store/v1/account.rst doc/source/user/resources/object_store/v1/container.rst doc/source/user/resources/object_store/v1/obj.rst doc/source/user/resources/orchestration/index.rst doc/source/user/resources/orchestration/v1/resource.rst doc/source/user/resources/orchestration/v1/stack.rst doc/source/user/resources/workflow/index.rst doc/source/user/resources/workflow/v2/execution.rst doc/source/user/resources/workflow/v2/workflow.rst examples/__init__.py examples/connect.py examples/cloud/cleanup-servers.py examples/cloud/create-server-dict.py examples/cloud/create-server-name-or-id.py examples/cloud/debug-logging.py examples/cloud/find-an-image.py examples/cloud/http-debug-logging.py examples/cloud/munch-dict-object.py examples/cloud/normalization.py examples/cloud/server-information.py examples/cloud/service-conditional-overrides.py examples/cloud/service-conditionals.py examples/cloud/strict-mode.py examples/cloud/upload-large-object.py examples/cloud/upload-object.py examples/cloud/user-agent.py examples/clustering/__init__.py examples/clustering/action.py examples/clustering/cluster.py examples/clustering/event.py examples/clustering/node.py examples/clustering/policy.py examples/clustering/policy_type.py examples/clustering/profile.py examples/clustering/profile_type.py examples/clustering/receiver.py examples/compute/__init__.py examples/compute/create.py examples/compute/delete.py examples/compute/find.py examples/compute/list.py examples/identity/__init__.py examples/identity/list.py examples/image/__init__.py examples/image/create.py examples/image/delete.py examples/image/download.py examples/image/list.py examples/key_manager/__init__.py examples/key_manager/create.py examples/key_manager/get.py examples/key_manager/list.py examples/network/__init__.py examples/network/create.py examples/network/delete.py examples/network/find.py examples/network/list.py examples/network/security_group_rules.py extras/delete-network.sh 
extras/run-ansible-tests.sh openstack/__init__.py openstack/_adapter.py openstack/_log.py openstack/_meta.py openstack/connection.py openstack/exceptions.py openstack/format.py openstack/profile.py openstack/proxy.py openstack/proxy2.py openstack/resource.py openstack/resource2.py openstack/service_description.py openstack/service_filter.py openstack/task_manager.py openstack/utils.py openstack/version.py openstack/baremetal/__init__.py openstack/baremetal/baremetal_service.py openstack/baremetal/version.py openstack/baremetal/v1/__init__.py openstack/baremetal/v1/_proxy.py openstack/baremetal/v1/chassis.py openstack/baremetal/v1/driver.py openstack/baremetal/v1/node.py openstack/baremetal/v1/port.py openstack/baremetal/v1/port_group.py openstack/block_storage/__init__.py openstack/block_storage/block_storage_service.py openstack/block_storage/v2/__init__.py openstack/block_storage/v2/_proxy.py openstack/block_storage/v2/snapshot.py openstack/block_storage/v2/stats.py openstack/block_storage/v2/type.py openstack/block_storage/v2/volume.py openstack/cloud/__init__.py openstack/cloud/_normalize.py openstack/cloud/_tasks.py openstack/cloud/_utils.py openstack/cloud/exc.py openstack/cloud/inventory.py openstack/cloud/meta.py openstack/cloud/openstackcloud.py openstack/cloud/_heat/__init__.py openstack/cloud/_heat/environment_format.py openstack/cloud/_heat/event_utils.py openstack/cloud/_heat/template_format.py openstack/cloud/_heat/template_utils.py openstack/cloud/_heat/utils.py openstack/cloud/cmd/__init__.py openstack/cloud/cmd/inventory.py openstack/cloud/tests/__init__.py openstack/clustering/__init__.py openstack/clustering/clustering_service.py openstack/clustering/version.py openstack/clustering/v1/__init__.py openstack/clustering/v1/_proxy.py openstack/clustering/v1/action.py openstack/clustering/v1/build_info.py openstack/clustering/v1/cluster.py openstack/clustering/v1/cluster_attr.py openstack/clustering/v1/cluster_policy.py 
openstack/clustering/v1/event.py openstack/clustering/v1/node.py openstack/clustering/v1/policy.py openstack/clustering/v1/policy_type.py openstack/clustering/v1/profile.py openstack/clustering/v1/profile_type.py openstack/clustering/v1/receiver.py openstack/clustering/v1/service.py openstack/compute/__init__.py openstack/compute/compute_service.py openstack/compute/version.py openstack/compute/v2/__init__.py openstack/compute/v2/_proxy.py openstack/compute/v2/availability_zone.py openstack/compute/v2/extension.py openstack/compute/v2/flavor.py openstack/compute/v2/hypervisor.py openstack/compute/v2/image.py openstack/compute/v2/keypair.py openstack/compute/v2/limits.py openstack/compute/v2/metadata.py openstack/compute/v2/server.py openstack/compute/v2/server_group.py openstack/compute/v2/server_interface.py openstack/compute/v2/server_ip.py openstack/compute/v2/service.py openstack/compute/v2/volume_attachment.py openstack/config/__init__.py openstack/config/cloud_config.py openstack/config/cloud_region.py openstack/config/defaults.json openstack/config/defaults.py openstack/config/exceptions.py openstack/config/loader.py openstack/config/schema.json openstack/config/vendor-schema.json openstack/config/vendors/__init__.py openstack/config/vendors/auro.json openstack/config/vendors/betacloud.json openstack/config/vendors/bluebox.json openstack/config/vendors/catalyst.json openstack/config/vendors/citycloud.json openstack/config/vendors/conoha.json openstack/config/vendors/datacentred.json openstack/config/vendors/dreamcompute.json openstack/config/vendors/dreamhost.json openstack/config/vendors/elastx.json openstack/config/vendors/entercloudsuite.json openstack/config/vendors/fuga.json openstack/config/vendors/ibmcloud.json openstack/config/vendors/internap.json openstack/config/vendors/otc.json openstack/config/vendors/ovh.json openstack/config/vendors/rackspace.json openstack/config/vendors/switchengines.json openstack/config/vendors/ultimum.json 
openstack/config/vendors/unitedstack.json openstack/config/vendors/vexxhost.json openstack/config/vendors/zetta.json openstack/database/__init__.py openstack/database/database_service.py openstack/database/v1/__init__.py openstack/database/v1/_proxy.py openstack/database/v1/database.py openstack/database/v1/flavor.py openstack/database/v1/instance.py openstack/database/v1/user.py openstack/identity/__init__.py openstack/identity/identity_service.py openstack/identity/version.py openstack/identity/v2/__init__.py openstack/identity/v2/_proxy.py openstack/identity/v2/extension.py openstack/identity/v2/role.py openstack/identity/v2/tenant.py openstack/identity/v2/user.py openstack/identity/v3/__init__.py openstack/identity/v3/_proxy.py openstack/identity/v3/credential.py openstack/identity/v3/domain.py openstack/identity/v3/endpoint.py openstack/identity/v3/group.py openstack/identity/v3/policy.py openstack/identity/v3/project.py openstack/identity/v3/region.py openstack/identity/v3/role.py openstack/identity/v3/role_assignment.py openstack/identity/v3/role_domain_group_assignment.py openstack/identity/v3/role_domain_user_assignment.py openstack/identity/v3/role_project_group_assignment.py openstack/identity/v3/role_project_user_assignment.py openstack/identity/v3/service.py openstack/identity/v3/trust.py openstack/identity/v3/user.py openstack/image/__init__.py openstack/image/image_service.py openstack/image/v1/__init__.py openstack/image/v1/_proxy.py openstack/image/v1/image.py openstack/image/v2/__init__.py openstack/image/v2/_proxy.py openstack/image/v2/image.py openstack/image/v2/member.py openstack/key_manager/__init__.py openstack/key_manager/key_manager_service.py openstack/key_manager/v1/__init__.py openstack/key_manager/v1/_format.py openstack/key_manager/v1/_proxy.py openstack/key_manager/v1/container.py openstack/key_manager/v1/order.py openstack/key_manager/v1/secret.py openstack/load_balancer/__init__.py openstack/load_balancer/load_balancer_service.py 
openstack/load_balancer/version.py openstack/load_balancer/v2/__init__.py openstack/load_balancer/v2/_proxy.py openstack/load_balancer/v2/health_monitor.py openstack/load_balancer/v2/l7_policy.py openstack/load_balancer/v2/l7_rule.py openstack/load_balancer/v2/listener.py openstack/load_balancer/v2/load_balancer.py openstack/load_balancer/v2/member.py openstack/load_balancer/v2/pool.py openstack/message/__init__.py openstack/message/message_service.py openstack/message/version.py openstack/message/v2/__init__.py openstack/message/v2/_proxy.py openstack/message/v2/claim.py openstack/message/v2/message.py openstack/message/v2/queue.py openstack/message/v2/subscription.py openstack/network/__init__.py openstack/network/network_service.py openstack/network/version.py openstack/network/v2/__init__.py openstack/network/v2/_proxy.py openstack/network/v2/address_scope.py openstack/network/v2/agent.py openstack/network/v2/auto_allocated_topology.py openstack/network/v2/availability_zone.py openstack/network/v2/extension.py openstack/network/v2/flavor.py openstack/network/v2/floating_ip.py openstack/network/v2/health_monitor.py openstack/network/v2/listener.py openstack/network/v2/load_balancer.py openstack/network/v2/metering_label.py openstack/network/v2/metering_label_rule.py openstack/network/v2/network.py openstack/network/v2/network_ip_availability.py openstack/network/v2/pool.py openstack/network/v2/pool_member.py openstack/network/v2/port.py openstack/network/v2/qos_bandwidth_limit_rule.py openstack/network/v2/qos_dscp_marking_rule.py openstack/network/v2/qos_minimum_bandwidth_rule.py openstack/network/v2/qos_policy.py openstack/network/v2/qos_rule_type.py openstack/network/v2/quota.py openstack/network/v2/rbac_policy.py openstack/network/v2/router.py openstack/network/v2/security_group.py openstack/network/v2/security_group_rule.py openstack/network/v2/segment.py openstack/network/v2/service_profile.py openstack/network/v2/service_provider.py 
openstack/network/v2/subnet.py openstack/network/v2/subnet_pool.py openstack/network/v2/tag.py openstack/network/v2/vpn_service.py openstack/object_store/__init__.py openstack/object_store/object_store_service.py openstack/object_store/v1/__init__.py openstack/object_store/v1/_base.py openstack/object_store/v1/_proxy.py openstack/object_store/v1/account.py openstack/object_store/v1/container.py openstack/object_store/v1/obj.py openstack/orchestration/__init__.py openstack/orchestration/orchestration_service.py openstack/orchestration/version.py openstack/orchestration/v1/__init__.py openstack/orchestration/v1/_proxy.py openstack/orchestration/v1/resource.py openstack/orchestration/v1/software_config.py openstack/orchestration/v1/software_deployment.py openstack/orchestration/v1/stack.py openstack/orchestration/v1/stack_environment.py openstack/orchestration/v1/stack_files.py openstack/orchestration/v1/stack_template.py openstack/orchestration/v1/template.py openstack/tests/__init__.py openstack/tests/base.py openstack/tests/fakes.py openstack/tests/ansible/README.txt openstack/tests/ansible/run.yml openstack/tests/ansible/hooks/post_test_hook.sh openstack/tests/ansible/roles/auth/tasks/main.yml openstack/tests/ansible/roles/client_config/tasks/main.yml openstack/tests/ansible/roles/group/tasks/main.yml openstack/tests/ansible/roles/group/vars/main.yml openstack/tests/ansible/roles/image/tasks/main.yml openstack/tests/ansible/roles/image/vars/main.yml openstack/tests/ansible/roles/keypair/tasks/main.yml openstack/tests/ansible/roles/keypair/vars/main.yml openstack/tests/ansible/roles/keystone_domain/tasks/main.yml openstack/tests/ansible/roles/keystone_domain/vars/main.yml openstack/tests/ansible/roles/keystone_role/tasks/main.yml openstack/tests/ansible/roles/keystone_role/vars/main.yml openstack/tests/ansible/roles/network/tasks/main.yml openstack/tests/ansible/roles/network/vars/main.yml openstack/tests/ansible/roles/nova_flavor/tasks/main.yml 
openstack/tests/ansible/roles/object/tasks/main.yml openstack/tests/ansible/roles/port/tasks/main.yml openstack/tests/ansible/roles/port/vars/main.yml openstack/tests/ansible/roles/router/tasks/main.yml openstack/tests/ansible/roles/router/vars/main.yml openstack/tests/ansible/roles/security_group/tasks/main.yml openstack/tests/ansible/roles/security_group/vars/main.yml openstack/tests/ansible/roles/server/tasks/main.yml openstack/tests/ansible/roles/server/vars/main.yaml openstack/tests/ansible/roles/subnet/tasks/main.yml openstack/tests/ansible/roles/subnet/vars/main.yml openstack/tests/ansible/roles/user/tasks/main.yml openstack/tests/ansible/roles/user_group/tasks/main.yml openstack/tests/ansible/roles/volume/tasks/main.yml openstack/tests/examples/__init__.py openstack/tests/examples/test_compute.py openstack/tests/examples/test_identity.py openstack/tests/examples/test_image.py openstack/tests/examples/test_network.py openstack/tests/functional/__init__.py openstack/tests/functional/base.py openstack/tests/functional/block_storage/__init__.py openstack/tests/functional/block_storage/v2/__init__.py openstack/tests/functional/block_storage/v2/test_snapshot.py openstack/tests/functional/block_storage/v2/test_type.py openstack/tests/functional/block_storage/v2/test_volume.py openstack/tests/functional/block_store/v2/test_stats.py openstack/tests/functional/cloud/__init__.py openstack/tests/functional/cloud/base.py openstack/tests/functional/cloud/test_aggregate.py openstack/tests/functional/cloud/test_cluster_templates.py openstack/tests/functional/cloud/test_compute.py openstack/tests/functional/cloud/test_devstack.py openstack/tests/functional/cloud/test_domain.py openstack/tests/functional/cloud/test_endpoints.py openstack/tests/functional/cloud/test_flavor.py openstack/tests/functional/cloud/test_floating_ip.py openstack/tests/functional/cloud/test_floating_ip_pool.py openstack/tests/functional/cloud/test_groups.py 
openstack/tests/functional/cloud/test_identity.py openstack/tests/functional/cloud/test_image.py openstack/tests/functional/cloud/test_inventory.py openstack/tests/functional/cloud/test_keypairs.py openstack/tests/functional/cloud/test_limits.py openstack/tests/functional/cloud/test_magnum_services.py openstack/tests/functional/cloud/test_network.py openstack/tests/functional/cloud/test_object.py openstack/tests/functional/cloud/test_port.py openstack/tests/functional/cloud/test_project.py openstack/tests/functional/cloud/test_qos_bandwidth_limit_rule.py openstack/tests/functional/cloud/test_qos_dscp_marking_rule.py openstack/tests/functional/cloud/test_qos_minimum_bandwidth_rule.py openstack/tests/functional/cloud/test_qos_policy.py openstack/tests/functional/cloud/test_quotas.py openstack/tests/functional/cloud/test_range_search.py openstack/tests/functional/cloud/test_recordset.py openstack/tests/functional/cloud/test_router.py openstack/tests/functional/cloud/test_security_groups.py openstack/tests/functional/cloud/test_server_group.py openstack/tests/functional/cloud/test_services.py openstack/tests/functional/cloud/test_stack.py openstack/tests/functional/cloud/test_users.py openstack/tests/functional/cloud/test_volume.py openstack/tests/functional/cloud/test_volume_backup.py openstack/tests/functional/cloud/test_volume_type.py openstack/tests/functional/cloud/test_zone.py openstack/tests/functional/cloud/util.py openstack/tests/functional/cloud/hooks/post_test_hook.sh openstack/tests/functional/compute/__init__.py openstack/tests/functional/compute/v2/__init__.py openstack/tests/functional/compute/v2/test_extension.py openstack/tests/functional/compute/v2/test_flavor.py openstack/tests/functional/compute/v2/test_image.py openstack/tests/functional/compute/v2/test_keypair.py openstack/tests/functional/compute/v2/test_limits.py openstack/tests/functional/compute/v2/test_server.py openstack/tests/functional/image/__init__.py 
openstack/tests/functional/image/v2/__init__.py openstack/tests/functional/image/v2/test_image.py openstack/tests/functional/load_balancer/__init__.py openstack/tests/functional/load_balancer/base.py openstack/tests/functional/load_balancer/v2/__init__.py openstack/tests/functional/load_balancer/v2/test_load_balancer.py openstack/tests/functional/network/__init__.py openstack/tests/functional/network/v2/__init__.py openstack/tests/functional/network/v2/test_address_scope.py openstack/tests/functional/network/v2/test_agent.py openstack/tests/functional/network/v2/test_agent_add_remove_network.py openstack/tests/functional/network/v2/test_agent_add_remove_router.py openstack/tests/functional/network/v2/test_auto_allocated_topology.py openstack/tests/functional/network/v2/test_availability_zone.py openstack/tests/functional/network/v2/test_dvr_router.py openstack/tests/functional/network/v2/test_extension.py openstack/tests/functional/network/v2/test_flavor.py openstack/tests/functional/network/v2/test_floating_ip.py openstack/tests/functional/network/v2/test_network.py openstack/tests/functional/network/v2/test_network_ip_availability.py openstack/tests/functional/network/v2/test_port.py openstack/tests/functional/network/v2/test_qos_bandwidth_limit_rule.py openstack/tests/functional/network/v2/test_qos_dscp_marking_rule.py openstack/tests/functional/network/v2/test_qos_minimum_bandwidth_rule.py openstack/tests/functional/network/v2/test_qos_policy.py openstack/tests/functional/network/v2/test_qos_rule_type.py openstack/tests/functional/network/v2/test_quota.py openstack/tests/functional/network/v2/test_rbac_policy.py openstack/tests/functional/network/v2/test_router.py openstack/tests/functional/network/v2/test_router_add_remove_interface.py openstack/tests/functional/network/v2/test_security_group.py openstack/tests/functional/network/v2/test_security_group_rule.py openstack/tests/functional/network/v2/test_segment.py 
openstack/tests/functional/network/v2/test_service_profile.py openstack/tests/functional/network/v2/test_service_provider.py openstack/tests/functional/network/v2/test_subnet.py openstack/tests/functional/network/v2/test_subnet_pool.py openstack/tests/functional/object_store/__init__.py openstack/tests/functional/object_store/v1/__init__.py openstack/tests/functional/object_store/v1/test_account.py openstack/tests/functional/object_store/v1/test_container.py openstack/tests/functional/object_store/v1/test_obj.py openstack/tests/functional/orchestration/__init__.py openstack/tests/functional/orchestration/v1/__init__.py openstack/tests/functional/orchestration/v1/hello_world.yaml openstack/tests/functional/orchestration/v1/test_stack.py openstack/tests/unit/__init__.py openstack/tests/unit/base.py openstack/tests/unit/fakes.py openstack/tests/unit/test__adapter.py openstack/tests/unit/test_connection.py openstack/tests/unit/test_exceptions.py openstack/tests/unit/test_format.py openstack/tests/unit/test_proxy.py openstack/tests/unit/test_proxy_base.py openstack/tests/unit/test_proxy_base2.py openstack/tests/unit/test_resource.py openstack/tests/unit/test_service_filter.py openstack/tests/unit/test_utils.py openstack/tests/unit/baremetal/__init__.py openstack/tests/unit/baremetal/test_baremetal_service.py openstack/tests/unit/baremetal/test_version.py openstack/tests/unit/baremetal/v1/__init__.py openstack/tests/unit/baremetal/v1/test_chassis.py openstack/tests/unit/baremetal/v1/test_driver.py openstack/tests/unit/baremetal/v1/test_node.py openstack/tests/unit/baremetal/v1/test_port.py openstack/tests/unit/baremetal/v1/test_port_group.py openstack/tests/unit/baremetal/v1/test_proxy.py openstack/tests/unit/block_storage/__init__.py openstack/tests/unit/block_storage/test_block_storage_service.py openstack/tests/unit/block_storage/v2/__init__.py openstack/tests/unit/block_storage/v2/test_proxy.py openstack/tests/unit/block_storage/v2/test_snapshot.py 
openstack/tests/unit/block_storage/v2/test_type.py openstack/tests/unit/block_storage/v2/test_volume.py openstack/tests/unit/block_store/v2/test_stats.py openstack/tests/unit/cloud/__init__.py openstack/tests/unit/cloud/test__utils.py openstack/tests/unit/cloud/test_aggregate.py openstack/tests/unit/cloud/test_availability_zones.py openstack/tests/unit/cloud/test_baremetal_node.py openstack/tests/unit/cloud/test_baremetal_ports.py openstack/tests/unit/cloud/test_caching.py openstack/tests/unit/cloud/test_cluster_templates.py openstack/tests/unit/cloud/test_create_server.py openstack/tests/unit/cloud/test_create_volume_snapshot.py openstack/tests/unit/cloud/test_delete_server.py openstack/tests/unit/cloud/test_delete_volume_snapshot.py openstack/tests/unit/cloud/test_domain_params.py openstack/tests/unit/cloud/test_domains.py openstack/tests/unit/cloud/test_endpoints.py openstack/tests/unit/cloud/test_flavors.py openstack/tests/unit/cloud/test_floating_ip_common.py openstack/tests/unit/cloud/test_floating_ip_neutron.py openstack/tests/unit/cloud/test_floating_ip_nova.py openstack/tests/unit/cloud/test_floating_ip_pool.py openstack/tests/unit/cloud/test_groups.py openstack/tests/unit/cloud/test_identity_roles.py openstack/tests/unit/cloud/test_image.py openstack/tests/unit/cloud/test_image_snapshot.py openstack/tests/unit/cloud/test_inventory.py openstack/tests/unit/cloud/test_keypair.py openstack/tests/unit/cloud/test_limits.py openstack/tests/unit/cloud/test_magnum_services.py openstack/tests/unit/cloud/test_meta.py openstack/tests/unit/cloud/test_network.py openstack/tests/unit/cloud/test_normalize.py openstack/tests/unit/cloud/test_object.py openstack/tests/unit/cloud/test_operator.py openstack/tests/unit/cloud/test_operator_noauth.py openstack/tests/unit/cloud/test_port.py openstack/tests/unit/cloud/test_project.py openstack/tests/unit/cloud/test_qos_bandwidth_limit_rule.py openstack/tests/unit/cloud/test_qos_dscp_marking_rule.py 
openstack/tests/unit/cloud/test_qos_minimum_bandwidth_rule.py openstack/tests/unit/cloud/test_qos_policy.py openstack/tests/unit/cloud/test_qos_rule_type.py openstack/tests/unit/cloud/test_quotas.py openstack/tests/unit/cloud/test_rebuild_server.py openstack/tests/unit/cloud/test_recordset.py openstack/tests/unit/cloud/test_role_assignment.py openstack/tests/unit/cloud/test_router.py openstack/tests/unit/cloud/test_security_groups.py openstack/tests/unit/cloud/test_server_console.py openstack/tests/unit/cloud/test_server_delete_metadata.py openstack/tests/unit/cloud/test_server_group.py openstack/tests/unit/cloud/test_server_set_metadata.py openstack/tests/unit/cloud/test_services.py openstack/tests/unit/cloud/test_shade.py openstack/tests/unit/cloud/test_shade_operator.py openstack/tests/unit/cloud/test_stack.py openstack/tests/unit/cloud/test_subnet.py openstack/tests/unit/cloud/test_task_manager.py openstack/tests/unit/cloud/test_update_server.py openstack/tests/unit/cloud/test_usage.py openstack/tests/unit/cloud/test_users.py openstack/tests/unit/cloud/test_volume.py openstack/tests/unit/cloud/test_volume_access.py openstack/tests/unit/cloud/test_volume_backups.py openstack/tests/unit/cloud/test_zone.py openstack/tests/unit/cluster/__init__.py openstack/tests/unit/cluster/test_cluster_service.py openstack/tests/unit/cluster/test_version.py openstack/tests/unit/cluster/v1/__init__.py openstack/tests/unit/cluster/v1/test_action.py openstack/tests/unit/cluster/v1/test_build_info.py openstack/tests/unit/cluster/v1/test_cluster.py openstack/tests/unit/cluster/v1/test_cluster_attr.py openstack/tests/unit/cluster/v1/test_cluster_policy.py openstack/tests/unit/cluster/v1/test_event.py openstack/tests/unit/cluster/v1/test_node.py openstack/tests/unit/cluster/v1/test_policy.py openstack/tests/unit/cluster/v1/test_policy_type.py openstack/tests/unit/cluster/v1/test_profile.py openstack/tests/unit/cluster/v1/test_profile_type.py 
openstack/tests/unit/cluster/v1/test_proxy.py openstack/tests/unit/cluster/v1/test_receiver.py openstack/tests/unit/cluster/v1/test_service.py openstack/tests/unit/compute/__init__.py openstack/tests/unit/compute/test_compute_service.py openstack/tests/unit/compute/test_version.py openstack/tests/unit/compute/v2/__init__.py openstack/tests/unit/compute/v2/test_availability_zone.py openstack/tests/unit/compute/v2/test_extension.py openstack/tests/unit/compute/v2/test_flavor.py openstack/tests/unit/compute/v2/test_hypervisor.py openstack/tests/unit/compute/v2/test_image.py openstack/tests/unit/compute/v2/test_keypair.py openstack/tests/unit/compute/v2/test_limits.py openstack/tests/unit/compute/v2/test_metadata.py openstack/tests/unit/compute/v2/test_proxy.py openstack/tests/unit/compute/v2/test_server.py openstack/tests/unit/compute/v2/test_server_group.py openstack/tests/unit/compute/v2/test_server_interface.py openstack/tests/unit/compute/v2/test_server_ip.py openstack/tests/unit/compute/v2/test_service.py openstack/tests/unit/compute/v2/test_volume_attachment.py openstack/tests/unit/config/__init__.py openstack/tests/unit/config/base.py openstack/tests/unit/config/test_cloud_config.py openstack/tests/unit/config/test_config.py openstack/tests/unit/config/test_environ.py openstack/tests/unit/config/test_from_session.py openstack/tests/unit/config/test_init.py openstack/tests/unit/config/test_json.py openstack/tests/unit/database/__init__.py openstack/tests/unit/database/test_database_service.py openstack/tests/unit/database/v1/__init__.py openstack/tests/unit/database/v1/test_database.py openstack/tests/unit/database/v1/test_flavor.py openstack/tests/unit/database/v1/test_instance.py openstack/tests/unit/database/v1/test_proxy.py openstack/tests/unit/database/v1/test_user.py openstack/tests/unit/fixtures/baremetal.json openstack/tests/unit/fixtures/catalog-v2.json openstack/tests/unit/fixtures/catalog-v3-suburl.json openstack/tests/unit/fixtures/catalog-v3.json 
openstack/tests/unit/fixtures/discovery.json openstack/tests/unit/fixtures/dns.json openstack/tests/unit/fixtures/image-version-broken.json openstack/tests/unit/fixtures/image-version-suburl.json openstack/tests/unit/fixtures/image-version-v1.json openstack/tests/unit/fixtures/image-version-v2.json openstack/tests/unit/fixtures/image-version.json openstack/tests/unit/fixtures/clouds/clouds.yaml openstack/tests/unit/fixtures/clouds/clouds_cache.yaml openstack/tests/unit/identity/__init__.py openstack/tests/unit/identity/test_identity_service.py openstack/tests/unit/identity/test_version.py openstack/tests/unit/identity/v2/__init__.py openstack/tests/unit/identity/v2/test_extension.py openstack/tests/unit/identity/v2/test_proxy.py openstack/tests/unit/identity/v2/test_role.py openstack/tests/unit/identity/v2/test_tenant.py openstack/tests/unit/identity/v2/test_user.py openstack/tests/unit/identity/v3/__init__.py openstack/tests/unit/identity/v3/test_credential.py openstack/tests/unit/identity/v3/test_domain.py openstack/tests/unit/identity/v3/test_endpoint.py openstack/tests/unit/identity/v3/test_group.py openstack/tests/unit/identity/v3/test_policy.py openstack/tests/unit/identity/v3/test_project.py openstack/tests/unit/identity/v3/test_proxy.py openstack/tests/unit/identity/v3/test_region.py openstack/tests/unit/identity/v3/test_role.py openstack/tests/unit/identity/v3/test_role_assignment.py openstack/tests/unit/identity/v3/test_role_domain_group_assignment.py openstack/tests/unit/identity/v3/test_role_domain_user_assignment.py openstack/tests/unit/identity/v3/test_role_project_group_assignment.py openstack/tests/unit/identity/v3/test_role_project_user_assignment.py openstack/tests/unit/identity/v3/test_service.py openstack/tests/unit/identity/v3/test_trust.py openstack/tests/unit/identity/v3/test_user.py openstack/tests/unit/image/__init__.py openstack/tests/unit/image/test_image_service.py openstack/tests/unit/image/v1/__init__.py 
openstack/tests/unit/image/v1/test_image.py openstack/tests/unit/image/v1/test_proxy.py openstack/tests/unit/image/v2/__init__.py openstack/tests/unit/image/v2/test_image.py openstack/tests/unit/image/v2/test_member.py openstack/tests/unit/image/v2/test_proxy.py openstack/tests/unit/key_manager/__init__.py openstack/tests/unit/key_manager/test_key_management_service.py openstack/tests/unit/key_manager/v1/__init__.py openstack/tests/unit/key_manager/v1/test_container.py openstack/tests/unit/key_manager/v1/test_order.py openstack/tests/unit/key_manager/v1/test_proxy.py openstack/tests/unit/key_manager/v1/test_secret.py openstack/tests/unit/load_balancer/__init__.py openstack/tests/unit/load_balancer/test_health_monitor.py openstack/tests/unit/load_balancer/test_l7policy.py openstack/tests/unit/load_balancer/test_l7rule.py openstack/tests/unit/load_balancer/test_listener.py openstack/tests/unit/load_balancer/test_load_balancer.py openstack/tests/unit/load_balancer/test_load_balancer_service.py openstack/tests/unit/load_balancer/test_member.py openstack/tests/unit/load_balancer/test_pool.py openstack/tests/unit/load_balancer/test_proxy.py openstack/tests/unit/load_balancer/test_version.py openstack/tests/unit/message/__init__.py openstack/tests/unit/message/test_message_service.py openstack/tests/unit/message/test_version.py openstack/tests/unit/message/v2/__init__.py openstack/tests/unit/message/v2/test_claim.py openstack/tests/unit/message/v2/test_message.py openstack/tests/unit/message/v2/test_proxy.py openstack/tests/unit/message/v2/test_queue.py openstack/tests/unit/message/v2/test_subscription.py openstack/tests/unit/network/__init__.py openstack/tests/unit/network/test_network_service.py openstack/tests/unit/network/test_version.py openstack/tests/unit/network/v2/__init__.py openstack/tests/unit/network/v2/test_address_scope.py openstack/tests/unit/network/v2/test_agent.py openstack/tests/unit/network/v2/test_auto_allocated_topology.py 
openstack/tests/unit/network/v2/test_availability_zone.py openstack/tests/unit/network/v2/test_extension.py openstack/tests/unit/network/v2/test_flavor.py openstack/tests/unit/network/v2/test_floating_ip.py openstack/tests/unit/network/v2/test_health_monitor.py openstack/tests/unit/network/v2/test_listener.py openstack/tests/unit/network/v2/test_load_balancer.py openstack/tests/unit/network/v2/test_metering_label.py openstack/tests/unit/network/v2/test_metering_label_rule.py openstack/tests/unit/network/v2/test_network.py openstack/tests/unit/network/v2/test_network_ip_availability.py openstack/tests/unit/network/v2/test_pool.py openstack/tests/unit/network/v2/test_pool_member.py openstack/tests/unit/network/v2/test_port.py openstack/tests/unit/network/v2/test_proxy.py openstack/tests/unit/network/v2/test_qos_bandwidth_limit_rule.py openstack/tests/unit/network/v2/test_qos_dscp_marking_rule.py openstack/tests/unit/network/v2/test_qos_minimum_bandwidth_rule.py openstack/tests/unit/network/v2/test_qos_policy.py openstack/tests/unit/network/v2/test_qos_rule_type.py openstack/tests/unit/network/v2/test_quota.py openstack/tests/unit/network/v2/test_rbac_policy.py openstack/tests/unit/network/v2/test_router.py openstack/tests/unit/network/v2/test_security_group.py openstack/tests/unit/network/v2/test_security_group_rule.py openstack/tests/unit/network/v2/test_segment.py openstack/tests/unit/network/v2/test_service_profile.py openstack/tests/unit/network/v2/test_service_provider.py openstack/tests/unit/network/v2/test_subnet.py openstack/tests/unit/network/v2/test_subnet_pool.py openstack/tests/unit/network/v2/test_tag.py openstack/tests/unit/network/v2/test_vpn_service.py openstack/tests/unit/object_store/__init__.py openstack/tests/unit/object_store/test_object_store_service.py openstack/tests/unit/object_store/v1/__init__.py openstack/tests/unit/object_store/v1/test_account.py openstack/tests/unit/object_store/v1/test_container.py 
openstack/tests/unit/object_store/v1/test_obj.py openstack/tests/unit/object_store/v1/test_proxy.py openstack/tests/unit/orchestration/__init__.py openstack/tests/unit/orchestration/test_orchestration_service.py openstack/tests/unit/orchestration/test_version.py openstack/tests/unit/orchestration/v1/__init__.py openstack/tests/unit/orchestration/v1/test_proxy.py openstack/tests/unit/orchestration/v1/test_resource.py openstack/tests/unit/orchestration/v1/test_software_config.py openstack/tests/unit/orchestration/v1/test_software_deployment.py openstack/tests/unit/orchestration/v1/test_stack.py openstack/tests/unit/orchestration/v1/test_stack_environment.py openstack/tests/unit/orchestration/v1/test_stack_files.py openstack/tests/unit/orchestration/v1/test_stack_template.py openstack/tests/unit/orchestration/v1/test_template.py openstack/tests/unit/workflow/__init__.py openstack/tests/unit/workflow/test_execution.py openstack/tests/unit/workflow/test_proxy.py openstack/tests/unit/workflow/test_version.py openstack/tests/unit/workflow/test_workflow.py openstack/tests/unit/workflow/test_workflow_service.py openstack/workflow/__init__.py openstack/workflow/version.py openstack/workflow/workflow_service.py openstack/workflow/v2/__init__.py openstack/workflow/v2/_proxy.py openstack/workflow/v2/execution.py openstack/workflow/v2/workflow.py openstacksdk.egg-info/PKG-INFO openstacksdk.egg-info/SOURCES.txt openstacksdk.egg-info/dependency_links.txt openstacksdk.egg-info/entry_points.txt openstacksdk.egg-info/not-zip-safe openstacksdk.egg-info/pbr.json openstacksdk.egg-info/requires.txt openstacksdk.egg-info/top_level.txt playbooks/devstack/legacy-git.yaml releasenotes/notes/add-current-user-id-49b6463e6bcc3b31.yaml releasenotes/notes/add-jmespath-support-f47b7a503dbbfda1.yaml releasenotes/notes/add-list_flavor_access-e038253e953e6586.yaml releasenotes/notes/add-server-console-078ed2696e5b04d9.yaml releasenotes/notes/add-service-0bcc16eb026eade3.yaml 
releasenotes/notes/add-show-all-images-flag-352748b6c3d99f3f.yaml releasenotes/notes/add_description_create_user-0ddc9a0ef4da840d.yaml releasenotes/notes/add_designate_recordsets_support-69af0a6b317073e7.yaml releasenotes/notes/add_designate_zones_support-35fa9b8b09995b43.yaml releasenotes/notes/add_heat_tag_support-135aa43ba1dce3bb.yaml releasenotes/notes/add_host_aggregate_support-471623faf45ec3c3.yaml releasenotes/notes/add_magnum_baymodel_support-e35e5aab0b14ff75.yaml releasenotes/notes/add_magnum_services_support-3d95f9dcc60b5573.yaml releasenotes/notes/add_server_group_support-dfa472e3dae7d34d.yaml releasenotes/notes/add_update_server-8761059d6de7e68b.yaml releasenotes/notes/add_update_service-28e590a7a7524053.yaml releasenotes/notes/alternate-auth-context-3939f1492a0e1355.yaml releasenotes/notes/always-detail-cluster-templates-3eb4b5744ba327ac.yaml releasenotes/notes/boot-on-server-group-a80e51850db24b3d.yaml releasenotes/notes/bug-2001080-de52ead3c5466792.yaml releasenotes/notes/cache-in-use-volumes-c7fa8bb378106fe3.yaml releasenotes/notes/catch-up-release-notes-e385fad34e9f3d6e.yaml releasenotes/notes/change-attach-vol-return-value-4834a1f78392abb1.yaml releasenotes/notes/cinder_volume_backups_support-6f7ceab440853833.yaml releasenotes/notes/cinderv2-norm-fix-037189c60b43089f.yaml releasenotes/notes/cleanup-objects-f99aeecf22ac13dd.yaml releasenotes/notes/cloud-profile-status-e0d29b5e2f10e95c.yaml releasenotes/notes/compute-quotas-b07a0f24dfac8444.yaml releasenotes/notes/compute-usage-defaults-5f5b2936f17ff400.yaml releasenotes/notes/config-flavor-specs-ca712e17971482b6.yaml releasenotes/notes/create-stack-fix-12dbb59a48ac7442.yaml releasenotes/notes/create_server_network_fix-c4a56b31d2850a4b.yaml releasenotes/notes/create_service_norm-319a97433d68fa6a.yaml releasenotes/notes/data-model-cf50d86982646370.yaml releasenotes/notes/default-cloud-7ee0bcb9e5dd24b9.yaml releasenotes/notes/delete-autocreated-1839187b0aa35022.yaml 
releasenotes/notes/delete-image-objects-9d4b4e0fff36a23f.yaml releasenotes/notes/delete-obj-return-a3ecf0415b7a2989.yaml releasenotes/notes/delete_project-399f9b3107014dde.yaml releasenotes/notes/deprecated-profile-762afdef0e8fc9e8.yaml releasenotes/notes/domain_operations_name_or_id-baba4cac5b67234d.yaml releasenotes/notes/dual-stack-networks-8a81941c97d28deb.yaml releasenotes/notes/endpoint-from-catalog-bad36cb0409a4e6a.yaml releasenotes/notes/false-not-attribute-error-49484d0fdc61f75d.yaml releasenotes/notes/feature-server-metadata-50caf18cec532160.yaml releasenotes/notes/fip_timeout-035c4bb3ff92fa1f.yaml releasenotes/notes/fix-compat-with-old-keystoneauth-66e11ee9d008b962.yaml releasenotes/notes/fix-config-drive-a148b7589f7e1022.yaml releasenotes/notes/fix-delete-ips-1d4eebf7bc4d4733.yaml releasenotes/notes/fix-list-networks-a592725df64c306e.yaml releasenotes/notes/fix-missing-futures-a0617a1c1ce6e659.yaml releasenotes/notes/fix-properties-key-conflict-2161ca1faaad6731.yaml releasenotes/notes/fix-supplemental-fips-c9cd58aac12eb30e.yaml releasenotes/notes/fix-update-domain-af47b066ac52eb7f.yaml releasenotes/notes/fixed-magnum-type-7406f0a60525f858.yaml releasenotes/notes/fixed-url-parameters-89c57c3dd64f1573.yaml releasenotes/notes/flavor_fix-a53c6b326dc34a2c.yaml releasenotes/notes/fnmatch-name-or-id-f658fe26f84086c8.yaml releasenotes/notes/get-limits-c383c512f8e01873.yaml releasenotes/notes/get-usage-72d249ff790d1b8f.yaml releasenotes/notes/get_object_api-968483adb016bce1.yaml releasenotes/notes/glance-image-pagination-0b4dfef22b25852b.yaml releasenotes/notes/grant-revoke-assignments-231d3f9596a1ae75.yaml releasenotes/notes/image-flavor-by-name-54865b00ebbf1004.yaml releasenotes/notes/image-from-volume-9acf7379f5995b5b.yaml releasenotes/notes/infer-secgroup-source-58d840aaf1a1f485.yaml releasenotes/notes/ironic-microversion-ba5b0f36f11196a6.yaml releasenotes/notes/less-file-hashing-d2497337da5acbef.yaml releasenotes/notes/list-az-names-a38c277d1192471b.yaml 
releasenotes/notes/list-role-assignments-keystone-v2-b127b12b4860f50c.yaml releasenotes/notes/list-servers-all-projects-349e6dc665ba2e8d.yaml releasenotes/notes/load-yaml-3177efca78e5c67a.yaml releasenotes/notes/log-request-ids-37507cb6eed9a7da.yaml releasenotes/notes/magic-fixes-dca4ae4dac2441a8.yaml releasenotes/notes/make-rest-client-dd3d365632a26fa0.yaml releasenotes/notes/make-rest-client-version-discovery-84125700f159491a.yaml releasenotes/notes/make_object_metadata_easier.yaml-e9751723e002e06f.yaml releasenotes/notes/merge-shade-os-client-config-29878734ad643e33.yaml releasenotes/notes/meta-passthrough-d695bff4f9366b65.yaml releasenotes/notes/min-max-legacy-version-301242466ddefa93.yaml releasenotes/notes/multiple-updates-b48cc2f6db2e526d.yaml releasenotes/notes/nat-source-field-7c7db2a724616d59.yaml releasenotes/notes/net_provider-dd64b697476b7094.yaml releasenotes/notes/network-list-e6e9dafdd8446263.yaml releasenotes/notes/network-quotas-b98cce9ffeffdbf4.yaml releasenotes/notes/neutron_availability_zone_extension-675c2460ebb50a09.yaml releasenotes/notes/new-floating-attributes-213cdf5681d337e1.yaml releasenotes/notes/no-more-troveclient-0a4739c21432ac63.yaml releasenotes/notes/norm_role_assignments-a13f41768e62d40c.yaml releasenotes/notes/normalize-images-1331bea7bfffa36a.yaml releasenotes/notes/nova-flavor-to-rest-0a5757e35714a690.yaml releasenotes/notes/nova-old-microversion-5e4b8e239ba44096.yaml releasenotes/notes/option-precedence-1fecab21fdfb2c33.yaml releasenotes/notes/remove-magnumclient-875b3e513f98f57c.yaml releasenotes/notes/remove-metric-fe5ddfd52b43c852.yaml releasenotes/notes/remove-novaclient-3f8d4db20d5f9582.yaml releasenotes/notes/removed-glanceclient-105c7fba9481b9be.yaml releasenotes/notes/removed-meter-6f6651b6e452e000.yaml releasenotes/notes/removed-profile-437f3038025b0fb3.yaml releasenotes/notes/removed-swiftclient-aff22bfaeee5f59f.yaml releasenotes/notes/renamed-bare-metal-b1cdbc52af14e042.yaml 
releasenotes/notes/renamed-block-store-bc5e0a7315bfeb67.yaml releasenotes/notes/renamed-cluster-743da6d321fffcba.yaml releasenotes/notes/renamed-telemetry-c08ae3e72afca24f.yaml releasenotes/notes/resource2-migration-835590b300bef621.yaml releasenotes/notes/router_ext_gw-b86582317bca8b39.yaml releasenotes/notes/sdk-helper-41f8d815cfbcfb00.yaml releasenotes/notes/server-create-error-id-66c698c7e633fb8b.yaml releasenotes/notes/server-security-groups-840ab28c04f359de.yaml releasenotes/notes/service_enabled_flag-c917b305d3f2e8fd.yaml releasenotes/notes/session-client-b581a6e5d18c8f04.yaml releasenotes/notes/set-bootable-volume-454a7a41e7e77d08.yaml releasenotes/notes/shade-helper-568f8cb372eef6d9.yaml releasenotes/notes/stack-update-5886e91fd6e423bf.yaml releasenotes/notes/started-using-reno-242e2b0cd27f9480.yaml releasenotes/notes/stream-to-file-91f48d6dcea399c6.yaml releasenotes/notes/strict-mode-d493abc0c3e87945.yaml releasenotes/notes/swift-upload-lock-d18f3d42b3a0719a.yaml releasenotes/notes/update_endpoint-f87c1f42d0c0d1ef.yaml releasenotes/notes/use-interface-ip-c5cb3e7c91150096.yaml releasenotes/notes/vendor-add-betacloud-03872c3485104853.yaml releasenotes/notes/vendor-updates-f11184ba56bb27cf.yaml releasenotes/notes/version-discovery-a501c4e9e9869f77.yaml releasenotes/notes/volume-quotas-5b674ee8c1f71eb6.yaml releasenotes/notes/volume-types-a07a14ae668e7dd2.yaml releasenotes/notes/wait-on-image-snapshot-27cd2eacab2fabd8.yaml releasenotes/notes/wait_for_server-8dc8446b7c673d36.yaml releasenotes/notes/workaround-transitive-deps-1e7a214f3256b77e.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder tools/keystone_version.py tools/nova_version.pyopenstacksdk-0.11.3/openstacksdk.egg-info/top_level.txt0000664000175100017510000000001213236151501023214 0ustar 
zuulzuul00000000000000openstack openstacksdk-0.11.3/openstacksdk.egg-info/pbr.json0000664000175100017510000000005613236151501022150 0ustar zuulzuul00000000000000{"git_version": "8cdf409", "is_release": true}openstacksdk-0.11.3/openstacksdk.egg-info/dependency_links.txt0000664000175100017510000000000113236151501024537 0ustar zuulzuul00000000000000 openstacksdk-0.11.3/openstacksdk.egg-info/not-zip-safe0000664000175100017510000000000113236151470022724 0ustar zuulzuul00000000000000 openstacksdk-0.11.3/test-requirements.txt0000666000175100017510000000107513236151340020555 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. coverage!=4.4,>=4.0 # Apache-2.0 extras>=1.0.0 # MIT fixtures>=3.0.0 # Apache-2.0/BSD jsonschema<3.0.0,>=2.6.0 # MIT mock>=2.0.0 # BSD python-subunit>=1.0.0 # Apache-2.0/BSD oslotest>=3.2.0 # Apache-2.0 requests-mock>=1.1.0 # Apache-2.0 stestr>=1.0.0 # Apache-2.0 testrepository>=0.0.18 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT openstacksdk-0.11.3/.mailmap0000666000175100017510000000033613236151340015734 0ustar zuulzuul00000000000000# Format is: # # openstacksdk-0.11.3/tox.ini0000666000175100017510000000605613236151364015641 0ustar zuulzuul00000000000000[tox] minversion = 1.6 envlist = py35,py27,pypy,pep8 skipsdist = True [testenv] usedevelop = True install_command = pip install {opts} {packages} setenv = VIRTUAL_ENV={envdir} LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=C deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens} -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt commands = stestr run {posargs} stestr slowest [testenv:examples] passenv = OS_* OPENSTACKSDK_* commands = stestr --test-path ./openstack/tests/examples run {posargs} 
stestr slowest [testenv:functional] basepython = {env:OPENSTACKSDK_TOX_PYTHON:python2} passenv = OS_* OPENSTACKSDK_* commands = stestr --test-path ./openstack/tests/functional run --serial {posargs} stestr slowest [testenv:pep8] usedevelop = False skip_install = True deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens} doc8 hacking pygments readme commands = doc8 doc/source python setup.py check -r -s flake8 [testenv:venv] commands = {posargs} [testenv:debug] whitelist_externals = find commands = find . -type f -name "*.pyc" -delete oslo_debug_helper {posargs} [testenv:cover] setenv = {[testenv]setenv} PYTHON=coverage run --source shade --parallel-mode commands = stestr run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml [testenv:ansible] # Need to pass some env vars for the Ansible playbooks basepython = {env:OPENSTACKSDK_TOX_PYTHON:python2} passenv = HOME USER commands = {toxinidir}/extras/run-ansible-tests.sh -e {envdir} {posargs} [testenv:docs] deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens} -r{toxinidir}/requirements.txt -r{toxinidir}/doc/requirements.txt commands = sphinx-build -W -d doc/build/doctrees -b html doc/source/ doc/build/html [testenv:releasenotes] usedevelop = False skip_install = True commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [flake8] # The following are ignored on purpose. It's not super worth it to fix them. # However, if you feel strongly about it, patches will be accepted to fix them # if they fix ALL of the occurrences of one and only one of them. # H103 Is about the Apache license. It's strangely strict about the use of # single vs double quotes in the license text. If someone decides to fix # this, please be sure to preserve all copyright lines. 
# H306 Is about alphabetical imports - there's a lot to fix. # H4 Are about docstrings and there's just a huge pile of pre-existing issues. # D* Came from sdk, unknown why they're skipped. ignore = H103,H306,H4,D100,D101,D102,D103,D104,D105,D200,D202,D204,D205,D211,D301,D400,D401 show-source = True exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build [doc8] extensions = .rst, .yaml openstacksdk-0.11.3/CONTRIBUTING.rst0000666000175100017510000000244013236151340016752 0ustar zuulzuul00000000000000.. _contributing: =================================== Contributing to python-openstacksdk =================================== If you're interested in contributing to the python-openstacksdk project, the following will help get you started. Contributor License Agreement ----------------------------- .. index:: single: license; agreement In order to contribute to the python-openstacksdk project, you need to have signed OpenStack's contributor's agreement. Please read `DeveloperWorkflow`_ before sending your first patch for review. Pull requests submitted through GitHub will be ignored. .. seealso:: * http://wiki.openstack.org/HowToContribute * http://wiki.openstack.org/CLA .. 
_DeveloperWorkflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow Project Hosting Details ------------------------- Project Documentation http://docs.openstack.org/sdks/python/openstacksdk/ Bug tracker https://bugs.launchpad.net/python-openstacksdk Mailing list (prefix subjects with ``[sdk]`` for faster responses) http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Code Hosting https://git.openstack.org/cgit/openstack/python-openstacksdk Code Review https://review.openstack.org/#/q/status:open+project:openstack/python-openstacksdk,n,z openstacksdk-0.11.3/examples/0000775000175100017510000000000013236151501016124 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/network/0000775000175100017510000000000013236151501017615 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/network/create.py0000666000175100017510000000213113236151340021432 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Create resources with the Network service. 
For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def create_network(conn): print("Create Network:") example_network = conn.network.create_network( name='openstacksdk-example-project-network') print(example_network) example_subnet = conn.network.create_subnet( name='openstacksdk-example-project-subnet', network_id=example_network.id, ip_version='4', cidr='10.0.2.0/24', gateway_ip='10.0.2.1') print(example_subnet) openstacksdk-0.11.3/examples/network/find.py0000666000175100017510000000153513236151340021116 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import examples.connect """ Find a resource from the Network service. For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def find_network(conn): print("Find Network:") network = conn.network.find_network(examples.connect.NETWORK_NAME) print(network) return network openstacksdk-0.11.3/examples/network/security_group_rules.py0000666000175100017510000000322113236151340024465 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Create resources with the Network service. For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def open_port(conn): print("Open a port:") example_sec_group = conn.network.create_security_group( name='openstacksdk-example-security-group') print(example_sec_group) example_rule = conn.network.create_security_group_rule( security_group_id=example_sec_group.id, direction='ingress', remote_ip_prefix='0.0.0.0/0', protocol='tcp', port_range_max='443', port_range_min='443', ethertype='IPv4') print(example_rule) def allow_ping(conn): print("Allow pings:") example_sec_group = conn.network.create_security_group( name='openstacksdk-example-security-group2') print(example_sec_group) example_rule = conn.network.create_security_group_rule( security_group_id=example_sec_group.id, direction='ingress', remote_ip_prefix='0.0.0.0/0', protocol='icmp', port_range_max=None, port_range_min=None, ethertype='IPv4') print(example_rule) openstacksdk-0.11.3/examples/network/__init__.py0000666000175100017510000000000013236151340021717 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/network/list.py0000666000175100017510000000257413236151340021155 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ List resources from the Network service. 
For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def list_networks(conn): print("List Networks:") for network in conn.network.networks(): print(network) def list_subnets(conn): print("List Subnets:") for subnet in conn.network.subnets(): print(subnet) def list_ports(conn): print("List Ports:") for port in conn.network.ports(): print(port) def list_security_groups(conn): print("List Security Groups:") for security_group in conn.network.security_groups(): print(security_group) def list_routers(conn): print("List Routers:") for router in conn.network.routers(): print(router) def list_network_agents(conn): print("List Network Agents:") for agent in conn.network.agents(): print(agent) openstacksdk-0.11.3/examples/network/delete.py0000666000175100017510000000200213236151340021432 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Delete resources with the Network service. 
For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def delete_network(conn): print("Delete Network:") example_network = conn.network.find_network( 'openstacksdk-example-project-network') for example_subnet in example_network.subnet_ids: conn.network.delete_subnet(example_subnet, ignore_missing=False) conn.network.delete_network(example_network, ignore_missing=False) openstacksdk-0.11.3/examples/connect.py0000666000175100017510000000552413236151340020140 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Connect to an OpenStack cloud. For a full guide see TODO(etoews):link to docs on developer.openstack.org """ import argparse import os import openstack from openstack import config as occ from openstack import utils import sys utils.enable_logging(True, stream=sys.stdout) #: Defines the OpenStack Client Config (OCC) cloud key in your OCC config #: file, typically in $HOME/.config/openstack/clouds.yaml. That configuration #: will determine where the examples will be run and what resource defaults #: will be used to run the examples. TEST_CLOUD = os.getenv('OS_TEST_CLOUD', 'devstack-admin') class Opts(object): def __init__(self, cloud_name='devstack-admin', debug=False): self.cloud = cloud_name self.debug = debug # Use identity v3 API for examples. 
self.identity_api_version = '3' def _get_resource_value(resource_key, default): try: return cloud.config['example'][resource_key] except KeyError: return default config = occ.OpenStackConfig() cloud = openstack.connect(cloud=TEST_CLOUD) SERVER_NAME = 'openstacksdk-example' IMAGE_NAME = _get_resource_value('image_name', 'cirros-0.3.5-x86_64-disk') FLAVOR_NAME = _get_resource_value('flavor_name', 'm1.small') NETWORK_NAME = _get_resource_value('network_name', 'private') KEYPAIR_NAME = _get_resource_value('keypair_name', 'openstacksdk-example') SSH_DIR = _get_resource_value( 'ssh_dir', '{home}/.ssh'.format(home=os.path.expanduser("~"))) PRIVATE_KEYPAIR_FILE = _get_resource_value( 'private_keypair_file', '{ssh_dir}/id_rsa.{key}'.format( ssh_dir=SSH_DIR, key=KEYPAIR_NAME)) EXAMPLE_IMAGE_NAME = 'openstacksdk-example-public-image' def create_connection_from_config(): return openstack.connect(cloud=TEST_CLOUD) def create_connection_from_args(): parser = argparse.ArgumentParser() config = occ.OpenStackConfig() config.register_argparse_arguments(parser, sys.argv[1:]) args = parser.parse_args() return openstack.connect(config=config.get_one(argparse=args)) def create_connection(auth_url, region, project_name, username, password): return openstack.connect( auth_url=auth_url, project_name=project_name, username=username, password=password, region_name=region, app_name='examples', app_version='1.0', ) openstacksdk-0.11.3/examples/cloud/0000775000175100017510000000000013236151501017232 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/cloud/strict-mode.py0000666000175100017510000000143613236151364022053 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import openstack openstack.enable_logging() cloud = openstack.openstack_cloud( cloud='fuga', region_name='cystack', strict=True) image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) openstacksdk-0.11.3/examples/cloud/debug-logging.py0000666000175100017510000000137013236151364022330 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') openstacksdk-0.11.3/examples/cloud/service-conditionals.py0000666000175100017510000000137013236151364023742 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='kiss', region_name='region1') print(cloud.has_service('network')) print(cloud.has_service('container-orchestration')) openstacksdk-0.11.3/examples/cloud/user-agent.py0000666000175100017510000000133213236151364021666 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import openstack openstack.enable_logging(http_debug=True) cloud = openstack.openstack_cloud( cloud='datacentred', app_name='AmazingApp', app_version='1.0') cloud.list_networks() openstacksdk-0.11.3/examples/cloud/upload-object.py0000666000175100017510000000160113236151364022343 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d', segment_size=1000000) cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') openstacksdk-0.11.3/examples/cloud/server-information.py0000666000175100017510000000231413236151364023446 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='my-citycloud', region_name='Buf1') server = None try: server = cloud.create_server( 'my-server', image='Ubuntu 16.04 Xenial Xerus', flavor=dict(id='0dab10b5-42a2-438e-be7b-505741a7ffcc'), wait=True, auto_ip=True) print("\n\nFull Server\n\n") cloud.pprint(server) print("\n\nTurn Detailed Off\n\n") cloud.pprint(cloud.get_server('my-server', detailed=False)) print("\n\nBare Server\n\n") cloud.pprint(cloud.get_server('my-server', bare=True)) finally: # Delete it - this is a demo if server: cloud.delete_server(server, wait=True, delete_ips=True) openstacksdk-0.11.3/examples/cloud/create-server-dict.py0000666000175100017510000000270513236151364023305 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
server = cloud.create_server( 'my-server', image=image, flavor=dict(id=flavor_id), wait=True, auto_ip=True) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) openstacksdk-0.11.3/examples/cloud/create-server-name-or-id.py0000666000175100017510000000301213236151364024306 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name, image, flavor in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', 'v1-standard-4'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '4C-4GB-100GB'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = openstack.openstack_cloud( cloud=cloud_name, region_name=region_name) cloud.delete_server('my-server', wait=True, delete_ips=True) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. 
server = cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) print(server.name) print(server['name']) cloud.pprint(server) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) openstacksdk-0.11.3/examples/cloud/cleanup-servers.py0000666000175100017510000000201213236151364022726 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = openstack.openstack_cloud( cloud=cloud_name, region_name=region_name) for server in cloud.search_servers('my-server'): cloud.delete_server(server, wait=True, delete_ips=True) openstacksdk-0.11.3/examples/cloud/upload-large-object.py0000666000175100017510000000160113236151364023433 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d', segment_size=1000000) cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') openstacksdk-0.11.3/examples/cloud/http-debug-logging.py0000666000175100017510000000137513236151364023312 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import cloud as openstack openstack.enable_logging(http_debug=True) cloud = openstack.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') openstacksdk-0.11.3/examples/cloud/normalization.py0000666000175100017510000000144413236151364022506 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack import cloud as openstack openstack.enable_logging() cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack') image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) openstacksdk-0.11.3/examples/cloud/munch-dict-object.py0000666000175100017510000000140313236151364023112 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1') image = cloud.get_image('Ubuntu 16.10') print(image.name) print(image['name']) openstacksdk-0.11.3/examples/cloud/find-an-image.py0000666000175100017510000000142013236151364022206 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from openstack import cloud as openstack openstack.enable_logging() cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack') cloud.pprint([ image for image in cloud.list_images() if 'ubuntu' in image.name.lower()]) openstacksdk-0.11.3/examples/cloud/service-conditional-overrides.py0000666000175100017510000000127713236151364025565 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='rax', region_name='DFW') print(cloud.has_service('network')) openstacksdk-0.11.3/examples/compute/0000775000175100017510000000000013236151501017600 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/compute/create.py0000666000175100017510000000406213236151340021422 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import errno import os from examples.connect import FLAVOR_NAME from examples.connect import IMAGE_NAME from examples.connect import KEYPAIR_NAME from examples.connect import NETWORK_NAME from examples.connect import PRIVATE_KEYPAIR_FILE from examples.connect import SERVER_NAME from examples.connect import SSH_DIR """ Create resources with the Compute service. For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def create_keypair(conn): keypair = conn.compute.find_keypair(KEYPAIR_NAME) if not keypair: print("Create Key Pair:") keypair = conn.compute.create_keypair(name=KEYPAIR_NAME) print(keypair) try: os.mkdir(SSH_DIR) except OSError as e: if e.errno != errno.EEXIST: raise e with open(PRIVATE_KEYPAIR_FILE, 'w') as f: f.write("%s" % keypair.private_key) os.chmod(PRIVATE_KEYPAIR_FILE, 0o400) return keypair def create_server(conn): print("Create Server:") image = conn.compute.find_image(IMAGE_NAME) flavor = conn.compute.find_flavor(FLAVOR_NAME) network = conn.network.find_network(NETWORK_NAME) keypair = create_keypair(conn) server = conn.compute.create_server( name=SERVER_NAME, image_id=image.id, flavor_id=flavor.id, networks=[{"uuid": network.id}], key_name=keypair.name) server = conn.compute.wait_for_server(server) print("ssh -i {key} root@{ip}".format( key=PRIVATE_KEYPAIR_FILE, ip=server.access_ipv4)) openstacksdk-0.11.3/examples/compute/find.py0000666000175100017510000000222213236151340021073 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
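Note how create_keypair above only treats the mkdir failure as fatal when the errno is not EEXIST, which makes the directory creation idempotent, and then restricts the key file to owner-read. The same pattern as a self-contained sketch, using a temporary directory in place of SSH_DIR:

```python
import errno
import os
import stat
import tempfile


def ensure_dir(path):
    """Create a directory, ignoring only the 'already exists' error."""
    try:
        os.mkdir(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise


base = tempfile.mkdtemp()
ssh_dir = os.path.join(base, '.ssh')
ensure_dir(ssh_dir)
ensure_dir(ssh_dir)  # second call is a no-op instead of an error

# Write the key and drop permissions to owner-read, as the example does.
key_file = os.path.join(ssh_dir, 'demo_key')
with open(key_file, 'w') as f:
    f.write('-----BEGIN PRIVATE KEY-----\n')
os.chmod(key_file, 0o400)
mode = stat.S_IMODE(os.stat(key_file).st_mode)
print(oct(mode))
```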
See the # License for the specific language governing permissions and limitations # under the License. import examples.connect """ Find a resource from the Compute service. For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def find_image(conn): print("Find Image:") image = conn.compute.find_image(examples.connect.IMAGE_NAME) print(image) return image def find_flavor(conn): print("Find Flavor:") flavor = conn.compute.find_flavor(examples.connect.FLAVOR_NAME) print(flavor) return flavor def find_keypair(conn): print("Find Keypair:") keypair = conn.compute.find_keypair(examples.connect.KEYPAIR_NAME) print(keypair) return keypair openstacksdk-0.11.3/examples/compute/__init__.py0000666000175100017510000000000013236151340021702 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/compute/list.py0000666000175100017510000000216613236151340021135 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ List resources from the Compute service. 
For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def list_servers(conn): print("List Servers:") for server in conn.compute.servers(): print(server) def list_images(conn): print("List Images:") for image in conn.compute.images(): print(image) def list_flavors(conn): print("List Flavors:") for flavor in conn.compute.flavors(): print(flavor) def list_keypairs(conn): print("List Keypairs:") for keypair in conn.compute.keypairs(): print(keypair) openstacksdk-0.11.3/examples/compute/delete.py0000666000175100017510000000242613236151340021423 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import errno import os from examples.connect import KEYPAIR_NAME from examples.connect import PRIVATE_KEYPAIR_FILE from examples.connect import SERVER_NAME """ Delete resources with the Compute service. 
For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def delete_keypair(conn): print("Delete Key Pair:") keypair = conn.compute.find_keypair(KEYPAIR_NAME) try: os.remove(PRIVATE_KEYPAIR_FILE) except OSError as e: if e.errno != errno.ENOENT: raise e print(keypair) conn.compute.delete_keypair(keypair) def delete_server(conn): print("Delete Server:") server = conn.compute.find_server(SERVER_NAME) print(server) conn.compute.delete_server(server) openstacksdk-0.11.3/examples/clustering/0000775000175100017510000000000013236151501020303 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/clustering/policy_type.py0000666000175100017510000000174613236151340023230 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing policy types in the Cluster service. For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ def list_policy_types(conn): print("List Policy Types:") for pt in conn.clustering.policy_types(): print(pt.to_dict()) def get_policy_type(conn): print("Get Policy Type:") pt = conn.clustering.get_policy_type('senlin.policy.deletion-1.0') print(pt.to_dict()) openstacksdk-0.11.3/examples/clustering/action.py0000666000175100017510000000212513236151340022135 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
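delete_keypair above mirrors the create-side pattern: os.remove is allowed to fail with ENOENT, so deleting an already-deleted key file is not an error. The same idea as a standalone helper, demonstrated on a temporary file:

```python
import errno
import os
import tempfile


def remove_if_exists(path):
    """Delete a file, treating 'No such file or directory' as success."""
    try:
        os.remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise


fd, path = tempfile.mkstemp()
os.close(fd)
remove_if_exists(path)  # removes the file
remove_if_exists(path)  # already gone: ENOENT is swallowed
print(os.path.exists(path))
```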
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing policies in the Cluster service. For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ ACTION_ID = "06ad259b-d6ab-4eb2-a0fa-fb144437eab1" def list_actions(conn): print("List Actions:") for actions in conn.clustering.actions(): print(actions.to_dict()) for actions in conn.clustering.actions(sort='name:asc'): print(actions.to_dict()) def get_action(conn): print("Get Action:") action = conn.clustering.get_action(ACTION_ID) print(action.to_dict()) openstacksdk-0.11.3/examples/clustering/profile_type.py0000666000175100017510000000175013236151340023364 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing profile types in the Cluster service. 
For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/clustering.html """ def list_profile_types(conn): print("List Profile Types:") for pt in conn.clustering.profile_types(): print(pt.to_dict()) def get_profile_type(conn): print("Get Profile Type:") pt = conn.clustering.get_profile_type('os.nova.server-1.0') print(pt.to_dict()) openstacksdk-0.11.3/examples/clustering/cluster.py0000666000175100017510000001034613236151340022345 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing policies in the Cluster service. 
For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ CLUSTER_NAME = "Test_Cluster" CLUSTER_ID = "47d808e5-ce75-4a1e-bfd2-4ed4639e8640" PROFILE_ID = "b0e3a680-e270-4eb8-9361-e5c9503fba0a" NODE_ID = "dd803d4a-015d-4223-b15f-db29bad3146c" POLICY_ID = "c0e3a680-e270-4eb8-9361-e5c9503fba00" def list_cluster(conn): print("List clusters:") for cluster in conn.clustering.clusters(): print(cluster.to_dict()) for cluster in conn.clustering.clusters(sort='name:asc'): print(cluster.to_dict()) def create_cluster(conn): print("Create cluster:") spec = { "name": CLUSTER_NAME, "profile_id": PROFILE_ID, "min_size": 0, "max_size": -1, "desired_capacity": 1, } cluster = conn.clustering.create_cluster(**spec) print(cluster.to_dict()) def get_cluster(conn): print("Get cluster:") cluster = conn.clustering.get_cluster(CLUSTER_ID) print(cluster.to_dict()) def find_cluster(conn): print("Find cluster:") cluster = conn.clustering.find_cluster(CLUSTER_ID) print(cluster.to_dict()) def update_cluster(conn): print("Update cluster:") spec = { "name": "Test_Cluster001", "profile_id": "c0e3a680-e270-4eb8-9361-e5c9503fba0a", "profile_only": True, } cluster = conn.clustering.update_cluster(CLUSTER_ID, **spec) print(cluster.to_dict()) def delete_cluster(conn): print("Delete cluster:") conn.clustering.delete_cluster(CLUSTER_ID) print("Cluster deleted.") # clusters also support force delete conn.clustering.delete_cluster(CLUSTER_ID, False, True) print("Cluster deleted.") def cluster_add_nodes(conn): print("Add nodes to cluster:") node_ids = [NODE_ID] res = conn.clustering.cluster_add_nodes(CLUSTER_ID, node_ids) print(res.to_dict()) def cluster_del_nodes(conn): print("Remove nodes from a cluster:") node_ids = [NODE_ID] res = conn.clustering.cluster_del_nodes(CLUSTER_ID, node_ids) print(res.to_dict()) def cluster_replace_nodes(conn): print("Replace the nodes in a cluster with specified nodes:") old_node = NODE_ID new_node =
"cd803d4a-015d-4223-b15f-db29bad3146c" spec = { old_node: new_node } res = conn.clustering.cluster_replace_nodes(CLUSTER_ID, **spec) print(res.to_dict()) def cluster_scale_out(conn): print("Inflate the size of a cluster:") res = conn.clustering.cluster_scale_out(CLUSTER_ID, 1) print(res.to_dict()) def cluster_scale_in(conn): print("Shrink the size of a cluster:") res = conn.clustering.cluster_scale_in(CLUSTER_ID, 1) print(res.to_dict()) def cluster_resize(conn): print("Resize of cluster:") spec = { 'min_size': 1, 'max_size': 6, 'adjustment_type': 'EXACT_CAPACITY', 'number': 2 } res = conn.clustering.cluster_resize(CLUSTER_ID, **spec) print(res.to_dict()) def cluster_attach_policy(conn): print("Attach policy to a cluster:") spec = {'enabled': True} res = conn.clustering.cluster_attach_policy(CLUSTER_ID, POLICY_ID, **spec) print(res.to_dict()) def cluster_detach_policy(conn): print("Detach a policy from a cluster:") res = conn.clustering.cluster_detach_policy(CLUSTER_ID, POLICY_ID) print(res.to_dict()) def check_cluster(conn): print("Check cluster:") res = conn.clustering.check_cluster(CLUSTER_ID) print(res.to_dict()) def recover_cluster(conn): print("Recover cluster:") spec = {'check': True} res = conn.clustering.recover_cluster(CLUSTER_ID, **spec) print(res.to_dict()) openstacksdk-0.11.3/examples/clustering/policy.py0000666000175100017510000000341613236151340022163 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
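cluster_resize above sends min_size, max_size, and an EXACT_CAPACITY number; Senlin validates these server-side, but the same constraints can be checked client-side to fail fast. An illustrative sketch under the assumption that -1 means "no maximum" (this helper is not part of the SDK):

```python
def validate_resize_spec(spec):
    """Check an EXACT_CAPACITY resize request against its own bounds."""
    min_size = spec.get('min_size', 0)
    max_size = spec.get('max_size', -1)  # assumption: -1 means unbounded
    if max_size != -1 and min_size > max_size:
        raise ValueError('min_size must not exceed max_size')
    if spec.get('adjustment_type') == 'EXACT_CAPACITY':
        number = spec['number']
        if number < min_size or (max_size != -1 and number > max_size):
            raise ValueError('number must lie within [min_size, max_size]')
    return spec


# The same spec shape used by cluster_resize above.
spec = {'min_size': 1, 'max_size': 6,
        'adjustment_type': 'EXACT_CAPACITY', 'number': 2}
print(validate_resize_spec(spec) is spec)
```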
""" Managing policies in the Cluster service. For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ def list_policies(conn): print("List Policies:") for policy in conn.clustering.policies(): print(policy.to_dict()) for policy in conn.clustering.policies(sort='name:asc'): print(policy.to_dict()) def create_policy(conn): print("Create Policy:") spec = { 'policy': 'senlin.policy.deletion', 'version': 1.0, 'properties': { 'criteria': 'oldest_first', 'destroy_after_deletion': True, } } policy = conn.clustering.create_policy('dp01', spec) print(policy.to_dict()) def get_policy(conn): print("Get Policy:") policy = conn.clustering.get_policy('dp01') print(policy.to_dict()) def find_policy(conn): print("Find Policy:") policy = conn.clustering.find_policy('dp01') print(policy.to_dict()) def update_policy(conn): print("Update Policy:") policy = conn.clustering.update_policy('dp01', name='dp02') print(policy.to_dict()) def delete_policy(conn): print("Delete Policy:") conn.clustering.delete_policy('dp01') print("Policy deleted.") openstacksdk-0.11.3/examples/clustering/node.py0000666000175100017510000000430113236151340021603 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing policies in the Cluster service. 
For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ NODE_NAME = 'Test_Node' NODE_ID = 'dd803d4a-015d-4223-b15f-db29bad3146c' PROFILE_ID = "b0e3a680-e270-4eb8-9361-e5c9503fba0a" def list_nodes(conn): print("List Nodes:") for node in conn.clustering.nodes(): print(node.to_dict()) for node in conn.clustering.nodes(sort='name:asc'): print(node.to_dict()) def create_node(conn): print("Create Node:") spec = { 'name': NODE_NAME, 'profile_id': PROFILE_ID, } node = conn.clustering.create_node(**spec) print(node.to_dict()) def get_node(conn): print("Get Node:") node = conn.clustering.get_node(NODE_ID) print(node.to_dict()) def find_node(conn): print("Find Node:") node = conn.clustering.find_node(NODE_ID) print(node.to_dict()) def update_node(conn): print("Update Node:") spec = { 'name': 'Test_Node01', 'profile_id': 'c0e3a680-e270-4eb8-9361-e5c9503fba0b', } node = conn.clustering.update_node(NODE_ID, **spec) print(node.to_dict()) def delete_node(conn): print("Delete Node:") conn.clustering.delete_node(NODE_ID) print("Node deleted.") # nodes also support force delete conn.clustering.delete_node(NODE_ID, False, True) print("Node deleted.") def check_node(conn): print("Check Node:") node = conn.clustering.check_node(NODE_ID) print(node.to_dict()) def recover_node(conn): print("Recover Node:") spec = {'check': True} node = conn.clustering.recover_node(NODE_ID, **spec) print(node.to_dict()) openstacksdk-0.11.3/examples/clustering/__init__.py0000666000175100017510000000000013236151340022405 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/clustering/profile.py0000666000175100017510000000411013236151340022314 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
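The clustering calls above accept sort strings of the form 'key:dir' (for example 'name:asc'), with multiple keys separated by commas and the direction optional. A sketch of how such a string decomposes — the authoritative parsing happens on the Senlin server, and the 'asc' default here is an assumption:

```python
def parse_sort(sort):
    """Split a 'key1:dir1,key2' sort string into (key, direction) pairs."""
    pairs = []
    for part in sort.split(','):
        # partition() keeps everything after the first ':' as the direction.
        key, _, direction = part.partition(':')
        pairs.append((key, direction or 'asc'))  # assumed default direction
    return pairs


print(parse_sort('name:asc'))
print(parse_sort('name:desc,created_at'))
```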
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from examples.connect import FLAVOR_NAME from examples.connect import IMAGE_NAME from examples.connect import NETWORK_NAME from examples.connect import SERVER_NAME """ Managing profiles in the Cluster service. For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ def list_profiles(conn): print("List Profiles:") for profile in conn.clustering.profiles(): print(profile.to_dict()) for profile in conn.clustering.profiles(sort='name:asc'): print(profile.to_dict()) def create_profile(conn): print("Create Profile:") spec = { 'profile': 'os.nova.server', 'version': 1.0, 'properties': { 'name': SERVER_NAME, 'flavor': FLAVOR_NAME, 'image': IMAGE_NAME, 'networks': { 'network': NETWORK_NAME } } } profile = conn.clustering.create_profile('os_server', spec) print(profile.to_dict()) def get_profile(conn): print("Get Profile:") profile = conn.clustering.get_profile('os_server') print(profile.to_dict()) def find_profile(conn): print("Find Profile:") profile = conn.clustering.find_profile('os_server') print(profile.to_dict()) def update_profile(conn): print("Update Profile:") profile = conn.clustering.update_profile('os_server', name='old_server') print(profile.to_dict()) def delete_profile(conn): print("Delete Profile:") conn.clustering.delete_profile('os_server') print("Profile deleted.") openstacksdk-0.11.3/examples/clustering/event.py0000666000175100017510000000210613236151340022000 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance 
with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing policies in the Cluster service. For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ EVENT_ID = "5d982071-76c5-4733-bf35-b9e38a563c99" def list_events(conn): print("List Events:") for events in conn.clustering.events(): print(events.to_dict()) for events in conn.clustering.events(sort='name:asc'): print(events.to_dict()) def get_event(conn): print("Get Event:") event = conn.clustering.get_event(EVENT_ID) print(event.to_dict()) openstacksdk-0.11.3/examples/clustering/receiver.py0000666000175100017510000000407713236151340022474 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Managing policies in the Cluster service. 
For a full guide see https://developer.openstack.org/sdks/python/openstacksdk/user/guides/cluster.html """ FAKE_NAME = 'test_receiver' CLUSTER_ID = "ae63a10b-4a90-452c-aef1-113a0b255ee3" def list_receivers(conn): print("List Receivers:") for receiver in conn.clustering.receivers(): print(receiver.to_dict()) for receiver in conn.clustering.receivers(sort='name:asc'): print(receiver.to_dict()) def create_receiver(conn): print("Create Receiver:") # Build the receiver attributes and create the receiver. spec = { "action": "CLUSTER_SCALE_OUT", "cluster_id": CLUSTER_ID, "name": FAKE_NAME, "params": { "count": "1" }, "type": "webhook" } receiver = conn.clustering.create_receiver(**spec) print(receiver.to_dict()) def get_receiver(conn): print("Get Receiver:") receiver = conn.clustering.get_receiver(FAKE_NAME) print(receiver.to_dict()) def find_receiver(conn): print("Find Receiver:") receiver = conn.clustering.find_receiver(FAKE_NAME) print(receiver.to_dict()) def update_receiver(conn): print("Update Receiver:") spec = { "name": "test_receiver2", "params": { "count": "2" } } receiver = conn.clustering.update_receiver(FAKE_NAME, **spec) print(receiver.to_dict()) def delete_receiver(conn): print("Delete Receiver:") conn.clustering.delete_receiver(FAKE_NAME) print("Receiver deleted.") openstacksdk-0.11.3/examples/image/0000775000175100017510000000000013236151501017206 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/image/create.py0000666000175100017510000000221213236151340021023 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. from examples.connect import EXAMPLE_IMAGE_NAME """ Create resources with the Image service. For a full guide see http://developer.openstack.org/sdks/python/openstacksdk/user/guides/image.html """ def upload_image(conn): print("Upload Image:") # Load fake image data for the example. data = 'This is fake image data.' # Build the image attributes and upload the image. image_attrs = { 'name': EXAMPLE_IMAGE_NAME, 'data': data, 'disk_format': 'raw', 'container_format': 'bare', 'visibility': 'public', } conn.image.upload_image(**image_attrs) openstacksdk-0.11.3/examples/image/__init__.py0000666000175100017510000000000013236151340021310 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/image/list.py0000666000175100017510000000144513236151340020542 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ List resources from the Image service. For a full guide see http://developer.openstack.org/sdks/python/openstacksdk/user/guides/image.html """ def list_images(conn): print("List Images:") for image in conn.image.images(): print(image) openstacksdk-0.11.3/examples/image/delete.py0000666000175100017510000000163713236151340021034 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
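upload_image above passes disk_format and container_format alongside the payload; a small pre-flight check on the attribute dict can catch a missing field before any request is made. Illustrative only — the required set below is an assumption for the example, and Glance performs its own authoritative validation:

```python
# Assumed-required keys for this sketch; not an official Glance contract.
REQUIRED_ATTRS = {'name', 'disk_format', 'container_format'}


def check_image_attrs(attrs):
    """Raise if any of the assumed-required upload attributes is absent."""
    missing = REQUIRED_ATTRS - set(attrs)
    if missing:
        raise ValueError('missing image attributes: %s'
                         % ', '.join(sorted(missing)))
    return attrs


# Same shape as the image_attrs dict built in upload_image above.
image_attrs = {
    'name': 'example-image',
    'data': 'This is fake image data.',
    'disk_format': 'raw',
    'container_format': 'bare',
    'visibility': 'public',
}
check_image_attrs(image_attrs)
print('ok')
```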
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from examples.connect import EXAMPLE_IMAGE_NAME """ Delete resources with the Image service. For a full guide see http://developer.openstack.org/sdks/python/openstacksdk/user/guides/image.html """ def delete_image(conn): print("Delete Image:") example_image = conn.image.find_image(EXAMPLE_IMAGE_NAME) conn.image.delete_image(example_image, ignore_missing=False) openstacksdk-0.11.3/examples/image/download.py0000666000175100017510000000432513236151340021376 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import hashlib """ Download an image with the Image service. For a full guide see http://developer.openstack.org/sdks/python/openstacksdk/user/guides/image.html """ def download_image_stream(conn): print("Download Image via streaming:") # Find the image you would like to download. image = conn.image.find_image("myimage") # As the actual download now takes place outside of the library # and in your own code, you are now responsible for checking # the integrity of the data. Create an MD5 hash to be computed # after all of the data has been consumed.
md5 = hashlib.md5() with open("myimage.qcow2", "wb") as local_image: response = conn.image.download_image(image, stream=True) # Read only 1024 bytes of memory at a time until # all of the image data has been consumed. for chunk in response.iter_content(chunk_size=1024): # With each chunk, add it to the hash to be computed. md5.update(chunk) local_image.write(chunk) # Now that you've consumed all of the data the response gave you, # ensure that the checksums of what the server offered and # what you downloaded are the same. if response.headers["Content-MD5"] != md5.hexdigest(): raise Exception("Checksum mismatch in downloaded content") def download_image(conn): print("Download Image:") # Find the image you would like to download. image = conn.image.find_image("myimage") with open("myimage.qcow2", "w") as local_image: response = conn.image.download_image(image) # Response will contain the entire contents of the Image. local_image.write(response) openstacksdk-0.11.3/examples/__init__.py0000666000175100017510000000000013236151340020226 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/key_manager/0000775000175100017510000000000013236151501020406 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/key_manager/create.py0000666000175100017510000000170713236151340022233 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ List resources from the Key Manager service. 
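The streaming download above verifies integrity by feeding each chunk into a running MD5 hash and comparing the final digest with the Content-MD5 header. This works because incremental hashing yields the same digest as hashing the whole payload at once, demonstrated here on an in-memory stream:

```python
import hashlib
import io

data = b'This is fake image data. ' * 1000  # stand-in for an image body

md5 = hashlib.md5()
stream = io.BytesIO(data)
# Consume the stream 1024 bytes at a time, as the example does.
for chunk in iter(lambda: stream.read(1024), b''):
    md5.update(chunk)

# Incremental hashing and one-shot hashing agree on the digest.
print(md5.hexdigest() == hashlib.md5(data).hexdigest())
```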
""" def create_secret(conn): print("Create a secret:") conn.key_manager.create_secret(name="My public key", secret_type="public", expiration="2020-02-28T23:59:59", payload="ssh rsa...", payload_content_type="text/plain") openstacksdk-0.11.3/examples/key_manager/get.py0000666000175100017510000000155013236151340021543 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ List resources from the Key Manager service. """ s = None def get_secret_payload(conn): print("Get a secret's payload:") # Assuming you have an object `s` which you perhaps received from # a conn.key_manager.secrets() call... secret = conn.key_manager.get_secret(s.secret_id) print(secret.payload) openstacksdk-0.11.3/examples/key_manager/__init__.py0000666000175100017510000000000013236151340022510 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/key_manager/list.py0000666000175100017510000000164713236151340021746 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" List resources from the Key Manager service. """ def list_secrets(conn): print("List Secrets:") for secret in conn.key_manager.secrets(): print(secret) def list_secrets_query(conn): print("List Secrets:") for secret in conn.key_manager.secrets( secret_type="symmetric", expiration="gte:2020-01-01T00:00:00"): print(secret) openstacksdk-0.11.3/examples/identity/0000775000175100017510000000000013236151501017755 5ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/identity/__init__.py0000666000175100017510000000000013236151340022057 0ustar zuulzuul00000000000000openstacksdk-0.11.3/examples/identity/list.py0000666000175100017510000000473213236151340021313 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ List resources from the Identity service. 
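list_secrets_query above filters expiration with an operator-prefixed value ('gte:2020-01-01T00:00:00'), which Barbican interprets server-side. A sketch of how such a value splits into operator and operand — the recognized operator list here is an assumption for illustration:

```python
def split_filter(value):
    """Split 'gte:2020-01-01T00:00:00' into ('gte', '2020-01-01T00:00:00')."""
    comparison_ops = ('gt', 'gte', 'lt', 'lte')  # assumed operator set
    # partition() splits at the first ':' only, so the timestamp's own
    # colons stay inside the operand.
    op, sep, operand = value.partition(':')
    if sep and op in comparison_ops:
        return op, operand
    return 'eq', value  # no recognized prefix: plain equality match


print(split_filter('gte:2020-01-01T00:00:00'))
print(split_filter('symmetric'))
```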
For a full guide see TODO(etoews):link to docs on developer.openstack.org """ def list_users(conn): print("List Users:") for user in conn.identity.users(): print(user) def list_credentials(conn): print("List Credentials:") for credential in conn.identity.credentials(): print(credential) def list_projects(conn): print("List Projects:") for project in conn.identity.projects(): print(project) def list_domains(conn): print("List Domains:") for domain in conn.identity.domains(): print(domain) def list_groups(conn): print("List Groups:") for group in conn.identity.groups(): print(group) def list_services(conn): print("List Services:") for service in conn.identity.services(): print(service) def list_endpoints(conn): print("List Endpoints:") for endpoint in conn.identity.endpoints(): print(endpoint) def list_regions(conn): print("List Regions:") for region in conn.identity.regions(): print(region) def list_roles(conn): print("List Roles:") for role in conn.identity.roles(): print(role) def list_role_domain_group_assignments(conn): print("List Roles assignments for a group on domain:") for role in conn.identity.role_domain_group_assignments(): print(role) def list_role_domain_user_assignments(conn): print("List Roles assignments for a user on domain:") for role in conn.identity.role_domain_user_assignments(): print(role) def list_role_project_group_assignments(conn): print("List Roles assignments for a group on project:") for role in conn.identity.role_project_group_assignments(): print(role) def list_role_project_user_assignments(conn): print("List Roles assignments for a user on project:") for role in conn.identity.role_project_user_assignments(): print(role) openstacksdk-0.11.3/post_test_hook.sh0000777000175100017510000000216513236151340017710 0ustar zuulzuul00000000000000#!/bin/bash # # This is a script that kicks off a series of functional tests against a # OpenStack devstack cloud. This script is intended to work as a gate # in project-config for the Python SDK.
DIR=$(cd $(dirname "$0") && pwd)

echo "Running SDK functional test suite"
sudo -H -u stack -i <
\_by\_id * De-client-ify User Update * Use new keystoneauth version discovery * Fix typo in tox.ini * Updated from global requirements * Updated from global requirements * Updated from global requirements * Add tox\_install.sh to deal with upper-constraints * Support domain\_id for user operations * Add domain\_id to groups * Add handling timeout in servers cleanup function * Fix handling timeouts in volume functional tests cleanup * Connection doc add arguments * Fix switched params 0.9.18 ------ * Add parameter\_groups and conditions params for StackTemplate * Allow filtering network ports by fixed\_ips * Switch to \_is\_client\_version in list\_services * De-client-ify Service Delete * De-client-ify Service Update * Fix cleaning of Cinder volumes in functional tests * De-client-ify Service List * Add doc8 rule and check doc/source files * Fix some typos * Fix octavia l7rules * Update links in README * Add option to force delete cinder volume * fix the bug that cannot create a listener by openstacksdk * Introduce L7Rule for Octavia (load balancing) * Introduce L7Policy for Octavia (load balancing) * Updated from global requirements * Introduce Health Monitor for Octavia * Add required pool\_id property to HealthMonitor * Updated from global requirements * fix the bug that cannot create a pool by openstacksdk * Updated from global requirements * Introduce Member for Octavia (load balancing) * Fix determining if IPv6 is supported when it's disabled * Don't determine local IPv6 support if force\_ip4=True * Fix stack\_file function return body * Introduce Pool for Octavia (load balancing) * Introduce Listener for Octavia (load balancing) * Consolidate client version checks in an utility method * Support node-adopt/preview CLI * Add functional tests for Neutron QoS policies and rules * Updated from global requirements * DataCentred supports Keystone V3 and Glance V2 *
Support to get resource by id * Make get\_server\_console tests more resilient * Update globals safely * Update the documentation link for doc migration * Remove OSIC * Make QoS rules required parameters to be not optional * Use valid\_kwargs decorator in QoS related functions * Add support for get details of available QoS rule type * Use more specific asserts in tests * Add Neutron QoS minimum bandwidth rule commands * Update reno for stable/pike * Update reno for stable/pike * Add Neutron QoS dscp marking rule commands * Updated from global requirements * Updated from global requirements * Updated from global requirements * router: Ignore L3 HA ports when listing interfaces * Initial commit of zuulv3 jobs * Manually sync with g-r * Update external links which have moved * Updated from global requirements * Update the documentation link for doc migration * Replace six.itervalues with dict.values() * Consolidate the use of self.\_get\_and\_munchify * De-client-ify Role Delete * De-client-ify Role List * De-client-ify Role Create * De-client-ify Group Delete * De-client-ify Group Update * De-client-ify Group List * De-client-ify Group Create * Fix comment in services function * Updated from global requirements * Don't remove top-container element in the adapter * Add config param for cluster object * Update load\_balancer for v2 API * Support to node-adopt and node-adopt-preview * Updated from global requirements * Improve doc formatting a bit * Unify style of 'domain' field * Added useful links to README * Add Neutron QoS bandwidth limit rule commands * De-client-ify Service Create * Add debug to tox environment * Remove hard-coding of timeout from API * Make sure we don't fail open on bad input to validate * Make sure we pass propert dicts to validate * Add flag to include all images in image list * Add support for list available QoS rule types * Add validation of required QoS extensions in Neutron * De-client-ify Domain Search * De-client-ify Domain Get * 
De-client-ify Domain List * De-client-ify User Create * Use the right variable name in userdata encoding * Add searching for Neutron API extensions * Add Neutron QoS policies commands * De-client-ify Domain Update and Delete * De-client-ify Domain Create * switch from oslosphinx to openstackdocstheme * reorganize docs using the new standard layout * use openstackdocstheme html context * Replace six.iteritems() with .items() * Remove dead links about OpenStack RC file * Don't remove top-container element for flavor, zones and server groups * Updated from global requirements * Updated from global requirements * Don't remove top-container element for flavors and clusters * Add query filters for find\_network * Project update to change enabled only when provided * switch from oslosphinx to openstackdocstheme * turn on warning-is-error in documentation build * rearrange existing documentation to follow the new standard layout * Fix mismatch between port and port-id for REST call * Remove a direct mocking of \_image\_client * Fix image normalization when image has properties property * Fix delete\_ips on delete\_server and add tests * Fix config\_drive, scheduler\_hints and key\_name in create\_server * Don't fail hard on 404 from neutron FIP listing * Only search for floating ips if the server has them * Don't try to delete fips on non-fip clouds * Return an empty list on FIP listing failure * Don't remove top-container element for server REST API calls * base64 encode user\_data sent to create server * Remove novaclient from shade's dependencies * Translate final nova calls to REST * Convert remaining nova tests to requests\_mock * Convert host aggregates calls to REST * Convert host aggregate tests to requests\_mock * Convert hypervisor list to REST * Convert hypervisor test to requests\_mock * Convert Server Groups to REST * Convert server group tests to requests\_mock * Convert FakeSecGroup to dict * Remove use of FakeServer from tests * Don't remove top-container 
element for user and project REST API calls * Convert keypairs calls to REST * Add normalization and functional tests for keypairs * Remove future document * Add text about microversions * Convert keypairs tests to requests\_mock * Convert list\_servers to REST * Convert list servers tests to requests\_mock * Remove some unused mocks * Break early from volume cleanup loop * Add some release notes we forgot to add * Retry to fetch paginated volumes if we get 404 for next link * docs: make the first example easier to understand * Properly expand server dicts after rebuild and update * Migrate non-list server interactions to REST * Increase timeout for volume tests * Skip pagination test for now * Fix title in Network Agent resource doc 0.9.17 ------ * Add compute support server live migrate operation * Fix urljoin for neutron endpoint * Added server console output method * Add compute support server backup operation * Remove get\_service method from compute * Remove py34 and pypy in tox * Replace six.iteritems() with .items() * Update tests for server calls that aren't list * Convert delete server calls to REST * Convert delete server mocks to requests\_mock * Convert get\_server\_by\_id * RESTify create\_server * Don't fetch extra\_specs in functional tests * Convert create\_server mocks to request\_mock * Add boot from volume unit tests * Cleanup volumes in functional tests in parallel * De-client-ify Project Update * De-client-ify Project Create * De-client-ify Project Delete * De-client-ify Project List * Don't remove top-container element for sec group REST API calls * Improve grant docs on when and how use domain arg * Don't remove top-container for stack and zone REST API calls * Updated from global requirements * Updated from global requirements * Rename obj\_to\_dict and obj\_list\_to\_dict * Don't remove top-container element for network REST API calls * Convert data from raw clients to Munch objects * Remove unneeded calls to shade\_exceptions * Don't 
remove top-container element for volume REST API calls * Fix update\_image unsupported media type * Remove support for py34 * Use get\_discovery from keystoneauth * De-client-ify User Ops * Add links to user list dict * Avoid keystoneclient making yet another discovery call * Use shade discovery for keystone * Updated from global requirements * Updated from global requirements * Fix py3 compatibility (dict.iteritems()) in object\_store * Migrate dns to new discovery method * Generalize version discovery for re-use * Pass hints to Cinder scheduler in create\_volume * Replace assertRaisesRegexp with assertRaisesRegex * Remove designate client from shade's dependencies * Add cluster support receiver update operation * Do less work when deleting a server and floating ips * Remove designateclient from commands related to recordsets * Add pagination for the list\_volumes call * Handle ports with no 'created\_at' attribute * Update test\_user\_update\_password to overlay clouds.yaml * Fix legacy clients helpers * Remove unused occ version tie * Add new parameter "is\_default" to Network QoS policy * Remove designateclient from commands related to zones * Add documentation about shade's use of logging * Add novaclient interactions to http\_debug * Set some logger names explicitly * Add logging of non-standard error message documents * Log specific error message from RetriableConnectionFailure * Don't pop from os.environ * Updated from global requirements * Fix python3 issues in functional tests * Add time reporting to Connection Retry message * Log cloud name on Connection retry issues * Use catalog endpoint on any errors in image version discovery * Fix cluster action list filter * Pick most recent rather than first fixed address * Allow a user to submit start and end time as strings * Fix get\_compute\_limits error message * Fix get\_compute\_usage normalization problem * update params about cluster filter event * Find private ip addr based on fip attachment * Network 
tag support * Add ability to run any tox env in python3 * Fix issue with list\_volumes when pagination is used * Add compute support server migrate operation * Make sure security\_groups is always a list * Updated from global requirements * Remove direct uses of nova\_client in functional tests * Keep a singleton to support multiple get\_config calls * Updated from global requirements * Remove designateclient mock from recordset tests * Convert list\_server\_security\_groups to REST * Remove two unused nova tasks * Include error message from server if one exists * Optimize the case of versioned image endpoint in catalog * Fix broken version discovery endpoints * Remove cinderclient from install-tips.sh * Fix tips jobs and convert Nova Floating IP calls * Convert first ironic\_client test to REST * Move mocks of designate API discovery calls to base test class * Fix exception when using boot\_from\_volume for create\_server * Revert "Revert "Use interface not endpoint\_type for keystoneclient"" * Revert "Use interface not endpoint\_type for keystoneclient" * Move legacy client constructors to mixin * Add ironicclient to constructors list * Fix pep8 errors that were lurking * Remove cinder client * Make deprecated client helper method * Add 'public' as a default interface for get\_mock\_url * Add super basic machine normalization * Remove designateclient mock from zones tests * Remove direct calls to cinderclient * Add "Multi Cloud with Shade" presentation * Use REST API for volume quotas calls * Add pprint and pformat helper methods * Add helper method to fetch service catalog * extend security\_group and \_rule with project id * Remove neutronclient from shade's dependencies * Remove cinderclient mocks from quotas tests * Fix Neutron floating IP test * Use REST API for volume snapshot calls * Remove usage of neutron\_client from functional tests * Enable neutron service in server create and rebuild tests * Replace neutronclient with REST API calls in FIP commands * 
Updated from global requirements * Add assert\_calls check testing volume calls with timeout enabled * Remove has\_service mock from Neutron FIP tests * Remove cinderclient mocks from snapshot tests * Remove neutronclient mocks from floating ips tests * Add 'service list' resource for senlin * Get endpoint versions with domain scope session * Use REST API for volume attach and volume backup calls * Use https instead of http in cluster examples * Specify alternate\_id in network quota * Updated from global requirements * Replace neutronclient with REST API calls in ports commands * Add direction field to QoS bandwidth limit * Don't get ports info from unavailable neutron service * Removing unsed fake methods and classes * Replace neutronclient with REST API calls in quotas commands * Replace neutronclient with REST API calls in security groups commands * Updated from global requirements * Use REST API for volume delete and detach calls * Use REST API for volume type\_access and volume create * Refactor the test\_create\_volume\_invalidates test * Replace neutronclient with REST API calls in router commands * Move REST error\_messages to error\_message argument * Remove two lines that are leftover and broken * Convert test\_role\_assignments to requests mock * Remove neutronclient mocks from sec groups tests * Fix document warnings * functional tests: minor cleanup * Remove neutronclient mocks from quotas tests * Remove neutronclient mocks from ports tests * Add optional error\_message to adapter.request * Fix interactions with keystoneauth from newton * Add in a bunch of TODOs about interface=admin * Set interface=admin for keystonev2 keystone tests * Port unversioned Version resources to resource2 * Port metric v1 to resource2 0.9.16 ------ * Deprecate Message v1 * Port image v1 to resource2 * Port identity v2 to resource2 * Port database v1 to resource2 * Add a \_normalize\_volume\_backups method * Correct Network \`ports\` query parameters * Use requests-mock for 
the volume backup tests * Remove neutronclient mocks from router tests * Replace neutronclient with REST API calls in subnet commands * Define a base function to remove unneeded attributes * Remove neutronclient mocks from subnet tests * Replace neutronclient with REST API calls in network commands * Move router related tests to separate module * Updated from global requirements * Move subnet related tests to separate module * Fix list\_servers tests to not need a ton of neutron * Remove neutronclient mocks from network create tests * Make \_fix\_argv() somewhat compatible with Argparse action='append' * Remove neutronclient mocks from network exceptions tests * Remove neutronclient mocks from network delete tests * Remove neutronclient mocks from network list tests * Use requests-mock for the list/add/remove volume types tests * Fix create/rebuild tests to not need a ton of neutron * Don't do all the network stuff in the rebuild poll * Move unit tests for list networks to test\_network.py file * Include two transitive dependencies to work around conflicts * Use requests-mock for all the attach/detach/delete tests * Add data plane status support to Network Port obj * Remove stray line * Revert "HAProxy uses milliseconds ..." 
* Strip trailing slashes in test helper method * Clarify some variable names in glance discovery * Allow router related functions to receive an ID * \_discover\_latest\_version is private and not used * Remove extra unneeded API calls * Change versioned\_endpoint to endpoint\_uri * Futureproof keystone unit tests against new occ * Actually fix the app\_name protection * Replace nova security groups with REST * Transition nova security group tests to REST * Remove dead ImageSnapshotCreate task * Pass in app\_name information to keystoneauth * Use REST for cinder list volumes * Add ability to pass in user\_agent * Upgrade list volumes tests to use requests-mock * Updated from global requirements 0.9.15 ------ * Pass shade version info to session user\_agent * Enable warnings\_as\_errors in doc enforcer * Add is\_profile\_only to Cluster resource * Use keystone\_session in \_get\_raw\_client * Add docs for volume\_attachment compute methods * Add support for volume attachments in compute v2 * Don't fail on security\_groups=None * Updated from global requirements * Stop defaulting container\_format to ovf for vhd * Don't run extra server info on every server in list * Add 'project\_id' to Server query parameters * Use REST for neutron floating IP list * Clean up some errant doc warnings/errors * Add get\_stack\_\* methods to documentation * Migrate create\_image\_snapshot to REST * Introduce Base for Octavia (load balancing) * Add ability to configure extra\_specs to be off * Migrate server snapshot tests to requests\_mock * Add test to validate multi \_ heat stack\_status * Fixed stack\_status.split() exception * Add server security groups to shade * Updated from global requirements * Fix doc build if git is absent * Add bare parameter to get/list/search server * Docs: add a note about rackspace API keys * Update tox build settings * Take care of multiple imports and update explanation * Reenable hacking tests that already pass * Enable H201 - don't throw bare 
exceptions * Enable H238 - classes should be subclasses of object * Fix a few minor annoyances that snuck in * Add vlan\_transparent property to network resource * Don't use project-id in catalog tests * Change metadata to align with team affiliation * Remove out of date comment * Filtering support by is\_router\_external to network resource * Move futures to requirements * Stop special-casing idenity catalog lookups * Find floating ip by ip address * Remove python-heatclient and replace with REST * Replace heatclient testing with requests\_mock * Add normalization for heat stacks * Add list\_availability\_zone\_names method * Switch list\_floating\_ip\_pools to REST * Strip out novaclient extra attributes * Convert floating\_ip\_pools unittest to requests\_mock * Migrate get\_server\_console to REST * Migrate server console tests to requests\_mock * Fix old-style mocking of nova\_client * Accept device\_id option when updating ports * Get rid of magnumclient dependency * attach\_volume should always return a vol attachment * wait\_for\_server: ensure we sleep a bit when waiting for server * delete\_server: make sure we sleep a bit when waiting for server deletion * Add designateclient to constructors list * Add StackFiles resource to orchestration v1 * Convert magnum service to requests\_mock * RESTify cluster template tests * Add normalization for cluster templates * Get the ball rolling on magnumclient * Use data when the request has a non-json content type * Cleanup some workarounds for old OCC versions * Expose ha\_state property from HA enabled L3 Agents * Remove type restrict of block\_device\_mapping * Add StackEnvironment resource to orchestration v1 * Shift some compute attributes within request body * StackTemplate resource for orchestration * Trivial: fix Template resource in orchestration * Avoid imports in openstack/\_\_init\_\_.py * add separate releasenotes build * Update sphinx and turn on warnings-is-error * Convert test\_identity\_roles to 
requests mock * Expose OS-EXT-SRV-ATTR:{hypervisor\_hostname,instance\_name} for Server * change test\_endpoints to use requests mock * Add port property: trunk\_details * OVH supports qcow2 * Add image download example * Depend on pbr>=2.0.0 * Fix the network flavor disassociate method * Convert test\_services to requests\_mock * Fix the telemetry statistics test * Only do fnmatch compilation and logging once per loop * Correct a copy/paste mistake in a docstring * Fix the telemetry sample test * Fix network quota test so it works on gate * Use interface not endpoint\_type for keystoneclient * Add support for bailing on invalid service versions * Put fnmatch code back, but safely this time * modify test-requirement according to requirements project * Replace keystone\_client mock in test\_groups * Use unicode match for name\_or\_id * Raise a more specific exception on nova 400 errors * Don't glob match name\_or\_id * Enable streaming responses in download\_image * [Fix gate]Update test requirement * Updated from global requirements * Update devstack config to point to a valid image * Rename ClusterTemplate in OpenStackCloud docs * Fix OpenStack and ID misspellings * Remove service names in OpenStackCloud docs * Add wait\_for\_xxx methods to cluster proxy * Change version of hacking in test-requirements * Reorganize cluster docs * Reorganize object\_store docs * Reorganize workflow docs * Reorganize network docs * Pass ironic microversion through from api\_version * Reorganize telemetry docs * Reorganize block store docs 0.9.14 ------ * Add missing attribute to Subnet resource * Add ability to skip yaml loading * keystone api v2.0 does not paginate roles or users * the role resource should not have put\_create=True * Fix the object store set metadata functional test * Remove unsupported telemetry create\_sample method * Add network flavor associate, disassociate to SDK * Fix problem with update including id * Support profile-only to cluster update * Fix the network 
auto allocate validate * Remove old telemetry capability * Remove unnecessary get\_id call in \_prepare\_request * Fix the network floating ip test for get * Fix the network service provider test * Fix the network quota tests * Fix the service profile meta info test * Fix the agent add remove test * Fix the nextwork agent add remove test * Update the image used for functional tests * Implement metric docs * Fix function test for compute images * Convert test\_object to use .register\_uris * Convert use of .register\_uri to .register\_uris * Reorganize orchestration docs * Implement message docs * Reorganize key\_manager docs * Change request\_id logging to match nova format * Actually normalize nova usage data * Reorganize identity docs * Reorganize image docs * Reorganize database docs * Reorganize compute docs * Update intersphinx linking to python.org * Fix several concurrent shade gate issues * Reorganize bare\_metal docs * Privatize session instance on Proxy subclasses * Deprecate "wait\_for" methods on ProxyBase * Remove the keystoneclient auth fallback * Remove two remaining doc warnings * Add support for overriding mistral service type * Add helper scripts to print version discovery info * Wait for volumes to detach before deleting them * Deprecate port and ping methods in Network proxy * Add accessor method to pull URLs from the catalog * Convert use of .register\_uri to .register\_uris * Remove keystoneclient mocks in test\_caching for users * Remove mock of keystoneclient for test\_caching for projects * Remove mock of keystone where single projects are consumed * Rename demo\_cloud to user\_cloud * Add all\_projects parameter to list and search servers * Updated from global requirements * Convert test\_project to requests\_mock * convert test\_domain to use requests\_mock * Move mock utilies into base * Convert test\_users to requests\_mock * Add request validation to user v2 test * Enforce inclusion of pulic proxy methods in docs * Updated from global 
requirements * Convert first V3 keystone test to requests\_mock * Cleanup new requests\_mock stuff for test\_users * First keystone test using request\_mock * Add test of attaching a volume at boot time * Cleanup more Sphinx warnings during doc build * Add support for indicating required floating IPs * pass -1 for boot\_index of non-boot volumes * Adjust some proxy method names in bare\_metal * Adjust some proxy method names in cluster * Pass task to post\_task\_run hook * Rename ENDPOINT to COMPUTE\_ENDPOINT * Transition half of test\_floating\_ip\_neutron to requests\_mock * Start switching neutron tests * Added project role assignment * Port in log-on-failure code from zuul v3 * Honor cloud.private in the check for public connectivity * Cleanup various Sphinx warnings during doc build * Support globbing in name or id checks * Stop spamming logs with unreachable address message * Remove troveclient from the direct dependency list * Move nova flavor interactions to REST * Migrate flavor usage in test\_create\_server to request\_mock * Migrate final flavor tests to requests\_mock * Move flavor cache tests to requests\_mock * Transition nova flavor tests to requests\_mock * Add ability to create image from volume * Use port list to find missing floating ips * Process json based on content-type * Update reno for stable/ocata * fix location of team tags in README * Copy in needed template processing utils from heatclient * Fix exception parsing error * Add 'tags' property to orchestration stack 0.9.13 ------ * Add docs for the workflow service * Initial docs for bare-metal service * Upload images to swift as application/octet-stream * Add ability to stream object directly to file * Update coding document to mention direct REST calls * Fix error messages are not displayed correctly * Add project ID in QuotaDefault requests * Fix Setting Quotas in Neutron * Updated from global requirements * Skip discovery for neutron * Add helper test method for registering REST calls 
* Do neutron version discovery and change one test * Add raw client constructors for all the things * Replace SwiftService with direct REST uploads * Modified DHCP/Network Resource * Fix spin-lock behavior in \_iterate\_timeout * Fix typo for baremetal\_service\_type * Network L3 Router Commands * Add helper script to install branch tips * Revert "Fix interface\_key for identity clients" * Add support for Murano * Corrections in DHCP Agent Resource listing * Basic volume\_type access * Add OpenTelekomCloud to the vendors * Add support to task manager for async tasks * Updated from global requirements * Add workflow service (mistral) * Add cluster\_operation and node\_operation * Added list\_flavor\_access * Remove 3.4 from tox envlist * Use upper-constraints for tox envs * Removes unnecessary utf-8 encoding * Log request ids when debug logging is enabled * Honor image\_endpoint\_override for image discovery * Add support\_status to policy type and profile type * Rework limits normalization * Handle pagination for glance images 0.9.12 ------ * Add missing query parameters to compute v2 Server * Add support for Role resource in Identity v3 * Add support for using the default subnetpool * Remove unnecessary coding format in the head of files * Add filter "user\_id" for cluster receiver list * Add params to ClusterDelNodes action * Remove discover from test-requirements * Remove link to modindex * Add user\_id in resource class Action/Node * Fix exception name typo * Add failure check to node\_set\_provision\_state * Update swift constructor to be Session aware * Add test to verify devstack keystone config * Make assert\_calls a bit more readable * Update swift exception tests to use 416 * Make delete\_object return True and False * Switch swift calls to REST * Stop using full\_listing in prep for REST calls * Stop calling HEAD before DELETE for objects * Replace mocks of swiftclient with request\_mock * Enable bare-metal service * Proxy module for bare-metal service * 
Put in magnumclient service\_type workaround * Let use\_glance handle adding the entry to self.calls * Combine list of calls with list of request assertions * Extract helper methods and change test default to v3 * Make munch aware assertEqual test method * Extract assertion method for asserting calls made * Base for workflow service (mistral) * Change get\_object\_metadata to use REST * Update test of object metadata to mock requests * Add release notes and an error message for release * Port resource for bare-metal service * PortGroup resource for bare-metal service * Magnum's service\_type is container\_infra * Add docutils contraint on 0.13.1 to fix building * Add total image import time to debug log * Clear the exception stack when we catch and continue * Magnum's keystone id is container-infra, not container * Stop double-reporting extra\_data in exceptions * Pass md5 and sha256 to create\_object sanely * Updated from global requirements * Add user\_id in resource class Policy * Node resource for bare-metal service * Convert glance parts of task test to requests\_mock * Chassis resource for bare-metal service * Driver resource for bare-metal service * Support for node replace in cluster service * Collapse base classes in test\_image * Skip volume backup tests on clouds without swift * Add new attributes to floating ips 0.9.11 ------ * Rebase network proxy to proxy2 * Add test to trap for missing services * Change fixtures to use https * Honor image\_api\_version when doing version discovery * Replace swift capabilities call with REST * Change register\_uri to use the per-method calls * Convert test\_create\_image\_put\_v2 to requests\_mock * Remove caching config from test\_image * Move image tests from caching to image test file * Remove glanceclient and warlock from shade * Remove a few glance client mocks we missed * Change image update to REST * Make available\_floating\_ips use normalized keys * Fix \_neutron\_available\_floating\_ips filtering * Rebase 
network resources to resource2 (4)
* Rebase network resources to resource2 (3)
* Stop telling users to check logs
* Plumb nat\_destination through for ip\_pool case
* Update image downloads to use direct REST
* Move image tasks to REST
* Add 'project\_id' field to volume resource
* Add support for limits
* Rebase network resources onto resource2 (2)
* Rebase network resources onto resource2 (1)
* Fix interface\_key for identity clients
* Tox: optimize the \`docs\` target
* Add more server operations based on Nova API
* Add user\_id in profile resource
* Add filters to the network proxy agents() method
* Replace Image Create/Delete v2 PUT with REST calls
* Replace Image Creation v1 with direct REST calls
* Remove test of having a thundering herd
* Pull service\_type directly off of the Adapter
* Add auto-allocated-topology to SDK
* Add compute usage support
* Updated from global requirements
* Document the \`synchronized\` parameter
* Re-add metadata to image in non-strict mode
* Show team and repo badges on README
* Add 'project\_id' field to cluster's action resource
* Added documentation for delete\_image()
* Add QoS support to Network object
* Add an e to the word therefore
* Allow server to be snapshot to be name, id or dict
* Add docstring for create\_image\_snapshot
* Allow security\_groups to be a scalar
* Remove stray debugging line
* Start using requests-mock for REST unit tests
* Have OpenStackHTTPError inherit from HTTPError
* Use REST for listing images
* Create and use an Adapter wrapper for REST in TaskManager
* Normalize volumes
* Expose visibility on images

0.9.10
------

* Be specific about protected being bool
* Remove pointless and fragile unittest
* Revert "Remove validate\_auth\_ksc"
* Revert "Display neutron api error message more precisely"
* Remove validate\_auth\_ksc
* Fail up to date check on one out of sync value
* Normalize projects
* Cache file checksums by filename and mtime
* Only generate checksums if neither is given
* Make search\_projects a special case of list\_projects
* Make a private method more privater
* Updated from global requirements
* Add resource for DHCP Agent
* Add unit test to show herd protection in action
* Refactor out the fallback-to-router logic
* Update floating ip polling to account for DOWN status
* Use floating-ip-by-router
* Don't fail on trying to delete non-existent images
* Allow server-side filtering of Neutron floating IPs
* Add fuga.io to vendors
* Add "sort" in policy binding list
* Add filters "policy\_type" and "policy\_name" for policy binding list
* list\_servers(): thread safety: never return bogus data
* Add filters to the router proxy routers() method
* Depend on normalization in list\_flavors
* Add unit tests for image and flavor normalization
* Add strict mode for trimming out non-API data
* list\_security\_groups: enable server-side filtering

0.9.9
-----

* Add support for network Service Flavor Profile
* Don't fail image create on failure of cleanup
* Add filter "enabled" for cluster-policy-list
* Add resources for Service Provider
* Fix metadata property of Senlin node resource
* Display neutron api error message more precisely
* Add list method and query support for cinder volume and snapshot
* Add Python 3.5 classifier and venv
* Try to return working IP if we get more than one
* Add filter options to the network proxy address\_scopes() method
* Support token\_endpoint as an auth\_type
* Add test for os\_keystone\_role Ansible module
* Document and be more explicit in normalization
* Updated from global requirements
* Add support for volumev3 service type
* Add filters provider-\* to the network proxy networks() method
* Normalize cloud config before osc-lib call
* Fix a bunch of tests
* Clarify how to set SSL settings
* Add external\_ipv4\_floating\_networks
* Logging: avoid string interpolation when not needed
* Add a devstack plugin for shade
* Allow setting env variables for functional options
* Support to delete claimed message
* Update ECS image\_api\_version to 1
* Add test for os\_keystone\_domain Ansible module
* Add ability to find floating IP network by subnet
* Remove useless mocking in tests/unit/test\_shade.py
* Fix TypeError in list\_router\_interfaces
* Fix problem about location header in Zaqar resource2
* Updated from global requirements
* Add filter mac\_address to the network proxy ports() method
* Add dns-domain support to Network object
* Fix a NameError exc in operatorcloud.py
* Fix some docstrings
* Fix a NameError exception in \_nat\_destination\_port
* Implement create/get/list/delete volume backups
* Move normalize\_neutron\_floating\_ips to \_normalize
* Prepare for baremetal API implementation
* Updated from global requirements
* Delete image if we timeout waiting for it to upload
* Revert "Split auth plugin loading into its own method"
* Add reset\_state api for compute
* Add description field to create\_user method
* Allow boolean values to pass through to glance
* Add limit and marker to QueryParameters class
* Update location info to include object owner
* Move and fix security group normalization
* Add location field to flavors
* Move normalize\_flavors to \_normalize
* Move image normalize calls to \_normalize
* Add location to server record
* Start splitting normalize functions into a mixin
* Make sure we're matching image status properly
* Normalize images
* Add helper properties to generate location info
* Update simple\_logging to not log request ids by default
* Add setter for session constructor
* Enable release notes translation
* Updated from global requirements
* cloud\_config:get\_session\_endpoint: catch Keystone EndpointNotFound
* Document network resource query filters used by OSC
* Add standard attributes to the core network resources
* Add service\_type resource to Subnets
* Add simple field for disabled flavors
* List py35 in the default tox env list
* remove\_router\_interface: check subnet\_id or port\_id is provided

0.9.8
-----

* avoid usage of keystoneauth1 sessions
* Clarify argparse connections
* Updated from global requirements
* Add support for network Flavor
* Add test for os\_group Ansible module
* Remove dead code
* Provide better fallback when finding id values
* Updated from global requirements
* Remove beta label for network segment resource
* Using assertIsNone() instead of assertEqual(None, ...)
* Add support for filter "status" in node list
* Modified Metering Rule base\_path
* Update homepage with developer documentation page
* Update homepage with developer documentation page
* List py35 in the default tox env list
* Fix AttributeError in \`get\_config\`
* Modified Metering base\_path
* Updated from global requirements
* Added is\_shared resource to Metering Label
* Add QoS support to Network Port object

0.9.7
-----

* Revert "Event list can not display "timestamp"
* Generalize endpoint determination
* modify the home-page info with the developer documentation
* Event list can not display "timestamp"
* Add project\_id field to cluster's policy and profile
* Fix the issue non-admin user failed to list trusts
* Don't create envvars cloud if cloud or region are set
* Fix error in node action
* compute/v2/server: add ?all\_tenants=bool to list

0.9.6
-----

* Add extended Glance Image properties
* Fix connection init when session is provided
* Rebase keystone v3 proxy to proxy2
* Fix 'config\_drive' and 'networks' for compute server
* Fix cluster query mapping
* Rebase keystone resources onto resource2
* Add new function for router-gateway
* Obtain Image checksum via additional GET
* Adjust router add/remove interface method names
* Add 'dependents' property to Node and Cluster class
* Add support for jmespath filter expressions
* Add QoS rule type object and CRUD commands
* Add QoS bandwidth limit rule object and CRUD commands
* Add QoS DSCP marking rule object and CRUD commands
* Add QoS minimum bandwidth rule object and CRUD commands
* Add libffi-dev to bindep.txt
* Add network segment create, delete and update support
* Rebase telemetry resources to resource2/proxy2
* Fix telemetry/metering service version
* Don't build releasenotes in normal docs build
* Update reno for stable/newton
* Use list\_servers for polling rather than get\_server\_by\_id
* Fix the issue that 'type' field is missing in profile list
* Add ability to configure Session constructor
* Fix up image and flavor by name in create\_server
* Batch calls to list\_floating\_ips
* Split auth plugin loading into its own method

0.9.5
-----

* Allow str for ip\_version param in create\_subnet
* Skip test creating provider network if one exists
* Revert per-resource dogpile.cache work
* Updated from global requirements
* Fix two minor bugs in generate\_task\_class
* Go ahead and handle YAML list in region\_name
* Change naming style of submitTask
* Add prompting for KSA options
* Add submit\_function method to TaskManager
* Refactor TaskManager to be more generic
* Poll for image to be ready for PUT protocol
* Cleanup old internal/external network handling
* Support dual-stack neutron networks
* Fix issue "SDKException: Connection failure that may be retried."
* Rename \_get\_free\_fixed\_port to \_nat\_destination\_port
* Log request ids
* Detect the need for FIPs better in auto\_ip
* Updated from global requirements
* Clean up vendor support list
* Delete objname in image\_delete

0.9.4
-----

* Refactor Key Manager for resource2
* Move list\_server cache to dogpile
* Fix problems about location header in resource2
* Add support for claim for Zaqar V2 API
* Ensure per-resource caches work without global cache
* Support more than one network in create\_server

0.9.3
-----

* Add support for fetching console logs from servers
* Allow image and flavor by name for create\_server
* Add support for subscription for Zaqar V2 API
* Allow object storage endpoint to return 404 for missing /info endpoint
* Add policy validation for senlin
* Add profile validation for senlin
* Batch calls to list\_floating\_ips
* Add QoS policy object and CRUD commands
* Get the status of the ip with ip.get('status')
* Stop getting extra flavor specs where they're useless
* Change deprecated assertEquals to assertEqual
* Use cloud fixtures from the unittest base class
* Add debug logging to unit test base class
* Update HACKING.rst with a couple of shade specific notes
* Only run flake8 on shade directory
* Add bindep.txt file listing distro depends
* Set physical\_network to public in devstack test
* Precedence final solution
* Updated from global requirements
* Add support for configuring split-stack networks
* Fix orchestration service initialization
* Use "image" as argument for Glance V1 upload error path
* Minor network RBAC policy updates
* Honor default\_interface OCC setting in create\_server
* Validate config vs reality better than length of list
* Base auto\_ip on interface\_ip not public\_v4
* Add tests to show IP inference in missed conditions
* Deal with clouds that don't have fips betterer
* Infer nova-net security groups better
* Add update\_endpoint()
* Protect cinderclient import
* Do not instantiate logging on import
* Don't supplement floating ip list on clouds without
* Add 'check\_stack' operation to proxy
* Tweak endpoint discovery for apache-style services
* Move list\_ports to using dogpile.cache
* Create and return per-resource caches
* Lay the groundwork for per-resource cache
* Pop domain-id from the config if we infer values
* Rename baymodel to cluster\_template

0.9.2
-----

* Add template validation support to orchestration
* Add SoftwareDeployment resource to orchestration
* Add SoftwareConfig resource to orchestration
* Rebase orchestration to resource2/proxy2
* Relocate alarm service into a submodule
* Get endpoints directly from services
* Add force-delete into compute service
* Make shared an optional keyword param to create\_network
* Add services operations into compute service
* Fix nova server image and flavor
* Add support for message resource of Zaqar v2 API
* Add support for Zaqar V2 queue resource
* Add a 'meta' passthrough parameter for glance images
* Allow creating a floating ip on an arbitrary port
* Add collect\_cluster\_attrs API to cluster service
* Add ability to upload duplicate images
* Updated from global requirements
* Update Internap information
* Fix requirements for broken os-client-config
* Add new test with betamax for create flavors
* Stop creating cloud objects in functional tests
* Move list\_magnum\_services to OperatorCloud
* Add test for precedence rules
* Pass the argparse data into validate\_auth
* Revert "Fix precedence for pass-in options"
* Add release notes for 1.19.0 release
* Add the new DreamCompute cloud
* Go ahead and admit that we return Munch objects
* Depend on python-heatclient>=1.0.0
* Add update\_server method
* Fix precedence for pass-in options
* Fix cluster resource in cluster service
* Update citycloud to list new regions
* Add API microversion support
* Updated from global requirements
* Refactor image v2 to use resource2/proxy2

0.9.1
-----

* Rebase cluster service to resource2/proxy2
* Improve docstring for some resource2 methods
* Add 'to\_dict()' method to resource2.Resource
* \_alternate\_id should return a server-side name
* Make end-user modules accessible from top level
* Remove discover from test-requirements
* Updated from global requirements
* Replace \_transpose\_component with \_filter\_component
* Fix test\_limits functional test failure
* Remove update\_flavor method from compute
* Expose 'requires\_id' to get\_xxx proxy functions
* Update hacking version
* Updated from global requirements
* Add support for listing a cloud as shut down
* Change operating to interacting with in README
* Add floating IPs to server dict ourselves
* Add support for deprecating cloud profiles
* HAProxy uses milliseconds for its timeout values
* Support fetching network project default quota

0.9.0
-----

* Refactor compute for new resource/proxy
* Allow alternate\_id to be accessed directly
* Add neutron rbac support
* Updated from global requirements
* Treat DELETE\_COMPLETE stacks as NotFound
* Updated from global requirements
* Add support for changing metadata of compute instances
* Refactor fix magic in get\_one\_cloud()
* Add temporary test\_proxy\_base2
* Add segment\_id property to subnet resource
* Use keystoneauth.betamax for shade mocks
* Allow resources to check their equality
* Remove type=timestamp usages
* Cluster user guide - part 2
* Move version definition
* Updated from global requirements
* Add network quotas support
* Reword the entries in the README a bit
* Add shade constructor helper method
* Updated from global requirements
* Add reno note for create\_object and update\_object
* Rename session\_client to make\_rest\_client
* Add magnum services call to shade
* Add helper method for OpenStack SDK constructor
* Add function to update object metadata
* incorporate unit test in test\_shade.py, remove test\_router.py fix tenant\_id in router add functional test test\_create\_router\_project to functional/test\_router.py add unit/test\_router.py add project\_id to create\_router
* Fix clustering event properties
* Add magnum baymodel calls to shade
* Updated from global requirements
* Updated from global requirements
* Make it easier to give swift objects metadata
* Updated from global requirements
* Add volume quotas support
* Add quotas support
* Add missing "cloud" argument to \_validate\_auth\_ksc
* Add error logging around FIP delete

0.8.6
-----

* Be more precise in our detection of provider networks
* Rework delete\_unattached\_floating\_ips function
* Implement network agents
* Updated from global requirements
* Remove data type enforcement on fields (cluster)
* Add network segment resource
* Make sure Ansible tests only use cirros images
* Don't fail getting flavors if extra\_specs is off
* Add initial setup for magnum in shade
* Updated from global requirements
* Workaround bad required params in troveclient
* Trivial: Remove 'MANIFEST.in'
* Trivial: remove openstack/common from flake8 exclude list
* drop python3.3 support in classifier
* Set name\_attribute on NetworkIPAvailability
* Amend the valid fields to update on recordsets
* Move cloud fixtures to independent yaml files
* Add support for host aggregates
* Add support for server groups
* Add release note doc to dev guide
* Remove update\_trust method from identity
* Updated from global requirements
* [Trivial] Remove executable privilege of doc/source/conf.py
* Add Designate recordsets support
* Remove openstack/common from tox.ini
* Fix formatting in readme file
* Add support for Designate zones
* Fail if FIP doesn't have the requested port\_id
* Add support for Network IP Availability
* Add public helper method for cleaning floating ips
* Fix Resource.list usage of limit and marker params
* Rework floating ip use test to be neutron based
* Delete floating IP on nova refresh failure
* Retry floating ip deletion before deleting server
* Have delete\_server use the timed server list cache
* Document create\_stack
* delete\_stack add wait argument
* Implement update\_stack
* Updated from global requirements
* Fix string formatting
* Add domain\_id param to project operations
* Remove get\_extra parameter from get\_flavor
* Honor floating\_ip\_source: nova everywhere
* Use configured overrides for internal/external
* Don't hide cacert when insecure == False
* Start stamping the has\_service debug messages
* Consume floating\_ip\_source config value
* Honor default\_network for interface\_ip

0.8.5
-----

* Trivial: Fix typo in update\_port() comment
* Support :/// endpoints
* Refactor the port search logic
* Allow passing nat\_destination to get\_active\_server
* Use fixtures.TempDir
* Use fixtures.EnvironmentVariable
* Add nat\_destination filter to floating IP creation
* Refactor guts of \_find\_interesting\_networks
* Search subnets for gateway\_ip to discover NAT dest
* Support client certificate/key
* Consume config values for NAT destination
* Return boolean from delete\_project
* Correct error message when domain is required
* Remove discover from test-requirements.txt
* Add version string
* Add release note about the swift Large Object changes
* Delete image objects after failed upload
* Add network resource properties
* Delete uploaded swift objects on image delete
* Add option to control whether SLO or DLO is used
* Upload large objects as SLOs
* Set min\_segment\_size from the swift capabilities
* Don't use singleton dicts unwittingly
* Updated from global requirements
* Update func tests for latest devstack flavors
* Pull the network settings from the actual dict
* Fix search\_domains when not passing filters
* Properly handle overridden Body properties
* Wrap stack operations in a heat\_exceptions
* Use event\_utils.poll\_for\_events for stack polling
* Clarify one-per-cloud network values
* Flesh out network config list

0.8.4
-----

* Follow name\_or\_id pattern on domain operations
* Remove conditional blocking on server list
* Cache ports like servers
* Change network info indication to a generic list
* Workaround multiple private network ports
* Reset network caches after network create/delete
* Fix test\_list\_servers unit test
* Fix test\_get\_server\_ip unit test
* Remove duplicate FakeServer class from unit tests
* BaseProxy refactoring for new Resource
* Mutex protect internal/external network detection
* Support provider networks in public network detection
* Refactor Resource to better serve Proxy
* Re-allow list of networks for FIP assignment

0.8.3
-----

* Consistent resource.prop for timestamps and booleans (cluster)
* Add address scope CRUD
* Support InsecureRequestWarning == None
* Add release notes for new create\_image\_snapshot() args
* Split waiting for images into its own method
* Add wait support to create\_image\_snapshot()
* Also add server interfaces for server get
* Import os module as it is referenced in line 2097
* Consistent resource.prop for timestamps and booleans (object store)
* Fix grant\_role docstring
* Add default value to wait parameter
* Consistent resource.prop for timestamps and booleans (network)
* Use OpenStackCloudException when \_delete\_server() raises
* Always do network interface introspection
* Fix race condition in deleting volumes
* Use direct requests for flavor extra\_specs set/unset
* Fix search\_projects docstring
* Fix search\_users docstring
* Add new tasks to os\_port playbook
* Fix serialize BoolStr formatter
* Deal with is\_public and ephemeral in normalize\_flavors
* Create clouds in Functional Test base class
* Consistent resource.prop for timestamps and booleans (identity)
* Run extra specs through TaskManager and use requests
* Bug fix: Make set/unset of flavor specs work again
* Refactor unit tests to construct cloud in base
* Add constructor param to turn on inner logging
* Log inner\_exception in test runs
* Cluster user guide - first step
* Pass specific cloud to openstack\_clouds function
* Consistent resource.prop for timestamps and booleans (orchestration)

0.8.2
-----

* Consistent resource.prop for timestamps and booleans (telemetry)
* Fix image member apis
* Make get\_stack fetch a single full stack
* Add environment\_files to stack\_create
* Add normalize stack function for heat stack\_list
* Fix content-type for swift upload
* Fix key manager secret resource object
* Consistent resource.prop for timestamps and booleans (key manager)
* Add wait\_for\_server API call
* Consistent resource.prop for timestamps and booleans (image)
* Make metadata handling consistent in Compute
* Fix coverage configuration and execution
* Update create\_endpoint()
* Make delete\_project to call get\_project
* Update reno for stable/mitaka
* Consistent resource.prop for timestamps and booleans (compute)
* Add osic vendor profile
* Test v3 params on v2.0 endpoint; Add v3 unit
* Add update\_service()
* Use network in neutron\_available\_floating\_ips
* Fix functional tests
* Allow passing project\_id to create\_network
* In the service lock, reset the service, not the lock
* Add/Remove port interface to a router
* Consistent resource.prop for timestamps and booleans (block store)
* Bug fix: Do not fail on routers with no ext gw
* Consistent resource.prop for timestamps and booleans (metric)
* Mock glance v1 image with object not dict
* Use warlock in the glance v2 tests
* Fixes for latest cinder and neutron clients

0.8.1
-----

* Add debug message about file hash calculation
* Pass username/password to SwiftService
* Add Hypervisor support to Compute Service
* Also reset swift service object at upload time
* Invalidate volume cache when waiting for attach
* Use isinstance() for result type checking
* Add test for os\_server Ansible module
* Fix create\_server() with a named network
* os\_router playbook cleanup
* Fix heat create\_stack and delete\_stack
* Catch failures with particular clouds
* Allow testing against Ansible dev branch
* Recognize subclasses of list types

0.8.0
-----

* Add Nova server group resource
* Update the README a bit
* Allow session\_client to take the same args as make\_client
* Remove pool\_id attr from creation request body of pool\_member
* Add ability to pass just filename to create\_image
* Make metadata handling consistent in Object Store
* Updated from global requirements
* Override delete function of senlin cluster/node
* Add support for provider network options
* Remove mock testing of os-client-config for swift
* Basic resource.prop for ID attributes (message)
* Fix formulation
* Add release notes
* Add a method to download an image from glance
* Basic resource.prop for ID attributes (cluster)
* Adding Check/Recover Actions to Clusters
* Basic resource.prop for ID attributes (block store)
* Basic resource.prop for ID attributes (orchestration)
* Fix compute tests for resource.prop ID attributes
* Send swiftclient username/password and token
* Add test option to use Ansible source repo
* Basic resource.prop for ID attributes (compute)
* Basic resource.prop for ID attributes (image)
* Add enabled flag to keystone service data
* Clarify Munch object usage in documentation
* Add docs tox target
* create\_service() should normalize return value
* Prepare functional test subunit stream for collection
* Basic resource.prop for ID attributes (identity)
* Use release version of Ansible for testing
* Basic resource.prop for ID attributes (telemetry)
* Modify test workaround for extra\_dhcp\_opts
* Remove HP and RunAbove from vendor profiles
* Added SSL support for VEXXHOST
* Fix for stable/liberty job
* Update attributes uses hard coded id
* Adding check/recover actions to cluster nodes
* Basic resource.prop for ID attributes (network)
* granting and revoking privs to users and groups
* Remove 'date' from Object resource
* Add support for zetta.io
* Make functional test resources configurable
* Fix Port resource properties
* Refactor profile set\_ methods
* Add UNIXEpoch formatter as a type for properties
* Update create\_network function in test\_network
* Stop ignoring v2password plugin

0.7.4
-----

* Add release note for FIP timeout fix
* Documentation for cluster API and resources
* Go ahead and remove final excludes
* Resource object attributes not updated on some interfaces
* include keystonev2 role assignments
* Add release note for new get\_object() API call
* Pass timeout through to floating ip creation
* Fix normalize\_role\_assignments() return value
* Don't set project\_domain if not project scoped
* Add ISO8601 formatter as a type for properties
* Add LoadBalancer vip\_port\_id and provider properties
* Remove a done todo list item
* Raise NotFound exception when get a deleted stack
* add the ability to get an object back from swift
* Clean up removed hacking rule from [flake8] ignore lists
* Updated from global requirements
* allow for updating passwords in keystone v2
* download\_object/get\_object must have the same API
* Map KSA exception to SDK exceptions
* Fix URLs for CLI Reference
* Support neutron subnets without gateway IPs
* Updated from global requirements
* Send keystoneauth a better user-agent string
* Add network availability zone support
* set up release notes build
* Allow resource get to carry query string
* Rework cluster API
* Save the adminPass if returned on server create
* Skip test class unless a service exists
* Fix unit tests that validate client call arguments
* Add attribute 'location' to base resource
* Add preview\_stack for orchestration
* Fix a precedence problem with auth arguments
* Return empty dict instead of None for lack of file
* Pass version arg by name not position
* Allow inventory filtering by cloud name
* Update Quota documentation and properties
* Use \_get\_client in make\_client helper function
* Add barbicanclient support
* Update Subnet Pools Documentation
* Add range search functionality
* Update router's functional tests to validate is\_ha property
* Fix create\_pool\_member and update\_pool\_member
* Updated from global requirements
* Remove openstack-common.conf
* Use assertTrue/False instead of assertEqual(T/F)
* Add IBM Public Cloud
* Remove status property from LBaaS resources
* Add functional tests for DVR router
* Add missing Listener resource properties
* Better support for metadata in Compute service
* Replace assertEqual(None, \*) with assertIsNone in tests
* Update auth urls and identity API versions
* Stop hardcoding compute in simple\_client
* correct rpmlint errors
* Add tests for stack search API
* Fix filtering in search\_stacks()
* Add image user guide
* Bug fix: Cinder v2 returns bools now
* s/save/download/
* Normalize server objects
* Replace assertTrue(isinstance()) with assertIsInstance()
* Replace assertEqual(None, \*) with assertIsNone in tests
* Add support for availability zone request
* Add proxy methods for node actions (cluster)
* Rename timestamp fields for cluster service
* Add cluster actions to cluster proxy
* Update volume API default version from v1 to v2
* Debug log a deferred keystone exception, else we mask some useful diag
* Fix README.rst, add a check for it to fit PyPI rules
* Make server variable expansion optional
* Use reno for release notes
* add URLs for release announcement tools
* Have inventory use os-client-config extra\_config
* Fix unittest stack status
* Allow filtering clouds on command line
* Fix docstring of resource\_id parameter in resource module
* Fix server action resource call
* Munge region\_name to '' if set to None
* Fix some README typos
* Correct response value in resource unittests
* Fix token\_endpoint usage
* Raise not found error if stack is deleted when find\_stack
* Add Receiver resource to cluster service
* remove python 2.6 os-client-config classifier
* Add Subnet Pool CRUD
* remove python 2.6 trove classifier
* Fix shade tests with OCC 1.13.0
* If cloud doesn't list regions expand passed name
* No Mutable Defaults
* Add Quota RUD and missing properties
* Add 'resize' action to cluster
* Add option to enable HTTP tracing
* Fix glance endpoints with endpoint\_override
* Allow passing in explicit version for legacy\_client
* Pass endpoint override to constructors
* Return None when getting an attr which is None when using resource.prop()
* Support backwards compat for \_ args
* Add backwards compat mapping for auth-token
* Add support for querying role assignments
* Add Network mtu and port\_security\_enabled properties
* Replace assertEqual(None, \*) with assertIsNone in tests
* Support block\_store types where IDs are taken
* Remove requests from requirements
* cluster: Use typed props instead of \*\_id
* Add inventory unit tests
* Updated from global requirements
* Rename key\_management to key\_manager
* Replace 'value' arguments in telemetry proxy
* Add Port port\_security\_enabled property
* Replace 'value' arguments in orchestration proxy
* Replace 'value' arguments in object\_store proxy
* Replace 'value' arguments in network proxy

0.7.3
-----

* Replace 'value' arguments in key\_management proxies
* Replace 'value' arguments in image proxies
* Allow arbitrary client-specific options
* Fix server deletes when cinder isn't available
* Pedantic spelling correction
* Fix exceptions to catch for ignore\_missing
* Bug fix: create\_stack() fails when waiting
* Updated from global requirements
* Stack API improvements
* Add admonition to telemetry code
* Bug fix: delete\_object() returns True/False
* Add Router ha, distributed and routes properties
* Fix "report a bug" launchpad project
* Add wait support for ironic node [de]activate
* Add PolicyType resource for clustering
* Add 'ProfileType' resource for senlin
* block\_store and cluster: replace 'value' arguments
* Add cluster-policy binding resource to Senlin
* Skip orchestration functional tests
* Replace 'value' arguments in identity proxies
* Replace 'value' arguments in database proxy
* Replace 'value' arguments in compute proxy

0.7.2
-----

* Update doc link in README
* Remove oslosphinx
* Improve test coverage: container/object list API
* Make a new swift client prior to each image upload
* Improve test coverage: volume attach/detach API
* Skip broken functional tests
* Add ceilometer constructor to known constructors
* Delete key pair and server for Compute example
* Fix 400 error in compute examples
* Fix post test hook script
* Remove the Metric proxy
* Remove an extra dangling doc reference to CDN
* Bug fix: Allow name update for domains
* Improve test coverage: network delete API
* Bug fix: Fix pass thru filtering in list\_networks
* Consider 'in-use' a non-pending volume for caching
* Remove incomplete CDN code
* Improve test coverage: private extension API
* Improve test coverage: hypervisor list
* Fix failing compute example
* Use reno for release notes
* Add support for generalized per-region settings
* Fix a README typo - hepler is not actually a thing
* Make client constructor optional
* Updated README to clarify legacy client usage
* Add simple helper function for client construction
* Add method for registering argparse options
* Updated from global requirements
* Update vexxhost to Identity v3
* Updated from global requirements
* Add identity user guide
* Doc: Add instructions for creating cloud.yaml
* Improve test coverage: list\_router\_interfaces API
* Change the client imports to stop shadowing
* Use non-versioned cinderclient constructor
* Replace stackforge with openstack
* Improve test coverage: server secgroup API
* Improve test coverage: container API
* Make sure that cloud always has a name
* Add BuildInfo resource to cluster service
* Updated from global requirements
* Improve test coverage: project API
* Improve test coverage: user API
* Provide a better comment for the object short-circuit
* Add network user guide
* Remove cinderclient version pin
* Add functional tests for boot from volume
* Remove optional keystoneauth1 imports
* Enable running tests against RAX and IBM
* Don't double-print exception subjects
* Accept objects in name\_or\_id parameter
* Add authorize method to Connection
* Avoid Pool object creating in pool\_member functional calls
* Fix cluster action api invocations
* Normalize volume objects
* Add rebuild\_server function call
* Replace 'MagicMock' with 'Mock'
* Fix argument sequences for boot from volume
* Updated from global requirements
* Trivial: Fix a typo in resource.py
* Add server resize function calls
* Make nova server\_interface function calls work
* Fix typo in action test case
* Add event resource for senlin(cluster) service
* Remove missing capability
* Remove some dead exception types
* Fix senlin update verb
* Replace 'MagicMock' with 'Mock'
* Publicize the \_convert\_id call of Resource class
* Try running examples tests on gate
* Add documentation for testing examples
* Make delete\_server() return True/False
* Add BHS1 to OVH
* Adjust conditions when enable\_snat is specified
* Only log errors in exceptions on demand
* Fix resource leak in test\_compute
* Clean up compute functional tests
* Cleanup doc references to past modules
* Use consistent argument names for find proxies
* Handle cinder v2
* find\_security\_group\_rule does not find by name
* Stop using nova client in test\_compute
* Updates doc enviro to use OpenStack Docs theme
* Retry API calls if they get a Retryable failure

0.7.1
-----

* Fix call to shade\_exceptions in update\_project
* Set "password" as default auth plugin
* Add test for os\_volume Ansible module
* Add find support to BaseProxy
* Fix for min\_disk/min\_ram in create\_image API
* Add test for os\_image Ansible module
* Add support for secure.yaml file for auth info
* Fix warnings.filterwarnings call
* boot-from-volume and network params for server create
* Do not send 'router:external' unless it is set
* Add test for os\_port Ansible module
* Allow specifying cloud name to ansible tests

0.7.0
-----

* Fix a 60 second unit test
* Make sure timeouts are floats
* Remove default values from inner method
* Bump os-client-config requirement
* Do not allow security group rule update
* Fix lack of parenthesis around boolean logic
* Keystone auth integration
* Only pass timeout to swift if we have a value
* Refactor os-client-config usage in from\_config
* Updated from global requirements
* Updated from global requirements
* Add test for os\_user\_group Ansible module
* Add user group assignment API
* Add test for os\_user Ansible module
* Add test for os\_nova\_flavor Ansible module
* Stop using uuid in functional tests
* Make functional object tests actually run
* Fix name of the object-store api key
* Refactor per-service key making
* Add Ansible object role
* Fix for create\_object
* Add support for legacy envvar prefixes
* Four minor fixes that make debugging better
* Add new context manager for shade exceptions, final
* Add ability to selectively run ansible tests
* Add Ansible testing infrastructure
* Create Key Pair
* Fix JSON schema
* Add new context manager for shade exceptions, cont. again
* Pull server list cache setting via API
* Plumb fixed\_address through add\_ips\_to\_server
* Workaround a dispute between osc and neutronclient
* Workaround for int value with verbose\_level
* Support ignore\_missing in find\_pool\_member method
* Remove unneeded workaround for ksc
* Add default API version for magnum service
* Let os-client-config handle session creation
* Remove designate support
* Remove test reference to api\_versions
* Update dated project methods
* Fix incorrect variable name
* Add CRUD methods for keystone groups
* Adjust image v1 to use upload instead of create
* Adjust object\_store to use upload/download names
* Work around a bug in keystoneclient constructor
* Return cache settings as numbers not strings
* Add method to get a mounted session from config
* Bump ironicclient depend
* Make sure cache expiration time is an int
* Convert floats to string
* Add new context manager for shade exceptions, cont
* Don't assume pass\_version\_arg=False for network
* Update network api version in defaults.json
* Don't turn bools into strings
* Use requestsexceptions for urllib squelching
* Use the requestsexceptions library
* Don't warn on configured insecure certs
* Normalize domain data
* Normalization methods should return Munch
* Fix keystone domain searching
* Normalize int config values to string
* Fix server.action does not work
* Remove the example code that mimics a CLI
* Add new context manager for shade exceptions
* teach shade how to list\_hypervisors
* Update ansible router playbook
* Disable spurious urllib warnings
* Add logging module support
* Add methods for getting Session and Client objects
* Update conoha's vendor profile to include SJC
* Use json for in-tree cloud data
* Stop calling obj\_to\_dict everywhere
* Always return a munch from Tasks
* Make raw-requests calls behave like client calls
* Minor logging improvements
* Updated from global requirements
* Update auro to indicate move to neutron
* Copy values in backwards\_interface differently
* Remove another extraneous get for create\_server
* Don't wrap wrapped exception in create\_server
* Skip an extra unneeded server get
* Fix typo in Catalyst region configs
* A better create server example
* Don't wrap wrapped exceptions in operatorcloud.py
* Add docs for create\_server
* Update README to not reference client passthrough
* Move ironic client attribute to correct class
* Move \_neutron\_exceptions context manager to \_utils
* Fix misspelling of ironic state name
* Timeout too aggressive for inspection tests
* Split out OpenStackCloud and OperatorCloud classes
* Adds volume snapshot functionality to shade
* Fix the return values of create and delete volume
* Remove removal of jenkins clouds.yaml
* Consume /etc/openstack/clouds.yaml
* Add logic to support baremetal inspection
* node\_set\_provision\_state wait/timeout support
* Add warning suppression for keystoneauth loggers
* Suppress Rackspace SAN warnings again
* Aligned a few words in the docs
* Sort vendor list
* Add conoha public cloud
* Allow for templated variables in auth\_url
* Use assertDictEqual to test dict equality
* return additional detail about servers
* expand security groups in get\_hostvars\_from\_server
* Always pull regions from vendor profiles
* add list\_server\_security\_groups method
* Add swift object and container list functionality
* Translate task name in log message always
* Add debug logging to iterate timeout
* Change the fallback on server wait to 2 seconds
* Add entry for James Blair to .mailmap
* handle routers without an external gateway in list\_router\_interfaces
* Support to Profile resource for cluster service
* Add node resource for cluster service
* Fix projects list/search/get interface
* Remove unused parameter from create\_stack
* Move valid\_kwargs decorator to \_utils
* Add heat support
* Abstract out the name of the name key
* Add heatclient support
* Use OCC to create clouds in inventory
* Add action resource for cluster service
* novaclient 2.32.0 does not work against rackspace
* Add policy resource for cluster service
* Support private address override in inventory
* Normalize user information
* Set cache information from clouds.yaml
* Make designate record methods private for now
* Fix typos in docstrings: s/stackforge/openstack/
* Rely on devstack for clouds.yaml
* Rename identity\_domain to domain
* Rename designate domains to zones
* Replace Bunch with compatible fork Munch
* Make a few IP methods private
* Update .gitreview for new namespace
* Push filtering down into neutron
* Clean up cache interface, add support for services
* Make floating IP func tests less racey
* Make router func tests less racey
* Create neutron floating ips with server info
* Undecorate cache decorated methods on null cache
* Tweak create\_server to use list\_servers cache
* Add Rackspace LON region
* Add API method to list router interfaces
* Handle list\_servers caching more directly
* Split the nova server active check out
* Pass wait to add\_ips\_to\_server
* Fix floating ip removal on delete
server * Document filters for get methods * Add some more docstrings * Validate requested region against region list * Fix documentation around regions * Add an API reference to the docs * Pass OpenStackConfig in to CloudConfig for caches * Remove shared=False from get\_internal\_network * Make attach\_instance return updated volume object * Tell git to ignore .eggs directory * Align users with list/search/get interface * Add script to document deleting private networks * Add region resource to identity service * Add create/delete for keystone roles * Accept and emit union of keystone v2/v3 service * Use keystone v3 service type argument * Add auth hook for OpenStackClient * Add get/list/search methods for identity roles * Add methods to update internal router interfaces * Add get\_server\_by\_id optmization * Add option to floating ip creation to not reuse * Adds some lines to complete table formatting * Provide option to delete floating IP with server * Update python-troveclient requirement * Add a private method for nodepool server vars * Update required ironicclient version * Split get\_hostvars\_from\_server into two * Invalidate image cache everytime we make a change * Use the ipaddress library for ip calculations * Optimize network finding * Fix create\_image\_snapshot * Add universal=1 to setup.cfg to build python 3 wheels * Return IPv6 address for interface\_ip on request * Plumb wait and timout down to add\_auto\_ip * Pass parameters correctly for image snapshots * Fix mis-named has\_service entry * Provide shortcut around has\_service * Provide short-circuit for finding server networks * Update fake to match latest OCC * Some cleanup * The Compute User Guide * Fix two typos * Put in override for Rackspace broken neutron * Support passing force\_ipv4 to the constructor * identity version is 2.0 * Dont throw exception on missing service * Handle OS\_CLOUD and OS\_REGION\_NAME friendly-like * Server functional test - image and flavor * Added SWITCHengines 
vendor file * Add functional test for private\_v4 * Attempt to use glanceclient strip\_version * Fix baremetal port deletion * Add router ansible test and update network role * Trap exceptions in helper functions * Add more info to some exceptions * Allow more complex router updates * Allow more complex router creation * Allow creating externally accessible networks * Handle glance v1 and v2 difference with is\_public * Get defaults for image type from occ * Use the get\_auth function from occ * update RST for readme so pypi looks pretty * Add a NullHandler to all of our loggers * Remove many redundant debug logs * Fix a little error with the None auth type * Add support to stack update * Make inner\_exception a private member * Add support for Catalyst as vendor * Just do the error logging in the base exception * Store the inner exception when creating an OSCException * Start using keystoneauth for keystone sessions * Change ignore-errors to ignore\_errors * Updated from global requirements * Change ignore-errors to ignore\_errors * Handle ksa opt with no deprecated field * Fall back to keystoneclient arg processing * Fix typo in ovh region names * Move plugin loader creation to try block * Convert auth kwargs '-' to '\_' * Properly handle os- prefixed args in fix\_args * Test kwargs passing not just argparse * Allow configuring domain id once * Add internap to the vendor list * Fix typo in comment - we use ksa not ksc * Defer plugin validation to keystoneauth * Remove an extra line * Add Datacentred to the vendor list * Add ultimum to list of vendors * Add Enter Cloud Suite to vendors list * Add elastx to vendor support matrix * Switch the image default to v2 * Update auro auth\_url and region information * Add citycloud to the vendors list * Return keystoneauth plugins based on auth args * Move keystone to common identity client interface * Remove duplicate lines that are the same as default * Add default version number for heat * Bump the default API version 
for python-ironicclient * Update OVH public cloud information * Do not use name attribute for path argument * Copy attributes in resource constructor * Don't use positional for keypair loaded * Avoid 2.27.0 of novaclient * Handle empty defaults.yaml file * unregister\_machine blocking logic * Fix exception lists in functional tests * Migrate neutron to the common client interface * Remove last vestige of glanceclient being different * Pass timeout to session, not constructors * Delete floating ip by ID instead of name * Move glanceclient to new common interface * Add tox targets for functional testing on 2 and 3 * Fix find available floating IP test * Image import * Updated from global requirements * add scheduler\_hints support for server creation * Make Resource.find more generically applicable * Get rid of example command line options * Delete transport test test\_debug\_post * Remove unecessary parameters to resource methods * Get url for object store object in the normal way * Fix set resource property id attribute * Fix resource property id * Fix image v2 member base\_path * Add object store object functional tests * Object store get sending bad headers * Remove the ips method from server.py * Addition of shade unregister\_machine timeout * More Pythonic Connection example usage * Move service filter out of auth 0.6.2 ----- * Get container off of an Object if its passed * Rename userguides to guides * Add a functional test for find\_extension * Initial support for ironic enroll state * Make sure there is data for the meter test * Fix find\_extension for Network and Compute proxies * Properly pass on Container in path\_args * Do not treat project\_name and project\_id the same * Remove spaces around data in transport debug print * Rename extensions to plugins * Remove redundant connection tests * Move TestTransportBase out of base * Improve the from\_config doc * Remove connection CRUD methods * Add flavor access API * Make client constructor calls consistent 
* Revert "Revert "Use the correct auth\_plugin for token authentication"" * Only log text strings in requests * Updated from global requirements * Change functional testing to use clouds.yaml * Updated from global requirements * Revert "Use the correct auth\_plugin for token authentication" * Add a developer coding standards doc * Ignore infra CI env vars * Fix from\_config argument * Use the correct auth\_plugin for token authentication * Updated from global requirements * Add flavor functional tests * Bug fix for obj\_to\_dict() * Add log message for when IP addresses fail * Add methods to set and unset flavor extra specs 0.6.1 ----- * Fixed problem with service name in comments * Listing flavors should pull all flavors * Be consistent with accessing server dict * Throw an exception on a server without an IP * Be smarter finding private IP * Clarify future changes in docs * Fix KeyError when server's response doesn't contain resource key * Align to generic password auth-type * Change visibility to interface * Fix call to get\_interface * Add functional tests for compute limits * Fixes a typo in test name * Changes in the new marker, initialise new marker to aviod bug 0.6.0 ----- * Remove meta.get\_server\_public\_ip() function * Document create\_object * Remove unused server functions * Fix two typos and one readablity on shade documentation * Clean up services in profile * Pass socket timeout to swiftclient * Process config options via os-client-config * Update ansible subnet test * Fix test\_object.py test class name * Claim no messages correctly * Fix for swift servers older than 1.11.0 * Clarify floating ip use for vendors * Add per-service endpoint overrides * Use disable\_vendor\_agent flags in create\_image * Use os-client-config SSL arg processing * Correctly pass the server ID to add\_ip\_from\_pool * Add functional tests for telemetry sample * Add configuration function using os-client-config * Updated from global requirements * Add initial designate 
read-only operations * Add functional tests for telemetry meter * Fix key management proxy docs * add .eggs to .gitignore * Add wait for delete method * Always use a fixed address when attaching a floating IP to a server * Fix spelling in proxy * Added functional tests for compute image API * Drop py33 support * Remove requirements.txt from tox.ini * Remove requirements.txt from tox.ini * Updated from global requirements * Update mock requirements * Catch leaky exceptions from create\_image() * Add missing docstrings * Dynamically load services * Remove py26 and py33 from tox.ini * Rename 'endpoint\_type' to 'interface' * Have service name default to None * Add flavor admin support * Remove region list from single cloud * Clean up document warnings * Fix debug logging lines * Split account/container metadata to own resources * Change auth plugin names to match KSA * Account for Error 396 on Rackspace * Updated from global requirements * Fix small error in README.rst * Generallize example so it can be modified easily * Fix set\_default() when used before config init * Fix logger name for examples * Allow use of admin tokens in keystone * Add query params to all the proxy list calls * Remove limit/marker from object\_store proxy * Updated from global requirements * Remove thin interface * Convert list and find to use params parameter * Add ignore\_missing to proxy find * Fix identity domain methods * Update ansible module playbooks * Rework how we get domains * Argument consistency in test\_proxy\_base * Log reauth * Specify the config file with environment variable * Add support for configuring region lists with yaml * Fix "Bad floatingip request" when multiple fixed IPs are present * Add docstrings for database resources * Add or change timestamp suffixes to "\_at" * Remove unnecessary None handling * Add Ansible module test for subnet * Add Ansible module test for networks * Fix rendering issue in Readme * Add a testing framework for the Ansible modules * Some 
updates to object\_store user guide * Support project/tenant and domain vs. None * Add CRUD methods for Keystone domains * Don't allow users to set all API versions the same * Raise exception for nova egress secgroup rule * Add docstrings to key\_management resources * Add docstrings for Metric resources * Rename keystore key-management * Fix cacert for tests and examples * Modify secgroup rule processing * Have resource find use get if possible * Updated from global requirements * Check results of find before returning * Move object\_store functional tests to proper name * Make sure we are returning floating IPs in current domain * Correctly name the functional TestImage class * Include examples in toctree * Change docs links to generic format * Add the pbr generated changelog to the docs * Locking ironic API microversion * Add Neutron/Nova Floating IP tests * Refactor verify\_get tests * Add docstrings to telemetry resources * Add docs for Image v1 and v2 resources * Add orchestration resource docs * Adding SSL arguments to glance client * Clean up vendor data * Add support for indicating preference for IPv6 * Use Message.existing() to create existing messages * Add normal list params to list method for telemetry statistics * Fix SSL verify/insecure translation * Add functional tests for telemetry statistics * Add functional tests for telemetry alarm\_change * Add functional tests for telemetry alarm crud * Add functional tests for telementry resource * Set sys.stdout for logging for examples and tests * Remove list\_keypair\_dicts method * Do not use environment for Swift unit tests * Add Neutron/Nova Floating IP attach/detach * Fix available\_floating\_ip when using Nova network * Skip Swift functional tests if needed * Fix AttributeError in keystone functional tests * Update keypair APIs to latest standards * Remove namespace from network ext test * Add Neutron/Nova Floating IP delete (i.e. deallocate from project) * Add Neutron/Nova Floating IP create (i.e. 
allocate to project) * Docs for logging * More selective logging * Convert ironicclient node.update() call to Task * Convert ironicclient node.get() call to Task * Move TestShadeOperator in a separate file * Fix intermittent error in unit tests * Pin cinderclient * Normalize project\_name aliases * Add comment explaining why finding an IP is hard * Add IPv6 to the server information too * Use accessIPv4 and accessIPv6 if they're there * Add Neutron/Nova Floating IP list/search/get * Catch all exceptions around port for ip finding * Centralize exception management for Neutron * Fix MD5 headers regression * Enable Orchestration in DevStack * Ensure that service values are strings * Pass token and endpoint to swift os\_options * Correct test\_quota functional test * Add path\_args to create and update proxy methods * Clean up a few more python-openstacksdk references * Move volume docs to block\_store * Add \_at suffix to created/updated Server attrs * Convert ironicclient node.validate() call to Task * Convert ironicclient node.list() call to Task * Refactor verify\_list tests * Return True/False for delete methods * Updated from global requirements * Return the entire response in an InvalidResponse * Rename volume to block\_store * Rename project to openstacksdk * Add some accessor methods to CloudConfig * Add delete method for security group rules * Add get\_server\_external\_ipv6() to meta * Refactor find\_nova\_addresses() * Replace get\_server\_public\_ip() with get\_server\_external\_ipv4() * Add get\_server\_external\_ipv4() to meta * Add more parameters to update\_port() * Improve documentation for create\_port() * Correct get\_machine\_by\_mac and add test * Add create method for secgroup rule * Add functional tests for update\_ip and find\_available\_ip * Coalesce port values in secgroup rules * Move \_utils unit testing to separate file * Updated from global requirements * Add funtcional tests for port * Rename clustering to cluster service * Switch 
put\_update to patch\_update * Add functional tests for floating IP * Add pool\_id param for pool\_member related proxy methods * Updated from global requirements * Fix missing doc on identity v2 * Add Heat resource support * Convert telemetry capability list to generator * Fix vpn service docstring error * Use ostestr for tests * Fix missing doc on identity v3 * Add functional tests for telemetry capabiliities * Updated from global requirements * Use very long functional test linger * Increase time we linger waiting for delete * Fix functional test gate * Fix create proxy issue * Support variations of external network attribute * Move Server.created to created\_at * Compare message data outside of assert\_called\_with * Add action() and check() method for heat support * Add secgroup update API * Add missing tests * Add very initial support for passing in occ object * Add test to check cert and key as a tuple * Don't emit volume tracebacks in inventory debug * Return new secgroup object * Add functional tests for security group rule * Add functional tests for add & remove router interface * Use one yaml file per vendor * Add functional test for Network Quota * Raise warning when a vendor profile is missing * Some cleanup in the README.rst * Allow create\_object to create objects * Refactor verify\_delete in proxy tests * Add support for OVH Public Cloud * Refactor verify\_create in proxy tests * Add SSL documentation to README.rst * Port ironic client port.get\_by\_address() to a Task * Port ironic client port.get() to a Task * Add inventory command to shade * Extract logging config into a helper function * Refactor verify\_update in proxy tests * Move stray metric test under proper directory * Stringify project details * Raise a warning with conflicting SSL params * Change references of "clouds.yaml" for real file * Add create method for security groups * Add delete method for security groups * Switch to SwiftService for segmented uploads * Add support to get a 
SwiftService object * Add functional tests for servers * Add functional tests for security groups * Add functional tests for container metadata and delete * Claim messages and delete messages * Add cover/ folder to .gitignore * Raise a warning when using 'cloud' in config * Add cloud vendor files config in doc * Add network/v2 vpn service resource * Add 'to\_dict' operation to Resource class * Senlin cluster resource and unit tests * Add port resource methods * Split security group list operations * Add keystone endpoint resource methods * Add Keystone service resource methods * Rely on defaults being present * Consume os\_client\_config defaults as base defaults * Remove hacking select line * Provide a helper method to get requests ssl values * Add design for an object interface * Fix proxy docs * Port ironic client node.list\_ports() to a Task * Port ironic client port.list() to a Task * Split list filtering into \_utils * Add path\_args when invoking Resource.list() from proxy layer * Complete property definition in some lb resources * Cast nova server object to dict after refetch * Split iterate\_timeout into \_utils * Cleanup OperatorCloud doc errors/warnings * Add more defaults to our defaults file * Remove fingerprint as keypair name * Add docstring to heat stack resource * Create messages on a queue * Add comment for tox coverage failure * Move compute limits to new \_get API * Create clouds.yaml for functional tests * Change naming in vendor doc to match vendors.py * Add auro to list of known vendors * Add list of image params needed to disable agents * Added functional tests for subnet * Delete queue * Added functional tests for router * Fix proxy delete error * Rename messaging module to message * Update pbr version pins * Add set\_one\_cloud method * Add tests for get\_cloud\_names * Add flag to indicate handling of security groups * Don't normalize too deeply * Add tests for cloud config comparison * Metric resource docs framework * Keystore resource 
docs framework * Image resource docs framework * Add inequality method * Decorator for functional tests to check services * Add an equality method for CloudConfig * Capture the filename used for config * Normalize all keys down to \_ instead of - * Expose method for getting a list of cloud names * Set metadata headers on object create * Fix catalog error * Rename cloud to profile * Don't pass None as the cloud name * Initial commit for the Messaging service (Zaqar) * Always refresh glanceclient for tokens validity * Don't cache keystone tokens as KSC does it for us * Make sure glance image list actually runs in Tasks * Remove oslo incubator config file * Make caching work when cloud name is None * Accept intermediate path arguments at proxy * Removed fields=id\_attribute in find function * Handle novaclient exception in delete\_server wait * Minor changes to top level docs * Module loader docs * Orchestration resource docs * Identity resource doc framework * Add telemetry resource docs * Support PUT in Image v2 API * Add some examples documentation * Fix functional tests deletes * Add id\_attribute to base proxy calls * Remove pass from delete functional tests * Fix underline for docs * Add keypair functional tests * Remove some mentions to preferences from docs * Add requirements.txt file for readthedocs * Make ironic use the API version system * Correct the API base path of lbaas resources * Fix documentation warnings 0.5.0 ----- * Change example for preferences * Move from UserPreference to Profile * Update orchestration functional tests * Add proxy docs and empty user guides * Set OS\_CLOUD for functional tests * proxy find telemetry * proxy find orchestration * proxy find network * proxy find keystore * proxy image find * proxy find identity * proxy find database * AFT compute extension * AFT network network CRUD * AFT network * Enable occ cloud region for example * change hacking requirements and fix hacking problems * proxy find compute * Make images list 
paginated * Create a method to format urls * Identity list updates * proxy image lists * Proxy database lists * Proxy keystore lists * Proxy network lists * Fix telemetry proxy comment * Proxy metric lists * Proxy orchestration lists * Proxy lists telemetry * Support for verify option for examples and tests * Rename list\_flavors flavors in example * Functional tests use OS\_CLOUD environment variable * Fix flavor functional test * No headers in body for create and update * Fix proxy object get comment * Catch client exceptions during list ops * Replace ci.o.o links with docs.o.o/infra * Changes for get calls in image proxies * Changes for database proxy gets * Changes for compute proxy list calls * Fix comment error on gets and heads * Common head method for proxies * Common list method for proxies * Pass OS\_ variables through to functional tests * Remove httpretty * Changes for get calls in object\_store proxy * Orchestration proxy changes * Changes for get calls in volume proxy * Changes for get calls in telemetry proxy * Improve error message on auth\_plugin failure * Handle novaclient exceptions during delete\_server * Changes for get calls in network proxy * Changes for get calls in keystore proxy * Changes for get calls in identity proxies * Get changes for compute proxy * Basic object store container functional tests * Create base class for functional tests * Add floating IP pool resource methods * Proxy get method * Don't error on missing certs * Remove clouds.yaml from gitignore * Add clouds.yaml file for contributor testing docs * Activate the cdn stuff * Temporarily work around httpretty again * Change overriding defaults to kwarg * Stop leaking server objects * Add tests for OSC usage * Use fakes instead of mocks for data objects * Use appdirs for platform-independent locations * Add UnitedStack * Expose function to get defaults dict * Add default versions for trove and ironic * Sort defaults list for less conflicts * Only add fall through cloud as a 
fall through * Fix several small nits in network v2 proxy * Update images API for get/list/search interface * Rewrite extension checking methods * Update server API for get/list/search interface * Compute proxy update changes * Volume proxy create changes * Telemetry proxy create changes * Object Store proxy create changes * Network proxy create changes * Add the IPv6 subnet attributes * Updated from global requirements * Keystore proxy create changes * Image create proxy changes * Add flag to indicate where floating ips come from * Identity create proxy changes * Database create changes for proxy * Create changes for compute proxy * get\_one\_cloud should use auth merge * Also accept .yml as a suffix * Updated from global requirements * Proxy create method * Fix delete\_server when wait=True * Initial version of clustering service support * Return Bunch objects instead of plain dicts * Add os-client-config support for examples * Fix docs for volume proxy delete * Proxy update telemetry changes * Proxy update network changes * Proxy update keystore changes * Proxy update image changes * Proxy update identity changes * proxy update database changes * Switch tasks vs put on a boolean config flag * Enhance the OperatorCloud constructor * Convert node\_set\_provision\_state to task * Update recent Ironic exceptions * Enhance error message in update\_machine * Remove crufty lines from README * Rename get\_endpoint() to get\_session\_endpoint() * Update vendor support to reflect v2 non-task * Make warlock filtering match dict filtering * Fix exception re-raise during task execution for py34 * Add more tests for server metadata processing * Add thread sync points to Task * Add early service fail and active check method * Add a method for getting an endpoint * Raise a shade exception on broken volumes * Split exceptions into their own file * Add minor OperatorCloud documentation * Proxy update method * Allow for int or string ID comparisons * Add flag to trigger task 
interface for rackspace * Change ironic maintenance method to align with power method * Add Ironic machine power state pass-through * Update secgroup API for new get/list/search API * Remove references to v2\_0 from examples * Move network example into directory * Move keypair to standalone example * Synchronize global requirements * Fix functional tests to run against live clouds * Add functional tests for create\_image * Do not cache unsteady state images * Add tests and invalidation for glance v2 upload * Allow complex filtering with embedded dicts * Add proxy for trust operations * Move jenkins create and delete in their onw files * Call super in OpenStackCloudException * Add Ironic maintenance state pass-through * Add update\_machine method * Replace e.message with str(e) * Update flavor API for new get/list/search API * Add a docstring to the Task class * Remove REST links from inventory metadata * Have get\_image\_name return an image\_name * Add post hook file for a functional test gate * Move wait\_for\_status to resource module * Fix get\_hostvars\_from\_server for volume API update * Add test for create\_image with glance v1 * Explicitly request cloud name in test\_caching * Add test for caching in list\_images * Test flavor cache and add invalidation * Fix major update\_user issues * create\_user should return the user created * Test that deleting user invalidates user cache * Use new getters in update\_subnet and update\_router * Update volume API for new getters and dict retval * Search methods for networks, subnets and routers * Update unregister\_machine to use tasks * Invalidate user cache on user create * Apply delete changes to image proxies * Apply delete changes to keystore proxy * Apply delete changes to identity proxies * Apply delete changes to volume proxy * Apply telemetry delete change * Apply orchestration delete change * Apply object\_store delete changes * Apply network delete changes * Apply delete API changes * Update 
register\_machine to use tasks * Add test of OperatorCloud auth\_type=None * Allow name or ID for update\_router() * Allow name or ID for update\_subnet() * Add test for user\_cache * MonkeyPatch time.sleep in unit tests to avoid wait * Create stack * Updated from global requirements * Add more detail to method not supported exception * Add module name to repr string * Add patch\_machine method and operator unit test substrate * Wrap ironicclient methods that leak objects * Basic test for meta method obj\_list\_to\_dict * Change Ironic node lookups to support names * Add meta method obj\_list\_to\_dict * The first functional test * Document vendor support information * Reset cache default to 0 * Add test for invalidation after delete * Deprecate use of cache in list\_volumes * Invalidate volume list cache when creating * Make cache key generator ignore cache argument * Add get\_subnet() method * add .venv to gitignore * Move region\_names out of auth dict * Add runabove to vendors * Add image information to vexxhost account * Add API method update\_subnet() * Add API method delete\_subnet() * Add API method create\_subnet() * Add vexxhost * Add DreamCompute to vendors list * Allow overriding envvars as the name of the cloud * Put env vars into their own cloud config * Add keystoneclient to test-requirements * Actually put some content into our sphinx docs * Unsteady state in volume list should prevent cache * Test volume list caching * Allow passing config into shade.openstack\_cloud * Refactor caching to allow per-method invalidate * Add tests for caching * Rename auth\_plugin to auth\_type * Update os-client-config min version * Fix volume operations * Determine limit based on page size * Improve image.v2.tag * Proxy delete method * Add \_check\_resource to BaseProxy * Rework update\_attrs so dirty list always updated * Fix exception in update\_router() * Add API auto-generation based on docstrings * Fix docs nit - make it clear the arg is a string * Poll on the 
actual image showing up * Add delete\_image call * Skip passing in timeout to glance if it's not set * Add some unit test for create\_server * Migrate API calls to task management * Fix naming inconsistencies in rebuild\_server tests * identity/v3 trust resource * Add task management framework * Namespace caching per cloud * Allow for passing cache class in as a parameter * Make way for the functional tests * Add 'rebuild' to shade * Let router update to specify external gw net ID * Create swift container if it does not exist * Fix a use of in where it should be equality * Disable warnings about old Rackspace certificates * Add trust-id to command line arguments * Pass socket timeout to all of the Client objects * Add methods for logical router management * Add api-level timeout parameter * Update .gitreview for git section rename * Add a Proxy for the Volume service * Custom exception needs str representation * metric: add support for generic resources * Adjust httpretty inclusion * Add new \_verify to proxy base tests * Add ResourceNotFound exception * Updated from global requirements * Raise NotFoundException for 404s * Remove httpretty from resource tests * Remove httpretty from Transport tests * Start moving Transport tests off of httpretty * Add requests\_mock to test-requirements.txt * Add basic unit test for shade.openstack\_cloud * Small fixes found working on ansible modules * Disable dogpile.cache if cache\_interval is None * Add support for keystone projects * Fix up and document input parameters * Handle image name for boot from volume * Clean up race condition in functional tests * Remove unused utils module in auth tests * Make get\_id public * Change dogpile cache defaults * Add initial compute functional tests to Shade * Image v2 Proxy should inhert from BaseProxy * Get the ID of a single sub-resource * Avoid httpretty 0.8.8 because it is broken * Add missing equal sign to README * Remove repr calls from telemetry classes * Canonical 
request/response logging * Add cover to .gitignore * Make the keypair example a bit more robust * Delete a Stack * Convert example --data to use eval instead of json * Add cover to .gitignore * Fix jenkins name and floating ip * Add ServerDetail so list can include details * Set Flavor and Image resources on Server * Set put\_update for compute.v2.server.Server * Catch AttributeError in header with no alias * Set int type on several container headers * Move the network stuff out of the jenkins example * Fix compute proxy for server wait * Some convenience methods for security groups * Flesh out api version defaults * Set headers on object before creating/updating * Handle project\_name/tenant\_name in the auth dict * Remove py26 jobs * Remove CaseInsensitiveDict * Add two newlines to the ends of files * Rename auth\_plugin to auth\_type * Add ironic node deployment support * identity: use set() for valid\_options * identity: add missing tenant options to valid options * Add base for Proxy classes to inherit from * Fix assert order in test\_resource * Ensure that assert order is (expected, actual) * Remove transaction timestamp from container * Fix glossary and other 404s * metric: add archive policy support * metric: add support for metric resource * Align cert, key, cacert and verify with requests * Add methods to create and delete networks * Add behavior to enable ironic noauth mode * Add support for configuring dogpile.cache * Fix coverage report * Add more testing of vendor yaml loading * More comprehensive unit tests for os-client-config * Adjust paginate argument usage * Allow keystone validation bypass for noauth usage * Add basic unit test for config * Removed x-auth-token from obj.py * Fix bad links out of the index * Allow user to set a prop back to default * Reorder envlist to avoid the rm -fr .testrepository when running tox -epy34 * Make image processing work for v2 * Utilize dogpile.cache for caching * Add support for volume attach/detach * Do not 
allow to pass \*-cache on init * Import from v2 instead of v1\_1 * Remove id from put and patch requests * Add unit test for meta.get\_groups\_from\_server * Add unit tests for meta module 0.4.1 ----- * Send empty dict when no headers on create/update 0.4.0 ----- * Adjust long list handling for Flavor * Add ImageDetail for extra information * Fix comment and assert order * Add FlavorDetail for extra information * Use case insensitive dict for Resource attributes * Support listing non-paginated response bodies * Create header property * Convert user\_name to username * resync ksc auth plugins * omit 0.8.7 httpretty * Add a method to create image snapshots from nova * Return extra information for debugging on failures * Don't try to add an IP if there is one * Provide more accurate repr * Document compute/v2 resources * Fix the discoverable plugin with tokens * Increase Resource test coverage * Updated from global requirements * Fix up the limits documentation * Add the Volume resource for the Volume service * Add the Snapshot resource for the Volume service * Add the Type resource for the Volume service * add metric proxy and service * Move metric capabilities into a single Resource * Serialize Resource types before sending to server * Allow Resource attributes to be updated * Provide one resource for Compute v2 Limits * Adjust Container override methods * Mutate Resource via mapping instead of dict * Revamp README file * Add hasExtension method to check cloud capabilities * Create server requires \*Ref names during POST * Prefer storing prop values as their name * Don't compare images when image is None * Support Resource as a type for properties * Revert to KSC auth plugins * Add logging functionality to openstack.utils * Add API docs for network.v2 * Add Resource.name property * Bypass type conversion when setting default prop * telemetry: fix threshold rule in alarm to be dict * telemetry: add missing alarm property severity * Add service\_catalog property * 
telemetry: add support for Gnocchi capabilities * Introduce the Volume service * Remove unnecessary container creation * Make is\_object\_stale() a public method * Prefer dest value when option is depricated 0.3.2 ----- * Update README and setup text for PyPI * Allow region\_name to be None * Don't return the auth dict inside the loop * Make sure we're deep-copying the auth dict 0.3.1 ----- * Set Resource.page limit to None * Add Resource.from\_name * Add status\_code to HttpException * Provide a better default user-agent string * Fix broken object hashing * Adds some more swift operations * Adds get\_network() and list\_networks function * Get a stack * Build up contributor documentation section * Build up user documentation section * Fix the example on the Usage page * Fix telemetry resource paths * Add support for creating/deleting volumes * Remove version from path * Get auth token lazily * Reorganize existing documentation files * Convert the find method to use the page method rather than list * Add six to requirements * Remove iso8601 from dependencies * Rename floatingip to floating\_ip * Pass service\_name to nova\_client constructor * Create a neutron client * Port to use keystone sessions and auth plugins * Add consistent methods for returning dicts * Add get\_flavor method * Make get\_image return None * Allow examples.get call without data * Resource.find should not raise ResourceNotFound * Use the "iterate timeout" idiom from nodepool * Remove runtime depend on pbr * Provide Rackspace service\_name override * Working script to create a jenkins server * Add the capability for the user to get a page of data * Fix obj\_to\_dict type filtering * Server convenience methods wait and get IPs * Remove flake/pep8 ignores * Adds a method to get security group * Use the proper timeutils package name * Adjust some parameters and return types * Refactor auth plugin loading * Get rid of some useless code * Make better support of service versions * Fix RuntimeError 
on Python 3 while listing objects * Pull in improvements from nodepool * Remove positional args to create\_server * Don't include deleted images by default * Add image upload support * Refactor glance version call into method * Support uploading swift objects * Debug log any time we re-raise an exception * Start keeping default versions for all services * Support keystone auth plugins in a generic way * Replace defaults\_dict with scanning env vars * Correct auth\_plugin argument values * Better exception in Auth plugins * Implement Swift Proxy object and example * Build Resource from either existing object or id * Add image v2 proxy * Remove py26 support * Explain obj\_to\_dict * Fix python3 unittests * Complete the Resource class documentation * Updated from global requirements * Change meta info to be an Infra project * Fix flake8 errors and turn off hacking * Fix up copyright headers * Add image v2 tags * Add better caching around volumes * Updated from global requirements * Remove extra GET call when limit provided to list * Workflow documentation is now in infra-manual * Workflow documentation is now in infra-manual * Add object\_store resource documentation * Neutron apparently doesn't support PATCH * compute/v2 server metadata and server meta resouce * AttributeError trapped when it won't be raised * Prepare for documentation of Resources * Don't attempt to \_\_get\_\_ prop without instance * Reswizzle proxy tests * Corrections to readme * keystore proxy methods and tests * Support regionless/global services * Expand r\_id to resource\_id for more clarity * identity/v2 extension resource * identity version resource and version fixes * Correct Resource.id deleter property 0.2.1 ----- * Correct the namespace for Server resource 0.2.0 ----- * Updated from global requirements * Add members resource to image v2 api * Add image resource to v2 images api * Add image V2 service version * remove id\_only from keypair find * Add the ability to set a new default for 
props * Updated from global requirements * Add coverage-package-name to tox.ini for coverage * Updated from global requirements * Fixed a typo in a comment in the tests * Have prop attribute access return None by default * Get prop aliases through their type, not raw value * Add getting started steps for novices * Expand index toctree to two levels * Add details to HttpException string * Fixed a typo in a docstring * Compute proxy methods * Implement iterator paging * Create a discoverable plugin * Sample thin interface * Rename keypairs to keypair to be more consistent * keystore/v1 container resource * keystore/v1 order resource * Add keystore service and secret resource * Minor docs updates to index, installation and usage * Use project name to retrieve version info * Initial "Getting Started" guide * Identity v2 proxy methods * Identity v3 proxy methods * Telemetry proxy methods * Orchestration proxy methods * Network proxy methods * Image proxy methods * Database proxy methods * base class for proxy tests * Use yaml.safe\_load instead of load * Updated from global requirements * Throw error if a non-existent cloud is requested * Properly parse the keypair response and workaround issues * Have resource CRUD return self to simplify the proxies * The fixed ip format on the port may actually be an array * Add resource CRUD to connection class * Add an example for the connection class * move examples to code use preference docs * Move examples to code session docs * Move examples service filter to code * Move transport examples to code * Support boot from volume * Make get\_image work on name or id * Create a method to handle the set\_\* method logic * Fixed a number of typos * Updated from global requirements * High level interface * Fix a missed argument from a previous refactor * Add some additional server meta munging * identity v3 docs * class linke for v2 * resource autodocs * update :class references * identity v2 docs * auth plugin identity base docs * base 
auth plugin docs * Move the examples docs into the code * remove pointless test * Map CloudConfig attributes to CloudConfig.config * service filter docs * fix identity service comment * User preference docs * Add connection documentation * Convert transport docs to autodoc * Convert the session object to autodoc * Change configuration for sphinx autodoc * Reverse order of tests to avoid incompatibility * Support injecting mount-point meta info * Add ability to extract a list of versions * Allow user to select different visibilities * Add user preference and example CLI to build it * Fix the v2 auth plugin to handle empty token * Move ironic node create/delete logic into shade * Refactor ironic commands into OperatorCloud class * fix typo in create\_server * Don't die if we didn't grab a floating ip * Process flavor and image names * Stop prefixing values with slugify * Don't access object members on a None * Make all of the compute logic work * Handle booleans that are strings in APIs * Add delete and get server name * Fixed up a bunch of flake8 warnings * Add in server metadata routines * Introduce the connection class * Add support for argparse Namespace objects * Add support for command line argument processing * Plumb through a small name change for args * Updated from global requirements * Consume project\_name from os-client-config * Handle lack of username for project\_name defaults * Handle the project/tenant nonesense more cleanly * add Ironic client * Add cache control settings * Handle no vendor clouds config files * Remove unused class method get\_services * Apply id\_attribute to Ceilometer Meters * Remove extraneous vim editor configuration comments 0.1.0.dev20141008 ----------------- * Update README requirements * Updated from global requirements * Use the now graduated oslo.utils * Make user\_id a higher priority for v2 auth * Use stevedore to load authorization plugins * Prepare for auth plugins * Use meter\_name as id for statistics * compute/v2 
limits\_absolute resource * Determines version from auth\_url when not explicit * Updates to use keystone session * Add clouds-public.yaml * Add ability to find an available floating ip * Add support for Samples coming from Ceilometer * Add support for telemetry sample Statistics * Prep for move to stackforge * Handle missing vendor key * Make env vars lowest priority * Handle null region * Discover Trove API version * Update the README file for more completeness * Offload config to the os-client-config library * Get rid of extra complexity with service values * Remove babel and add pyyaml * Port in config reading from shade * Initial Cookiecutter Commit * Floating ip does not have an id * Fix find for resources with id\_attribute * Add find command to the examples * identity/v3 user resource * Updated from global requirements * Add database users for instances * Apply id\_attribute throughout resources * compute/v2 keypairs resource * Add databases to instances * Move cacert/insecure awkwardness examples/common * Updated from global requirements * Add docs environement to testing interface * identity/v3 policy resource * identity/v3 domain resource * identity/v3 project resource * identity/v3 credential resource * identity/v3 group resource * identity/v3 endpoint resource * identity/v3 service resource * Add \_\_init\_\_ files to identity service * Add example code to README * Add volumes and config file parsing * Change example so CLI names match object arguments * Remove unused os-url option * Fix log invocations * Adding database flavor support * Fixing path to generated documentation * Implement the rest of Swift containers and objects * Work toward Python 3.4 support and testing * Sync up with latest keystoneclient changes * Allow headers to be retreived with GET bodies * Remove some extra lines from the README * Add the initial library code * Initial cookiecutter repo * Add domains for v3 authentication * identity/v2 role resource * network/v2 pool\_member 
resource * Fix the example authenticate * compute/v2 limits\_rate resource * Add example update * Add id\_attribute and resource\_name to resource * Adding alarm history to telemetry * Introduces example of running a method on a resource * Add the Alarm resource to telemetry * Updated from global requirements * compute/v2 server\_interface resource * Fix os\_region in example session * orchestration version resource * orchestration/v1 stack resource * Change OS\_REGION to OS\_REGION\_NAME * Change the capabilities to capability * Publicize resource\_id property * Add Meter resource to telemetry * Add the Resource resource to telemetry * Exception has wrong string for list * Server IP resource * Various standard server actions * compute/v2 server resource * identity/v2 tenant resource * identity/v2 user resource * Updated from global requirements * Fixes for telemetry * database/v1.0 instance resource * Add support for Swift containers * compute/v2 image resource * compute/ version resource * compute/v2 flavor resource * Introducing telemetry service * compute/v2 extension resource * network/v2 network resource * Add some factories * network/v2 security\_group\_rule resource * network/v2 quota resource * Add support for interface add/remove to routers * network/v2 metering\_label\_rule resource * network/v2 port resource * network/v2 load balancer healthmonitor resource * network/v2 subnet resource * network/v2 load balancer pool resource * network/v2 loadbalancer resource * network version resource * network/v2 security\_group resource * Have examples handle snake case to camel case * network/v2 load balancer listener resource * network/v2 floatingip resource * network/v2 extension resource * network/v2 metering\_label resource * network/v2 router resource * Separate head restrictions from get * Make logging more efficient in transport * Proper string formatting in exception messages * Minor fixes to examples * Full flavor CRUD * Don't join None values in 
utils.urljoin * Add example script for HEAD requests * Keep id in \_attrs * Add find method to resource * H405 activate * Add parameters to the list for filtering * Have exceptions print something by default * Make these comments H405 compliant * Allow --data option to pass UUID instead of json * Add some comments to the examples * Simple network resource * Add support for HEAD requests of resources * Change transport JSON handling * Fixed a small grammatical mistake in a docstring * Add example get * Add example delete * Example create command * Add common method to find a resource * The resource repr method should print id * Have the service catalog ignore empty urls * Add --data option to debug curl logging * Make version parsing in examples more intelligent * Important changes for service filtering * Very basic image resource * Updated from global requirements * json default for transport and resource \_\_repr\_\_ * Make the session command a little more friendly * Synced from global-requirements * Example session command * Remove a now unused flake8 exclude directory * Example code reorg and auth examples * Removed two flake8 skips * Sync hacking requirement with global requirements * Important auth fixes * Capitalize SDK more reasonably in an exception name * Move MethodNotSupported exception to exceptions * HttpException should be derived from SdkException * Some docs for the session object * Get rid of base\_url from transport * Rearrange session arguments * Clean up transport stuff out of the resource class * Resolve Ed's concerns on README * Fleshed out the README and removed dependency on babel * Removed now unnecesary workaround for PyPy * Comment in middle of glossary messes it up * Authentication from keystoneclient * Add command structure to example code * Update sphinx from global-requirements * Wrap lines at the appropriate length and use native sphinx constructs * Reorganize the index a bit to make the important content be at the top * Fix an 
innacuracy in an example in the docs * Fixed an emberassing typo * Added a makefile for sphinx * Converted the glossary to use native Sphinx markup * Mark openstacksdk as being a universal wheel * Add initial glossary * Add Transport doc * Resource Properties * Update the requirements * Finish transport renaming in the tests * Add some sample scripts * Add base resource class * Session layer with base authenticator * Add .venv to .gitignore * Docs cleanup * Rename session to transport * Add redirection handling to openstack.session.Session * Add requests.Session wrapper class * Fix temporary pypy gate issue with setuptools * Several stylistic fixes for the docs * Switch to oslosphinx * add newlines to end of requirements files * remove api_strawman * reigster->register typo * Added support in the strawman for re-authentication * Initial new version with demonstration of clean implementation * Added sample directory layout for pystack * Remove locale overrides in tox * Fix misspellings in python openstacksdk * Finished the pystack strawman overview * Initial pystack strawman docs * Made tox -e pep8 passed. Also made git review work * setting up the initial layout; move the api proposals to api_strawman * Added example code based on pystack * This should be a plural * Consolidate readmes * Initial blob of thoughts from me * Initial commit
openstacksdk-0.11.3/create_yaml.sh0000777000175100017510000000132413236151340017135 0ustar zuulzuul00000000000000
#!/bin/bash
#
# NOTE(thowe): There are some issues with OCC envvars that force us to do
# this for now.
#
mkdir -p ~/.config/openstack/
FILE=~/.config/openstack/clouds.yaml
export OS_IDENTITY_API_VERSION=3 # force v3 identity

echo 'clouds:' >$FILE
echo '  test_cloud:' >>$FILE
env | grep OS_ | tr '=' ' ' | while read k v
do
    k=$(echo $k | sed -e 's/OS_//')
    k=$(echo $k | tr '[A-Z]' '[a-z]')
    case "$k" in
    region_name|*_api_version)
        echo "    $k: $v" >>$FILE
    esac
done
echo "    auth:" >>$FILE
env | grep OS_ | tr '=' ' ' | while read k v
do
    k=$(echo $k | sed -e 's/OS_//')
    k=$(echo $k | tr '[A-Z]' '[a-z]')
    case "$k" in
    region_name|*_api_version)
        ;;
    *)
        echo "      $k: $v" >>$FILE
    esac
done
openstacksdk-0.11.3/.coveragerc0000666000175100017510000000013713236151340016433 0ustar zuulzuul00000000000000
[run]
branch = True
source = openstack
omit = openstack/tests/*

[report]
ignore_errors = True
openstacksdk-0.11.3/LICENSE0000666000175100017510000002363613236151340015320 0ustar zuulzuul00000000000000
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
openstacksdk-0.11.3/requirements.txt0000666000175100017510000000133113236151340017573 0ustar zuulzuul00000000000000
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
PyYAML>=3.10 # MIT
appdirs>=1.3.0 # MIT License
requestsexceptions>=1.2.0 # Apache-2.0
jsonpatch!=1.20,>=1.16 # BSD
six>=1.10.0 # MIT
os-service-types>=1.1.0 # Apache-2.0
keystoneauth1>=3.3.0 # Apache-2.0
deprecation>=1.0 # Apache-2.0
munch>=2.1.0 # MIT
decorator>=3.4.0 # BSD
jmespath>=0.9.0 # MIT
ipaddress>=1.0.16;python_version<'3.3' # PSF
futures>=3.0.0;python_version=='2.7' or python_version=='2.6' # BSD
iso8601>=0.1.11 # MIT
netifaces>=0.10.4 # MIT
dogpile.cache>=0.6.2 # BSD
openstacksdk-0.11.3/docs-requirements.txt0000666000175100017510000000005513236151340020523 0ustar zuulzuul00000000000000
-r requirements.txt
-r test-requirements.txt
openstacksdk-0.11.3/AUTHORS0000664000175100017510000002163013236151501015360 0ustar zuulzuul00000000000000
Aaron-DH Abhijeet Kasurde Adam Gandelman Adam Sheldon Adrian Turjak Akihiro Motoki Alberto Gireud Alex Gaynor Alon Bar Tzlil Alvaro Aleman Alvaro Lopez Garcia Andreas Jaeger Andrey Shestakov Andy Botting Anindita Das Anita Kuno Ankit Agrawal Ankur Gupta Anne Gentle Antoni Segura Puimedon Apoorv Agrawal Arie Arie Bregman Atsushi SAKAI Bence Romsics Bob Ball Brian Curtin Brian Curtin Britt Houser Béla Vancsics Caleb Boylan Cao Xuan Hoang Carlos Goncalves Cedric Brandily ChangBo Guo(gcb) Choe, Cheng-Dae Chris Church Christian Berendt Christian Berendt Christian Zunker Cindia-blue Clark Boylan Clayton O'Neill Clint Byrum Colleen Murphy Daniel Mellado Daniel Speichert Daniel Wallace David Shrewsbury Davide Guerri Dean Troyer Dean Troyer Devananda van der Veen Dinesh Bhor Dirk Mueller Dolph Mathews Dongcan Ye Donovan Jones Doug Hellmann Doug Hellmann Doug Wiegley Doug Wiegley Douglas Mendizábal Duan Jiong EdLeafe Eric Harney Eric Lafontaine Ethan Lynn Ethan Lynn Ethan Lynn Lin Everett Toews Flavio Percoco Ghe Rivero Gregory Haynes Haikel Guemar Haiwei Xu Hangdong Zhang Hardik Italia Hideki Saito Hoolio Wobbits Hunt Xu Ian Cordasco Ian Wienand Ilya Shakhat Iswarya_Vakati JP Sullivan Jacky Hu Jakub
Jursa James E. Blair Jamie Lennox Jamie Lennox Javier Pena Jens Harbott Jens Rosenboom Jeremy Stanley Jesse Noller Jesse Proudman Jian Zhao Jim Rollenhagen John Dennis Jon Schlueter Jordan Pittier Jose Delgado Joshua Harlow Joshua Harlow Joshua Hesketh Joshua Phillips Julia Kreger Julien Danjou Kiran_totad Kyle Mestery LIU Yulong Lars Kellogg-Stedman Lee Yarwood LiuNanke Manjeet Singh Bhatia Mark Goddard Markus Zoeller Martin Millnert Mathieu Bultel Mathieu Gagné Matt Fischer Matt Smith Matthew Booth Matthew Edmonds Matthew Treinish Matthew Wagoner Maxime Vidori Michael Gugino Michael Johnson Miguel Angel Ajo Mike Perez Mohammed Naser Mohit Malik Monty Taylor Morgan Fainberg Mário Santos Nakul Dahiwade OpenStack Release Bot Paul Belanger Paulo Matias Pip Oomen Qiming Teng Reedip Reedip Ricardo Carrillo Cruz Ricardo Carrillo Cruz Richard Theis Roberto Polli Rodolfo Alonso Hernandez Rosario Di Somma Rosario Di Somma Rui Chen Sam Yaple SamYaple Samuel de Medeiros Queiroz Sean Handley Sean M. 
Collins Shane Wang Shashank Kumar Shankar Shuquan Huang Simon Leinen Sindhu Devale Sorin Sbarnea Spencer Krum Stefan Andres Steve Baker Steve Heyman Steve Leon Steve Lewis Steve Martinelli Steve Martinelli Steven Relf Swapnil Kulkarni (coolsvap) Sylvain Baubeau Sławek Kapłoński Tang Chen Tang Chen Terry Howe TerryHowe Thanh Ha Thomas Bechtold Tim Burke Tim Laszlo Timothy Chavez TingtingYu Tony Breeds Tony Xu Tristan Cacqueray Trygve Vea Valery Tschopp Victor Silva Vu Cong Tuan Xav Paice Yaguang Tang Yan Xing'an Yi Zhao Yolanda Robla Yuanbin.Chen Yuriy Taraday Yuval Shalev ZhiQiang Fan Zhou Zhihong Zuul avnish bhagyashris brandonzhao chenpengzi <1523688226@qq.com> chohoor deepakmourya dineshbhor dommgifer elynn jolie jonnary lidong lifeless lingyongxu liuxiaoyang lixinhui liyi lvdongbing lvxianguo malei mariojmdavid matthew wagoner miaohb mountainwei purushothamgk rajat29 reedip ricolin tengqm tianmaofu ting wang wangqiangbj xhzhf xu-haiwei yan.haifeng yanyanhu zengjianfang zhang.lei zhangyangyang Édouard Thuleau openstacksdk-0.11.3/README.rst0000666000175100017510000001517513236151364016013 0ustar zuulzuul00000000000000
openstacksdk
============

openstacksdk is a client library for building applications to work with
OpenStack clouds. The project aims to provide a consistent and complete
set of interactions with OpenStack's many services, along with complete
documentation, examples, and tools.

It also contains an abstraction interface layer. Clouds can do many
things, but there are probably only about 10 of them that most people
care about with any regularity. If you want to do complicated things,
the per-service oriented portions of the SDK are for you. However, if
what you want is to be able to write an application that talks to clouds
no matter what crazy choices the deployer has made in an attempt to be
more hipster than their self-entitled narcissist peers, then the Cloud
Abstraction layer is for you.

A Brief History
---------------

..
   TODO(shade) This history section should move to the docs. We can put
   a link to the published URL here in the README, but it's too long.

openstacksdk started its life as three different libraries: shade,
os-client-config and python-openstacksdk.

``shade`` started its life as some code inside of OpenStack Infra's
`nodepool`_ project, and as some code inside of the `Ansible OpenStack
Modules`_. Ansible had a bunch of different OpenStack-related modules,
and there was a ton of duplicated code. Eventually, between refactoring
that duplication into an internal library, and adding the logic and
features that the OpenStack Infra team had developed to run client
applications at scale, it turned out that we'd written nine-tenths of
what we'd need to have a standalone library.

Because of its background from nodepool, shade contained abstractions to
work around deployment differences and is resource oriented rather than
service oriented. This allows a user to think about Security Groups
without having to know whether Security Groups are provided by Nova or
Neutron on a given cloud. On the other hand, as an interface that
provides an abstraction, it deviates from the published OpenStack REST
API and adds its own opinions, which may get in the way of more advanced
users with specific needs.

``os-client-config`` was a library for collecting client configuration
for using an OpenStack cloud in a consistent and comprehensive manner,
which introduced the ``clouds.yaml`` file for expressing named cloud
configurations.

``python-openstacksdk`` was a library that exposed the OpenStack APIs to
developers in a consistent and predictable manner.

After a while it became clear that there was value in both the
high-level layer that contains additional business logic and the
lower-level SDK that exposes services and their resources faithfully and
consistently as Python objects.
Even with both of those layers, it is still beneficial at times to be
able to make direct REST calls and to do so with the same properly
configured `Session`_ from `python-requests`_.

This led to the merge of the three projects. The original contents of
the shade library have been moved into ``openstack.cloud`` and
os-client-config has been moved into ``openstack.config``. Future
releases of shade will provide a thin compatibility layer that
subclasses the objects from ``openstack.cloud`` and provides different
argument defaults where needed for compatibility. Similarly, future
releases of os-client-config will provide a compatibility layer shim
around ``openstack.config``.

.. note::
   The ``openstack.cloud.OpenStackCloud`` object and the
   ``openstack.connection.Connection`` object are going to be merged.
   It is recommended to not write any new code which consumes objects
   from the ``openstack.cloud`` namespace until that merge is complete.

.. _nodepool: https://docs.openstack.org/infra/nodepool/
.. _Ansible OpenStack Modules: http://docs.ansible.com/ansible/latest/list_of_cloud_modules.html#openstack
.. _Session: http://docs.python-requests.org/en/master/user/advanced/#session-objects
.. _python-requests: http://docs.python-requests.org/en/master/

openstack
=========

List servers using objects configured with the ``clouds.yaml`` file:

.. code-block:: python

   import openstack

   # Initialize and turn on debug logging
   openstack.enable_logging(debug=True)

   # Initialize cloud
   conn = openstack.connect(cloud='mordred')

   for server in conn.compute.servers():
       print(server.to_dict())

openstack.config
================

``openstack.config`` will find cloud configuration for as few as one
cloud and as many as you want to put in a config file.
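As an aside, the search ``openstack.config`` performs for ``clouds.yaml`` can be approximated in a few lines of plain Python. This is only an illustrative sketch: the helper name is ours, not part of the SDK, and the directories mirror the lookup locations documented in this section.

```python
# Sketch of the clouds.yaml lookup order used by openstack.config.
# The helper name is illustrative; it is not part of the SDK API.
import os

SEARCH_DIRS = [
    '.',                                        # current directory
    os.path.expanduser('~/.config/openstack'),  # per-user config
    '/etc/openstack',                           # system-wide config
]

def find_clouds_yaml(dirs=SEARCH_DIRS):
    """Return the path of the first clouds.yaml found, or None."""
    for directory in dirs:
        candidate = os.path.join(directory, 'clouds.yaml')
        if os.path.isfile(candidate):
            return candidate
    return None
```

The real loader does considerably more than this, merging environment variables and vendor defaults as described in the rest of this section.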
It will read environment variables and config files, and it also
contains some vendor-specific default values so that you don't have to
know extra info to use OpenStack.

* If you have a config file, you will get the clouds listed in it
* If you have environment variables, you will get a cloud named `envvars`
* If you have neither, you will get a cloud named `defaults` with base
  defaults

Sometimes an example is nice. Create a ``clouds.yaml`` file:

.. code-block:: yaml

   clouds:
     mordred:
       region_name: Dallas
       auth:
         username: 'mordred'
         password: XXXXXXX
         project_name: 'shade'
         auth_url: 'https://identity.example.com'

Please note: ``openstack.config`` will look for a file called
``clouds.yaml`` in the following locations:

* Current Directory
* ``~/.config/openstack``
* ``/etc/openstack``

More information at
https://developer.openstack.org/sdks/python/openstacksdk/users/config

openstack.cloud
===============

Create a server using objects configured with the ``clouds.yaml`` file:

.. code-block:: python

   import openstack.cloud

   # Initialize and turn on debug logging
   openstack.enable_logging(debug=True)

   # Initialize cloud
   # Cloud configs are read with openstack.config
   cloud = openstack.cloud.openstack_cloud(cloud='mordred')

   # Upload an image to the cloud
   image = cloud.create_image(
       'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

   # Find a flavor with at least 512M of RAM
   flavor = cloud.get_flavor_by_ram(512)

   # Boot a server, wait for it to boot, and then do whatever is needed
   # to get a public ip for it.
   cloud.create_server(
       'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)

Links
=====

* `Issue Tracker `_
* `Code Review `_
* `Documentation `_
* `PyPI `_
* `Mailing list `_
* `Bugs `_
openstacksdk-0.11.3/releasenotes/0000775000175100017510000000000013236151501016777 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/releasenotes/source/0000775000175100017510000000000013236151501020277 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/releasenotes/source/index.rst0000666000175100017510000000023113236151364022145 0ustar zuulzuul00000000000000
============================
 openstacksdk Release Notes
============================

.. toctree::
   :maxdepth: 1

   unreleased
   pike
   ocata
openstacksdk-0.11.3/releasenotes/source/_static/0000775000175100017510000000000013236151501021725 5ustar zuulzuul00000000000000
openstacksdk-0.11.3/releasenotes/source/_static/.placeholder0000666000175100017510000000000013236151340024201 0ustar zuulzuul00000000000000
openstacksdk-0.11.3/releasenotes/source/unreleased.rst0000666000175100017510000000012513236151340023161 0ustar zuulzuul00000000000000
=====================
 Unreleased Versions
=====================

.. release-notes::
openstacksdk-0.11.3/releasenotes/source/conf.py0000666000175100017510000002163213236151340021605 0ustar zuulzuul00000000000000
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# oslo.config Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
# # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options repository_name = 'openstack/python-openstacksdk' bug_project = '760' bug_tag = '' html_last_updated_fmt = '%Y-%m-%d %H:%M' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'OpenStack SDK Release Notes' copyright = u'2017, Various members of the OpenStack Foundation' # Release notes are version independent. # The short X.Y version. version = '' # The full version, including alpha/beta/rc tags. release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. 
# today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. 
# html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. 
htmlhelp_basename = 'shadeReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'shadeReleaseNotes.tex', u'Shade Release Notes Documentation', u'Shade Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'shadereleasenotes', u'shade Release Notes Documentation', [u'shade Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. 
List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'shadeReleaseNotes', u'shade Release Notes Documentation', u'shade Developers', 'shadeReleaseNotes', u'A client library for interacting with OpenStack clouds', u'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] openstacksdk-0.11.3/releasenotes/source/pike.rst0000666000175100017510000000021713236151340021764 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike openstacksdk-0.11.3/releasenotes/source/_templates/0000775000175100017510000000000013236151501022434 5ustar zuulzuul00000000000000openstacksdk-0.11.3/releasenotes/source/_templates/.placeholder0000666000175100017510000000000013236151340024710 0ustar zuulzuul00000000000000openstacksdk-0.11.3/releasenotes/source/ocata.rst0000666000175100017510000000023013236151340022116 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata openstacksdk-0.11.3/releasenotes/notes/0000775000175100017510000000000013236151501020127 5ustar zuulzuul00000000000000openstacksdk-0.11.3/releasenotes/notes/add_update_service-28e590a7a7524053.yaml0000666000175100017510000000040613236151340026625 0ustar zuulzuul00000000000000--- features: - Add the ability to update a keystone service information. This feature is not available on keystone v2.0. 
The new function, update_service(), allows the user to update a service's description, name, type, and enabled status. openstacksdk-0.11.3/releasenotes/notes/nova-flavor-to-rest-0a5757e35714a690.yaml0000666000175100017510000000022313236151340026637 0ustar zuulzuul00000000000000--- upgrade: - Nova flavor operations are now handled via REST calls instead of via novaclient. There should be no noticeable difference.
openstacksdk-0.11.3/releasenotes/notes/endpoint-from-catalog-bad36cb0409a4e6a.yaml0000666000175100017510000000020613236151340027545 0ustar zuulzuul00000000000000--- features: - Add new method, 'endpoint_for' which will return the raw endpoint for a given service from the current catalog. openstacksdk-0.11.3/releasenotes/notes/add-service-0bcc16eb026eade3.yaml0000666000175100017510000000025113236151340025617 0ustar zuulzuul00000000000000--- features: - | Added a new method `openstack.connection.Connection.add_service` which allows the registration of Proxy/Resource classes defined externally. openstacksdk-0.11.3/releasenotes/notes/use-interface-ip-c5cb3e7c91150096.yaml0000666000175100017510000000125613236151340026315 0ustar zuulzuul00000000000000--- fixes: - shade now correctly does not try to attach a floating ip with auto_ip if the cloud has given a public IPv6 address and the calling context supports IPv6 routing. shade has always used this logic to determine the server 'interface_ip', but the auto floating ip was incorrectly only looking at the 'public_v4' value to determine whether the server needed additional networking. upgrade: - If your cloud presents a default split IPv4/IPv6 stack with a public v6 and a private v4 address and you have the expectation that auto_ip should procure a v4 floating ip, you need to set 'force_ipv4' to True in your clouds.yaml entry for the cloud. openstacksdk-0.11.3/releasenotes/notes/make-rest-client-version-discovery-84125700f159491a.yaml0000666000175100017510000000032013236151340031561 0ustar zuulzuul00000000000000--- features: - Add version argument to make_rest_client and plumb version discovery through get_session_client so that versioned endpoints are properly found if unversioned are in the catalog. 
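The address selection described in the use-interface-ip note above can be sketched as follows. This is an illustration of the documented behavior only, not shade's actual implementation; the function name is ours, and ``force_ipv4`` mirrors the clouds.yaml flag the note mentions.

```python
# Illustrative sketch (not shade's code) of the 'interface_ip' choice:
# prefer the public v6 address when the calling context can route IPv6,
# otherwise fall back to the v4 address or floating ip.
def pick_interface_ip(public_v4, public_v6, force_ipv4=False):
    """Return the address an application should use to reach the server."""
    if public_v6 and not force_ipv4:
        return public_v6
    return public_v4
```

With ``force_ipv4=True`` a dual-stack server resolves to its v4 address, which is the case the upgrade note asks deployers of split v4/v6 clouds to opt into.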
openstacksdk-0.11.3/releasenotes/notes/strict-mode-d493abc0c3e87945.yaml0000666000175100017510000000035413236151340025475 0ustar zuulzuul00000000000000--- features: - Added 'strict' mode, which is set by passing strict=True to the OpenStackCloud constructor. strict mode tells shade to only return values in resources that are part of shade's declared data model contract. openstacksdk-0.11.3/releasenotes/notes/multiple-updates-b48cc2f6db2e526d.yaml0000666000175100017510000000101313236151340026664 0ustar zuulzuul00000000000000--- features: - Removed unneeded calls that were made when deleting servers with floating ips. - Added pagination support for volume listing. upgrade: - Removed designateclient as a dependency. All designate operations are now performed with direct REST calls using keystoneauth Adapter. - Server creation calls are now done with direct REST calls. fixes: - Fixed a bug related to neutron endpoints that did not have trailing slashes. - Fixed issue with ports not having a created_at attribute. openstacksdk-0.11.3/releasenotes/notes/alternate-auth-context-3939f1492a0e1355.yaml0000666000175100017510000000026713236151340027422 0ustar zuulzuul00000000000000--- features: - Added methods for making new cloud connections based on the current OpenStackCloud. This should enable working more easily across projects or user accounts. openstacksdk-0.11.3/releasenotes/notes/removed-swiftclient-aff22bfaeee5f59f.yaml0000666000175100017510000000023013236151340027610 0ustar zuulzuul00000000000000--- upgrade: - Removed swiftclient as a dependency. All swift operations are now performed with direct REST calls using keystoneauth Adapter. openstacksdk-0.11.3/releasenotes/notes/ironic-microversion-ba5b0f36f11196a6.yaml0000666000175100017510000000017013236151340027221 0ustar zuulzuul00000000000000--- features: - Add support for passing Ironic microversion to the ironicclient constructor in get_legacy_client. 
openstacksdk-0.11.3/releasenotes/notes/fnmatch-name-or-id-f658fe26f84086c8.yaml0000666000175100017510000000030213236151340026543 0ustar zuulzuul00000000000000--- features: - name_or_id parameters to search/get methods now support filename-like globbing. This means search_servers('nb0*') will return all servers whose names start with 'nb0'. openstacksdk-0.11.3/releasenotes/notes/stack-update-5886e91fd6e423bf.yaml0000666000175100017510000000015413236151340025640 0ustar zuulzuul00000000000000--- features: - Implement update_stack to perform the update action on existing orchestration stacks. openstacksdk-0.11.3/releasenotes/notes/domain_operations_name_or_id-baba4cac5b67234d.yaml0000666000175100017510000000017613236151340031327 0ustar zuulzuul00000000000000--- features: - Added name_or_id parameter to domain operations, allowing an admin to update/delete/get by domain name. openstacksdk-0.11.3/releasenotes/notes/workaround-transitive-deps-1e7a214f3256b77e.yaml0000666000175100017510000000100713236151340030463 0ustar zuulzuul00000000000000--- fixes: - Added requests and Babel to the direct dependencies list to work around issues with pip installation, entrypoints and transitive dependencies with conflicting exclusion ranges. Packagers of shade do not need to add these two new requirements to shade's dependency list - they are transitive depends and should be satisfied by the other things in the requirements list. Both will be removed from the list again once the python client libraries that pull them in have been removed. openstacksdk-0.11.3/releasenotes/notes/add_update_server-8761059d6de7e68b.yaml0000666000175100017510000000012613236151340026655 0ustar zuulzuul00000000000000--- features: - Add update_server method to update name or description of a server. 
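The filename-like globbing described in the fnmatch-name-or-id note above can be reproduced with the stdlib ``fnmatch`` module. This sketch mimics the documented behavior; it is not shade's implementation, and the helper name is ours.

```python
# Shell-style glob matching over resource names, as in
# search_servers('nb0*') returning all servers named 'nb0...'.
import fnmatch

def search_by_name(names, pattern):
    """Return the names matching a shell-style glob such as 'nb0*'."""
    return [name for name in names if fnmatch.fnmatch(name, pattern)]

print(search_by_name(['nb01', 'nb02', 'web01'], 'nb0*'))  # ['nb01', 'nb02']
```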
openstacksdk-0.11.3/releasenotes/notes/shade-helper-568f8cb372eef6d9.yaml0000666000175100017510000000013113236151340025674 0ustar zuulzuul00000000000000--- features: - Added helper method for constructing shade OpenStackCloud objects. openstacksdk-0.11.3/releasenotes/notes/log-request-ids-37507cb6eed9a7da.yaml0000666000175100017510000000026013236151340026422 0ustar zuulzuul00000000000000--- other: - The contents of x-openstack-request-id are no longer added to object returned. Instead, they are logged to a logger named 'openstack.cloud.request_ids'. openstacksdk-0.11.3/releasenotes/notes/get-usage-72d249ff790d1b8f.yaml0000666000175100017510000000010413236151340025124 0ustar zuulzuul00000000000000--- features: - Allow to retrieve the usage of a specific project openstacksdk-0.11.3/releasenotes/notes/wait-on-image-snapshot-27cd2eacab2fabd8.yaml0000666000175100017510000000044513236151340030076 0ustar zuulzuul00000000000000--- features: - Adds a new pair of options to create_image_snapshot(), wait and timeout, to have the function wait until the image snapshot being created goes into an active state. - Adds a new function wait_for_image() which will wait for an image to go into an active state. openstacksdk-0.11.3/releasenotes/notes/net_provider-dd64b697476b7094.yaml0000666000175100017510000000012113236151340025606 0ustar zuulzuul00000000000000--- features: - Network provider options are now accepted in create_network(). openstacksdk-0.11.3/releasenotes/notes/create_service_norm-319a97433d68fa6a.yaml0000666000175100017510000000013013236151340027173 0ustar zuulzuul00000000000000--- fixes: - The returned data from a create_service() call was not being normalized. openstacksdk-0.11.3/releasenotes/notes/add_magnum_services_support-3d95f9dcc60b5573.yaml0000666000175100017510000000007313236151340031040 0ustar zuulzuul00000000000000--- features: - Add support for listing Magnum services. 
openstacksdk-0.11.3/releasenotes/notes/list-role-assignments-keystone-v2-b127b12b4860f50c.yaml0000666000175100017510000000013113236151340031560 0ustar zuulzuul00000000000000--- features: - Implement list_role_assignments for keystone v2, using roles_for_user. openstacksdk-0.11.3/releasenotes/notes/fix-delete-ips-1d4eebf7bc4d4733.yaml0000666000175100017510000000033613236151340026214 0ustar zuulzuul00000000000000--- issues: - Fixed the logic in delete_ips and added regression tests to cover it. The old logic was incorrectly looking for floating ips using port syntax. It was also not swallowing errors when it should. openstacksdk-0.11.3/releasenotes/notes/remove-magnumclient-875b3e513f98f57c.yaml0000666000175100017510000000016713236151340027160 0ustar zuulzuul00000000000000--- upgrade: - magnumclient is no longer a direct dependency as magnum API calls are now made directly via REST. openstacksdk-0.11.3/releasenotes/notes/fix-list-networks-a592725df64c306e.yaml0000666000175100017510000000007513236151340026572 0ustar zuulzuul00000000000000--- fixes: - Fix for list_networks() ignoring any filters. openstacksdk-0.11.3/releasenotes/notes/normalize-images-1331bea7bfffa36a.yaml0000666000175100017510000000041313236151340026702 0ustar zuulzuul00000000000000--- features: - Image dicts that are returned are now normalized across glance v1 and glance v2. Extra key/value properties are now both in the root dict and in a properties dict. Additionally, cloud and region have been added like they are for server. openstacksdk-0.11.3/releasenotes/notes/fix-update-domain-af47b066ac52eb7f.yaml0000666000175100017510000000010713236151340026703 0ustar zuulzuul00000000000000--- fixes: - Fix for update_domain() where 'name' was not updatable. openstacksdk-0.11.3/releasenotes/notes/add-show-all-images-flag-352748b6c3d99f3f.yaml0000666000175100017510000000063213236151340027622 0ustar zuulzuul00000000000000--- features: - Added flag "show_all" to list_images. 
The behavior of Glance v2 to only show shared images if they have been accepted by the user can be confusing, and the only way to change it is to use search_images(filters=dict(member_status='all')) which isn't terribly obvious. "show_all=True" will set that flag, as well as disabling the filtering of images in "deleted" state. openstacksdk-0.11.3/releasenotes/notes/add_magnum_baymodel_support-e35e5aab0b14ff75.yaml0000666000175100017510000000047113236151340031133 0ustar zuulzuul00000000000000--- features: - Add support for Magnum baymodels, with the usual methods (search/list/get/create/update/delete). Due to upcoming rename in Magnum from baymodel to cluster_template, the shade functionality uses the term cluster_template. However, baymodel aliases are provided for each api call. openstacksdk-0.11.3/releasenotes/notes/magic-fixes-dca4ae4dac2441a8.yaml0000666000175100017510000000033713236151340025634 0ustar zuulzuul00000000000000--- fixes: - Refactor ``OpenStackConfig._fix_backward_madness()`` into ``OpenStackConfig.magic_fixes()`` that allows subclasses to inject more fixup magic into the flow during ``get_one_cloud()`` processing. openstacksdk-0.11.3/releasenotes/notes/session-client-b581a6e5d18c8f04.yaml0000666000175100017510000000030613236151340026174 0ustar zuulzuul00000000000000--- features: - Added kwargs and argparse processing for session_client. deprecations: - Renamed simple_client to session_client. simple_client will remain as an alias for backwards compat. openstacksdk-0.11.3/releasenotes/notes/boot-on-server-group-a80e51850db24b3d.yaml0000666000175100017510000000017113236151340027233 0ustar zuulzuul00000000000000--- features: - Added ``group`` parameter to create_server to allow booting a server into a specific server group. 
openstacksdk-0.11.3/releasenotes/notes/sdk-helper-41f8d815cfbcfb00.yaml0000666000175100017510000000013513236151340025423 0ustar zuulzuul00000000000000--- features: - Added helper method for constructing OpenStack SDK Connection objects. openstacksdk-0.11.3/releasenotes/notes/version-discovery-a501c4e9e9869f77.yaml0000666000175100017510000000076613236151340026703 0ustar zuulzuul00000000000000--- features: - Version discovery is now done via the keystoneauth library. shade still has one behavioral difference from default keystoneauth behavior, which is that shade will use a version it understands if it can find one even if the user has requested a different version. This change opens the door for shade to start being able to consume API microversions as needed. upgrade: - keystoneauth version 3.2.0 or higher is required because of version discovery. openstacksdk-0.11.3/releasenotes/notes/norm_role_assignments-a13f41768e62d40c.yaml0000666000175100017510000000017213236151340027557 0ustar zuulzuul00000000000000--- fixes: - Role assignments were being returned as plain dicts instead of Munch objects. This has been corrected. openstacksdk-0.11.3/releasenotes/notes/merge-shade-os-client-config-29878734ad643e33.yaml0000666000175100017510000000014713236151340030351 0ustar zuulzuul00000000000000--- other: - The shade and os-client-config libraries have been merged into python-openstacksdk. openstacksdk-0.11.3/releasenotes/notes/network-list-e6e9dafdd8446263.yaml0000666000175100017510000000065013236151340025773 0ustar zuulzuul00000000000000--- features: - Support added for configuring metadata about networks for a cloud in a list of dicts, rather than in the external_network and internal_network entries. The dicts support a name, a routes_externally field, a nat_destination field and a default_interface field. deprecations: - external_network and internal_network are deprecated and should be replaced with the list of network dicts. 
openstacksdk-0.11.3/releasenotes/notes/add_description_create_user-0ddc9a0ef4da840d.yaml0000666000175100017510000000012513236151340031156 0ustar zuulzuul00000000000000--- features: - Add description parameter to create_user, available on Keystone v3 openstacksdk-0.11.3/releasenotes/notes/config-flavor-specs-ca712e17971482b6.yaml0000666000175100017510000000017013236151340026742 0ustar zuulzuul00000000000000--- features: - Adds ability to add a config setting to clouds.yaml to disable fetching extra_specs from flavors. openstacksdk-0.11.3/releasenotes/notes/meta-passthrough-d695bff4f9366b65.yaml0000666000175100017510000000041213236151340026550 0ustar zuulzuul00000000000000--- features: - Added a parameter to create_image 'meta' which allows for providing parameters to the API that will not have any type conversions performed. For the simple case, the existing kwargs approach to image metadata is still the best bet. openstacksdk-0.11.3/releasenotes/notes/renamed-block-store-bc5e0a7315bfeb67.yaml0000666000175100017510000000021013236151340027215 0ustar zuulzuul00000000000000--- upgrade: - The block_store service object has been renamed to block_storage to align the API with the official service types. openstacksdk-0.11.3/releasenotes/notes/fixed-magnum-type-7406f0a60525f858.yaml0000666000175100017510000000036513236151340026364 0ustar zuulzuul00000000000000--- fixes: - Fixed magnum service_type. shade was using it as 'container' but the correct type is 'container-infra'. It's possible that on old clouds with magnum shade may now do the wrong thing. If that occurs, please file a bug. openstacksdk-0.11.3/releasenotes/notes/nova-old-microversion-5e4b8e239ba44096.yaml0000666000175100017510000000030713236151340027414 0ustar zuulzuul00000000000000--- upgrade: - Nova microversion is being requested. Since shade is not yet actively microversion aware, but has been dealing with the 2.0 structures anyway, this should not affect anyone. 
openstacksdk-0.11.3/releasenotes/notes/cleanup-objects-f99aeecf22ac13dd.yaml0000666000175100017510000000035213236151340026611 0ustar zuulzuul00000000000000--- features: - If shade has to create objects in swift to upload an image, it will now delete those objects upon successful image creation as they are no longer needed. They will also be deleted on fatal import errors. openstacksdk-0.11.3/releasenotes/notes/vendor-add-betacloud-03872c3485104853.yaml0000666000175100017510000000006013236151340026632 0ustar zuulzuul00000000000000--- other: - Add betacloud region for Germany openstacksdk-0.11.3/releasenotes/notes/create-stack-fix-12dbb59a48ac7442.yaml0000666000175100017510000000021613236151340026357 0ustar zuulzuul00000000000000--- fixes: - The create_stack() call was fixed to call the correct iterator method and to return the updated stack object when waiting. openstacksdk-0.11.3/releasenotes/notes/delete-image-objects-9d4b4e0fff36a23f.yaml0000666000175100017510000000212513236151340027347 0ustar zuulzuul00000000000000--- fixes: - Delete swift objects uploaded in service of uploading images at the time that the corresponding image is deleted. On some clouds, image uploads are accomplished by uploading the image to swift and then running a task-import. As shade does this action on behalf of the user, it is not reasonable to assume that the user would then be aware of or manage the swift objects shade created, which led to an ongoing leak of swift objects. - Upload swift Large Objects as Static Large Objects by default. Shade automatically uploads objects as Large Objects when they are over a segment_size threshold. It had been doing this as Dynamic Large Objects, which sound great, but which have the downside of not deleting their sub-segments when the primary object is deleted. Since nothing in the shade interface exposes that the object was segmented, the user would not know they would also need to find and delete the segments. 
Instead, we now upload as Static Large Objects which behave as expected and delete segments when the object is deleted. openstacksdk-0.11.3/releasenotes/notes/add-current-user-id-49b6463e6bcc3b31.yaml0000666000175100017510000000021013236151340027000 0ustar zuulzuul00000000000000--- features: - Added a new property, 'current_user_id' which contains the id of the currently authenticated user from the token. openstacksdk-0.11.3/releasenotes/notes/remove-metric-fe5ddfd52b43c852.yaml0000666000175100017510000000021513236151340026153 0ustar zuulzuul00000000000000--- upgrade: - | Removed the metric service. It is not an OpenStack service and does not have an entry in service-types-authority. openstacksdk-0.11.3/releasenotes/notes/create_server_network_fix-c4a56b31d2850a4b.yaml0000666000175100017510000000040413236151340030464 0ustar zuulzuul00000000000000--- fixes: - The create_server() API call would not use the supplied 'network' parameter if the 'nics' parameter was also supplied, even though it would be an empty list. It now uses 'network' if 'nics' is not supplied or if it is an empty list. openstacksdk-0.11.3/releasenotes/notes/wait_for_server-8dc8446b7c673d36.yaml0000666000175100017510000000013613236151340026371 0ustar zuulzuul00000000000000--- features: - New wait_for_server() API call to wait for a server to reach ACTIVE status. openstacksdk-0.11.3/releasenotes/notes/network-quotas-b98cce9ffeffdbf4.yaml0000666000175100017510000000030013236151340026713 0ustar zuulzuul00000000000000--- features: - Add new APIs, OperatorCloud.get_network_quotas(), OperatorCloud.set_network_quotas() and OperatorCloud.delete_network_quotas() to manage neutron quotas for projects and usersopenstacksdk-0.11.3/releasenotes/notes/less-file-hashing-d2497337da5acbef.yaml0000666000175100017510000000026213236151340026677 0ustar zuulzuul00000000000000--- upgrade: - shade will now only generate file hashes for glance images if both hashes are empty. 
If only one is given, the other will be treated as an empty string. openstacksdk-0.11.3/releasenotes/notes/change-attach-vol-return-value-4834a1f78392abb1.yaml0000666000175100017510000000040513236151340031064 0ustar zuulzuul00000000000000--- upgrade: - | The ``attach_volume`` method now always returns a ``volume_attachment`` object. Previously, ``attach_volume`` would return a ``volume`` object if it was called with ``wait=True`` and a ``volume_attachment`` object otherwise. openstacksdk-0.11.3/releasenotes/notes/fix-supplemental-fips-c9cd58aac12eb30e.yaml0000666000175100017510000000052713236151340027702 0ustar zuulzuul00000000000000--- fixes: - Fixed an issue where shade could report a floating IP being attached to a server erroneously due to only matching on fixed ip. Changed the lookup to match on port ids. This adds an API call in the case where the workaround is needed because of a bug in the cloud, but in most cases it should have no difference. openstacksdk-0.11.3/releasenotes/notes/catch-up-release-notes-e385fad34e9f3d6e.yaml0000666000175100017510000000102513236151340027656 0ustar zuulzuul00000000000000--- features: - Swiftclient instantiation now provides authentication information so that long lived swiftclient objects can reauthenticate if necessary. - Add support for explicit v2password auth type. - Add SSL support to VEXXHOST vendor profile. - Add zetta.io cloud vendor profile. fixes: - Fix bug where project_domain_{name,id} was set even if project_{name,id} was not set. other: - HPCloud vendor profile removed due to cloud shutdown. - RunAbove vendor profile removed due to migration to OVH. 
openstacksdk-0.11.3/releasenotes/notes/compute-quotas-b07a0f24dfac8444.yaml0000666000175100017510000000027513236151340026275 0ustar zuulzuul00000000000000--- features: - Add new APIs, OperatorCloud.get_compute_quotas(), OperatorCloud.set_compute_quotas() and OperatorCloud.delete_compute_quotas() to manage nova quotas for projects and usersopenstacksdk-0.11.3/releasenotes/notes/data-model-cf50d86982646370.yaml0000666000175100017510000000053213236151340025037 0ustar zuulzuul00000000000000--- features: - Explicit data model contracts are now defined for Flavors, Images, Security Groups, Security Group Rules, and Servers. - Resources with data model contracts are now being returned with 'location' attribute. The location carries cloud name, region name and information about the project that owns the resource. openstacksdk-0.11.3/releasenotes/notes/add_host_aggregate_support-471623faf45ec3c3.yaml0000666000175100017510000000012513236151340030616 0ustar zuulzuul00000000000000--- features: - Add support for host aggregates and host aggregate membership. openstacksdk-0.11.3/releasenotes/notes/add-jmespath-support-f47b7a503dbbfda1.yaml0000666000175100017510000000016213236151340027514 0ustar zuulzuul00000000000000--- features: - All get and search functions can now take a jmespath expression in their filters parameter. openstacksdk-0.11.3/releasenotes/notes/deprecated-profile-762afdef0e8fc9e8.yaml0000666000175100017510000000030213236151340027225 0ustar zuulzuul00000000000000--- deprecations: - | ``openstack.profile.Profile`` has been deprecated and will be removed in the ``1.0`` release. Users should use the functions in ``openstack.config`` instead. openstacksdk-0.11.3/releasenotes/notes/fip_timeout-035c4bb3ff92fa1f.yaml0000666000175100017510000000021313236151340025706 0ustar zuulzuul00000000000000--- fixes: - When creating a new server, the timeout was not being passed through to floating IP creation, which could also timeout. 
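The jmespath support noted above means the filters parameter can be a string expression rather than a dict. A sketch, assuming a cloud object with a search_servers(filters=...) method; the expression and helper name are only illustrative:

```python
# Sketch of passing a jmespath expression as the filters parameter, per
# the note above stating all get and search functions accept one. The
# expression and helper name are illustrative.

def names_of_active_servers(cloud):
    # select the name of every record whose status is ACTIVE
    return cloud.search_servers(filters="[?status=='ACTIVE'].name")
```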
openstacksdk-0.11.3/releasenotes/notes/update_endpoint-f87c1f42d0c0d1ef.yaml0000666000175100017510000000046613236151340026557 0ustar zuulzuul00000000000000--- features: - Added update_endpoint as a new function that allows the user to update a created endpoint with new values rather than deleting and recreating that endpoint. This feature only works with keystone v3, with v2 it will raise an exception stating the feature is not available. openstacksdk-0.11.3/releasenotes/notes/add-server-console-078ed2696e5b04d9.yaml0000666000175100017510000000032313236151340026661 0ustar zuulzuul00000000000000--- features: - Added get_server_console method to fetch the console log from a Server. On clouds that do not expose this feature, a debug line will be logged and an empty string will be returned. openstacksdk-0.11.3/releasenotes/notes/new-floating-attributes-213cdf5681d337e1.yaml0000666000175100017510000000017513236151340027732 0ustar zuulzuul00000000000000--- features: - Added support for created_at, updated_at, description and revision_number attributes for floating ips. openstacksdk-0.11.3/releasenotes/notes/add_heat_tag_support-135aa43ba1dce3bb.yaml0000666000175100017510000000031213236151340027600 0ustar zuulzuul00000000000000--- features: - | Add tags support when creating a stack, as specified by the openstack orchestration api at [1] [1]https://developer.openstack.org/api-ref/orchestration/v1/#create-stack openstacksdk-0.11.3/releasenotes/notes/delete-obj-return-a3ecf0415b7a2989.yaml0000666000175100017510000000024213236151340026562 0ustar zuulzuul00000000000000--- fixes: - The delete_object() method was not returning True/False, similar to other delete methods. It is now consistent with the other delete APIs. openstacksdk-0.11.3/releasenotes/notes/add_server_group_support-dfa472e3dae7d34d.yaml0000666000175100017510000000010313236151340030573 0ustar zuulzuul00000000000000--- features: - Adds support to create and delete server groups. 
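Because get_server_console, noted above, returns an empty string on clouds that do not expose the feature, a tail helper can be written without special error handling. A sketch; console_tail is a hypothetical wrapper, not part of the SDK:

```python
# Sketch built on get_server_console() from the note above. On clouds
# without the feature the method returns '', so the helper degrades to
# an empty list rather than raising. console_tail is illustrative.

def console_tail(cloud, server, lines=10):
    """Return the last few lines of a server's console log."""
    log = cloud.get_server_console(server)
    return log.splitlines()[-lines:] if log else []
```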
openstacksdk-0.11.3/releasenotes/notes/infer-secgroup-source-58d840aaf1a1f485.yaml0000666000175100017510000000064713236151340027467 0ustar zuulzuul00000000000000--- features: - If a cloud does not have a neutron service, it is now assumed that Nova will be the source of security groups. To handle clouds that have nova-network and do not have the security group extension, setting secgroup_source to None will prevent attempting to use them at all. If the cloud has neutron but it is not a functional source of security groups, set secgroup_source to nova. openstacksdk-0.11.3/releasenotes/notes/stream-to-file-91f48d6dcea399c6.yaml0000666000175100017510000000011713236151340026163 0ustar zuulzuul00000000000000--- features: - get_object now supports streaming output directly to a file. openstacksdk-0.11.3/releasenotes/notes/vendor-updates-f11184ba56bb27cf.yaml0000666000175100017510000000022413236151340026242 0ustar zuulzuul00000000000000--- other: - Add citycloud regions for Buffalo, Frankfurt, Karlskrona and Los Angeles - Add new DreamCompute cloud and deprecate DreamHost cloud openstacksdk-0.11.3/releasenotes/notes/swift-upload-lock-d18f3d42b3a0719a.yaml0000666000175100017510000000035013236151340026565 0ustar zuulzuul00000000000000--- fixes: - Fixed an issue where a section of code that was supposed to be resetting the SwiftService object was instead resetting the protective mutex around the SwiftService object, leading to an exception referencing "__exit__". openstacksdk-0.11.3/releasenotes/notes/nat-source-field-7c7db2a724616d59.yaml0000666000175100017510000000046513236151340026327 0ustar zuulzuul00000000000000--- features: - Added nat_source flag for networks. In some more complex clouds, not only can there be more than one valid network on a server for NAT to attach to, there can also be more than one valid network from which to get a NAT address. Allow flagging a network so that it can be found. 
openstacksdk-0.11.3/releasenotes/notes/fix-missing-futures-a0617a1c1ce6e659.yaml0000666000175100017510000000025513236151340027161 0ustar zuulzuul00000000000000--- fixes: - Added missing dependency on futures library for Python 2. The dependency was missed in testing due to it having been listed in test-requirements already. openstacksdk-0.11.3/releasenotes/notes/started-using-reno-242e2b0cd27f9480.yaml0000666000175100017510000000006313236151340026700 0ustar zuulzuul00000000000000--- other: - Started using reno for release notes. openstacksdk-0.11.3/releasenotes/notes/neutron_availability_zone_extension-675c2460ebb50a09.yaml0000666000175100017510000000047613236151340032526 0ustar zuulzuul00000000000000--- features: - | ``availability_zone_hints`` now accepted for ``create_network()`` when ``network_availability_zone`` extension is enabled on target cloud. - | ``availability_zone_hints`` now accepted for ``create_router()`` when ``router_availability_zone`` extension is enabled on target cloud. openstacksdk-0.11.3/releasenotes/notes/cinder_volume_backups_support-6f7ceab440853833.yaml0000666000175100017510000000017613236151340031322 0ustar zuulzuul00000000000000--- features: - Add support for Cinder volume backup resources, with the usual methods (search/list/get/create/delete). openstacksdk-0.11.3/releasenotes/notes/load-yaml-3177efca78e5c67a.yaml0000666000175100017510000000044013236151340025203 0ustar zuulzuul00000000000000--- features: - Added a flag, 'load_yaml_config', that defaults to True. If set to false, no clouds.yaml files will be loaded. This is beneficial if os-client-config wants to be used inside of a service where end-user clouds.yaml files would make things more confusing. openstacksdk-0.11.3/releasenotes/notes/get_object_api-968483adb016bce1.yaml0000666000175100017510000000014513236151340026170 0ustar zuulzuul00000000000000--- features: - Added a new API call, OpenStackCloud.get_object(), to download objects from swift. 
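The stream-to-file note above can be sketched as a small download helper. The 'outfile' keyword is an assumption about get_object()'s signature and should be checked against the installed release; the helper itself is illustrative:

```python
# Sketch of streaming a swift object to disk per the stream-to-file
# note above. The 'outfile' keyword is an assumption about the
# get_object() signature; verify against the installed SDK release.

def download_object(cloud, container, name, path):
    cloud.get_object(container, name, outfile=path)
    return path
```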
openstacksdk-0.11.3/releasenotes/notes/default-cloud-7ee0bcb9e5dd24b9.yaml0000666000175100017510000000042313236151340026206 0ustar zuulzuul00000000000000--- issues: - If there was only one cloud defined in clouds.yaml os-client-config was requiring the cloud parameter be passed. This is inconsistent with how the envvars cloud works which WILL work without setting the cloud parameter if it's the only cloud. openstacksdk-0.11.3/releasenotes/notes/cinderv2-norm-fix-037189c60b43089f.yaml0000666000175100017510000000012113236151340026263 0ustar zuulzuul00000000000000--- fixes: - Fixed the volume normalization function when used with cinder v2. openstacksdk-0.11.3/releasenotes/notes/cloud-profile-status-e0d29b5e2f10e95c.yaml0000666000175100017510000000034313236151340027377 0ustar zuulzuul00000000000000--- features: - Add a field to vendor cloud profiles to indicate active, deprecated and shutdown status. A message to the user is triggered when attempting to use cloud with either deprecated or shutdown status. openstacksdk-0.11.3/releasenotes/notes/removed-meter-6f6651b6e452e000.yaml0000666000175100017510000000024313236151340025640 0ustar zuulzuul00000000000000--- upgrade: - | Meter and Alarm services have been removed. The Ceilometer REST API has been deprecated for quite some time and is no longer supported. openstacksdk-0.11.3/releasenotes/notes/volume-quotas-5b674ee8c1f71eb6.yaml0000666000175100017510000000027513236151340026151 0ustar zuulzuul00000000000000--- features: - Add new APIs, OperatorCloud.get_volume_quotas(), OperatorCloud.set_volume_quotas() and OperatorCloud.delete_volume_quotas() to manage cinder quotas for projects and usersopenstacksdk-0.11.3/releasenotes/notes/glance-image-pagination-0b4dfef22b25852b.yaml0000666000175100017510000000017313236151340027745 0ustar zuulzuul00000000000000--- issues: - Fixed an issue where glance image list pagination was being ignored, leading to truncated image lists. 
openstacksdk-0.11.3/releasenotes/notes/false-not-attribute-error-49484d0fdc61f75d.yaml0000666000175100017510000000040513236151340030265 0ustar zuulzuul00000000000000--- fixes: - delete_image used to fail with an AttributeError if an invalid image name or id was passed, rather than returning False which was the intent. This is worthy of note because it's a behavior change, but the previous behavior was a bug. openstacksdk-0.11.3/releasenotes/notes/removed-profile-437f3038025b0fb3.yaml0000666000175100017510000000046513236151340026165 0ustar zuulzuul00000000000000--- upgrade: - The Profile object has been replaced with the use of CloudRegion objects from openstack.config. - The openstacksdk specific Session object has been removed. - Proxy objects are now subclasses of keystoneauth1.adapter.Adapter. - REST interactions all go through TaskManager now. openstacksdk-0.11.3/releasenotes/notes/make-rest-client-dd3d365632a26fa0.yaml0000666000175100017510000000022013236151340026361 0ustar zuulzuul00000000000000--- deprecations: - Renamed session_client to make_rest_client. session_client will continue to be supported for backwards compatibility. openstacksdk-0.11.3/releasenotes/notes/fixed-url-parameters-89c57c3dd64f1573.yaml0000666000175100017510000000047713236151364027247 0ustar zuulzuul00000000000000--- fixes: - | Fixed an issue where some valid query parameters were not listed, causing errors due to the new behavior of throwing errors when an invalid filter condition is specified. Specifically, ``tenant_id`` as a filter for Neutron networks and ``ip_version`` for Neutron network IP availability. openstacksdk-0.11.3/releasenotes/notes/flavor_fix-a53c6b326dc34a2c.yaml0000666000175100017510000000040113236151340025430 0ustar zuulzuul00000000000000--- features: - Flavors will always contain an 'extra_specs' attribute. Client cruft, such as 'links', 'HUMAN_ID', etc., has been removed. fixes: - Setting and unsetting flavor extra specs now works. 
This had been broken since the 1.2.0 release. openstacksdk-0.11.3/releasenotes/notes/image-flavor-by-name-54865b00ebbf1004.yaml0000666000175100017510000000061313236151340027034 0ustar zuulzuul00000000000000--- features: - The image and flavor parameters for create_server now accept name in addition to id and dict. If given as a name or id, shade will do a get_image or a get_flavor to find the matching image or flavor. If you have an id already and are not using any caching and the extra lookup is annoying, passing the id in as "dict(id='my-id')" will avoid the lookup. openstacksdk-0.11.3/releasenotes/notes/grant-revoke-assignments-231d3f9596a1ae75.yaml0000666000175100017510000000011313236151340030107 0ustar zuulzuul00000000000000--- features: - Add granting and revoking of roles from groups and users openstacksdk-0.11.3/releasenotes/notes/volume-types-a07a14ae668e7dd2.yaml0000666000175100017510000000015113236151340025757 0ustar zuulzuul00000000000000--- features: - Add support for listing volume types. - Add support for managing volume type access. openstacksdk-0.11.3/releasenotes/notes/fix-properties-key-conflict-2161ca1faaad6731.yaml0000666000175100017510000000016613236151340030640 0ustar zuulzuul00000000000000--- issues: - Images in the cloud with a string property named "properties" caused image normalization to fail. openstacksdk-0.11.3/releasenotes/notes/option-precedence-1fecab21fdfb2c33.yaml0000666000175100017510000000040413236151340027116 0ustar zuulzuul00000000000000--- fixes: - Reverse the order of option selection in ``OpenStackConfig._validate_auth()`` to prefer auth options passed in (from argparse) over those found in clouds.yaml. This allows the application to override config profile auth settings. openstacksdk-0.11.3/releasenotes/notes/image-from-volume-9acf7379f5995b5b.yaml0000666000175100017510000000010213236151340026604 0ustar zuulzuul00000000000000--- features: - Added ability to create an image from a volume. 
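The image/flavor-by-name behavior described above also suggests how to skip the extra lookups when ids are already cached. A sketch; boot_server and have_ids are hypothetical names, not SDK API:

```python
# Sketch of the create_server image/flavor handling described above.
# Passing dict(id=...) avoids the get_image/get_flavor lookup; the
# wrapper name and have_ids flag are illustrative.

def boot_server(cloud, name, image, flavor, have_ids=False):
    if have_ids:
        # already have ids: wrap them so no lookup is performed
        image, flavor = dict(id=image), dict(id=flavor)
    return cloud.create_server(name, image=image, flavor=flavor)
```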
openstacksdk-0.11.3/releasenotes/notes/resource2-migration-835590b300bef621.yaml0000666000175100017510000000066113236151340026767 0ustar zuulzuul00000000000000--- upgrade: - | The ``Resource2`` and ``Proxy2`` migration has been completed. The original ``Resource`` and ``Proxy`` clases have been removed and replaced with ``Resource2`` and ``Proxy2``. deprecations: - | The ``shade`` functionality that has been merged in to openstacksdk is found in ``openstack.cloud`` currently. None of these interfaces should be relied upon as the merge has not yet completed. openstacksdk-0.11.3/releasenotes/notes/renamed-telemetry-c08ae3e72afca24f.yaml0000666000175100017510000000013113236151340027062 0ustar zuulzuul00000000000000--- upgrade: - Renamed telemetry to meter to align with the official service type. openstacksdk-0.11.3/releasenotes/notes/add_designate_recordsets_support-69af0a6b317073e7.yaml0000666000175100017510000000020513236151340031754 0ustar zuulzuul00000000000000--- features: - Add support for Designate recordsets resources, with the usual methods (search/list/get/create/update/delete). openstacksdk-0.11.3/releasenotes/notes/add_designate_zones_support-35fa9b8b09995b43.yaml0000666000175100017510000000020013236151340030745 0ustar zuulzuul00000000000000--- features: - Add support for Designate zones resources, with the usual methods (search/list/get/create/update/delete). openstacksdk-0.11.3/releasenotes/notes/always-detail-cluster-templates-3eb4b5744ba327ac.yaml0000666000175100017510000000025513236151340031517 0ustar zuulzuul00000000000000--- upgrade: - Cluster Templates have data model and normalization now. As a result, the detail parameter is now ignored and detailed records are always returned. openstacksdk-0.11.3/releasenotes/notes/router_ext_gw-b86582317bca8b39.yaml0000666000175100017510000000016713236151340026062 0ustar zuulzuul00000000000000--- fixes: - No longer fail in list_router_interfaces() if a router does not have the external_gateway_info key. 
openstacksdk-0.11.3/releasenotes/notes/get-limits-c383c512f8e01873.yaml0000666000175100017510000000010513236151340025147 0ustar zuulzuul00000000000000--- features: - Allow retrieving the limits of a specific project openstacksdk-0.11.3/releasenotes/notes/bug-2001080-de52ead3c5466792.yaml0000666000175100017510000000044013236151340024625 0ustar zuulzuul00000000000000--- fixes: - | [`bug 2001080 `_] Project update will only update the enabled field of projects when ``enabled=True`` or ``enabled=False`` is passed explicitly. The previous behavior had ``enabled=True`` as the default. openstacksdk-0.11.3/releasenotes/notes/min-max-legacy-version-301242466ddefa93.yaml0000666000175100017510000000147313236151340027447 0ustar zuulzuul00000000000000--- features: - Add min_version and max_version to get_legacy_client and to get_session_endpoint. At the moment this is only really fully plumbed through for cinder, which has extra special fun around volume, volumev2 and volumev3. Min and max versions to both methods will look through the options available in the service catalog and try to return the latest one available from the span of requested versions. This means a user can say volume_api_version=None, min_version=2, max_version=3 and will get an endpoint from get_session_endpoint or a Client from cinderclient that will be either v2 or v3 but not v1. In the future, min and max version for get_session_endpoint should be able to sort out appropriate endpoints via version discovery, but that does not currently exist. openstacksdk-0.11.3/releasenotes/notes/compute-usage-defaults-5f5b2936f17ff400.yaml0000666000175100017510000000066113236151340027545 0ustar zuulzuul00000000000000--- features: - get_compute_usage now has a default value for the start parameter of 2010-07-06. That was the date the OpenStack project started. It's completely impossible for someone to have Nova usage data that goes back further in time. 
Also, both the start and end date parameters now also accept strings, which will be parsed, and timezones will be properly converted to UTC, which is what Nova expects. openstacksdk-0.11.3/releasenotes/notes/fix-compat-with-old-keystoneauth-66e11ee9d008b962.yaml0000666000175100017510000000044513236151340031500 0ustar zuulzuul00000000000000--- issues: - Fixed a regression when using latest os-client-config with the keystoneauth from stable/newton. Although this isn't a super common combination, the added feature that broke the interaction is really not worthy of the incompatibility, so a workaround was added. openstacksdk-0.11.3/releasenotes/notes/dual-stack-networks-8a81941c97d28deb.yaml0000666000175100017510000000063313236151340027153 0ustar zuulzuul00000000000000--- features: - Added support for dual stack networks where the IPv4 subnet and the IPv6 subnet have opposite public/private qualities. It is now possible to add configuration to clouds.yaml that will indicate that a network is public for v6 and private for v4, which is otherwise very difficult to correctly infer while setting server attributes like private_v4, public_v4 and public_v6. openstacksdk-0.11.3/releasenotes/notes/delete_project-399f9b3107014dde.yaml0000666000175100017510000000034713236151340026152 0ustar zuulzuul00000000000000--- fixes: - The delete_project() API now conforms to our standard of returning True when the delete succeeds, or False when the project was not found. It would previously raise an exception if the project was not found. 
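The delete_project() contract described above (True on success, False when not found, no exception) makes calling code straightforward. A sketch; remove_project is a hypothetical wrapper:

```python
# Sketch of the delete_project() return-value contract from the note
# above: True when the delete succeeds, False when the project was not
# found, instead of raising. remove_project is an illustrative name.

def remove_project(cloud, name_or_id):
    return "deleted" if cloud.delete_project(name_or_id) else "not found"
```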
openstacksdk-0.11.3/releasenotes/notes/make_object_metadata_easier.yaml-e9751723e002e06f.yaml0000666000175100017510000000036613236151340031473 0ustar zuulzuul00000000000000--- features: - create_object() now has a "metadata" parameter that can be used to create an object with each key and value pair in that dictionary set as metadata on the object - Add an update_object() function that updates the metadata of a swift object openstacksdk-0.11.3/releasenotes/notes/renamed-bare-metal-b1cdbc52af14e042.yaml0000666000175100017510000000013613236151340026772 0ustar zuulzuul00000000000000--- upgrade: - Renamed bare-metal to baremetal to align with the official service type. openstacksdk-0.11.3/releasenotes/notes/fix-config-drive-a148b7589f7e1022.yaml0000666000175100017510000000037213236151340026241 0ustar zuulzuul00000000000000--- issues: - Fixed an issue where nodepool could cause config_drive to be passed explicitly as None, which was getting directly passed through to the JSON. Also fix the same logic for key_name and scheduler_hints while we're in there. openstacksdk-0.11.3/releasenotes/notes/delete-autocreated-1839187b0aa35022.yaml0000666000175100017510000000025113236151340026536 0ustar zuulzuul00000000000000--- features: - Added a new method, delete_autocreated_image_objects(), that can be used to delete any leaked objects shade may have created on behalf of the user. openstacksdk-0.11.3/releasenotes/notes/list-az-names-a38c277d1192471b.yaml0000666000175100017510000000007713236151340025561 0ustar zuulzuul00000000000000--- features: - Added list_availability_zone_names API call. openstacksdk-0.11.3/releasenotes/notes/service_enabled_flag-c917b305d3f2e8fd.yaml0000666000175100017510000000031013236151340027414 0ustar zuulzuul00000000000000--- fixes: - Keystone service descriptions were missing an attribute describing whether or not the service was enabled. A new 'enabled' boolean attribute has been added to the service data. 
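The create_object() metadata parameter and update_object() from the note above pair naturally: upload once with metadata, then refresh the metadata without re-uploading the body. A sketch; the wrapper names are illustrative:

```python
# Sketch of the create_object()/update_object() metadata support from
# the note above. Each key/value pair in the dict becomes metadata on
# the swift object; upload_with_metadata and retag are illustrative.

def upload_with_metadata(cloud, container, name, filename, metadata):
    cloud.create_object(container, name, filename=filename,
                        metadata=metadata)

def retag(cloud, container, name, metadata):
    # update metadata in place, without re-uploading the object body
    cloud.update_object(container, name, metadata=metadata)
```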
openstacksdk-0.11.3/releasenotes/notes/renamed-cluster-743da6d321fffcba.yaml0000666000175100017510000000013413236151340026536 0ustar zuulzuul00000000000000--- upgrade: - Renamed cluster to clustering to align with the official service type. openstacksdk-0.11.3/releasenotes/notes/no-more-troveclient-0a4739c21432ac63.yaml0000666000175100017510000000030513236151340026765 0ustar zuulzuul00000000000000--- upgrade: - troveclient is no longer a hard dependency. Users who were using shade to construct a troveclient Client object should use os_client_config.make_legacy_client instead. openstacksdk-0.11.3/releasenotes/notes/removed-glanceclient-105c7fba9481b9be.yaml0000666000175100017510000000224313236151340027404 0ustar zuulzuul00000000000000--- prelude: > The ``shade`` and ``os-client-config`` libraries have been merged into openstacksdk. As a result, their functionality is being integrated into the sdk functionality, and in some cases is replacing existing things. The ``openstack.profile.Profile`` and ``openstack.auth.base.BaseAuthPlugin`` classes are no more. Profile has been replaced by ``openstack.config.cloud_region.CloudRegion`` from `os-client-config `_ ``openstack.auth.base.BaseAuthPlugin`` has been replaced with the Auth plugins from keystoneauth. Service proxy names on the ``openstack.connection.Connection`` are all based on the official names from the OpenStack Service Types Authority. ``openstack.proxy.Proxy`` is now a subclass of ``keystoneauth1.adapter.Adapter``. Removed local logic that duplicates keystoneauth logic. This means every proxy also has direct REST primitives available. .. code-block:: python connection = connection.Connection() servers = connection.compute.servers() server_response = connection.compute.get('/servers') openstacksdk-0.11.3/releasenotes/notes/cache-in-use-volumes-c7fa8bb378106fe3.yaml0000666000175100017510000000011213236151340027242 0ustar zuulzuul00000000000000--- fixes: - Fixed caching the volume list when volumes are in use. 
openstacksdk-0.11.3/releasenotes/notes/list-servers-all-projects-349e6dc665ba2e8d.yaml0000666000175100017510000000035013236151340030361 0ustar zuulzuul00000000000000--- features: - Add 'all_projects' parameter to list_servers and search_servers which will tell Nova to return servers for all projects rather than just for the current project. This is only available to cloud admins. openstacksdk-0.11.3/releasenotes/notes/feature-server-metadata-50caf18cec532160.yaml0000666000175100017510000000024713236151340027737 0ustar zuulzuul00000000000000--- features: - Add new APIs, OpenStackCloud.set_server_metadata() and OpenStackCloud.delete_server_metadata() to manage metadata of existing nova compute instances openstacksdk-0.11.3/.stestr.conf0000666000175100017510000000006613236151340016564 0ustar zuulzuul00000000000000[DEFAULT] test_path=./openstack/tests/unit top_dir=./ openstacksdk-0.11.3/HACKING.rst0000666000175100017510000000264513236151340016116 0ustar zuulzuul00000000000000openstacksdk Style Commandments =============================== Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/ Indentation ----------- PEP-8 allows for 'visual' indentation. Do not use it. Visual indentation looks like this: .. code-block:: python return_value = self.some_method(arg1, arg1, arg3, arg4) Visual indentation makes refactoring the code base unneccesarily hard. Instead of visual indentation, use this: .. code-block:: python return_value = self.some_method( arg1, arg1, arg3, arg4) That way, if some_method ever needs to be renamed, the only line that needs to be touched is the line with some_method. Additionaly, if you need to line break at the top of a block, please indent the continuation line an additional 4 spaces, like this: .. code-block:: python for val in self.some_method( arg1, arg1, arg3, arg4): self.do_something_awesome() Neither of these are 'mandated' by PEP-8. However, they are prevailing styles within this code base. 
Unit Tests ---------- Unit tests should be virtually instant. If a unit test takes more than 1 second to run, it is a bad unit test. Honestly, 1 second is too slow. All unit test classes should subclass `openstack.tests.unit.base.BaseTestCase`. The base TestCase class takes care of properly creating `OpenStackCloud` objects in a way that protects against local environment. openstacksdk-0.11.3/MANIFEST.in0000666000175100017510000000013513236151340016046 0ustar zuulzuul00000000000000include AUTHORS include ChangeLog exclude .gitignore exclude .gitreview global-exclude *.pycopenstacksdk-0.11.3/devstack/0000775000175100017510000000000013236151501016112 5ustar zuulzuul00000000000000openstacksdk-0.11.3/devstack/plugin.sh0000666000175100017510000000225413236151340017752 0ustar zuulzuul00000000000000# Install and configure **openstacksdk** library in devstack # # To enable openstacksdk in devstack add an entry to local.conf that looks like # # [[local|localrc]] # enable_plugin openstacksdk git://git.openstack.org/openstack/python-openstacksdk function preinstall_openstacksdk { : } function install_openstacksdk { if use_library_from_git "python-openstacksdk"; then # don't clone, it'll be done by the plugin install setup_dev_lib "python-openstacksdk" else pip_install "python-openstacksdk" fi } function configure_openstacksdk { : } function initialize_openstacksdk { : } function unstack_openstacksdk { : } function clean_openstacksdk { : } # This is the main for plugin.sh if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then preinstall_openstacksdk elif [[ "$1" == "stack" && "$2" == "install" ]]; then install_openstacksdk elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then configure_openstacksdk elif [[ "$1" == "stack" && "$2" == "extra" ]]; then initialize_openstacksdk fi if [[ "$1" == "unstack" ]]; then unstack_openstacksdk fi if [[ "$1" == "clean" ]]; then clean_openstacksdk fi openstacksdk-0.11.3/babel.cfg0000666000175100017510000000002013236151340016027 0ustar 
zuulzuul00000000000000[python: **.py] openstacksdk-0.11.3/doc/0000775000175100017510000000000013236151501015053 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/0000775000175100017510000000000013236151501016353 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/index.rst0000666000175100017510000000146513236151340020225 0ustar zuulzuul00000000000000Welcome to the OpenStack SDK! ============================= This documentation is split into three sections: * an :doc:`installation ` guide * a section for :doc:`users ` looking to build applications which make use of OpenStack * a section for those looking to :doc:`contribute ` to this project Installation ------------ .. toctree:: :maxdepth: 2 install/index For Users --------- .. toctree:: :maxdepth: 2 user/index For Contributors ---------------- .. toctree:: :maxdepth: 2 contributor/index .. include:: ../../README.rst General Information ------------------- General information about the SDK including a glossary and release history. .. toctree:: :maxdepth: 1 Glossary of Terms Release Notes openstacksdk-0.11.3/doc/source/enforcer.py0000666000175100017510000001223013236151364020537 0ustar zuulzuul00000000000000import importlib import itertools import os from bs4 import BeautifulSoup from sphinx import errors # NOTE: We do this because I can't find any way to pass "-v" # into sphinx-build through pbr... DEBUG = True if os.getenv("ENFORCER_DEBUG") else False WRITTEN_METHODS = set() # NOTE: This is temporary! These methods currently exist on the base # Proxy class as public methods, but they're deprecated in favor of # subclasses actually exposing them if necessary. However, as they're # public and purposely undocumented, they cause spurious warnings. # Ignore these methods until they're actually removed from the API, # and then we can take this special case out. 
IGNORED_METHODS = ("wait_for_delete", "wait_for_status") class EnforcementError(errors.SphinxError): """A mismatch between what exists and what's documented""" category = "Enforcer" def get_proxy_methods(): """Return a set of public names on all proxies""" names = ["openstack.baremetal.v1._proxy", "openstack.clustering.v1._proxy", "openstack.block_storage.v2._proxy", "openstack.compute.v2._proxy", "openstack.database.v1._proxy", "openstack.identity.v2._proxy", "openstack.identity.v3._proxy", "openstack.image.v1._proxy", "openstack.image.v2._proxy", "openstack.key_manager.v1._proxy", "openstack.load_balancer.v2._proxy", "openstack.message.v2._proxy", "openstack.network.v2._proxy", "openstack.object_store.v1._proxy", "openstack.orchestration.v1._proxy", "openstack.workflow.v2._proxy"] modules = (importlib.import_module(name) for name in names) methods = set() for module in modules: # We're not going to use the Proxy for anything other than a `dir` # so just pass a dummy value so we can create the instance. instance = module.Proxy("") # We only document public names names = [name for name in dir(instance) if not name.startswith("_")] # Remove the wait_for_* names temporarily. for name in IGNORED_METHODS: names.remove(name) good_names = [module.__name__ + ".Proxy." + name for name in names] methods.update(good_names) return methods def page_context(app, pagename, templatename, context, doctree): """Handle html-page-context-event This event is emitted once the builder has the contents to create an HTML page, but before the template is rendered. This is the point where we'll know what documentation is going to be written, so gather all of the method names that are about to be included so we can check which ones were or were not processed earlier by autodoc. 
""" if "users/proxies" in pagename: soup = BeautifulSoup(context["body"], "html.parser") dts = soup.find_all("dt") ids = [dt.get("id") for dt in dts] written = 0 for id in ids: if id is not None and "_proxy.Proxy" in id: WRITTEN_METHODS.add(id) written += 1 if DEBUG: app.info("ENFORCER: Wrote %d proxy methods for %s" % ( written, pagename)) def build_finished(app, exception): """Handle build-finished event This event is emitted once the builder has written all of the output. At this point we just compare what we know was written to what we know exists within the modules and share the results. When enforcer_warnings_as_errors=True in conf.py, this method will raise EnforcementError on any failures in order to signal failure. """ all_methods = get_proxy_methods() app.info("ENFORCER: %d proxy methods exist" % len(all_methods)) app.info("ENFORCER: %d proxy methods written" % len(WRITTEN_METHODS)) missing = all_methods - WRITTEN_METHODS def is_ignored(name): for ignored_name in IGNORED_METHODS: if ignored_name in name: return True return False # TEMPORARY: Ignore the wait_for names when determining what is missing. app.info("ENFORCER: Ignoring wait_for_* names...") missing = set(itertools.filterfalse(is_ignored, missing)) missing_count = len(missing) app.info("ENFORCER: Found %d missing proxy methods " "in the output" % missing_count) # TODO(shade) This is spewing a bunch of content for missing thing that # are not actually missing. Leave it as info rather than warn so that the # gate doesn't break ... but we should figure out why this is broken and # fix it. # We also need to deal with Proxy subclassing keystoneauth.adapter.Adapter # now - some of the warnings come from Adapter elements. 
for name in sorted(missing): app.info("ENFORCER: %s was not included in the output" % name) if app.config.enforcer_warnings_as_errors and missing_count > 0: raise EnforcementError( "There are %d undocumented proxy methods" % missing_count) def setup(app): app.add_config_value("enforcer_warnings_as_errors", False, "env") app.connect("html-page-context", page_context) app.connect("build-finished", build_finished) openstacksdk-0.11.3/doc/source/user/0000775000175100017510000000000013236151501017331 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/index.rst0000666000175100017510000001165013236151364021206 0ustar zuulzuul00000000000000Getting started with the OpenStack SDK ====================================== For a listing of terms used throughout the SDK, including the names of projects and services supported by it, see the :doc:`glossary <../glossary>`. Installation ------------ The OpenStack SDK is available on `PyPI `_ under the name **openstacksdk**. To install it, use ``pip``:: $ pip install openstacksdk .. _user_guides: User Guides ----------- These guides walk you through how to make use of the libraries we provide to work with each OpenStack service. If you're looking for a cookbook approach, this is where you'll want to begin. .. toctree:: :maxdepth: 1 Configuration Connect to an OpenStack Cloud Connect to an OpenStack Cloud Using a Config File Using Cloud Abstraction Layer Logging Microversions Baremetal Block Storage Clustering Compute Database Identity Image Key Manager Message Network Object Store Orchestration API Documentation ----------------- Service APIs are exposed through a two-layered approach. The classes exposed through our `Connection Interface`_ are the place to start if you're an application developer consuming an OpenStack cloud. The `Resource Interface`_ is the layer upon which the `Connection Interface`_ is built, with methods on `Service Proxies`_ accepting and returning :class:`~openstack.resource.Resource` objects. 
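As a rough sketch of how the two layers fit together (this assumes a configured cloud and is not runnable without one; ``my-cloud`` is a placeholder for a named cloud in your ``clouds.yaml``):

```python
from openstack import connection

# Build a Connection from a named cloud in clouds.yaml.
# 'my-cloud' is a placeholder for whatever your entry is called.
conn = connection.from_config(cloud_name='my-cloud')

# Connection/proxy layer: proxy methods accept and return Resource objects.
for server in conn.compute.servers():
    print(server.name)

# Each proxy is also a keystoneauth Adapter, so direct REST calls work too.
response = conn.compute.get('/servers')
```

The same ``conn`` object carries the session and authentication state for every service proxy, so no extra setup is needed between the two styles of call.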
The Cloud Abstraction layer has a data model. .. toctree:: :maxdepth: 1 model Connection Interface ~~~~~~~~~~~~~~~~~~~~ A :class:`~openstack.connection.Connection` instance maintains your cloud config, session and authentication information, providing you with a set of higher-level interfaces to work with OpenStack services. .. toctree:: :maxdepth: 1 connection Once you have a :class:`~openstack.connection.Connection` instance, services are accessed through instances of :class:`~openstack.proxy.BaseProxy` or subclasses of it that exist as attributes on the :class:`~openstack.connection.Connection`. .. autoclass:: openstack.proxy.BaseProxy :members: .. _service-proxies: Service Proxies ~~~~~~~~~~~~~~~ The following service proxies exist on the :class:`~openstack.connection.Connection`. The service proxies are always present on the :class:`~openstack.connection.Connection` object, but the combination of your ``CloudRegion`` and the catalog of the cloud in question controls which services can be used. .. toctree:: :maxdepth: 1 Baremetal Block Storage Clustering Compute Database Identity v2 Identity v3 Image v1 Image v2 Key Manager Load Balancer Message v2 Network Object Store Orchestration Workflow Resource Interface ~~~~~~~~~~~~~~~~~~ The *Resource* layer is a lower-level interface to communicate with OpenStack services. While the classes exposed by the `Service Proxies`_ build a convenience layer on top of this, :class:`~openstack.resource.Resource` objects can be used directly. However, the most common usage of this layer is in receiving an object from a class in the `Connection Interface`_, modifying it, and sending it back to the `Service Proxies`_ layer, such as to update a resource on the server. The following services have exposed :class:`~openstack.resource.Resource` classes. .. 
toctree:: :maxdepth: 1 Baremetal Block Storage Clustering Compute Database Identity Image Key Management Load Balancer Network Orchestration Object Store Workflow Low-Level Classes ~~~~~~~~~~~~~~~~~ The following classes are not commonly used by application developers, but are used to construct applications to talk to OpenStack APIs. Typically these parts are managed through the `Connection Interface`_, but their use can be customized. .. toctree:: :maxdepth: 1 resource service_filter utils Presentations ============= .. toctree:: :maxdepth: 1 multi-cloud-demo openstacksdk-0.11.3/doc/source/user/guides/0000775000175100017510000000000013236151501020611 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/guides/clustering.rst0000666000175100017510000000236513236151340023533 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================================ Using OpenStack Clustering ================================ Before working with the Clustering service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used by all examples in this guide. The primary abstractions/resources of the Clustering service are: .. 
toctree:: :maxdepth: 1 Profile Type Profile Cluster Node Policy Type Policy Receiver Action Event openstacksdk-0.11.3/doc/source/user/guides/key_manager.rst0000666000175100017510000000377013236151340023637 0ustar zuulzuul00000000000000Using OpenStack Key Manager =========================== Before working with the Key Manager service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. contents:: Table of Contents :local: .. note:: Some interactions with the Key Manager service differ from that of other services in that resources do not have a proper ``id`` parameter, which is necessary to make some calls. Instead, resources have a separately named id attribute, e.g., the Secret resource has ``secret_id``. The examples below outline when to pass in those id values. Create a Secret --------------- The Key Manager service allows you to create new secrets by passing the attributes of the :class:`~openstack.key_manager.v1.secret.Secret` to the :meth:`~openstack.key_manager.v1._proxy.Proxy.create_secret` method. .. literalinclude:: ../examples/key_manager/create.py :pyobject: create_secret List Secrets ------------ Once you have stored some secrets, they are available for you to list via the :meth:`~openstack.key_manager.v1._proxy.Proxy.secrets` method. This method returns a generator, which yields each :class:`~openstack.key_manager.v1.secret.Secret`. .. literalinclude:: ../examples/key_manager/list.py :pyobject: list_secrets The :meth:`~openstack.key_manager.v1._proxy.Proxy.secrets` method can also make more advanced queries to limit the secrets that are returned. .. 
literalinclude:: ../examples/key_manager/list.py :pyobject: list_secrets_query Get Secret Payload ------------------ Once you have received a :class:`~openstack.key_manager.v1.secret.Secret`, you can obtain the payload for it by passing the secret's id value to the :meth:`~openstack.key_manager.v1._proxy.Proxy.secrets` method. Use the :data:`~openstack.key_manager.v1.secret.Secret.secret_id` attribute when making this request. .. literalinclude:: ../examples/key_manager/get.py :pyobject: get_secret_payload openstacksdk-0.11.3/doc/source/user/guides/logging.rst0000666000175100017510000001000213236151340022765 0ustar zuulzuul00000000000000======= Logging ======= .. note:: TODO(shade) This document is written from a shade POV. It needs to be combined with the existing logging guide, but also the logging systems need to be rationalized. `openstacksdk` uses `Python Logging`_. As `openstacksdk` is a library, it does not configure logging handlers automatically, expecting instead for that to be the purview of the consuming application. Simple Usage ------------ For consumers who just want to get a basic logging setup without thinking about it too deeply, there is a helper method. If used, it should be called before any other openstacksdk functionality. .. autofunction:: openstack.enable_logging .. code-block:: python import openstack openstack.enable_logging() The ``stream`` parameter controls the stream where log message are written to. It defaults to `sys.stdout` which will result in log messages being written to STDOUT. It can be set to another output stream, or to ``None`` to disable logging to the console. The ``path`` parameter sets up logging to log to a file. By default, if ``path`` is given and ``stream`` is not, logging will only go to ``path``. You can combine the ``path`` and ``stream`` parameters to log to both places simultaneously. To log messages to a file called ``openstack.log`` and the console on ``stdout``: .. 
code-block:: python import sys from openstack import utils utils.enable_logging(debug=True, path='openstack.log', stream=sys.stdout) `openstack.enable_logging` also sets up a few other loggers and squelches some warnings or log messages that are otherwise uninteresting or unactionable by an openstacksdk user. Advanced Usage -------------- `openstacksdk` logs to a set of different named loggers. Most of the logging is set up to log to the root ``openstack`` logger. There are additional sub-loggers that are used at times, primarily so that a user can decide to turn on or off a specific type of logging. They are listed below. openstack.config Issues pertaining to configuration are logged to the ``openstack.config`` logger. openstack.task_manager `openstacksdk` uses a Task Manager to perform remote calls. The ``openstack.task_manager`` logger emits messages at the start and end of each Task announcing what it is going to run and then what it ran and how long it took. Logging ``openstack.task_manager`` is a good way to get a trace of external actions `openstacksdk` is taking without full `HTTP Tracing`_. openstack.iterate_timeout When `openstacksdk` needs to poll a resource, it does so in a loop that waits between iterations and ultimately times out. The ``openstack.iterate_timeout`` logger emits messages for each iteration indicating it is waiting and for how long. These can be useful to see for long running tasks so that one can know things are not stuck, but can also be noisy. openstack.fnmatch `openstacksdk` will try to use `fnmatch`_ on given `name_or_id` arguments. It's a best effort attempt, so pattern misses are logged to ``openstack.fnmatch``. A user may not be intending to use an fnmatch pattern - such as if they are trying to find an image named ``Fedora 24 [official]``, so these messages are logged separately. .. _fnmatch: https://pymotw.com/2/fnmatch/ HTTP Tracing ------------ HTTP Interactions are handled by `keystoneauth`_. 
If you want to enable HTTP tracing while using openstacksdk and are not using `openstack.enable_logging`, set the log level of the ``keystoneauth`` logger to ``DEBUG``. For more information see https://docs.openstack.org/keystoneauth/latest/using-sessions.html#logging .. _keystoneauth: https://docs.openstack.org/keystoneauth/latest/ Python Logging -------------- Python logging is a standard feature of Python and is documented fully in the Python Documentation, which varies by version of Python. For more information on Python Logging for Python v2, see https://docs.python.org/2/library/logging.html. For more information on Python Logging for Python v3, see https://docs.python.org/3/library/logging.html. openstacksdk-0.11.3/doc/source/user/guides/connect_from_config.rst0000666000175100017510000000436013236151340025352 0ustar zuulzuul00000000000000Connect From Config =================== In order to work with an OpenStack cloud you first need to create a :class:`~openstack.connection.Connection` to it using your credentials. A :class:`~openstack.connection.Connection` can be created in 3 ways, using the class itself (see :doc:`connect`), a file, or environment variables as illustrated below. The SDK uses `os-client-config `_ to handle the configuration. Create Connection From A File ----------------------------- Default Location **************** To create a connection from a file you need a YAML file to contain the configuration. .. literalinclude:: ../../contributor/clouds.yaml :language: yaml To use a configuration file called ``clouds.yaml`` in one of the default locations: * Current Directory * ~/.config/openstack * /etc/openstack call :py:func:`~openstack.connection.from_config`. The ``from_config`` function takes three optional arguments: * **cloud_name** allows you to specify a cloud from your ``clouds.yaml`` file. * **cloud_config** allows you to pass in an existing ``openstack.config.loader.OpenStackConfig``` object. 
* **options** allows you to specify a namespace object with options to be added to the cloud config. .. literalinclude:: ../examples/connect.py :pyobject: Opts .. literalinclude:: ../examples/connect.py :pyobject: create_connection_from_config .. literalinclude:: ../examples/connect.py :pyobject: create_connection_from_args .. note:: To enable logging, set ``debug=True`` in the ``options`` object. User Defined Location ********************* To use a configuration file in a user defined location set the environment variable ``OS_CLIENT_CONFIG_FILE`` to the absolute path of a file.:: export OS_CLIENT_CONFIG_FILE=/path/to/my/config/my-clouds.yaml and call :py:func:`~openstack.connection.from_config` with the **cloud_name** of the cloud configuration to use, . .. Create Connection From Environment Variables -------------------------------------------- TODO(etoews): Document when https://bugs.launchpad.net/os-client-config/+bug/1489617 is fixed. Next ---- Now that you can create a connection, continue with the :ref:`user_guides` for an OpenStack service. openstacksdk-0.11.3/doc/source/user/guides/identity.rst0000666000175100017510000000777513236151340023217 0ustar zuulzuul00000000000000Using OpenStack Identity ======================== Before working with the Identity service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. The OpenStack Identity service is the default identity management system for OpenStack. The Identity service authentication process confirms the identity of a user and an incoming request by validating a set of credentials that the user supplies. Initially, these credentials are a user name and password or a user name and API key. When the Identity service validates user credentials, it issues an authentication token that the user provides in subsequent requests. 
An authentication token is an alpha-numeric text string that enables access to OpenStack APIs and resources. A token may be revoked at any time and is valid for a finite duration. List Users ---------- A **user** is a digital representation of a person, system, or service that uses OpenStack cloud services. The Identity service validates that incoming requests are made by the user who claims to be making the call. Users have a login and can access resources by using assigned tokens. Users can be directly assigned to a particular project and behave as if they are contained in that project. .. literalinclude:: ../examples/identity/list.py :pyobject: list_users Full example: `identity resource list`_ List Credentials ---------------- **Credentials** are data that confirms the identity of the user. For example, user name and password, user name and API key, or an authentication token that the Identity service provides. .. literalinclude:: ../examples/identity/list.py :pyobject: list_credentials Full example: `identity resource list`_ List Projects ------------- A **project** is a container that groups or isolates resources or identity objects. .. literalinclude:: ../examples/identity/list.py :pyobject: list_projects Full example: `identity resource list`_ List Domains ------------ A **domain** is an Identity service API v3 entity and represents a collection of projects and users that defines administrative boundaries for the management of Identity entities. Users can be granted the administrator role for a domain. A domain administrator can create projects, users, and groups in a domain and assign roles to users and groups in a domain. .. literalinclude:: ../examples/identity/list.py :pyobject: list_domains Full example: `identity resource list`_ List Groups ----------- A **group** is an Identity service API v3 entity and represents a collection of users that are owned by a domain. A group role granted to a domain or project applies to all users in the group. 
Adding users to, or removing users from, a group respectively grants, or revokes, their role and authentication to the associated domain or project. .. literalinclude:: ../examples/identity/list.py :pyobject: list_groups Full example: `identity resource list`_ List Services ------------- A **service** is an OpenStack service, such as Compute, Object Storage, or Image service, that provides one or more endpoints through which users can access resources and perform operations. .. literalinclude:: ../examples/identity/list.py :pyobject: list_services Full example: `identity resource list`_ List Endpoints -------------- An **endpoint** is a network-accessible address, usually a URL, through which you can access a service. .. literalinclude:: ../examples/identity/list.py :pyobject: list_endpoints Full example: `identity resource list`_ List Regions ------------ A **region** is an Identity service API v3 entity and represents a general division in an OpenStack deployment. You can associate zero or more sub-regions with a region to make a tree-like structured hierarchy. .. literalinclude:: ../examples/identity/list.py :pyobject: list_regions Full example: `identity resource list`_ .. _identity resource list: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/identity/list.py openstacksdk-0.11.3/doc/source/user/guides/baremetal.rst0000666000175100017510000000047513236151340023310 0ustar zuulzuul00000000000000Using OpenStack Baremetal =========================== Before working with the Baremetal service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. 
TODO(Qiming): Implement this guide openstacksdk-0.11.3/doc/source/user/guides/message.rst0000666000175100017510000000047213236151340022775 0ustar zuulzuul00000000000000Using OpenStack Message ======================= Before working with the Message service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. TODO(briancurtin): Implement this guide openstacksdk-0.11.3/doc/source/user/guides/object_store.rst0000666000175100017510000002042213236151340024030 0ustar zuulzuul00000000000000Using OpenStack Object Store ============================ Before working with the Object Store service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. contents:: Table of Contents :local: The primary resources of the Object Store service are containers and objects. Working with Containers ----------------------- Listing Containers ****************** To list existing containers, use the :meth:`~openstack.object_store.v1._proxy.Proxy.containers` method. :: >>> for cont in conn.object_store.containers(): ... print cont ... openstack.object_store.v1.container.Container: {u'count': 5, u'bytes': 500, u'name': u'my container'} openstack.object_store.v1.container.Container: {u'count': 0, u'bytes': 0, u'name': u'empty container'} openstack.object_store.v1.container.Container: {u'count': 100, u'bytes': 1000000, u'name': u'another container'} The ``containers`` method returns a generator which yields :class:`~openstack.object_store.v1.container.Container` objects. It handles pagination for you, which can be adjusted via the ``limit`` argument. By default, the ``containers`` method will yield as many containers as the service will return, and it will continue requesting until it receives no more. 
:: >>> for cont in conn.object_store.containers(limit=500): ... print(cont) ... <500 Containers> ... another request transparently made to the Object Store service <500 more Containers> ... Creating Containers ******************* To create a container, use the :meth:`~openstack.object_store.v1._proxy.Proxy.create_container` method. :: >>> cont = conn.object_store.create_container(name="new container") >>> cont openstack.object_store.v1.container.Container: {'name': u'new container'} Working with Container Metadata ******************************* To get the metadata for a container, use the :meth:`~openstack.object_store.v1._proxy.Proxy.get_container_metadata` method. This method either takes the name of a container, or a :class:`~openstack.object_store.v1.container.Container` object, and it returns a `Container` object with all of its metadata attributes set. :: >>> cont = conn.object_store.get_container_metadata("new container") openstack.object_store.v1.container.Container: {'content-length': '0', 'x-container-object-count': '0', 'name': u'new container', 'accept-ranges': 'bytes', 'x-trans-id': 'tx22c5de63466e4c05bb104-0054740c39', 'date': 'Tue, 25 Nov 2014 04:57:29 GMT', 'x-timestamp': '1416889793.23520', 'x-container-read': '.r:mysite.com', 'x-container-bytes-used': '0', 'content-type': 'text/plain; charset=utf-8'} To set the metadata for a container, use the :meth:`~openstack.object_store.v1._proxy.Proxy.set_container_metadata` method. This method takes a :class:`~openstack.object_store.v1.container.Container` object. For example, to grant another user write access to this container, you can set the :attr:`~openstack.object_store.v1.container.Container.write_ACL` on a resource and pass it to `set_container_metadata`. 
:: >>> cont.write_ACL = "big_project:another_user" >>> conn.object_store.set_container_metadata(cont) openstack.object_store.v1.container.Container: {'content-length': '0', 'x-container-object-count': '0', 'name': u'my new container', 'accept-ranges': 'bytes', 'x-trans-id': 'txc3ee751f971d41de9e9f4-0054740ec1', 'date': 'Tue, 25 Nov 2014 05:08:17 GMT', 'x-timestamp': '1416889793.23520', 'x-container-read': '.r:mysite.com', 'x-container-bytes-used': '0', 'content-type': 'text/plain; charset=utf-8', 'x-container-write': 'big_project:another_user'} Working with Objects -------------------- Objects are held in containers. From an API standpoint, you work with them using similarly named methods, typically with an additional argument to specify their container. Listing Objects *************** To list the objects that exist in a container, use the :meth:`~openstack.object_store.v1._proxy.Proxy.objects` method. If you have a :class:`~openstack.object_store.v1.container.Container` object, you can pass it to ``objects``. :: >>> print cont.name pictures >>> for obj in conn.object_store.objects(cont): ... print obj ... openstack.object_store.v1.container.Object: {u'hash': u'0522d4ccdf9956badcb15c4087a0c4cb', u'name': u'pictures/selfie.jpg', u'bytes': 15744, 'last-modified': u'2014-10-31T06:33:36.618640', u'last_modified': u'2014-10-31T06:33:36.618640', u'content_type': u'image/jpeg', 'container': u'pictures', 'content-type': u'image/jpeg'} ... Similar to the :meth:`~openstack.object_store.v1._proxy.Proxy.containers` method, ``objects`` returns a generator which yields :class:`~openstack.object_store.v1.obj.Object` objects stored in the container. It also handles pagination for you, which you can adjust with the ``limit`` parameter, otherwise making each request for the maximum that your Object Store will return. If you have the name of a container instead of an object, you can also pass that to the ``objects`` method. 
:: >>> for obj in conn.object_store.objects("pictures".decode("utf8"), limit=100): ... print obj ... <100 Objects> ... another request transparently made to the Object Store service <100 more Objects> Getting Object Data ******************* Once you have an :class:`~openstack.object_store.v1.obj.Object`, you get the data stored inside of it with the :meth:`~openstack.object_store.v1._proxy.Proxy.get_object_data` method. :: >>> print ob.name message.txt >>> data = conn.object_store.get_object_data(ob) >>> print data Hello, world! Additionally, if you want to save the object to disk, the :meth:`~openstack.object_store.v1._proxy.Proxy.download_object` convenience method takes an :class:`~openstack.object_store.v1.obj.Object` and a ``path`` to write the contents to. :: >>> conn.object_store.download_object(ob, "the_message.txt") Uploading Objects ***************** Once you have data you'd like to store in the Object Store service, you use the :meth:`~openstack.object_store.v1._proxy.Proxy.upload_object` method. This method takes the ``data`` to be stored, along with at least an object ``name`` and the ``container`` it is to be stored in. :: >>> hello = conn.object_store.upload_object(container="messages", name="helloworld.txt", data="Hello, world!") >>> print hello openstack.object_store.v1.container.Object: {'content-length': '0', 'container': u'messages', 'name': u'helloworld.txt', 'last-modified': 'Tue, 25 Nov 2014 17:39:29 GMT', 'etag': '5eb63bbbe01eeed093cb22bb8f5acdc3', 'x-trans-id': 'tx3035d41b03334aeaaf3dd-005474bed0', 'date': 'Tue, 25 Nov 2014 17:39:28 GMT', 'content-type': 'text/html; charset=UTF-8'} Working with Object Metadata **************************** Working with metadata on objects is identical to how it's done with containers. You use the :meth:`~openstack.object_store.v1._proxy.Proxy.get_object_metadata` and :meth:`~openstack.object_store.v1._proxy.Proxy.set_object_metadata` methods. 
The metadata attributes to be set can be found on the :class:`~openstack.object_store.v1.obj.Object` object. :: >>> secret.delete_after = 300 >>> secret = conn.object_store.set_object_metadata(secret) We set the :attr:`~openstack.object_store.obj.Object.delete_after` value to 300, causing the object to be deleted in 300 seconds, or five minutes. That attribute corresponds to the ``X-Delete-After`` header value, which you can see is returned when we retrieve the updated metadata. :: >>> conn.object_store.get_object_metadata(secret) openstack.object_store.v1.container.Object: {'content-length': '11', 'container': u'Secret Container', 'name': u'selfdestruct.txt', 'x-delete-after': 300, 'accept-ranges': 'bytes', 'last-modified': 'Tue, 25 Nov 2014 17:50:45 GMT', 'etag': '5eb63bbbe01eeed093cb22bb8f5acdc3', 'x-timestamp': '1416937844.36805', 'x-trans-id': 'tx5c3fd94adf7c4e1b8f334-005474c17b', 'date': 'Tue, 25 Nov 2014 17:50:51 GMT', 'content-type': 'text/plain'} openstacksdk-0.11.3/doc/source/user/guides/database.rst0000666000175100017510000000046713236151340023121 0ustar zuulzuul00000000000000Using OpenStack Database ======================== Before working with the Database service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. TODO(thowe): Implement this guide openstacksdk-0.11.3/doc/source/user/guides/clustering/0000775000175100017510000000000013236151501022770 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/guides/clustering/receiver.rst0000666000175100017510000000537013236151340025336 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================== Managing Receivers ================== Receivers are the event sinks associated with senlin clusters. When certain events (or alarms) are seen by monitoring software, the software can notify the senlin clusters of those events (or alarms). When senlin receives those notifications, it can automatically trigger some predefined operations with preset parameter values. List Receivers ~~~~~~~~~~~~~~ To examine the list of receivers: .. literalinclude:: ../../examples/clustering/receiver.py :pyobject: list_receivers When listing receivers, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage receiver`_ Create Receiver ~~~~~~~~~~~~~~~ When creating a receiver, you will provide a dictionary with keys and values according to the receiver type referenced. .. literalinclude:: ../../examples/clustering/receiver.py :pyobject: create_receiver Optionally, you can specify a ``metadata`` keyword argument that contains some key-value pairs to be associated with the receiver. Full example: `manage receiver`_ Get Receiver ~~~~~~~~~~~~ To get a receiver based on its name or ID: .. literalinclude:: ../../examples/clustering/receiver.py :pyobject: get_receiver Full example: `manage receiver`_ Find Receiver ~~~~~~~~~~~~~ To find a receiver based on its name or ID: .. literalinclude:: ../../examples/clustering/receiver.py :pyobject: find_receiver Full example: `manage receiver`_ Update Receiver ~~~~~~~~~~~~~~~ After a receiver is created, most of its properties are immutable. 
Still, you can update a receiver's ``name`` and/or ``params``. .. literalinclude:: ../../examples/clustering/receiver.py :pyobject: update_receiver Full example: `manage receiver`_ Delete Receiver ~~~~~~~~~~~~~~~ A receiver can be deleted after creation, provided that it is not referenced by any active clusters. If you attempt to delete a receiver that is still in use, you will get an error message. .. literalinclude:: ../../examples/clustering/receiver.py :pyobject: delete_receiver .. _manage receiver: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/receiver.py openstacksdk-0.11.3/doc/source/user/guides/clustering/action.rst0000666000175100017510000000267613236151340025005 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ==================== Working with Actions ==================== An action is an abstraction of some logic that can be executed by a worker thread. Most of the operations supported by Senlin are executed asynchronously, which means they are queued into the database and then picked up by a worker thread for execution. List Actions ~~~~~~~~~~~~ To examine the list of actions: .. literalinclude:: ../../examples/clustering/action.py :pyobject: list_actions When listing actions, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage action`_ Get Action ~~~~~~~~~~ To get an action based on its name or ID: .. 
literalinclude:: ../../examples/clustering/action.py :pyobject: get_action .. _manage action: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/action.py openstacksdk-0.11.3/doc/source/user/guides/clustering/cluster.rst0000666000175100017510000001102413236151340025204 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================= Managing Clusters ================= Clusters are first-class citizens in Senlin service design. A cluster is defined as a collection of homogeneous objects. The "homogeneous" here means that the objects managed (aka. Nodes) have to be instantiated from the same "profile type". List Clusters ~~~~~~~~~~~~~ To examine the list of clusters: .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: list_cluster When listing clusters, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage cluster`_ Create Cluster ~~~~~~~~~~~~~~ When creating a cluster, you will provide a dictionary with keys and values according to the cluster type referenced. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: create_cluster Optionally, you can specify a ``metadata`` keyword argument that contains some key-value pairs to be associated with the cluster. Full example: `manage cluster`_ Get Cluster ~~~~~~~~~~~ To get a cluster based on its name or ID: .. 
literalinclude:: ../../examples/clustering/cluster.py :pyobject: get_cluster Full example: `manage cluster`_ Find Cluster ~~~~~~~~~~~~ To find a cluster based on its name or ID: .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: find_cluster Full example: `manage cluster`_ Update Cluster ~~~~~~~~~~~~~~ After a cluster is created, most of its properties are immutable. Still, you can update a cluster's ``name`` and/or ``params``. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: update_cluster Full example: `manage cluster`_ Delete Cluster ~~~~~~~~~~~~~~ A cluster can be deleted after creation. When there are nodes in the cluster, the Senlin engine will launch a process to delete all nodes from the cluster and destroy them before deleting the cluster object itself. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: delete_cluster Cluster Add Nodes ~~~~~~~~~~~~~~~~~ Add some existing nodes into the specified cluster. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_add_nodes Cluster Del Nodes ~~~~~~~~~~~~~~~~~ Remove nodes from a specified cluster. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_del_nodes Cluster Replace Nodes ~~~~~~~~~~~~~~~~~~~~~ Replace some existing nodes in the specified cluster. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_replace_nodes Cluster Scale Out ~~~~~~~~~~~~~~~~~ Inflate the size of a cluster. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_scale_out Cluster Scale In ~~~~~~~~~~~~~~~~ Shrink the size of a cluster. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_scale_in Cluster Resize ~~~~~~~~~~~~~~ Resize a cluster. .. 
literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_resize Cluster Policy Attach ~~~~~~~~~~~~~~~~~~~~~ Once a policy is attached (bound) to a cluster, it will be enforced when related actions are performed on that cluster, unless the policy is (temporarily) disabled on the cluster. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_attach_policy Cluster Policy Detach ~~~~~~~~~~~~~~~~~~~~~ Once a policy is attached to a cluster, it can be detached from the cluster at the user's request. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: cluster_detach_policy Cluster Check ~~~~~~~~~~~~~ Check a cluster's health status. The members of the cluster can be checked as well. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: check_cluster Cluster Recover ~~~~~~~~~~~~~~~ To restore a specified cluster, members in the cluster will be checked and recovered. .. literalinclude:: ../../examples/clustering/cluster.py :pyobject: recover_cluster .. _manage cluster: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/cluster.py openstacksdk-0.11.3/doc/source/user/guides/clustering/event.rst0000666000175100017510000000262413236151340024652 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =================== Working with Events =================== An event is a record generated during engine execution. Such an event captures what has happened inside the senlin-engine. 
The senlin-engine service generates event records when it is performing some actions or checking policies. List Events ~~~~~~~~~~~~ To examine the list of events: .. literalinclude:: ../../examples/clustering/event.py :pyobject: list_events When listing events, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage event`_ Get Event ~~~~~~~~~ To get an event based on its name or ID: .. literalinclude:: ../../examples/clustering/event.py :pyobject: get_event .. _manage event: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/event.py openstacksdk-0.11.3/doc/source/user/guides/clustering/profile_type.rst0000666000175100017510000000261013236151340026225 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ========================== Working with Profile Types ========================== A **profile** is a template used to create and manage nodes, i.e. objects exposed by other OpenStack services. A profile encodes the information needed for node creation in a property named ``spec``. List Profile Types ~~~~~~~~~~~~~~~~~~ To examine the known profile types: .. literalinclude:: ../../examples/clustering/profile_type.py :pyobject: list_profile_types Full example: `manage profile type`_ Get Profile Type ~~~~~~~~~~~~~~~~ To get the details about a profile type, you need to provide its name. .. 
literalinclude:: ../../examples/clustering/profile_type.py :pyobject: get_profile_type Full example: `manage profile type`_ .. _manage profile type: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/profile_type.py openstacksdk-0.11.3/doc/source/user/guides/clustering/profile.rst0000666000175100017510000000561313236151340025172 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================= Managing Profiles ================= A **profile type** can be treated as the meta-type of a `Profile` object. A registry of profile types is built when the Cluster service starts. When creating a `Profile` object, you will indicate the profile type used in its `spec` property. List Profiles ~~~~~~~~~~~~~ To examine the list of profiles: .. literalinclude:: ../../examples/clustering/profile.py :pyobject: list_profiles When listing profiles, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage profile`_ Create Profile ~~~~~~~~~~~~~~ When creating a profile, you will provide a dictionary with keys and values specified according to the profile type referenced. .. literalinclude:: ../../examples/clustering/profile.py :pyobject: create_profile Optionally, you can specify a ``metadata`` keyword argument that contains some key-value pairs to be associated with the profile. 
Full example: `manage profile`_ Find Profile ~~~~~~~~~~~~ To find a profile based on its name or ID: .. literalinclude:: ../../examples/clustering/profile.py :pyobject: find_profile Full example: `manage profile`_ Get Profile ~~~~~~~~~~~~ To get a profile based on its name or ID: .. literalinclude:: ../../examples/clustering/profile.py :pyobject: get_profile Full example: `manage profile`_ Update Profile ~~~~~~~~~~~~~~ After a profile is created, most of its properties are immutable. Still, you can update a profile's ``name`` and/or ``metadata``. .. literalinclude:: ../../examples/clustering/profile.py :pyobject: update_profile The Cluster service doesn't allow updating the ``spec`` of a profile. The only way to achieve that is to create a new profile. Full example: `manage profile`_ Delete Profile ~~~~~~~~~~~~~~ A profile can be deleted after creation, provided that it is not referenced by any active clusters or nodes. If you attempt to delete a profile that is still in use, you will get an error message. .. literalinclude:: ../../examples/clustering/profile.py :pyobject: delete_profile .. _manage profile: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/profile.py openstacksdk-0.11.3/doc/source/user/guides/clustering/policy.rst0000666000175100017510000000532413236151340025030 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. ================= Managing Policies ================= A **policy type** can be treated as the meta-type of a `Policy` object. A registry of policy types is built when the Cluster service starts. When creating a `Policy` object, you will indicate the policy type used in its `spec` property. List Policies ~~~~~~~~~~~~~ To examine the list of policies: .. literalinclude:: ../../examples/clustering/policy.py :pyobject: list_policies When listing policies, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage policy`_ Create Policy ~~~~~~~~~~~~~ When creating a policy, you will provide a dictionary with keys and values according to the policy type referenced. .. literalinclude:: ../../examples/clustering/policy.py :pyobject: create_policy Optionally, you can specify a ``metadata`` keyword argument that contains some key-value pairs to be associated with the policy. Full example: `manage policy`_ Find Policy ~~~~~~~~~~~ To find a policy based on its name or ID: .. literalinclude:: ../../examples/clustering/policy.py :pyobject: find_policy Full example: `manage policy`_ Get Policy ~~~~~~~~~~ To get a policy based on its name or ID: .. literalinclude:: ../../examples/clustering/policy.py :pyobject: get_policy Full example: `manage policy`_ Update Policy ~~~~~~~~~~~~~ After a policy is created, most of its properties are immutable. Still, you can update a policy's ``name`` and/or ``metadata``. .. literalinclude:: ../../examples/clustering/policy.py :pyobject: update_policy The Cluster service doesn't allow updating the ``spec`` of a policy. The only way to achieve that is to create a new policy. Full example: `manage policy`_ Delete Policy ~~~~~~~~~~~~~ A policy can be deleted after creation, provided that it is not referenced by any active clusters or nodes. 
If you attempt to delete a policy that is still in use, you will get an error message. .. literalinclude:: ../../examples/clustering/policy.py :pyobject: delete_policy .. _manage policy: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/policy.py openstacksdk-0.11.3/doc/source/user/guides/clustering/policy_type.rst0000666000175100017510000000262313236151340026070 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ========================= Working with Policy Types ========================= A **policy** is a template that encodes the information needed for specifying the rules that are checked/enforced before/after certain actions are performed on a cluster. The rules are encoded in a property named ``spec``. List Policy Types ~~~~~~~~~~~~~~~~~ To examine the known policy types: .. literalinclude:: ../../examples/clustering/policy_type.py :pyobject: list_policy_types Full example: `manage policy type`_ Get Policy Type ~~~~~~~~~~~~~~~ To retrieve the details about a policy type, you need to provide the name of it. .. literalinclude:: ../../examples/clustering/policy_type.py :pyobject: get_policy_type Full example: `manage policy type`_ .. _manage policy type: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/policy_type.py openstacksdk-0.11.3/doc/source/user/guides/clustering/node.rst0000666000175100017510000000550413236151340024456 0ustar zuulzuul00000000000000.. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============== Managing Nodes ============== A node is a logical object managed by the Senlin service. A node can be a member of at most one cluster at any time. A node can be an orphan node, which means it doesn't belong to any cluster. List Nodes ~~~~~~~~~~ To examine the list of nodes: .. literalinclude:: ../../examples/clustering/node.py :pyobject: list_nodes When listing nodes, you can specify the sorting option using the ``sort`` parameter and you can do pagination using the ``limit`` and ``marker`` parameters. Full example: `manage node`_ Create Node ~~~~~~~~~~~ When creating a node, you will provide a dictionary with keys and values according to the node type referenced. .. literalinclude:: ../../examples/clustering/node.py :pyobject: create_node Optionally, you can specify a ``metadata`` keyword argument that contains some key-value pairs to be associated with the node. Full example: `manage node`_ Get Node ~~~~~~~~ To get a node based on its name or ID: .. literalinclude:: ../../examples/clustering/node.py :pyobject: get_node Full example: `manage node`_ Find Node ~~~~~~~~~ To find a node based on its name or ID: .. literalinclude:: ../../examples/clustering/node.py :pyobject: find_node Full example: `manage node`_ Update Node ~~~~~~~~~~~ After a node is created, most of its properties are immutable. Still, you can update a node's ``name`` and/or ``params``. .. 
literalinclude:: ../../examples/clustering/node.py :pyobject: update_node Full example: `manage node`_ Delete Node ~~~~~~~~~~~ A node can be deleted after creation, provided that it is not referenced by any active clusters. If you attempt to delete a node that is still in use, you will get an error message. .. literalinclude:: ../../examples/clustering/node.py :pyobject: delete_node Full example: `manage node`_ Check Node ~~~~~~~~~~ If the underlying physical resource is not healthy, the node will be set to ERROR status. .. literalinclude:: ../../examples/clustering/node.py :pyobject: check_node Full example: `manage node`_ Recover Node ~~~~~~~~~~~~ To restore a specified node. .. literalinclude:: ../../examples/clustering/node.py :pyobject: recover_node .. _manage node: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/clustering/node.py openstacksdk-0.11.3/doc/source/user/guides/connect.rst0000666000175100017510000000242713236151340023004 0ustar zuulzuul00000000000000Connect ======= In order to work with an OpenStack cloud you first need to create a :class:`~openstack.connection.Connection` to it using your credentials. A :class:`~openstack.connection.Connection` can be created in 3 ways, using the class itself, :ref:`config-clouds-yaml`, or :ref:`config-environment-variables`. It is recommended to always use :ref:`config-clouds-yaml` as the same config can be used across tools and languages. Create Connection ----------------- To create a :class:`~openstack.connection.Connection` instance, use the :func:`~openstack.connect` factory function. .. literalinclude:: ../examples/connect.py :pyobject: create_connection Full example at `connect.py `_ .. note:: To enable logging, see the :doc:`logging` user guide. Next ---- Now that you can create a connection, continue with the :ref:`user_guides` to work with an OpenStack service. 
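Since :ref:`config-clouds-yaml` is the recommended way to configure a connection, here is a sketch of what a minimal ``clouds.yaml`` entry might look like. The cloud name, region, endpoint, and credentials below are placeholders, not values taken from this guide; substitute your own.

```yaml
# Hypothetical clouds.yaml entry -- every value is a placeholder.
# openstacksdk looks for this file in the current working directory,
# in ~/.config/openstack/, and in /etc/openstack/.
clouds:
  my-cloud:
    region_name: RegionOne
    auth:
      auth_url: https://identity.example.com:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
```

With such an entry in place, ``conn = openstack.connect(cloud='my-cloud')`` returns a :class:`~openstack.connection.Connection` built from that named configuration.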
As an alternative to creating a :class:`~openstack.connection.Connection` using :ref:`config-clouds-yaml`, you can connect using :ref:`config-environment-variables`. .. TODO(shade) Update the text here and consolidate with the old os-client-config docs so that we have a single and consistent explanation of the envvars cloud, etc. openstacksdk-0.11.3/doc/source/user/guides/orchestration.rst0000666000175100017510000000050613236151340024233 0ustar zuulzuul00000000000000Using OpenStack Orchestration ============================= Before working with the Orchestration service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. TODO(thowe): Implement this guide openstacksdk-0.11.3/doc/source/user/guides/image.rst0000666000175100017510000000564713236151340022440 0ustar zuulzuul00000000000000Using OpenStack Image ===================== Before working with the Image service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. The primary resource of the Image service is the image. List Images ----------- An **image** is a collection of files for a specific operating system that you use to create or rebuild a server. OpenStack provides `pre-built images `_. You can also create custom images, or snapshots, from servers that you have launched. Images come in different formats and are sometimes called virtual machine images. .. literalinclude:: ../examples/image/list.py :pyobject: list_images Full example: `image resource list`_ Create Image ------------ Create an image by uploading its data and setting its attributes. .. literalinclude:: ../examples/image/create.py :pyobject: upload_image Full example: `image resource create`_ .. 
_download_image-stream-true: Downloading an Image with stream=True ------------------------------------- As images are often very large pieces of data, storing their entire contents in the memory of your application can be less than desirable. A more efficient method may be to iterate over a stream of the response data. By choosing to stream the response content, you determine the ``chunk_size`` that is appropriate for your needs, meaning only that many bytes of data are read for each iteration of the loop until all data has been consumed. See :meth:`requests.Response.iter_content` for more information. When you choose to stream an image download, openstacksdk is no longer able to compute the checksum of the response data for you. This example shows how you might do that yourself, in a very similar manner to how the library calculates checksums for non-streamed responses. .. literalinclude:: ../examples/image/download.py :pyobject: download_image_stream Downloading an Image with stream=False -------------------------------------- If you wish to download an image's contents all at once and into memory, simply set ``stream=False``, which is the default. .. literalinclude:: ../examples/image/download.py :pyobject: download_image Full example: `image resource download`_ Delete Image ------------ Delete an image. .. literalinclude:: ../examples/image/delete.py :pyobject: delete_image Full example: `image resource delete`_ .. _image resource create: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/image/create.py .. _image resource delete: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/image/delete.py .. _image resource list: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/image/list.py .. 
_image resource download: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/image/download.py openstacksdk-0.11.3/doc/source/user/guides/block_storage.rst0000666000175100017510000000050613236151340024165 0ustar zuulzuul00000000000000Using OpenStack Block Storage ============================= Before working with the Block Storage service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. TODO(thowe): Implement this guide openstacksdk-0.11.3/doc/source/user/guides/compute.rst0000666000175100017510000000515113236151340023024 0ustar zuulzuul00000000000000Using OpenStack Compute ======================= Before working with the Compute service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. contents:: Table of Contents :local: The primary resource of the Compute service is the server. List Servers ------------ A **server** is a virtual machine that provides access to a compute instance being run by your cloud provider. .. literalinclude:: ../examples/compute/list.py :pyobject: list_servers Full example: `compute resource list`_ List Images ----------- An **image** is the operating system you want to use for your server. .. literalinclude:: ../examples/compute/list.py :pyobject: list_images Full example: `compute resource list`_ List Flavors ------------ A **flavor** is the resource configuration for a server. Each flavor is a unique combination of disk, memory, vCPUs, and network bandwidth. .. literalinclude:: ../examples/compute/list.py :pyobject: list_flavors Full example: `compute resource list`_ List Networks ------------- A **network** provides connectivity to servers. .. 
literalinclude:: ../examples/network/list.py :pyobject: list_networks Full example: `network resource list`_ Create Key Pair --------------- A **key pair** is the public key and private key of `public–key cryptography`_. They are used to encrypt and decrypt login information when connecting to your server. .. literalinclude:: ../examples/compute/create.py :pyobject: create_keypair Full example: `compute resource create`_ Create Server ------------- At minimum, a server requires a name, an image, a flavor, and a network on creation. You can discover the names and IDs of these attributes by listing them as above and then using the find methods to get the appropriate resources. Ideally you'll also create a server using a keypair so you can log in to that server with the private key. Servers take time to boot, so we call ``wait_for_server`` to wait for it to become active. .. literalinclude:: ../examples/compute/create.py :pyobject: create_server Full example: `compute resource create`_ .. _compute resource list: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/compute/list.py .. _network resource list: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/network/list.py .. _compute resource create: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/compute/create.py .. _public–key cryptography: https://en.wikipedia.org/wiki/Public-key_cryptography openstacksdk-0.11.3/doc/source/user/guides/network.rst0000666000175100017510000001103513236151340023037 0ustar zuulzuul00000000000000Using OpenStack Network ======================= Before working with the Network service, you'll need to create a connection to your OpenStack cloud by following the :doc:`connect` user guide. This will provide you with the ``conn`` variable used in the examples below. .. contents:: Table of Contents :local: The primary resource of the Network service is the network. 
List Networks ------------- A **network** is an isolated Layer 2 networking segment. There are two types of networks: project and provider networks. Project networks are fully isolated and are not shared with other projects. Provider networks map to existing physical networks in the data center and provide external network access for servers. Only an OpenStack administrator can create provider networks. Networks can be connected via routers. .. literalinclude:: ../examples/network/list.py :pyobject: list_networks Full example: `network resource list`_ List Subnets ------------ A **subnet** is a block of IP addresses and associated configuration state. Subnets are used to allocate IP addresses when new ports are created on a network. .. literalinclude:: ../examples/network/list.py :pyobject: list_subnets Full example: `network resource list`_ List Ports ---------- A **port** is a connection point for attaching a single device, such as the NIC of a server, to a network. The port also describes the associated network configuration, such as the MAC and IP addresses to be used on that port. .. literalinclude:: ../examples/network/list.py :pyobject: list_ports Full example: `network resource list`_ List Security Groups -------------------- A **security group** acts as a virtual firewall for servers. It is a container for security group rules, which specify the type of network traffic and direction that is allowed to pass through a port. .. literalinclude:: ../examples/network/list.py :pyobject: list_security_groups Full example: `network resource list`_ List Routers ------------ A **router** is a logical component that forwards data packets between networks. It also performs Layer 3 and NAT forwarding to provide external network access for servers on project networks. .. 
literalinclude:: ../examples/network/list.py :pyobject: list_routers Full example: `network resource list`_ List Network Agents ------------------- A **network agent** is a plugin that handles various tasks used to implement virtual networks. These agents include neutron-dhcp-agent, neutron-l3-agent, neutron-metering-agent, and neutron-lbaas-agent, among others. .. literalinclude:: ../examples/network/list.py :pyobject: list_network_agents Full example: `network resource list`_ Create Network -------------- Create a project network and subnet. This network can be used when creating a server and allows the server to communicate with other servers on the same project network. .. literalinclude:: ../examples/network/create.py :pyobject: create_network Full example: `network resource create`_ Open a Port ----------- When creating a security group for a network, you will need to open certain ports to allow communication through them. For example, you may need to enable HTTPS access on port 443. .. literalinclude:: ../examples/network/security_group_rules.py :pyobject: open_port Full example: `network security group create`_ Accept Pings ------------ In order to ping a machine on your network within a security group, you will need to create a rule to allow inbound ICMP packets. .. literalinclude:: ../examples/network/security_group_rules.py :pyobject: allow_ping Full example: `network security group create`_ Delete Network -------------- Delete a project network and its subnets. .. literalinclude:: ../examples/network/delete.py :pyobject: delete_network Full example: `network resource delete`_ .. _network resource create: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/network/create.py .. _network resource delete: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/network/delete.py .. _network resource list: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/network/list.py .. 
_network security group create: http://git.openstack.org/cgit/openstack/python-openstacksdk/tree/examples/network/security_group_rules.py openstacksdk-0.11.3/doc/source/user/usage.rst0000666000175100017510000000104513236151340021172 0ustar zuulzuul00000000000000===== Usage ===== To use `openstack.cloud` in a project: .. code-block:: python import openstack.cloud .. note:: API methods that return a description of an OpenStack resource (e.g., server instance, image, volume, etc.) do so using a `munch.Munch` object from the Munch library. `Munch` objects can be accessed using either dictionary or object notation (e.g., ``server.id``, ``image.name`` and ``server['id']``, ``image['name']``). .. autoclass:: openstack.cloud.OpenStackCloud :members: openstacksdk-0.11.3/doc/source/user/model.rst0000666000175100017510000002763313236151340021175 0ustar zuulzuul00000000000000========== Data Model ========== shade has a very strict policy of never breaking backwards compatibility. However, with the data structures returned from OpenStack, there are places where the resource structures from OpenStack are returned to the user somewhat directly, leaving a shade user open to changes/differences in result content. To combat that, shade 'normalizes' the return structure from OpenStack in many places, and the results of that normalization are listed below. Where shade performs normalization, a user can count on any fields declared in the docs as being completely safe to use - they are as much a part of shade's API contract as any other Python method. Some OpenStack objects allow for arbitrary attributes at the root of the object. shade will pass those through so as not to break anyone who may be counting on them, but as they are arbitrary shade can make no guarantees as to their existence. As part of normalization, shade will put any attribute from an OpenStack resource that is not in its data model contract into an attribute called 'properties'.
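As an illustration only — a standalone sketch of the idea, not shade's actual code, and with invented field names — the normalization behaves roughly like:

```python
def normalize(resource, contract_keys, strict=False):
    """Split a raw resource dict into contract fields plus 'properties'."""
    normalized = {key: resource.get(key) for key in contract_keys}
    # anything outside the contract lands in the properties dict
    extra = {k: v for k, v in resource.items() if k not in contract_keys}
    normalized['properties'] = extra
    if not strict:
        # non-strict mode also passes unknown keys through at the root
        normalized.update(extra)
    return normalized

raw = {'id': 'abc123', 'name': 'vm1', 'OS-EXT-STS:vm_state': 'active'}
strict_result = normalize(raw, ['id', 'name'], strict=True)
# in strict mode the unknown key is only reachable via 'properties'
print(strict_result['properties'])  # → {'OS-EXT-STS:vm_state': 'active'}
```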
The contents of properties are defined to be an arbitrary collection of key value pairs with no promises as to any particular key ever existing. If a user passes `strict=True` to the shade constructor, shade will not pass through arbitrary objects to the root of the resource, and will instead only put them in the properties dict. If a user is worried about accidentally writing code that depends on an attribute that is not part of the API contract, this can be a useful tool. Keep in mind all data can still be accessed via the properties dict, but any code touching anything in the properties dict should be aware that the keys found there are highly user/cloud specific. Any key that is transformed as part of the shade data model contract will not wind up with an entry in properties - only keys that are unknown. Location -------- A Location defines where a resource lives. It includes a cloud name, a region name, and an availability zone, as well as information about the project that owns the resource. The project information may contain a project id, or a combination of one or more of a project name with a domain name or id. If a project id is present, it should be considered correct. Some resources do not carry ownership information with them. For those, the project information will be filled in from the project the user currently has a token for. Some resources do not have information about availability zones, or may exist region wide. Those resources will have None as their availability zone. .. code-block:: python Location = dict( cloud=str(), region=str(), zone=str() or None, project=dict( id=str() or None, name=str() or None, domain_id=str() or None, domain_name=str() or None)) Resources ========= Flavor ------ A flavor for a Nova Server. .. 
code-block:: python Flavor = dict( location=Location(), id=str(), name=str(), is_public=bool(), is_disabled=bool(), ram=int(), vcpus=int(), disk=int(), ephemeral=int(), swap=int(), rxtx_factor=float(), extra_specs=dict(), properties=dict()) Flavor Access ------------- An access entry for a Nova Flavor. .. code-block:: python FlavorAccess = dict( flavor_id=str(), project_id=str()) Image ----- A Glance Image. .. code-block:: python Image = dict( location=Location(), id=str(), name=str(), min_ram=int(), min_disk=int(), size=int(), virtual_size=int(), container_format=str(), disk_format=str(), checksum=str(), created_at=str(), updated_at=str(), owner=str(), is_public=bool(), is_protected=bool(), visibility=str(), status=str(), locations=list(), direct_url=str() or None, tags=list(), properties=dict()) Keypair ------- A keypair for a Nova Server. .. code-block:: python Keypair = dict( location=Location(), name=str(), id=str(), public_key=str(), fingerprint=str(), type=str(), user_id=str(), private_key=str() or None, properties=dict()) Security Group -------------- A Security Group from either Nova or Neutron. .. code-block:: python SecurityGroup = dict( location=Location(), id=str(), name=str(), description=str(), security_group_rules=list(), properties=dict()) Security Group Rule ------------------- A Security Group Rule from either Nova or Neutron. .. code-block:: python SecurityGroupRule = dict( location=Location(), id=str(), direction=str(), # oneof('ingress', 'egress') ethertype=str(), port_range_min=int() or None, port_range_max=int() or None, protocol=str() or None, remote_ip_prefix=str() or None, security_group_id=str() or None, remote_group_id=str() or None, properties=dict()) Server ------ A Server from Nova. .. 
code-block:: python Server = dict( location=Location(), id=str(), name=str(), image=dict() or str(), flavor=dict(), volumes=list(), # Volume interface_ip=str(), has_config_drive=bool(), accessIPv4=str(), accessIPv6=str(), addresses=dict(), # string, list(Address) created=str(), key_name=str(), metadata=dict(), # string, string private_v4=str(), progress=int(), public_v4=str(), public_v6=str(), security_groups=list(), # SecurityGroup status=str(), updated=str(), user_id=str(), host_id=str() or None, power_state=str() or None, task_state=str() or None, vm_state=str() or None, launched_at=str() or None, terminated_at=str() or None, properties=dict()) ComputeLimits ------------- Limits and current usage for a project in Nova. .. code-block:: python ComputeLimits = dict( location=Location(), max_personality=int(), max_personality_size=int(), max_server_group_members=int(), max_server_groups=int(), max_server_meta=int(), max_total_cores=int(), max_total_instances=int(), max_total_keypairs=int(), max_total_ram_size=int(), total_cores_used=int(), total_instances_used=int(), total_ram_used=int(), total_server_groups_used=int(), properties=dict()) ComputeUsage ------------ Current usage for a project in Nova. .. code-block:: python ComputeUsage = dict( location=Location(), started_at=str(), stopped_at=str(), server_usages=list(), max_personality=int(), max_personality_size=int(), max_server_group_members=int(), max_server_groups=int(), max_server_meta=int(), max_total_cores=int(), max_total_instances=int(), max_total_keypairs=int(), max_total_ram_size=int(), total_cores_used=int(), total_hours=int(), total_instances_used=int(), total_local_gb_usage=int(), total_memory_mb_usage=int(), total_ram_used=int(), total_server_groups_used=int(), total_vcpus_usage=int(), properties=dict()) ServerUsage ----------- Current usage for a server in Nova. .. 
code-block:: python ServerUsage = dict( started_at=str(), ended_at=str(), flavor=str(), hours=int(), instance_id=str(), local_gb=int(), memory_mb=int(), name=str(), state=str(), uptime=int(), vcpus=int(), properties=dict()) Floating IP ----------- A Floating IP from Neutron or Nova. .. code-block:: python FloatingIP = dict( location=Location(), id=str(), description=str(), attached=bool(), fixed_ip_address=str() or None, floating_ip_address=str() or None, network=str() or None, port=str() or None, router=str(), status=str(), created_at=str() or None, updated_at=str() or None, revision_number=int() or None, properties=dict()) Volume ------ A volume from cinder. .. code-block:: python Volume = dict( location=Location(), id=str(), name=str(), description=str(), size=int(), attachments=list(), status=str(), migration_status=str() or None, host=str() or None, replication_driver=str() or None, replication_status=str() or None, replication_extended_status=str() or None, snapshot_id=str() or None, created_at=str(), updated_at=str() or None, source_volume_id=str() or None, consistencygroup_id=str() or None, volume_type=str() or None, metadata=dict(), is_bootable=bool(), is_encrypted=bool(), can_multiattach=bool(), properties=dict()) VolumeType ---------- A volume type from cinder. .. code-block:: python VolumeType = dict( location=Location(), id=str(), name=str(), description=str() or None, is_public=bool(), qos_specs_id=str() or None, extra_specs=dict(), properties=dict()) VolumeTypeAccess ---------------- A volume type access from cinder. .. code-block:: python VolumeTypeAccess = dict( location=Location(), volume_type_id=str(), project_id=str(), properties=dict()) ClusterTemplate --------------- A Cluster Template from magnum. .. 
code-block:: python ClusterTemplate = dict( location=Location(), apiserver_port=int(), cluster_distro=str(), coe=str(), created_at=str(), dns_nameserver=str(), docker_volume_size=int(), external_network_id=str(), fixed_network=str() or None, flavor_id=str(), http_proxy=str() or None, https_proxy=str() or None, id=str(), image_id=str(), insecure_registry=str(), is_public=bool(), is_registry_enabled=bool(), is_tls_disabled=bool(), keypair_id=str(), labels=dict(), master_flavor_id=str() or None, name=str(), network_driver=str(), no_proxy=str() or None, server_type=str(), updated_at=str() or None, volume_driver=str(), properties=dict()) MagnumService ------------- A Magnum Service from magnum. .. code-block:: python MagnumService = dict( location=Location(), binary=str(), created_at=str(), disabled_reason=str() or None, host=str(), id=str(), report_count=int(), state=str(), properties=dict()) Stack ----- A Stack from Heat. .. code-block:: python Stack = dict( location=Location(), id=str(), name=str(), created_at=str(), deleted_at=str(), updated_at=str(), description=str(), action=str(), identifier=str(), is_rollback_enabled=bool(), notification_topics=list(), outputs=list(), owner=str(), parameters=dict(), parent=str(), stack_user_project_id=str(), status=str(), status_reason=str(), tags=dict(), template_description=str(), timeout_mins=int(), properties=dict()) Identity Resources ================== Identity Resources are slightly different. They are global to a cloud, so location.availability_zone and location.region_name will always be None. If a deployer happens to deploy OpenStack in such a way that users and projects are not shared amongst regions, that necessitates treating each of those regions as separate clouds from shade's POV. The Identity Resources that are not Project do not exist within a Project, so all of the values in ``location.project`` will be None.
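Concretely, the Location attached to such an identity resource would look like the following (an illustrative instance of the Location structure defined above; the cloud name is made up):

```python
identity_location = dict(
    cloud='my-cloud',  # made-up cloud name
    region=None,       # identity resources are global to the cloud
    zone=None,
    project=dict(      # non-Project identity resources carry no project info
        id=None,
        name=None,
        domain_id=None,
        domain_name=None))
```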
Project ------- A Project from Keystone (or a tenant if Keystone v2). Location information for Project has some additional specific semantics. If the project has a parent project, that will be in ``location.project.id``, and if it doesn't, that should be ``None``. If the Project is associated with a domain, that will be in ``location.project.domain_id`` in addition to the normal ``domain_id`` regardless of the current user's token scope. .. code-block:: python Project = dict( location=Location(), id=str(), name=str(), description=str(), is_enabled=bool(), is_domain=bool(), domain_id=str(), properties=dict()) Role ---- A Role from Keystone. .. code-block:: python Role = dict( location=Location(), id=str(), name=str(), domain_id=str(), properties=dict()) openstacksdk-0.11.3/doc/source/user/resource.rst0000666000175100017510000000110313236151340021710 0ustar zuulzuul00000000000000**Note: This class is in the process of being applied as the new base class for resources around the OpenStack SDK. Once that has been completed, this module will drop the 2 suffix and be the only resource module.** Resource ======== .. automodule:: openstack.resource Components ---------- .. autoclass:: openstack.resource.Body :members: .. autoclass:: openstack.resource.Header :members: .. autoclass:: openstack.resource.URI :members: The Resource class ------------------ .. autoclass:: openstack.resource.Resource :members: :member-order: bysource openstacksdk-0.11.3/doc/source/user/multi-cloud-demo.rst0000666000175100017510000005304013236151364023256 0ustar zuulzuul00000000000000================ Multi-Cloud Demo ================ This document contains a presentation in `presentty`_ format. If you want to walk through it like a presentation, install `presentty` and run: .. code:: bash presentty doc/source/user/multi-cloud-demo.rst The content is hopefully helpful even if it's not being narrated, so it's being included in the `shade` docs. .. 
_presentty: https://pypi.python.org/pypi/presentty Using Multiple OpenStack Clouds Easily with Shade ================================================= Who am I? ========= Monty Taylor * OpenStack Infra Core * irc: mordred * twitter: @e_monty What are we going to talk about? ================================ `shade` * a task and end-user oriented Python library * abstracts deployment differences * designed for multi-cloud * simple to use * massive scale * optional advanced features to handle 20k servers a day * Initial logic/design extracted from nodepool * Librified to re-use in Ansible shade is Free Software ====================== * https://git.openstack.org/cgit/openstack-infra/shade * openstack-dev@lists.openstack.org * #openstack-shade on freenode This talk is Free Software, too =============================== * Written for presentty (https://pypi.python.org/pypi/presentty) * doc/source/multi-cloud-demo.rst * examples in doc/source/examples * Paths subject to change- this is the first presentation in tree! Complete Example ================ .. code:: python from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name) # Upload an image to the cloud image = cloud.create_image( 'devuan-jessie', filename='devuan-jessie.qcow2', wait=True) # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Let's Take a Few Steps Back =========================== Multi-cloud is easy, but you need to know a few things. 
* Terminology * Config * Shade API Cloud Terminology ================= Let's define a few terms, so that we can use them with ease: * `cloud` - logically related collection of services * `region` - completely independent subset of a given cloud * `patron` - human who has an account * `user` - account on a cloud * `project` - logical collection of cloud resources * `domain` - collection of users and projects Cloud Terminology Relationships =============================== * A `cloud` has one or more `regions` * A `patron` has one or more `users` * A `patron` has one or more `projects` * A `cloud` has one or more `domains` * In a `cloud` with one `domain` it is named "default" * Each `patron` may have their own `domain` * Each `user` is in one `domain` * Each `project` is in one `domain` * A `user` has one or more `roles` on one or more `projects` HTTP Sessions ============= * HTTP interactions are authenticated via keystone * Authenticating returns a `token` * An authenticated HTTP Session is shared across a `region` Cloud Regions ============= A `cloud region` is the basic unit of REST interaction. * A `cloud` has a `service catalog` * The `service catalog` is returned in the `token` * The `service catalog` lists `endpoint` for each `service` in each `region` * A `region` is completely autonomous Users, Projects and Domains =========================== In clouds with multiple domains, project and user names are only unique within a region. * Names require `domain` information for uniqueness. IDs do not. * Providing `domain` information when not needed is fine. * `project_name` requires `project_domain_name` or `project_domain_id` * `project_id` does not * `username` requires `user_domain_name` or `user_domain_id` * `user_id` does not Confused Yet? ============= Don't worry - you don't have to deal with most of that. 
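For the curious, the name-vs-id rules above can be sketched as a tiny checker (illustrative only, not shade's implementation):

```python
def needs_domain(auth):
    """Return which settings still need domain information under the
    rules above: names need a domain for uniqueness, ids do not."""
    missing = []
    if 'project_name' in auth and not (
            'project_domain_name' in auth or 'project_domain_id' in auth):
        missing.append('project_domain')
    if 'username' in auth and not (
            'user_domain_name' in auth or 'user_domain_id' in auth):
        missing.append('user_domain')
    return missing

# a project name needs domain info; ids stand alone
print(needs_domain({'project_name': 'demo', 'user_id': 'u1'}))  # → ['project_domain']
```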
Auth per cloud, select per region ================================= In general, the thing you need to know is: * Configure authentication per `cloud` * Select config to use by `cloud` and `region` clouds.yaml =========== Information about the clouds you want to connect to is stored in a file called `clouds.yaml`. `clouds.yaml` can be in your homedir: `~/.config/openstack/clouds.yaml` or system-wide: `/etc/openstack/clouds.yaml`. Information in your homedir, if it exists, takes precedence. Full docs on `clouds.yaml` are at https://docs.openstack.org/developer/os-client-config/ What about Mac and Windows? =========================== `USER_CONFIG_DIR` is different on Linux, OSX and Windows. * Linux: `~/.config/openstack` * OSX: `~/Library/Application Support/openstack` * Windows: `C:\\Users\\USERNAME\\AppData\\Local\\OpenStack\\openstack` `SITE_CONFIG_DIR` is different on Linux, OSX and Windows. * Linux: `/etc/openstack` * OSX: `/Library/Application Support/openstack` * Windows: `C:\\ProgramData\\OpenStack\\openstack` Config Terminology ================== For multi-cloud, think of two types: * `profile` - Facts about the `cloud` that are true for everyone * `cloud` - Information specific to a given `user` Apologies for the use of `cloud` twice. 
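One way to picture the two layers (a sketch of the idea, not os-client-config's actual code; real config is nested, this uses flat dicts): per-user `cloud` settings overlay the shared `profile` facts.

```python
def effective_cloud_config(profile, cloud):
    """Overlay user-specific `cloud` settings on shared `profile` facts."""
    merged = dict(profile)  # facts that are true for everyone
    merged.update(cloud)    # per-user information wins on conflict
    return merged

# made-up values for illustration
profile = {'auth_url': 'https://identity.example.com/v3',
           'identity_api_version': 3}
cloud = {'username': 'mordred', 'region_name': 'ams01'}
config = effective_cloud_config(profile, cloud)
```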
Environment Variables and Simple Usage ====================================== * Environment variables starting with `OS_` go into a cloud called `envvars` * If you only have one cloud, you don't have to specify it * `OS_CLOUD` and `OS_REGION_NAME` are default values for `cloud` and `region_name` TOO MUCH TALKING - NOT ENOUGH CODE ================================== basic clouds.yaml for the example code ====================================== Simple example of a clouds.yaml * Config for a named `cloud` "my-citycloud" * Reference a well-known "named" profile: `citycloud` * `os-client-config` has a built-in list of profiles at https://docs.openstack.org/developer/os-client-config/vendor-support.html * Vendor profiles contain various advanced config * `cloud` name can match `profile` name (using different names for clarity) .. code:: yaml clouds: my-citycloud: profile: citycloud auth: username: mordred project_id: 65222a4d09ea4c68934fa1028c77f394 user_domain_id: d0919bd5e8d74e49adf0e145807ffc38 project_domain_id: d0919bd5e8d74e49adf0e145807ffc38 Where's the password? secure.yaml =========== * Optional additional file just like `clouds.yaml` * Values overlaid on `clouds.yaml` * Useful if you want to protect secrets more stringently Example secure.yaml =================== * No, my password isn't XXXXXXXX * `cloud` name should match `clouds.yaml` * Optional - I actually keep mine in my `clouds.yaml` .. code:: yaml clouds: my-citycloud: auth: password: XXXXXXXX more clouds.yaml ================ More information can be provided. * Use v3 of the `identity` API - even if others are present * Use `https://image-ca-ymq-1.vexxhost.net/v2` for `image` API instead of what's in the catalog .. 
code:: yaml my-vexxhost: identity_api_version: 3 image_endpoint_override: https://image-ca-ymq-1.vexxhost.net/v2 profile: vexxhost auth: user_domain_id: default project_domain_id: default project_name: d8af8a8f-a573-48e6-898a-af333b970a2d username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1 Much more complex clouds.yaml example ===================================== * Not using a profile - all settings included * In the `ams01` `region` there are two networks with undiscoverable qualities * Each one is labeled here so choices can be made * Any of the settings can be specific to a `region` if needed * `region` settings override `cloud` settings * `cloud` does not support `floating-ips` .. code:: yaml my-internap: auth: auth_url: https://identity.api.cloud.iweb.com username: api-55f9a00fb2619 project_name: inap-17037 identity_api_version: 3 floating_ip_source: None regions: - name: ams01 values: networks: - name: inap-17037-WAN1654 routes_externally: true default_interface: true - name: inap-17037-LAN3631 routes_externally: false Complete Example Again ====================== .. code:: python from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name) # Upload an image to the cloud image = cloud.create_image( 'devuan-jessie', filename='devuan-jessie.qcow2', wait=True) # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Step By Step ============ Import the library ================== .. 
code:: python from openstack import cloud as openstack Logging ======= * `openstacksdk` uses standard python logging * ``openstack.enable_logging`` does easy defaults * Squelches some meaningless warnings * `debug` * Logs shade loggers at debug level * `http_debug` Implies `debug`, turns on HTTP tracing .. code:: python # Initialize and turn on debug logging openstack.enable_logging(debug=True) Example with Debug Logging ========================== * doc/source/examples/debug-logging.py .. code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') Example with HTTP Debug Logging =============================== * doc/source/examples/http-debug-logging.py .. code:: python from openstack import cloud as openstack openstack.enable_logging(http_debug=True) cloud = openstack.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') Cloud Regions ============= * `cloud` constructor needs `cloud` and `region_name` * `openstack.openstack_cloud` is a helper factory function .. code:: python for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name) Upload an Image =============== * Picks the correct upload mechanism * **SUGGESTION** Always upload your own base images .. code:: python # Upload an image to the cloud image = cloud.create_image( 'devuan-jessie', filename='devuan-jessie.qcow2', wait=True) Always Upload an Image ====================== Ok. You don't have to. But, for multi-cloud... 
* Images with same content are named differently on different clouds * Images with same name on different clouds can have different content * Upload your own to all clouds, both problems go away * Download from OS vendor or build with `diskimage-builder` Find a flavor ============= * Flavors are all named differently on clouds * Flavors can be found via RAM * `get_flavor_by_ram` finds the smallest matching flavor .. code:: python # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) Create a server =============== * my-vexxhost * Boot server * Wait for `status==ACTIVE` * my-internap * Boot server on network `inap-17037-WAN1654` * Wait for `status==ACTIVE` * my-citycloud * Boot server * Wait for `status==ACTIVE` * Find the `port` for the `fixed_ip` for `server` * Create `floating-ip` on that `port` * Wait for `floating-ip` to attach .. code:: python # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Wow. We didn't even deploy Wordpress! ===================================== Image and Flavor by Name or ID ============================== * Pass string to image/flavor * Image/Flavor will be found by name or ID * Common pattern * doc/source/examples/create-server-name-or-id.py .. code:: python from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name, image, flavor in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', 'v1-standard-4'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '4C-4GB-100GB'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. 
server = cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) print(server.name) print(server['name']) cloud.pprint(server) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) cloud.pprint method was just added this morning =============================================== Delete Servers ============== * `delete_ips` Delete any `floating_ips` the server may have .. code:: python cloud.delete_server('my-server', wait=True, delete_ips=True) Image and Flavor by Dict ======================== * Pass dict to image/flavor * If you know if the value is Name or ID * Common pattern * doc/source/examples/create-server-dict.py .. code:: python from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name, image, flavor_id in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', '5cf64088-893b-46b5-9bb1-ee020277635d'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '0dab10b5-42a2-438e-be7b-505741a7ffcc'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. server = cloud.create_server( 'my-server', image=image, flavor=dict(id=flavor_id), wait=True, auto_ip=True) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) Munch Objects ============= * Behave like a dict and an object * doc/source/examples/munch-dict-object.py .. 
code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='zetta', region_name='no-osl1') image = cloud.get_image('Ubuntu 14.04 (AMD64) [Local Storage]') print(image.name) print(image['name']) API Organized by Logical Resource ================================= * list_servers * search_servers * get_server * create_server * delete_server * update_server For other things, it's still {verb}_{noun} * attach_volume * wait_for_server * add_auto_ip Cleanup Script ============== * Sometimes my examples had bugs * doc/source/examples/cleanup-servers.py .. code:: python from openstack import cloud as openstack # Initialize and turn on debug logging openstack.enable_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name) for server in cloud.search_servers('my-server'): cloud.delete_server(server, wait=True, delete_ips=True) Normalization ============= * https://docs.openstack.org/developer/shade/model.html#image * doc/source/examples/normalization.py .. code:: python from openstack import cloud as openstack openstack.enable_logging() cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack') image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) Strict Normalized Results ========================= * Return only the declared model * doc/source/examples/strict-mode.py .. code:: python from openstack import cloud as openstack openstack.enable_logging() cloud = openstack.openstack_cloud( cloud='fuga', region_name='cystack', strict=True) image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) How Did I Find the Image Name for the Last Example? 
=================================================== * I often make stupid little utility scripts * doc/source/examples/find-an-image.py .. code:: python from openstack import cloud as openstack openstack.enable_logging() cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack') cloud.pprint([ image for image in cloud.list_images() if 'ubuntu' in image.name.lower()]) Added / Modified Information ============================ * Servers need more extra help * Fetch addresses dict from neutron * Figure out which IPs are good * `detailed` - defaults to True, add everything * `bare` - no extra calls - don't even fix broken things * `bare` is still normalized * doc/source/examples/server-information.py .. code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='my-citycloud', region_name='Buf1') try: server = cloud.create_server( 'my-server', image='Ubuntu 16.04 Xenial Xerus', flavor=dict(id='0dab10b5-42a2-438e-be7b-505741a7ffcc'), wait=True, auto_ip=True) print("\n\nFull Server\n\n") cloud.pprint(server) print("\n\nTurn Detailed Off\n\n") cloud.pprint(cloud.get_server('my-server', detailed=False)) print("\n\nBare Server\n\n") cloud.pprint(cloud.get_server('my-server', bare=True)) finally: # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) Exceptions ========== * All shade exceptions are subclasses of `OpenStackCloudException` * Direct REST calls throw `OpenStackCloudHTTPError` * `OpenStackCloudHTTPError` subclasses `OpenStackCloudException` and `requests.exceptions.HTTPError` * `OpenStackCloudURINotFound` for 404 * `OpenStackCloudBadRequest` for 400 User Agent Info =============== * Set `app_name` and `app_version` for User Agents * (sssh ... `region_name` is optional if the cloud has one region) * doc/source/examples/user-agent.py .. 
code:: python from openstack import cloud as openstack openstack.enable_logging(http_debug=True) cloud = openstack.openstack_cloud( cloud='datacentred', app_name='AmazingApp', app_version='1.0') cloud.list_networks() Uploading Large Objects ======================= * swift has a maximum object size * Large Objects are uploaded specially * shade figures this out and does it * multi-threaded * doc/source/examples/upload-object.py .. code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d') cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') Uploading Large Objects ======================= * Default max_file_size is 5G * This is a conference demo * Let's force a segment_size * One MILLION bytes * doc/source/examples/upload-object.py .. code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d', segment_size=1000000) cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') Service Conditionals ==================== .. code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='kiss', region_name='region1') print(cloud.has_service('network')) print(cloud.has_service('container-orchestration')) Service Conditional Overrides ============================= * Sometimes clouds are weird and figuring that out won't work .. code:: python from openstack import cloud as openstack openstack.enable_logging(debug=True) cloud = openstack.openstack_cloud(cloud='rax', region_name='DFW') print(cloud.has_service('network')) .. 
code:: yaml clouds: rax: profile: rackspace auth: username: mordred project_id: 245018 # This is already in profile: rackspace has_network: false Coming Soon =========== * Completion of RESTification * Full version discovery support * Multi-cloud facade layer * Microversion support (talk tomorrow) * Completion of caching tier (talk tomorrow) * All of you helping hacking on shade!!! (we're friendly) openstacksdk-0.11.3/doc/source/user/microversions.rst0000666000175100017510000001007513236151340022773 0ustar zuulzuul00000000000000============= Microversions ============= As shade rolls out support for consuming microversions, it will do so on a call by call basis as needed. Just like with major versions, shade should have logic to handle each microversion for a given REST call it makes, with the following rules in mind: * If an activity shade performs can be done differently or more efficiently with a new microversion, the support should be added to openstack.cloud. * shade should always attempt to use the latest microversion it is aware of for a given call, unless a microversion removes important data. * Microversion selection should under no circumstances be exposed to the user, except in the case of missing feature error messages. * If a feature is only exposed for a given microversion and cannot be simulated for older clouds without that microversion, it is ok to add it to shade but a clear error message should be given to the user that the given feature is not available on their cloud. (A message such as "This cloud only supports a maximum microversion of XXX for service YYY and this feature only exists on clouds with microversion ZZZ. Please contact your cloud provider for information about when this feature might be available") * When adding a feature to shade that only exists behind a new microversion, every effort should be made to figure out how to provide the same functionality if at all possible, even if doing so is inefficient. 
If an inefficient workaround is employed, a warning should be provided to the user. (the user's workaround to skip the inefficient behavior would be to stop using that shade API call) * If shade is aware of logic for more than one microversion, it should always attempt to use the latest version available for the service for that call. * Objects returned from shade should always go through normalization and thus should always conform to shade's documented data model and should never look different to the shade user regardless of the microversion used for the REST call. * If a microversion adds new fields to an object, those fields should be added to shade's data model contract for that object and the data should either be filled in by performing additional REST calls if the data is available that way, or the field should have a default value of None which the user can be expected to test for when attempting to use the new value. * If a microversion removes fields from an object that are part of shade's existing data model contract, care should be taken to not use the new microversion for that call unless forced to by lack of availability of the old microversion on the cloud in question. In the case where an old microversion is no longer available, care must be taken to either find the data from another source and fill it in, or to put a value of None into the field and document for the user that on some clouds the value may not exist. * If a microversion removes a field and the outcome is particularly intractable and impossible to work around without fundamentally breaking shade's users, an issue should be raised with the service team in question. Hopefully a resolution can be found during the period while clouds still have the old microversion. * As new calls or objects are added to shade, it is important to check in with the service team in question on the expected stability of the object.
If there are known changes expected in the future, even if they may be a few years off, shade should take care to not add commitments to its data model for those fields/features. It is ok for shade to not have something. .. note:: shade does not currently have any sort of "experimental" opt-in API that would allow shade to expose things to a user that may not be supportable under shade's normal compatibility contract. If a conflict arises in the future where there is a strong desire for a feature but also a lack of certainty about its stability over time, an experimental API may want to be explored ... but concrete use cases should arise before such a thing is started. openstacksdk-0.11.3/doc/source/user/service_filter.rst0000666000175100017510000000026613236151340023077 0ustar zuulzuul00000000000000ServiceFilter ============== .. automodule:: openstack.service_filter ServiceFilter object -------------------- .. autoclass:: openstack.service_filter.ServiceFilter :members: openstacksdk-0.11.3/doc/source/user/connection.rst0000666000175100017510000000076113236151364022237 0ustar zuulzuul00000000000000Connection ========== .. automodule:: openstack.connection from_config ----------- .. autofunction:: openstack.connection.from_config Connection Object ----------------- .. autoclass:: openstack.connection.Connection :members: Transitioning from Profile -------------------------- Support exists for users coming from older releases of OpenStack SDK who have been using the :class:`~openstack.profile.Profile` interface. .. toctree:: :maxdepth: 1 transition_from_profile openstacksdk-0.11.3/doc/source/user/transition_from_profile.rst0000666000175100017510000001730013236151340025024 0ustar zuulzuul00000000000000Transition from Profile ======================= .. note:: This section describes migrating code from a previous interface of python-openstacksdk and can be ignored by people writing new code.
If you have code that currently uses the :class:`~openstack.profile.Profile` object and/or an ``authenticator`` instance from an object based on ``openstack.auth.base.BaseAuthPlugin``, that code should be updated to use the :class:`~openstack.config.cloud_region.CloudRegion` object instead. .. important:: :class:`~openstack.profile.Profile` is going away. Existing code using it should be migrated as soon as possible. Writing Code that Works with Both --------------------------------- These examples should all work with both the old and new interface, with one caveat. With the old interface, the ``CloudConfig`` object comes from the ``os-client-config`` library, and in the new interface that has been moved into the SDK. In order to write code that works with both the old and new interfaces, use the following code to import the config namespace: .. code-block:: python try: from openstack import config as occ except ImportError: from os_client_config import config as occ The examples will assume that the config module has been imported in that manner. .. note:: Yes, there is an easier and less verbose way to do all of these. These are verbose to handle both the old and new interfaces in the same codebase. Replacing authenticator ----------------------- There is no direct replacement for ``openstack.auth.base.BaseAuthPlugin``. ``python-openstacksdk`` uses the `keystoneauth`_ library for authentication and HTTP interactions. `keystoneauth`_ has `auth plugins`_ that can be used to control how authentication is done. The ``auth_type`` config parameter can be set to choose the correct authentication method to be used. Replacing Profile ----------------- The right way to replace the use of ``openstack.profile.Profile`` depends a bit on what you're trying to accomplish. 
Common patterns are listed below, but in general the approach is either to pass a cloud name to the `openstack.connection.Connection` constructor, or to construct an `openstack.config.cloud_region.CloudRegion` object and pass it to the constructor. All of the examples on this page assume that you want to support old and new interfaces simultaneously. There are easier and less verbose versions of each that are available if you can just make a clean transition. Getting a Connection to a named cloud from clouds.yaml ------------------------------------------------------ If what you want is to construct an `openstack.connection.Connection` based on parameters configured in a ``clouds.yaml`` file, or from environment variables: .. code-block:: python from openstack import connection conn = connection.from_config(cloud_name='name-of-cloud-you-want') Getting a Connection from python arguments avoiding clouds.yaml --------------------------------------------------------------- If, on the other hand, you want to construct an `openstack.connection.Connection`, but are in a context where reading config from a clouds.yaml file is undesirable, such as inside of a Service: * create an `openstack.config.loader.OpenStackConfig` object, telling it to not load yaml files. Optionally pass an ``app_name`` and ``app_version`` which will be added to user-agent strings. * get an `openstack.config.cloud_region.CloudRegion` object from it * get an `openstack.connection.Connection` ..
code-block:: python try: from openstack import config as occ except ImportError: from os_client_config import config as occ from openstack import connection loader = occ.OpenStackConfig( load_yaml_files=False, app_name='spectacular-app', app_version='1.0') cloud_region = loader.get_one_cloud( region_name='my-awesome-region', auth_type='password', auth=dict( auth_url='https://auth.example.com', username='amazing-user', user_domain_name='example-domain', project_name='astounding-project', user_project_name='example-domain', password='super-secret-password', )) conn = connection.from_config(cloud_config=cloud_region) .. note:: app_name and app_version are completely optional, and auth_type defaults to 'password'. They are shown here for clarity as to where they should go if they want to be set. Getting a Connection from python arguments and optionally clouds.yaml --------------------------------------------------------------------- If you want to make a connection from python arguments and want to allow one of them to optionally be ``cloud`` to allow selection of a named cloud, it's essentially the same as the previous example, except without ``load_yaml_files=False``. .. code-block:: python try: from openstack import config as occ except ImportError: from os_client_config import config as occ from openstack import connection loader = occ.OpenStackConfig( app_name='spectacular-app', app_version='1.0') cloud_region = loader.get_one_cloud( region_name='my-awesome-region', auth_type='password', auth=dict( auth_url='https://auth.example.com', username='amazing-user', user_domain_name='example-domain', project_name='astounding-project', user_project_name='example-domain', password='super-secret-password', )) conn = connection.from_config(cloud_config=cloud_region) Parameters to get_one_cloud --------------------------- The most important things to note are: * ``auth_type`` specifies which kind of authentication plugin to use. 
It controls how authentication is done, as well as what parameters are required. * ``auth`` is a dictionary containing the parameters needed by the auth plugin. The most common information it needs are user, project, domain, auth_url and password. * The rest of the keyword arguments to ``openstack.config.loader.OpenStackConfig.get_one_cloud`` are either parameters needed by the `keystoneauth Session`_ object, which control how HTTP connections are made, or parameters needed by the `keystoneauth Adapter`_ object, which control how services are found in the Keystone Catalog. For `keystoneauth Adapter`_ parameters, since there is one `openstack.connection.Connection` object but many services, per-service parameters are formed by using the official ``service_type`` of the service in question. For instance, to override the endpoint for the ``compute`` service, the parameter ``compute_endpoint_override`` would be used. ``region_name`` in ``openstack.profile.Profile`` was a per-service parameter. This is no longer a valid concept. An `openstack.connection.Connection` is a connection to a region of a cloud. If you are in an extreme situation where you have one service in one region and a different service in a different region, you must use two different `openstack.connection.Connection` objects. .. note:: service_type, although a parameter for keystoneauth1.adapter.Adapter, is not a valid parameter for get_one_cloud. service_type is the key by which services are referred, so saying 'compute_service_type="henry"' doesn't have any meaning. .. _keystoneauth: https://docs.openstack.org/keystoneauth/latest/ .. _auth plugins: https://docs.openstack.org/keystoneauth/latest/authentication-plugins.html .. _keystoneauth Adapter: https://docs.openstack.org/keystoneauth/latest/api/keystoneauth1.html#keystoneauth1.adapter.Adapter .. 
_keystoneauth Session: https://docs.openstack.org/keystoneauth/latest/api/keystoneauth1.html#keystoneauth1.session.Session openstacksdk-0.11.3/doc/source/user/config/0000775000175100017510000000000013236151501020576 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/config/index.rst0000666000175100017510000000026413236151340022444 0ustar zuulzuul00000000000000======================== Using os-client-config ======================== .. toctree:: :maxdepth: 2 configuration using vendor-support network-config reference openstacksdk-0.11.3/doc/source/user/config/configuration.rst0000666000175100017510000002304013236151340024201 0ustar zuulzuul00000000000000.. _openstack-config: ======================================== Configuring OpenStack SDK Applications ======================================== .. _config-environment-variables: Environment Variables --------------------- `openstacksdk` honors all of the normal `OS_*` variables. It does not provide backwards compatibility to service-specific variables such as `NOVA_USERNAME`. If you have OpenStack environment variables set, `openstacksdk` will produce a cloud config object named `envvars` containing your values from the environment. If you don't like the name `envvars`, that's ok, you can override it by setting `OS_CLOUD_NAME`. Service specific settings, like the nova service type, are set with the default service type as a prefix. For instance, to set a special service_type for trove set .. code-block:: bash export OS_DATABASE_SERVICE_TYPE=rax:database .. _config-clouds-yaml: Config Files ------------ `openstacksdk` will look for a file called `clouds.yaml` in the following locations: * Current Directory * ~/.config/openstack * /etc/openstack The first file found wins. You can also set the environment variable `OS_CLIENT_CONFIG_FILE` to an absolute path of a file to look for and that location will be inserted at the front of the file search list. 
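The search order described above can be sketched in a few lines of Python. This illustrates the documented behavior only, not the library's actual implementation, and ``candidate_config_files`` is an invented helper name:

```python
import os

def candidate_config_files(environ=None):
    """Illustrate the documented clouds.yaml search order."""
    environ = os.environ if environ is None else environ
    paths = [
        os.path.join(os.getcwd(), 'clouds.yaml'),          # current directory
        os.path.expanduser('~/.config/openstack/clouds.yaml'),
        '/etc/openstack/clouds.yaml',
    ]
    # OS_CLIENT_CONFIG_FILE, if set, is inserted at the front of the
    # search list; the first file found wins.
    if environ.get('OS_CLIENT_CONFIG_FILE'):
        paths.insert(0, environ['OS_CLIENT_CONFIG_FILE'])
    return paths
```

Because the first file found wins, a ``clouds.yaml`` in the current directory shadows the per-user and system-wide copies unless the environment variable override is set.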
The keys are all of the keys you'd expect from `OS_*` - except lower case and without the OS prefix. So, region name is set with `region_name`. Service specific settings, like the nova service type, are set with the default service type as a prefix. For instance, to set a special service_type for trove (because you're using Rackspace) set: .. code-block:: yaml database_service_type: 'rax:database' Site Specific File Locations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In addition to `~/.config/openstack` and `/etc/openstack` - some platforms have other locations they like to put things. `openstacksdk` will also look in an OS specific config dir * `USER_CONFIG_DIR` * `SITE_CONFIG_DIR` `USER_CONFIG_DIR` is different on Linux, OSX and Windows. * Linux: `~/.config/openstack` * OSX: `~/Library/Application Support/openstack` * Windows: `C:\\Users\\USERNAME\\AppData\\Local\\OpenStack\\openstack` `SITE_CONFIG_DIR` is different on Linux, OSX and Windows. * Linux: `/etc/openstack` * OSX: `/Library/Application Support/openstack` * Windows: `C:\\ProgramData\\OpenStack\\openstack` An example config file is probably helpful: .. code-block:: yaml clouds: mtvexx: profile: vexxhost auth: username: mordred@inaugust.com password: XXXXXXXXX project_name: mordred@inaugust.com region_name: ca-ymq-1 dns_api_version: 1 mordred: region_name: RegionOne auth: username: 'mordred' password: XXXXXXX project_name: 'shade' auth_url: 'https://montytaylor-sjc.openstack.blueboxgrid.com:5001/v2.0' infra: profile: rackspace auth: username: openstackci password: XXXXXXXX project_id: 610275 regions: - DFW - ORD - IAD You may note a few things. First, since `auth_url` settings are silly and embarrassingly ugly, known cloud vendor profile information is included and may be referenced by name. One of the benefits of that is that `auth_url` isn't the only thing the vendor defaults contain. 
For instance, since Rackspace lists `rax:database` as the service type for trove, `openstacksdk` knows that so that you don't have to. In case the cloud vendor profile is not available, you can provide one called `clouds-public.yaml`, following the same location rules previously mentioned for the config files. `regions` can be a list of regions. When you call `get_all_clouds`, you'll get a cloud config object for each cloud/region combo. As seen with `dns_service_type`, any setting that makes sense to be per-service, like `service_type` or `endpoint` or `api_version` can be set by prefixing the setting with the default service type. That might strike you funny when setting `service_type` and it does me too - but that's just the world we live in. Auth Settings ------------- Keystone has auth plugins - which means it's not possible to know ahead of time which auth settings are needed. `openstacksdk` sets the default plugin type to `password`, which is what things all were before plugins came about. In order to facilitate validation of values, all of the parameters that exist as a result of a chosen plugin need to go into the auth dict. For password auth, this includes `auth_url`, `username` and `password` as well as anything related to domains, projects and trusts. Splitting Secrets ----------------- In some scenarios, such as configuration management controlled environments, it might be easier to have secrets in one file and non-secrets in another. This is fully supported via an optional file `secure.yaml` which follows all the same location rules as `clouds.yaml`. It can contain anything you put in `clouds.yaml` and will take precedence over anything in the `clouds.yaml` file. .. 
code-block:: yaml # clouds.yaml clouds: internap: profile: internap auth: username: api-55f9a00fb2619 project_name: inap-17037 regions: - ams01 - nyj01 # secure.yaml clouds: internap: auth: password: XXXXXXXXXXXXXXXXX SSL Settings ------------ When the access to a cloud is done via a secure connection, `openstacksdk` will always verify the SSL cert by default. This can be disabled by setting `verify` to `False`. In case the cert is signed by an unknown CA, a specific cacert can be provided via `cacert`. **WARNING:** `verify` will always have precedence over `cacert`, so when setting a CA cert but disabling `verify`, the cloud cert will never be validated. Client certs are also configurable. `cert` will be the client cert file location. In case the cert key is not included within the client cert file, its file location needs to be set via `key`. .. code-block:: yaml # clouds.yaml clouds: secure: auth: ... key: /home/myhome/client-cert.key cert: /home/myhome/client-cert.crt cacert: /home/myhome/ca.crt insecure: auth: ... verify: False Cache Settings -------------- Accessing a cloud is often expensive, so it's quite common to want to do some client-side caching of those operations. To facilitate that, `openstacksdk` understands passing through cache settings to dogpile.cache, with the following behaviors: * Listing no config settings means you get a null cache. * `cache.expiration_time` and nothing else gets you memory cache. * Otherwise, `cache.class` and `cache.arguments` are passed in. Different cloud behaviors are also differently expensive to deal with. If you want to get really crazy and tweak stuff, you can specify different expiration times on a per-resource basis by passing values, in seconds, to an expiration mapping keyed on the singular name of the resource. A value of `-1` indicates that the resource should never expire.
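The per-resource expiration lookup described above amounts to something like the following sketch (``resource_expiration`` is an invented name illustrating the documented semantics, not the library's code):

```python
def resource_expiration(cache_config, resource):
    """Resolve the cache expiration, in seconds, for one resource.

    Per-resource values in the ``expiration`` mapping win; otherwise the
    global ``expiration_time`` applies. A value of -1 means the resource
    never expires.
    """
    per_resource = cache_config.get('expiration', {})
    return per_resource.get(resource, cache_config.get('expiration_time'))

# Mirrors the yaml example below: servers go stale fast, flavors never do.
cache_config = {
    'expiration_time': 3600,
    'expiration': {'server': 5, 'flavor': -1},
}
```

Resources not named in the mapping, such as ``image`` here, simply fall back to the global ``expiration_time``.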
`openstacksdk` does not actually cache anything itself, but it collects and presents the cache information so that your various applications that are connecting to OpenStack can share a cache should you desire. .. code-block:: yaml cache: class: dogpile.cache.pylibmc expiration_time: 3600 arguments: url: - 127.0.0.1 expiration: server: 5 flavor: -1 clouds: mtvexx: profile: vexxhost auth: username: mordred@inaugust.com password: XXXXXXXXX project_name: mordred@inaugust.com region_name: ca-ymq-1 dns_api_version: 1 IPv6 ---- IPv6 is the future, and you should always use it if your cloud supports it and if your local network supports it. Both of those are easily detectable and all friendly software should do the right thing. However, sometimes you might exist in a location where you have an IPv6 stack, but something evil has caused it to not actually function. In that case, there is a config option you can set to unbreak you `force_ipv4`, or `OS_FORCE_IPV4` boolean environment variable. .. code-block:: yaml client: force_ipv4: true clouds: mtvexx: profile: vexxhost auth: username: mordred@inaugust.com password: XXXXXXXXX project_name: mordred@inaugust.com region_name: ca-ymq-1 dns_api_version: 1 monty: profile: rax auth: username: mordred@inaugust.com password: XXXXXXXXX project_name: mordred@inaugust.com region_name: DFW The above snippet will tell client programs to prefer returning an IPv4 address. Per-region settings ------------------- Sometimes you have a cloud provider that has config that is common to the cloud, but also with some things you might want to express on a per-region basis. For instance, Internap provides a public and private network specific to the user in each region, and putting the values of those networks into config can make consuming programs more efficient. To support this, the region list can actually be a list of dicts, and any setting that can be set at the cloud level can be overridden for that region. .. 
code-block:: yaml clouds: internap: profile: internap auth: password: XXXXXXXXXXXXXXXXX username: api-55f9a00fb2619 project_name: inap-17037 regions: - name: ams01 values: networks: - name: inap-17037-WAN1654 routes_externally: true - name: inap-17037-LAN6745 - name: nyj01 values: networks: - name: inap-17037-WAN1654 routes_externally: true - name: inap-17037-LAN6745 openstacksdk-0.11.3/doc/source/user/config/reference.rst0000666000175100017510000000031013236151340023263 0ustar zuulzuul00000000000000============= API Reference ============= .. module:: openstack.config :synopsis: OpenStack client configuration .. autoclass:: openstack.config.OpenStackConfig :members: :inherited-members: openstacksdk-0.11.3/doc/source/user/config/network-config.rst0000666000175100017510000000556113236151340024276 0ustar zuulzuul00000000000000============== Network Config ============== There are several different qualities that networks in OpenStack might have that might not be able to be automatically inferred from the available metadata. To help users navigate more complex setups, `os-client-config` allows configuring a list of network metadata. .. code-block:: yaml clouds: amazing: networks: - name: blue routes_externally: true - name: purple routes_externally: true default_interface: true - name: green routes_externally: false - name: yellow routes_externally: false nat_destination: true - name: chartreuse routes_externally: false routes_ipv6_externally: true - name: aubergine routes_ipv4_externally: false routes_ipv6_externally: true Every entry must have a name field, which can hold either the name or the id of the network. `routes_externally` is a boolean field that labels the network as handling north/south traffic off of the cloud. In a public cloud this might be thought of as the "public" network, but in private clouds it's possible it might be an RFC1918 address. In either case, it provides IPs to servers that things not on the cloud can use.
This value defaults to `false`, which indicates only servers on the same network can talk to it. `routes_ipv4_externally` and `routes_ipv6_externally` are boolean fields to help handle `routes_externally` in the case where a network has a split stack with different values for IPv4 and IPv6. Either entry, if not given, defaults to the value of `routes_externally`. `default_interface` is a boolean field that indicates that the network is the one that programs should use. It defaults to false. An example of needing to use this value is a cloud with two private networks, and where a user is running ansible in one of the servers to talk to other servers on the private network. Because both networks are private, there would otherwise be no way to determine which one should be used for the traffic. There can only be one `default_interface` per cloud. `nat_destination` is a boolean field that indicates which network floating ips should be attached to. It defaults to false. Normally this can be inferred by looking for a network that has subnets that have a gateway_ip. But it's possible to have more than one network that satisfies that condition, so the user might want to tell programs which one to pick. There can be only one `nat_destination` per cloud. `nat_source` is a boolean field that indicates which network floating ips should be requested from. It defaults to false. Normally this can be inferred by looking for a network that is attached to a router. But it's possible to have more than one network that satisfies that condition, so the user might want to tell programs which one to pick. There can be only one `nat_source` per cloud. openstacksdk-0.11.3/doc/source/user/config/using.rst0000666000175100017510000000270713236151340022466 0ustar zuulzuul00000000000000======================================== Using openstack.config in an Application ======================================== Usage ----- The simplest and least useful thing you can do is: .. 
code-block:: console python -m openstack.config.loader which will print out whatever it finds for your config. If you want to use it from python, which is much more likely what you want to do, you can do things like the following: Get a named cloud. .. code-block:: python import openstack.config cloud_region = openstack.config.OpenStackConfig().get_one( 'internap', region_name='ams01') print(cloud_region.name, cloud_region.region, cloud_region.config) Or, get all of the clouds. .. code-block:: python import openstack.config cloud_regions = openstack.config.OpenStackConfig().get_all() for cloud_region in cloud_regions: print(cloud_region.name, cloud_region.region, cloud_region.config) argparse -------- If you're using `openstack.config` from a program that wants to process command line options, there is a registration function to register the arguments that both `openstack.config` and keystoneauth know how to deal with - as well as a consumption argument. .. code-block:: python import argparse import sys import openstack.config config = openstack.config.OpenStackConfig() parser = argparse.ArgumentParser() config.register_argparse_arguments(parser, sys.argv) options = parser.parse_args() cloud_region = config.get_one(argparse=options) openstacksdk-0.11.3/doc/source/user/config/vendor-support.rst0000666000175100017510000001732013236151340024345 0ustar zuulzuul00000000000000============== Vendor Support ============== OpenStack presents deployers with many options, some of which can expose differences to end users. `os-client-config` tries its best to collect information about various things a user would need to know. The following is a text representation of the vendor related defaults `os-client-config` knows about. Default Values -------------- These are the default behaviors unless a cloud is configured differently.
* Identity uses `password` authentication * Identity API Version is 2 * Image API Version is 2 * Volume API Version is 2 * Images must be in `qcow2` format * Images are uploaded using PUT interface * Public IPv4 is directly routable via DHCP from Neutron * IPv6 is not provided * Floating IPs are not required * Floating IPs are provided by Neutron * Security groups are provided by Neutron * Vendor specific agents are not used auro ---- https://api.auro.io:5000/v2.0 ============== ================ Region Name Location ============== ================ van1 Vancouver, BC ============== ================ * Public IPv4 is provided via NAT with Neutron Floating IP betacloud --------- https://api-1.betacloud.io:5000 ============== ================== Region Name Location ============== ================== betacloud-1 Nuremberg, Germany ============== ================== * Identity API Version is 3 * Images must be in `raw` format * Public IPv4 is provided via NAT with Neutron Floating IP * Volume API Version is 3 catalyst -------- https://api.cloud.catalyst.net.nz:5000/v2.0 ============== ================ Region Name Location ============== ================ nz-por-1 Porirua, NZ nz_wlg_2 Wellington, NZ ============== ================ * Image API Version is 1 * Images must be in `raw` format * Volume API Version is 1 citycloud --------- https://identity1.citycloud.com:5000/v3/ ============== ================ Region Name Location ============== ================ Buf1 Buffalo, NY Fra1 Frankfurt, DE Kna1 Karlskrona, SE La1 Los Angeles, CA Lon1 London, UK Sto2 Stockholm, SE ============== ================ * Identity API Version is 3 * Public IPv4 is provided via NAT with Neutron Floating IP * Volume API Version is 1 conoha ------ https://identity.%(region_name)s.conoha.io ============== ================ Region Name Location ============== ================ tyo1 Tokyo, JP sin1 Singapore sjc1 San Jose, CA ============== ================ * Image upload is not supported datacentred 
----------- https://compute.datacentred.io:5000 ============== ================ Region Name Location ============== ================ sal01 Manchester, UK ============== ================ * Image API Version is 1 dreamcompute ------------ https://iad2.dream.io:5000 ============== ================ Region Name Location ============== ================ RegionOne Ashburn, VA ============== ================ * Identity API Version is 3 * Images must be in `raw` format * IPv6 is provided to every server dreamhost --------- Deprecated, please use dreamcompute https://keystone.dream.io/v2.0 ============== ================ Region Name Location ============== ================ RegionOne Ashburn, VA ============== ================ * Images must be in `raw` format * Public IPv4 is provided via NAT with Neutron Floating IP * IPv6 is provided to every server otc --- https://iam.%(region_name)s.otc.t-systems.com/v3 ============== ================ Region Name Location ============== ================ eu-de Germany ============== ================ * Identity API Version is 3 * Images must be in `vhd` format * Public IPv4 is provided via NAT with Neutron Floating IP elastx ------ https://ops.elastx.net:5000/v2.0 ============== ================ Region Name Location ============== ================ regionOne Stockholm, SE ============== ================ * Public IPv4 is provided via NAT with Neutron Floating IP entercloudsuite --------------- https://api.entercloudsuite.com/v2.0 ============== ================ Region Name Location ============== ================ nl-ams1 Amsterdam, NL it-mil1 Milan, IT de-fra1 Frankfurt, DE ============== ================ * Image API Version is 1 * Volume API Version is 1 fuga ---- https://identity.api.fuga.io:5000 ============== ================ Region Name Location ============== ================ cystack Netherlands ============== ================ * Identity API Version is 3 * Volume API Version is 3 internap -------- https://identity.api.cloud.iweb.com/v2.0 
============== ================ Region Name Location ============== ================ ams01 Amsterdam, NL da01 Dallas, TX nyj01 New York, NY sin01 Singapore sjc01 San Jose, CA ============== ================ * Floating IPs are not supported ovh --- https://auth.cloud.ovh.net/v2.0 ============== ================ Region Name Location ============== ================ BHS1 Beauharnois, QC SBG1 Strasbourg, FR GRA1 Gravelines, FR ============== ================ * Images may be in `raw` format. The `qcow2` default is also supported * Floating IPs are not supported rackspace --------- https://identity.api.rackspacecloud.com/v2.0/ ============== ================ Region Name Location ============== ================ DFW Dallas, TX HKG Hong Kong IAD Washington, D.C. LON London, UK ORD Chicago, IL SYD Sydney, NSW ============== ================ * Database Service Type is `rax:database` * Compute Service Name is `cloudServersOpenStack` * Images must be in `vhd` format * Images must be uploaded using the Glance Task Interface * Floating IPs are not supported * Public IPv4 is directly routable via static config by Nova * IPv6 is provided to every server * Security groups are not supported * Uploaded images need properties set to not use the vendor agent:: :vm_mode: hvm :xenapi_use_agent: False * Volume API Version is 1 * While passwords are recommended for use, API keys do work as well.
The `rackspaceauth` python package must be installed, and then the following can be added to clouds.yaml:: auth: username: myusername api_key: myapikey auth_type: rackspace_apikey switchengines ------------- https://keystone.cloud.switch.ch:5000/v2.0 ============== ================ Region Name Location ============== ================ LS Lausanne, CH ZH Zurich, CH ============== ================ * Images must be in `raw` format * Images must be uploaded using the Glance Task Interface * Volume API Version is 1 ultimum ------- https://console.ultimum-cloud.com:5000/v2.0 ============== ================ Region Name Location ============== ================ RegionOne Prague, CZ ============== ================ * Volume API Version is 1 unitedstack ----------- https://identity.api.ustack.com/v3 ============== ================ Region Name Location ============== ================ bj1 Beijing, CN gd1 Guangdong, CN ============== ================ * Identity API Version is 3 * Images must be in `raw` format * Volume API Version is 1 vexxhost -------- http://auth.vexxhost.net ============== ================ Region Name Location ============== ================ ca-ymq-1 Montreal, QC ============== ================ * DNS API Version is 1 * Identity API Version is 3 zetta ----- https://identity.api.zetta.io/v3 ============== ================ Region Name Location ============== ================ no-osl1 Oslo, NO ============== ================ * DNS API Version is 2 * Identity API Version is 3 openstacksdk-0.11.3/doc/source/user/resources/0000775000175100017510000000000013236151501021343 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/network/0000775000175100017510000000000013236151501023034 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/network/index.rst0000666000175100017510000000127013236151340024700 0ustar zuulzuul00000000000000Network Resources ================= .. 
toctree:: :maxdepth: 1 v2/address_scope v2/agent v2/auto_allocated_topology v2/availability_zone v2/extension v2/flavor v2/floating_ip v2/health_monitor v2/listener v2/load_balancer v2/metering_label v2/metering_label_rule v2/network v2/network_ip_availability v2/pool v2/pool_member v2/port v2/qos_bandwidth_limit_rule v2/qos_dscp_marking_rule v2/qos_minimum_bandwidth_rule v2/qos_policy v2/qos_rule_type v2/quota v2/rbac_policy v2/router v2/security_group v2/security_group_rule v2/segment v2/service_profile v2/service_provider v2/subnet v2/subnet_pool openstacksdk-0.11.3/doc/source/user/resources/network/v2/0000775000175100017510000000000013236151501023363 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/network/v2/address_scope.rst0000666000175100017510000000050713236151340026740 0ustar zuulzuul00000000000000openstack.network.v2.address_scope ================================== .. automodule:: openstack.network.v2.address_scope The AddressScope Class ---------------------- The ``AddressScope`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.address_scope.AddressScope :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/security_group_rule.rst0000666000175100017510000000056313236151340030236 0ustar zuulzuul00000000000000openstack.network.v2.security_group_rule ======================================== .. automodule:: openstack.network.v2.security_group_rule The SecurityGroupRule Class --------------------------- The ``SecurityGroupRule`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.security_group_rule.SecurityGroupRule :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/availability_zone.rst0000666000175100017510000000054713236151340027633 0ustar zuulzuul00000000000000openstack.network.v2.availability_zone ====================================== .. 
automodule:: openstack.network.v2.availability_zone The AvailabilityZone Class -------------------------- The ``AvailabilityZone`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.availability_zone.AvailabilityZone :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/agent.rst0000666000175100017510000000041513236151340025216 0ustar zuulzuul00000000000000openstack.network.v2.agent ========================== .. automodule:: openstack.network.v2.agent The Agent Class ----------------- The ``Agent`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.agent.Agent :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/security_group.rst0000666000175100017510000000051713236151340027206 0ustar zuulzuul00000000000000openstack.network.v2.security_group =================================== .. automodule:: openstack.network.v2.security_group The SecurityGroup Class ----------------------- The ``SecurityGroup`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.security_group.SecurityGroup :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/flavor.rst0000666000175100017510000000042313236151340025410 0ustar zuulzuul00000000000000openstack.network.v2.flavor =========================== .. automodule:: openstack.network.v2.flavor The Flavor Class ---------------- The ``Flavor`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.flavor.Flavor :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/metering_label.rst0000666000175100017510000000051713236151340027074 0ustar zuulzuul00000000000000openstack.network.v2.metering_label =================================== .. automodule:: openstack.network.v2.metering_label The MeteringLabel Class ----------------------- The ``MeteringLabel`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.network.v2.metering_label.MeteringLabel :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/service_provider.rst0000666000175100017510000000054213236151340027473 0ustar zuulzuul00000000000000openstack.network.v2.service_provider ===================================== .. automodule:: openstack.network.v2.service_provider The ServiceProvider Class ------------------------- The ``ServiceProvider`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.service_provider.ServiceProvider :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/rbac_policy.rst0000666000175100017510000000046713236151340026415 0ustar zuulzuul00000000000000openstack.network.v2.rbac_policy ================================ .. automodule:: openstack.network.v2.rbac_policy The RBACPolicy Class -------------------- The ``RBACPolicy`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.rbac_policy.RBACPolicy :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/quota.rst0000666000175100017510000000041313236151340025247 0ustar zuulzuul00000000000000openstack.network.v2.quota ========================== .. automodule:: openstack.network.v2.quota The Quota Class --------------- The ``Quota`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.quota.Quota :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/router.rst0000666000175100017510000000042313236151340025437 0ustar zuulzuul00000000000000openstack.network.v2.router =========================== .. automodule:: openstack.network.v2.router The Router Class ---------------- The ``Router`` class inherits from :class:`~openstack.resource.Resource`. ..
autoclass:: openstack.network.v2.router.Router :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/qos_rule_type.rst0000666000175100017510000000050313236151340027010 0ustar zuulzuul00000000000000openstack.network.v2.qos_rule_type ================================== .. automodule:: openstack.network.v2.qos_rule_type The QoSRuleType Class --------------------- The ``QoSRuleType`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.qos_rule_type.QoSRuleType :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/qos_bandwidth_limit_rule.rst0000666000175100017510000000062713236151340031200 0ustar zuulzuul00000000000000openstack.network.v2.qos_bandwidth_limit_rule ============================================= .. automodule:: openstack.network.v2.qos_bandwidth_limit_rule The QoSBandwidthLimitRule Class ------------------------------- The ``QoSBandwidthLimitRule`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.qos_bandwidth_limit_rule.QoSBandwidthLimitRule :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/qos_dscp_marking_rule.rst0000666000175100017510000000057713236151340030503 0ustar zuulzuul00000000000000openstack.network.v2.qos_dscp_marking_rule ========================================== .. automodule:: openstack.network.v2.qos_dscp_marking_rule The QoSDSCPMarkingRule Class ---------------------------- The ``QoSDSCPMarkingRule`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.qos_dscp_marking_rule.QoSDSCPMarkingRule :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/extension.rst0000666000175100017510000000045313236151340026136 0ustar zuulzuul00000000000000openstack.network.v2.extension ============================== .. 
automodule:: openstack.network.v2.extension The Extension Class ------------------- The ``Extension`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.extension.Extension :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/segment.rst0000666000175100017510000000043313236151340025562 0ustar zuulzuul00000000000000openstack.network.v2.segment ============================ .. automodule:: openstack.network.v2.segment The Segment Class ----------------- The ``Segment`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.segment.Segment :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/subnet_pool.rst0000666000175100017510000000046713236151340026460 0ustar zuulzuul00000000000000openstack.network.v2.subnet_pool ================================ .. automodule:: openstack.network.v2.subnet_pool The SubnetPool Class -------------------- The ``SubnetPool`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.subnet_pool.SubnetPool :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/metering_label_rule.rst0000666000175100017510000000056313236151340030124 0ustar zuulzuul00000000000000openstack.network.v2.metering_label_rule ======================================== .. automodule:: openstack.network.v2.metering_label_rule The MeteringLabelRule Class --------------------------- The ``MeteringLabelRule`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.metering_label_rule.MeteringLabelRule :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/pool.rst0000666000175100017510000000040313236151340025066 0ustar zuulzuul00000000000000openstack.network.v2.pool ========================= .. automodule:: openstack.network.v2.pool The Pool Class -------------- The ``Pool`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.network.v2.pool.Pool :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/listener.rst0000666000175100017510000000044313236151340025746 0ustar zuulzuul00000000000000openstack.network.v2.listener ============================= .. automodule:: openstack.network.v2.listener The Listener Class ------------------ The ``Listener`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.listener.Listener :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/service_profile.rst0000666000175100017510000000052713236151340027304 0ustar zuulzuul00000000000000openstack.network.v2.service_profile ==================================== .. automodule:: openstack.network.v2.service_profile The ServiceProfile Class ------------------------ The ``ServiceProfile`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.service_profile.ServiceProfile :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/floating_ip.rst0000666000175100017510000000046713236151340026422 0ustar zuulzuul00000000000000openstack.network.v2.floating_ip ================================ .. automodule:: openstack.network.v2.floating_ip The FloatingIP Class -------------------- The ``FloatingIP`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.floating_ip.FloatingIP :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/subnet.rst0000666000175100017510000000042313236151340025417 0ustar zuulzuul00000000000000openstack.network.v2.subnet =========================== .. automodule:: openstack.network.v2.subnet The Subnet Class ---------------- The ``Subnet`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.network.v2.subnet.Subnet :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/pool_member.rst0000666000175100017510000000046713236151340026427 0ustar zuulzuul00000000000000openstack.network.v2.pool_member ================================ .. automodule:: openstack.network.v2.pool_member The PoolMember Class -------------------- The ``PoolMember`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.pool_member.PoolMember :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/qos_policy.rst0000666000175100017510000000045713236151340026307 0ustar zuulzuul00000000000000openstack.network.v2.qos_policy =============================== .. automodule:: openstack.network.v2.qos_policy The QoSPolicy Class ------------------- The ``QoSPolicy`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.qos_policy.QoSPolicy :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/health_monitor.rst0000666000175100017510000000051713236151340027137 0ustar zuulzuul00000000000000openstack.network.v2.health_monitor =================================== .. automodule:: openstack.network.v2.health_monitor The HealthMonitor Class ----------------------- The ``HealthMonitor`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.health_monitor.HealthMonitor :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/network.rst0000666000175100017510000000043313236151340025611 0ustar zuulzuul00000000000000openstack.network.v2.network ============================ .. automodule:: openstack.network.v2.network The Network Class ----------------- The ``Network`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.network.v2.network.Network :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/network_ip_availability.rst0000666000175100017510000000062313236151340031034 0ustar zuulzuul00000000000000openstack.network.v2.network_ip_availability ============================================ .. automodule:: openstack.network.v2.network_ip_availability The NetworkIPAvailability Class ------------------------------- The ``NetworkIPAvailability`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.network_ip_availability.NetworkIPAvailability :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/auto_allocated_topology.rst0000666000175100017510000000063013236151340031033 0ustar zuulzuul00000000000000openstack.network.v2.auto_allocated_topology ============================================ .. automodule:: openstack.network.v2.auto_allocated_topology The Auto Allocated Topology Class --------------------------------- The ``AutoAllocatedTopology`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.auto_allocated_topology.AutoAllocatedTopology :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/port.rst0000666000175100017510000000040313236151340025101 0ustar zuulzuul00000000000000openstack.network.v2.port ========================= .. automodule:: openstack.network.v2.port The Port Class -------------- The ``Port`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.port.Port :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/qos_minimum_bandwidth_rule.rst0000666000175100017510000000064713236151340031537 0ustar zuulzuul00000000000000openstack.network.v2.qos_minimum_bandwidth_rule =============================================== ..
automodule:: openstack.network.v2.qos_minimum_bandwidth_rule The QoSMinimumBandwidthRule Class --------------------------------- The ``QoSMinimumBandwidthRule`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.qos_minimum_bandwidth_rule.QoSMinimumBandwidthRule :members: openstacksdk-0.11.3/doc/source/user/resources/network/v2/load_balancer.rst0000666000175100017510000000050713236151340026670 0ustar zuulzuul00000000000000openstack.network.v2.load_balancer ================================== .. automodule:: openstack.network.v2.load_balancer The LoadBalancer Class ---------------------- The ``LoadBalancer`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.network.v2.load_balancer.LoadBalancer :members: openstacksdk-0.11.3/doc/source/user/resources/block_storage/0000775000175100017510000000000013236151501024161 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/block_storage/index.rst0000666000175100017510000000016613236151340026030 0ustar zuulzuul00000000000000Block Storage Resources ======================= .. toctree:: :maxdepth: 1 v2/snapshot v2/type v2/volume openstacksdk-0.11.3/doc/source/user/resources/block_storage/v2/0000775000175100017510000000000013236151501024510 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/block_storage/v2/volume.rst0000666000175100017510000000100413236151340026547 0ustar zuulzuul00000000000000openstack.block_storage.v2.volume ================================= .. automodule:: openstack.block_storage.v2.volume The Volume Class ---------------- The ``Volume`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.block_storage.v2.volume.Volume :members: The VolumeDetail Class ---------------------- The ``VolumeDetail`` class inherits from :class:`~openstack.block_storage.v2.volume.Volume`. .. 
autoclass:: openstack.block_storage.v2.volume.VolumeDetail :members: openstacksdk-0.11.3/doc/source/user/resources/block_storage/v2/snapshot.rst0000666000175100017510000000104213236151340027101 0ustar zuulzuul00000000000000openstack.block_storage.v2.snapshot =================================== .. automodule:: openstack.block_storage.v2.snapshot The Snapshot Class ------------------ The ``Snapshot`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.block_storage.v2.snapshot.Snapshot :members: The SnapshotDetail Class ------------------------ The ``SnapshotDetail`` class inherits from :class:`~openstack.block_storage.v2.snapshot.Snapshot`. .. autoclass:: openstack.block_storage.v2.snapshot.SnapshotDetail :members: openstacksdk-0.11.3/doc/source/user/resources/block_storage/v2/type.rst0000666000175100017510000000043413236151340026227 0ustar zuulzuul00000000000000openstack.block_storage.v2.type =============================== .. automodule:: openstack.block_storage.v2.type The Type Class -------------- The ``Type`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.block_storage.v2.type.Type :members: openstacksdk-0.11.3/doc/source/user/resources/compute/0000775000175100017510000000000013236151501023017 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/compute/index.rst0000666000175100017510000000027313236151340024665 0ustar zuulzuul00000000000000Compute Resources ================= .. toctree:: :maxdepth: 1 v2/extension v2/flavor v2/image v2/keypair v2/limits v2/server v2/server_interface v2/server_ip openstacksdk-0.11.3/doc/source/user/resources/compute/v2/0000775000175100017510000000000013236151501023346 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/compute/v2/flavor.rst0000666000175100017510000000074113236151340025376 0ustar zuulzuul00000000000000openstack.compute.v2.flavor ============================ .. 
automodule:: openstack.compute.v2.flavor The Flavor Class ---------------- The ``Flavor`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.flavor.Flavor :members: The FlavorDetail Class ---------------------- The ``FlavorDetail`` class inherits from :class:`~openstack.compute.v2.flavor.Flavor`. .. autoclass:: openstack.compute.v2.flavor.FlavorDetail :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/limits.rst0000666000175100017510000000123313236151340025403 0ustar zuulzuul00000000000000openstack.compute.v2.limits =========================== .. automodule:: openstack.compute.v2.limits The Limits Class ---------------- The ``Limits`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.limits.Limits :members: The AbsoluteLimits Class ------------------------ The ``AbsoluteLimits`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.limits.AbsoluteLimits :members: The RateLimit Class ------------------- The ``RateLimit`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.limits.RateLimit :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/server_ip.rst0000666000175100017510000000044713236151340026106 0ustar zuulzuul00000000000000openstack.compute.v2.server_ip ============================== .. automodule:: openstack.compute.v2.server_ip The ServerIP Class ------------------ The ``ServerIP`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.server_ip.ServerIP :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/server_interface.rst0000666000175100017510000000053713236151340027436 0ustar zuulzuul00000000000000openstack.compute.v2.server_interface ===================================== .. 
automodule:: openstack.compute.v2.server_interface The ServerInterface Class ------------------------- The ``ServerInterface`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.server_interface.ServerInterface :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/image.rst0000666000175100017510000000072113236151340025165 0ustar zuulzuul00000000000000openstack.compute.v2.image ========================== .. automodule:: openstack.compute.v2.image The Image Class --------------- The ``Image`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.image.Image :members: The ImageDetail Class --------------------- The ``ImageDetail`` class inherits from :class:`~openstack.compute.v2.image.Image`. .. autoclass:: openstack.compute.v2.image.ImageDetail :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/extension.rst0000666000175100017510000000045313236151340026121 0ustar zuulzuul00000000000000openstack.compute.v2.extension ============================== .. automodule:: openstack.compute.v2.extension The Extension Class ------------------- The ``Extension`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.extension.Extension :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/server.rst0000666000175100017510000000042413236151340025411 0ustar zuulzuul00000000000000openstack.compute.v2.server ============================ .. automodule:: openstack.compute.v2.server The Server Class ---------------- The ``Server`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.server.Server :members: openstacksdk-0.11.3/doc/source/user/resources/compute/v2/keypair.rst0000666000175100017510000000043313236151340025547 0ustar zuulzuul00000000000000openstack.compute.v2.keypair ============================ .. 
automodule:: openstack.compute.v2.keypair The Keypair Class ----------------- The ``Keypair`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.compute.v2.keypair.Keypair :members: openstacksdk-0.11.3/doc/source/user/resources/object_store/0000775000175100017510000000000013236151501024025 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/object_store/index.rst0000666000175100017510000000016513236151340025673 0ustar zuulzuul00000000000000Object Store Resources ====================== .. toctree:: :maxdepth: 1 v1/account v1/container v1/obj openstacksdk-0.11.3/doc/source/user/resources/object_store/v1/0000775000175100017510000000000013236151501024353 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/object_store/v1/container.rst0000666000175100017510000000047713236151340027102 0ustar zuulzuul00000000000000openstack.object_store.v1.container =================================== .. automodule:: openstack.object_store.v1.container The Container Class ------------------- The ``Container`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.object_store.v1.container.Container :members: openstacksdk-0.11.3/doc/source/user/resources/object_store/v1/obj.rst0000666000175100017510000000043313236151340025662 0ustar zuulzuul00000000000000openstack.object_store.v1.obj ============================= .. automodule:: openstack.object_store.v1.obj The Object Class ---------------- The ``Object`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.object_store.v1.obj.Object :members: openstacksdk-0.11.3/doc/source/user/resources/object_store/v1/account.rst0000666000175100017510000000045713236151340026552 0ustar zuulzuul00000000000000openstack.object_store.v1.account ================================= .. 
automodule:: openstack.object_store.v1.account The Account Class ----------------- The ``Account`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.object_store.v1.account.Account :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/0000775000175100017510000000000013236151501023522 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/clustering/index.rst0000666000175100017510000000035213236151340025366 0ustar zuulzuul00000000000000Cluster Resources ================= .. toctree:: :maxdepth: 1 v1/build_info v1/profile_type v1/profile v1/policy_type v1/policy v1/cluster v1/node v1/cluster_policy v1/receiver v1/action v1/event openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/0000775000175100017510000000000013236151501024050 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/cluster_policy.rst0000666000175100017510000000053313236151340027646 0ustar zuulzuul00000000000000openstack.clustering.v1.cluster_policy ====================================== .. automodule:: openstack.clustering.v1.cluster_policy The ClusterPolicy Class ----------------------- The ``ClusterPolicy`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.cluster_policy.ClusterPolicy :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/receiver.rst0000666000175100017510000000045713236151340026417 0ustar zuulzuul00000000000000openstack.clustering.v1.receiver ================================ .. automodule:: openstack.clustering.v1.receiver The Receiver Class ------------------ The ``Receiver`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.receiver.Receiver :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/action.rst0000666000175100017510000000043713236151340026066 0ustar zuulzuul00000000000000openstack.clustering.v1.action ============================== .. 
automodule:: openstack.clustering.v1.action The Action Class ---------------- The ``Action`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.action.Action :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/cluster.rst0000666000175100017510000000045513236151340026272 0ustar zuulzuul00000000000000openstack.clustering.v1.cluster =============================== .. automodule:: openstack.clustering.v1.cluster The Cluster Class ----------------- The ``Cluster`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.cluster.Cluster :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/event.rst0000666000175100017510000000042713236151340025731 0ustar zuulzuul00000000000000openstack.clustering.v1.event ============================= .. automodule:: openstack.clustering.v1.event The Event Class --------------- The ``Event`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.event.Event :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/profile_type.rst0000666000175100017510000000051313236151340027305 0ustar zuulzuul00000000000000openstack.clustering.v1.profile_type ==================================== .. automodule:: openstack.clustering.v1.profile_type The ProfileType Class --------------------- The ``ProfileType`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.profile_type.ProfileType :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/profile.rst0000666000175100017510000000044713236151340026252 0ustar zuulzuul00000000000000openstack.clustering.v1.profile =============================== .. automodule:: openstack.clustering.v1.profile The Profile Class ----------------- The ``Profile`` class inherits from :class:`~openstack.resource.Resource`. ..
autoclass:: openstack.clustering.v1.profile.Profile :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/policy.rst0000666000175100017510000000043713236151340026110 0ustar zuulzuul00000000000000openstack.clustering.v1.policy ============================== .. automodule:: openstack.clustering.v1.policy The Policy Class ---------------- The ``Policy`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.policy.Policy :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/policy_type.rst0000666000175100017510000000050313236151340027143 0ustar zuulzuul00000000000000openstack.clustering.v1.policy_type =================================== .. automodule:: openstack.clustering.v1.policy_type The PolicyType Class -------------------- The ``PolicyType`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.policy_type.PolicyType :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/build_info.rst0000666000175100017510000000047313236151340026723 0ustar zuulzuul00000000000000openstack.clustering.v1.build_info ================================== .. automodule:: openstack.clustering.v1.build_info The BuildInfo Class ------------------- The ``BuildInfo`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.clustering.v1.build_info.BuildInfo :members: openstacksdk-0.11.3/doc/source/user/resources/clustering/v1/node.rst0000666000175100017510000000041713236151340025534 0ustar zuulzuul00000000000000openstack.clustering.v1.node ============================ .. automodule:: openstack.clustering.v1.node The Node Class -------------- The ``Node`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.clustering.v1.node.Node :members: openstacksdk-0.11.3/doc/source/user/resources/image/0000775000175100017510000000000013236151501022425 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/image/index.rst0000666000175100017510000000026013236151340024267 0ustar zuulzuul00000000000000Image v1 Resources ================== .. toctree:: :maxdepth: 1 v1/image Image v2 Resources ================== .. toctree:: :maxdepth: 1 v2/image v2/member openstacksdk-0.11.3/doc/source/user/resources/image/v2/0000775000175100017510000000000013236151501022754 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/image/v2/member.rst0000666000175100017510000000041313236151340024756 0ustar zuulzuul00000000000000openstack.image.v2.member ========================= .. automodule:: openstack.image.v2.member The Member Class ---------------- The ``Member`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.image.v2.member.Member :members: openstacksdk-0.11.3/doc/source/user/resources/image/v2/image.rst0000666000175100017510000000040313236151340024570 0ustar zuulzuul00000000000000openstack.image.v2.image ======================== .. automodule:: openstack.image.v2.image The Image Class --------------- The ``Image`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.image.v2.image.Image :members: openstacksdk-0.11.3/doc/source/user/resources/image/v1/0000775000175100017510000000000013236151501022753 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/image/v1/image.rst0000666000175100017510000000040313236151340024567 0ustar zuulzuul00000000000000openstack.image.v1.image ======================== .. automodule:: openstack.image.v1.image The Image Class --------------- The ``Image`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.image.v1.image.Image :members: openstacksdk-0.11.3/doc/source/user/resources/key_manager/0000775000175100017510000000000013236151501023625 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/key_manager/index.rst0000666000175100017510000000016213236151340025470 0ustar zuulzuul00000000000000KeyManager Resources ==================== .. toctree:: :maxdepth: 1 v1/container v1/order v1/secret openstacksdk-0.11.3/doc/source/user/resources/key_manager/v1/0000775000175100017510000000000013236151501024153 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/key_manager/v1/container.rst0000666000175100017510000000047613236151340026701 0ustar zuulzuul00000000000000openstack.key_manager.v1.container ===================================== .. automodule:: openstack.key_manager.v1.container The Container Class ------------------- The ``Container`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.key_manager.v1.container.Container :members: openstacksdk-0.11.3/doc/source/user/resources/key_manager/v1/order.rst0000666000175100017510000000043313236151340026023 0ustar zuulzuul00000000000000openstack.key_manager.v1.order ============================== .. automodule:: openstack.key_manager.v1.order The Order Class --------------- The ``Order`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.key_manager.v1.order.Order :members: openstacksdk-0.11.3/doc/source/user/resources/key_manager/v1/secret.rst0000666000175100017510000000044313236151340026176 0ustar zuulzuul00000000000000openstack.key_manager.v1.secret =============================== .. automodule:: openstack.key_manager.v1.secret The Secret Class ---------------- The ``Secret`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.key_manager.v1.secret.Secret :members: openstacksdk-0.11.3/doc/source/user/resources/baremetal/0000775000175100017510000000000013236151501023277 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/baremetal/index.rst0000666000175100017510000000021313236151340025137 0ustar zuulzuul00000000000000Baremetal Resources ===================== .. toctree:: :maxdepth: 1 v1/driver v1/chassis v1/node v1/port v1/port_group openstacksdk-0.11.3/doc/source/user/resources/baremetal/v1/0000775000175100017510000000000013236151501023625 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/baremetal/v1/driver.rst0000666000175100017510000000043413236151340025656 0ustar zuulzuul00000000000000openstack.baremetal.v1.driver ============================== .. automodule:: openstack.baremetal.v1.driver The Driver Class ---------------- The ``Driver`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.baremetal.v1.driver.Driver :members: openstacksdk-0.11.3/doc/source/user/resources/baremetal/v1/chassis.rst0000666000175100017510000000044413236151340026021 0ustar zuulzuul00000000000000openstack.baremetal.v1.chassis =============================== .. automodule:: openstack.baremetal.v1.chassis The Chassis Class ----------------- The ``Chassis`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.baremetal.v1.chassis.Chassis :members: openstacksdk-0.11.3/doc/source/user/resources/baremetal/v1/port_group.rst0000666000175100017510000000047013236151340026563 0ustar zuulzuul00000000000000openstack.baremetal.v1.port_group ================================== .. automodule:: openstack.baremetal.v1.port_group The PortGroup Class ------------------- The ``PortGroup`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.baremetal.v1.port_group.PortGroup :members: openstacksdk-0.11.3/doc/source/user/resources/baremetal/v1/node.rst0000666000175100017510000000041413236151340025306 0ustar zuulzuul00000000000000openstack.baremetal.v1.node ============================ .. automodule:: openstack.baremetal.v1.node The Node Class -------------- The ``Node`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.baremetal.v1.node.Node :members: openstacksdk-0.11.3/doc/source/user/resources/baremetal/v1/port.rst0000666000175100017510000000041413236151340025345 0ustar zuulzuul00000000000000openstack.baremetal.v1.port ============================ .. automodule:: openstack.baremetal.v1.port The Port Class -------------- The ``Port`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.baremetal.v1.port.Port :members: openstacksdk-0.11.3/doc/source/user/resources/workflow/0000775000175100017510000000000013236151501023215 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/workflow/index.rst0000666000175100017510000000015413236151340025061 0ustar zuulzuul00000000000000Workflow Resources ================== .. toctree:: :maxdepth: 1 v2/execution v2/workflow openstacksdk-0.11.3/doc/source/user/resources/workflow/v2/0000775000175100017510000000000013236151501023544 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/workflow/v2/execution.rst0000666000175100017510000000045713236151340026310 0ustar zuulzuul00000000000000openstack.workflow.v2.execution =============================== .. automodule:: openstack.workflow.v2.execution The Execution Class ------------------- The ``Execution`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.workflow.v2.execution.Execution :members: openstacksdk-0.11.3/doc/source/user/resources/workflow/v2/workflow.rst0000666000175100017510000000044713236151340026160 0ustar zuulzuul00000000000000openstack.workflow.v2.workflow ============================== .. automodule:: openstack.workflow.v2.workflow The Workflow Class ------------------ The ``Workflow`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.workflow.v2.workflow.Workflow :members: openstacksdk-0.11.3/doc/source/user/resources/orchestration/0000775000175100017510000000000013236151501024227 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/orchestration/index.rst0000666000175100017510000000015213236151340026071 0ustar zuulzuul00000000000000Orchestration Resources ======================= .. toctree:: :maxdepth: 1 v1/stack v1/resource openstacksdk-0.11.3/doc/source/user/resources/orchestration/v1/0000775000175100017510000000000013236151501024555 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/orchestration/v1/resource.rst0000666000175100017510000000047313236151340027145 0ustar zuulzuul00000000000000openstack.orchestration.v1.resource =================================== .. automodule:: openstack.orchestration.v1.resource The Resource Class ------------------ The ``Resource`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.orchestration.v1.resource.Resource :members: openstacksdk-0.11.3/doc/source/user/resources/orchestration/v1/stack.rst0000666000175100017510000000044313236151340026420 0ustar zuulzuul00000000000000openstack.orchestration.v1.stack ================================ .. automodule:: openstack.orchestration.v1.stack The Stack Class --------------- The ``Stack`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.orchestration.v1.stack.Stack :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/0000775000175100017510000000000013236151501024111 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/load_balancer/index.rst0000666000175100017510000000027513236151340025761 0ustar zuulzuul00000000000000Load Balancer Resources ======================= .. toctree:: :maxdepth: 1 v2/load_balancer v2/listener v2/pool v2/member v2/health_monitor v2/l7_policy v2/l7_rule openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/0000775000175100017510000000000013236151501024440 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/member.rst0000666000175100017510000000045313236151340026446 0ustar zuulzuul00000000000000openstack.load_balancer.v2.member ================================= .. automodule:: openstack.load_balancer.v2.member The Member Class ---------------- The ``Member`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.load_balancer.v2.member.Member :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/l7_policy.rst0000666000175100017510000000047713236151340027106 0ustar zuulzuul00000000000000openstack.load_balancer.v2.l7_policy ==================================== .. automodule:: openstack.load_balancer.v2.l7_policy The L7Policy Class ------------------ The ``L7Policy`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.load_balancer.v2.l7_policy.L7Policy :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/pool.rst0000666000175100017510000000043713236151340026152 0ustar zuulzuul00000000000000openstack.load_balancer.v2.pool =============================== .. automodule:: openstack.load_balancer.v2.pool The Pool Class ------------------ The ``Pool`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.load_balancer.v2.pool.Pool :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/listener.rst0000666000175100017510000000047313236151340027026 0ustar zuulzuul00000000000000openstack.load_balancer.v2.listener =================================== .. automodule:: openstack.load_balancer.v2.listener The Listener Class ------------------ The ``Listener`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.load_balancer.v2.listener.Listener :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/health_monitor.rst0000666000175100017510000000054713236151340030217 0ustar zuulzuul00000000000000openstack.load_balancer.v2.health_monitor ========================================= .. automodule:: openstack.load_balancer.v2.health_monitor The HealthMonitor Class ----------------------- The ``HealthMonitor`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.load_balancer.v2.health_monitor.HealthMonitor :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/l7_rule.rst0000666000175100017510000000046313236151340026551 0ustar zuulzuul00000000000000openstack.load_balancer.v2.l7_rule ==================================== .. automodule:: openstack.load_balancer.v2.l7_rule The L7Rule Class ------------------ The ``L7Rule`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.load_balancer.v2.l7_rule.L7Rule :members: openstacksdk-0.11.3/doc/source/user/resources/load_balancer/v2/load_balancer.rst0000666000175100017510000000053713236151340027750 0ustar zuulzuul00000000000000openstack.load_balancer.v2.load_balancer ======================================== .. automodule:: openstack.load_balancer.v2.load_balancer The LoadBalancer Class ---------------------- The ``LoadBalancer`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.load_balancer.v2.load_balancer.LoadBalancer :members: openstacksdk-0.11.3/doc/source/user/resources/identity/0000775000175100017510000000000013236151501023174 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/identity/index.rst0000666000175100017510000000050313236151340025036 0ustar zuulzuul00000000000000Identity v2 Resources ===================== .. toctree:: :maxdepth: 1 v2/extension v2/role v2/tenant v2/user Identity v3 Resources ===================== .. toctree:: :maxdepth: 1 v3/credential v3/domain v3/endpoint v3/group v3/policy v3/project v3/service v3/trust v3/user openstacksdk-0.11.3/doc/source/user/resources/identity/v2/0000775000175100017510000000000013236151501023523 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/identity/v2/tenant.rst0000666000175100017510000000042713236151340025554 0ustar zuulzuul00000000000000openstack.identity.v2.tenant ============================ .. automodule:: openstack.identity.v2.tenant The Tenant Class ---------------- The ``Tenant`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v2.tenant.Tenant :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v2/role.rst0000666000175100017510000000040713236151340025222 0ustar zuulzuul00000000000000openstack.identity.v2.role ========================== .. automodule:: openstack.identity.v2.role The Role Class -------------- The ``Role`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v2.role.Role :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v2/user.rst0000666000175100017510000000040713236151340025237 0ustar zuulzuul00000000000000openstack.identity.v2.user ========================== .. automodule:: openstack.identity.v2.user The User Class -------------- The ``User`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.identity.v2.user.User :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v2/extension.rst0000666000175100017510000000045713236151340026302 0ustar zuulzuul00000000000000openstack.identity.v2.extension =============================== .. automodule:: openstack.identity.v2.extension The Extension Class ------------------- The ``Extension`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v2.extension.Extension :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/0000775000175100017510000000000013236151501023524 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/identity/v3/domain.rst0000666000175100017510000000042713236151340025533 0ustar zuulzuul00000000000000openstack.identity.v3.domain ============================ .. automodule:: openstack.identity.v3.domain The Domain Class ---------------- The ``Domain`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.domain.Domain :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/credential.rst0000666000175100017510000000046713236151340026402 0ustar zuulzuul00000000000000openstack.identity.v3.credential ================================ .. automodule:: openstack.identity.v3.credential The Credential Class -------------------- The ``Credential`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.credential.Credential :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/endpoint.rst0000666000175100017510000000044713236151340026106 0ustar zuulzuul00000000000000openstack.identity.v3.endpoint ============================== .. automodule:: openstack.identity.v3.endpoint The Endpoint Class ------------------ The ``Endpoint`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.identity.v3.endpoint.Endpoint :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/user.rst0000666000175100017510000000040713236151340025240 0ustar zuulzuul00000000000000openstack.identity.v3.user ========================== .. automodule:: openstack.identity.v3.user The User Class -------------- The ``User`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.user.User :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/group.rst0000666000175100017510000000041713236151340025417 0ustar zuulzuul00000000000000openstack.identity.v3.group =========================== .. automodule:: openstack.identity.v3.group The Group Class --------------- The ``Group`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.group.Group :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/policy.rst0000666000175100017510000000042713236151340025563 0ustar zuulzuul00000000000000openstack.identity.v3.policy ============================ .. automodule:: openstack.identity.v3.policy The Policy Class ---------------- The ``Policy`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.policy.Policy :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/project.rst0000666000175100017510000000043713236151340025733 0ustar zuulzuul00000000000000openstack.identity.v3.project ============================= .. automodule:: openstack.identity.v3.project The Project Class ----------------- The ``Project`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.project.Project :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/service.rst0000666000175100017510000000043713236151340025725 0ustar zuulzuul00000000000000openstack.identity.v3.service ============================= .. 
automodule:: openstack.identity.v3.service The Service Class ----------------- The ``Service`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.service.Service :members: openstacksdk-0.11.3/doc/source/user/resources/identity/v3/trust.rst0000666000175100017510000000041713236151340025444 0ustar zuulzuul00000000000000openstack.identity.v3.trust =========================== .. automodule:: openstack.identity.v3.trust The Trust Class --------------- The ``Trust`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.identity.v3.trust.Trust :members: openstacksdk-0.11.3/doc/source/user/resources/database/0000775000175100017510000000000013236151501023107 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/database/index.rst0000666000175100017510000000017713236151340024760 0ustar zuulzuul00000000000000Database Resources ====================== .. toctree:: :maxdepth: 1 v1/database v1/flavor v1/instance v1/user openstacksdk-0.11.3/doc/source/user/resources/database/v1/0000775000175100017510000000000013236151501023435 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/resources/database/v1/instance.rst0000666000175100017510000000044713236151340026003 0ustar zuulzuul00000000000000openstack.database.v1.instance ============================== .. automodule:: openstack.database.v1.instance The Instance Class ------------------ The ``Instance`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.database.v1.instance.Instance :members: openstacksdk-0.11.3/doc/source/user/resources/database/v1/flavor.rst0000666000175100017510000000042713236151340025466 0ustar zuulzuul00000000000000openstack.database.v1.flavor ============================ .. automodule:: openstack.database.v1.flavor The Flavor Class ---------------- The ``Flavor`` class inherits from :class:`~openstack.resource.Resource`. .. 
autoclass:: openstack.database.v1.flavor.Flavor :members: openstacksdk-0.11.3/doc/source/user/resources/database/v1/user.rst0000666000175100017510000000040713236151340025151 0ustar zuulzuul00000000000000openstack.database.v1.user ========================== .. automodule:: openstack.database.v1.user The User Class -------------- The ``User`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.database.v1.user.User :members: openstacksdk-0.11.3/doc/source/user/resources/database/v1/database.rst0000666000175100017510000000044713236151340025743 0ustar zuulzuul00000000000000openstack.database.v1.database ============================== .. automodule:: openstack.database.v1.database The Database Class ------------------ The ``Database`` class inherits from :class:`~openstack.resource.Resource`. .. autoclass:: openstack.database.v1.database.Database :members: openstacksdk-0.11.3/doc/source/user/utils.rst0000666000175100017510000000012013236151340021217 0ustar zuulzuul00000000000000Utilities ========= .. automodule:: openstack.utils :members: enable_logging openstacksdk-0.11.3/doc/source/user/proxies/0000775000175100017510000000000013236151501021022 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/user/proxies/clustering.rst0000666000175100017510000001513413236151340023742 0ustar zuulzuul00000000000000Cluster API =========== .. automodule:: openstack.clustering.v1._proxy The Cluster Class ----------------- The cluster high-level interface is available through the ``cluster`` member of a :class:`~openstack.connection.Connection` object. The ``cluster`` member will only be added if the service is detected. Build Info Operations ^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.get_build_info Profile Type Operations ^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.profile_types .. 
automethod:: openstack.clustering.v1._proxy.Proxy.get_profile_type Profile Operations ^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.create_profile .. automethod:: openstack.clustering.v1._proxy.Proxy.update_profile .. automethod:: openstack.clustering.v1._proxy.Proxy.delete_profile .. automethod:: openstack.clustering.v1._proxy.Proxy.get_profile .. automethod:: openstack.clustering.v1._proxy.Proxy.find_profile .. automethod:: openstack.clustering.v1._proxy.Proxy.profiles .. automethod:: openstack.clustering.v1._proxy.Proxy.validate_profile Policy Type Operations ^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.policy_types .. automethod:: openstack.clustering.v1._proxy.Proxy.get_policy_type Policy Operations ^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.create_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.update_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.delete_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.get_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.find_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.policies .. automethod:: openstack.clustering.v1._proxy.Proxy.validate_policy Cluster Operations ^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.create_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.update_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.delete_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.get_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.find_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.clusters .. automethod:: openstack.clustering.v1._proxy.Proxy.check_cluster .. 
automethod:: openstack.clustering.v1._proxy.Proxy.recover_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.resize_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.scale_in_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.scale_out_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.collect_cluster_attrs .. automethod:: openstack.clustering.v1._proxy.Proxy.perform_operation_on_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.add_nodes_to_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.remove_nodes_from_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.replace_nodes_in_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.attach_policy_to_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.update_cluster_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.detach_policy_from_cluster .. automethod:: openstack.clustering.v1._proxy.Proxy.get_cluster_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_policies .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_add_nodes .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_attach_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_del_nodes .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_detach_policy .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_operation .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_replace_nodes .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_resize .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_scale_in .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_scale_out .. automethod:: openstack.clustering.v1._proxy.Proxy.cluster_update_policy Node Operations ^^^^^^^^^^^^^^^ .. autoclass:: openstack.clustering.v1._proxy.Proxy .. automethod:: openstack.clustering.v1._proxy.Proxy.create_node .. automethod:: openstack.clustering.v1._proxy.Proxy.update_node .. 
automethod:: openstack.clustering.v1._proxy.Proxy.delete_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.get_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.find_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.nodes
.. automethod:: openstack.clustering.v1._proxy.Proxy.check_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.recover_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.perform_operation_on_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.adopt_node
.. automethod:: openstack.clustering.v1._proxy.Proxy.node_operation

Receiver Operations
^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.clustering.v1._proxy.Proxy
.. automethod:: openstack.clustering.v1._proxy.Proxy.create_receiver
.. automethod:: openstack.clustering.v1._proxy.Proxy.update_receiver
.. automethod:: openstack.clustering.v1._proxy.Proxy.delete_receiver
.. automethod:: openstack.clustering.v1._proxy.Proxy.get_receiver
.. automethod:: openstack.clustering.v1._proxy.Proxy.find_receiver
.. automethod:: openstack.clustering.v1._proxy.Proxy.receivers

Action Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.clustering.v1._proxy.Proxy
.. automethod:: openstack.clustering.v1._proxy.Proxy.get_action
.. automethod:: openstack.clustering.v1._proxy.Proxy.actions

Event Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.clustering.v1._proxy.Proxy
.. automethod:: openstack.clustering.v1._proxy.Proxy.get_event
.. automethod:: openstack.clustering.v1._proxy.Proxy.events

Helper Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.clustering.v1._proxy.Proxy
.. automethod:: openstack.clustering.v1._proxy.Proxy.wait_for_delete
.. automethod:: openstack.clustering.v1._proxy.Proxy.wait_for_status

Service Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.clustering.v1._proxy.Proxy
.. automethod:: openstack.clustering.v1._proxy.Proxy.services

openstacksdk-0.11.3/doc/source/user/proxies/message_v2.rst

Message API v2
==============

For details on how to use message, see :doc:`/user/guides/message`

.. automodule:: openstack.message.v2._proxy

The Message v2 Class
--------------------

The message high-level interface is available through the ``message`` member of a :class:`~openstack.connection.Connection` object. The ``message`` member will only be added if the service is detected.

Message Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.message.v2._proxy.Proxy
.. automethod:: openstack.message.v2._proxy.Proxy.post_message
.. automethod:: openstack.message.v2._proxy.Proxy.delete_message
.. automethod:: openstack.message.v2._proxy.Proxy.get_message
.. automethod:: openstack.message.v2._proxy.Proxy.messages

Queue Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.message.v2._proxy.Proxy
.. automethod:: openstack.message.v2._proxy.Proxy.create_queue
.. automethod:: openstack.message.v2._proxy.Proxy.delete_queue
.. automethod:: openstack.message.v2._proxy.Proxy.get_queue
.. automethod:: openstack.message.v2._proxy.Proxy.queues

Claim Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.message.v2._proxy.Proxy
.. automethod:: openstack.message.v2._proxy.Proxy.create_claim
.. automethod:: openstack.message.v2._proxy.Proxy.update_claim
.. automethod:: openstack.message.v2._proxy.Proxy.delete_claim
.. automethod:: openstack.message.v2._proxy.Proxy.get_claim

Subscription Operations
^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.message.v2._proxy.Proxy
.. automethod:: openstack.message.v2._proxy.Proxy.create_subscription
.. automethod:: openstack.message.v2._proxy.Proxy.delete_subscription
.. automethod:: openstack.message.v2._proxy.Proxy.get_subscription
.. automethod:: openstack.message.v2._proxy.Proxy.subscriptions

openstacksdk-0.11.3/doc/source/user/proxies/load_balancer_v2.rst

Load Balancer v2 API
====================

.. automodule:: openstack.load_balancer.v2._proxy

The LoadBalancer Class
----------------------

The load_balancer high-level interface is available through the ``load_balancer`` member of a :class:`~openstack.connection.Connection` object. The ``load_balancer`` member will only be added if the service is detected.

Load Balancer Operations
^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_load_balancer
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_load_balancer
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_load_balancer
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_load_balancer
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.load_balancers
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_load_balancer

Listener Operations
^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_listener
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_listener
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_listener
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_listener
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.listeners
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_listener

Pool Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_pool
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_pool
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_pool
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_pool
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.pools
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_pool

Member Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_member
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_member
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_member
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_member
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.members
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_member

Health Monitor Operations
^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_health_monitor
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_health_monitor
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_health_monitor
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_health_monitor
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.health_monitors
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_health_monitor

L7 Policy Operations
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_l7_policy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_l7_policy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_l7_policy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_l7_policy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.l7_policies
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_l7_policy

L7 Rule Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.load_balancer.v2._proxy.Proxy
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.create_l7_rule
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.delete_l7_rule
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.find_l7_rule
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.get_l7_rule
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.l7_rules
.. automethod:: openstack.load_balancer.v2._proxy.Proxy.update_l7_rule

openstacksdk-0.11.3/doc/source/user/proxies/key_manager.rst

KeyManager API
==============

For details on how to use key_manager, see :doc:`/user/guides/key_manager`

.. automodule:: openstack.key_manager.v1._proxy

The KeyManager Class
--------------------

The key_manager high-level interface is available through the ``key_manager`` member of a :class:`~openstack.connection.Connection` object. The ``key_manager`` member will only be added if the service is detected.

Secret Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.key_manager.v1._proxy.Proxy
.. automethod:: openstack.key_manager.v1._proxy.Proxy.create_secret
.. automethod:: openstack.key_manager.v1._proxy.Proxy.update_secret
.. automethod:: openstack.key_manager.v1._proxy.Proxy.delete_secret
.. automethod:: openstack.key_manager.v1._proxy.Proxy.get_secret
.. automethod:: openstack.key_manager.v1._proxy.Proxy.find_secret
.. automethod:: openstack.key_manager.v1._proxy.Proxy.secrets

Container Operations
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.key_manager.v1._proxy.Proxy
.. automethod:: openstack.key_manager.v1._proxy.Proxy.create_container
.. automethod:: openstack.key_manager.v1._proxy.Proxy.update_container
.. automethod:: openstack.key_manager.v1._proxy.Proxy.delete_container
.. automethod:: openstack.key_manager.v1._proxy.Proxy.get_container
.. automethod:: openstack.key_manager.v1._proxy.Proxy.find_container
.. automethod:: openstack.key_manager.v1._proxy.Proxy.containers

Order Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.key_manager.v1._proxy.Proxy
.. automethod:: openstack.key_manager.v1._proxy.Proxy.create_order
.. automethod:: openstack.key_manager.v1._proxy.Proxy.update_order
.. automethod:: openstack.key_manager.v1._proxy.Proxy.delete_order
.. automethod:: openstack.key_manager.v1._proxy.Proxy.get_order
.. automethod:: openstack.key_manager.v1._proxy.Proxy.find_order
.. automethod:: openstack.key_manager.v1._proxy.Proxy.orders

openstacksdk-0.11.3/doc/source/user/proxies/image_v2.rst

Image API v2
============

For details on how to use image, see :doc:`/user/guides/image`

.. automodule:: openstack.image.v2._proxy

The Image v2 Class
------------------

The image high-level interface is available through the ``image`` member of a :class:`~openstack.connection.Connection` object. The ``image`` member will only be added if the service is detected.

Image Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.image.v2._proxy.Proxy
.. automethod:: openstack.image.v2._proxy.Proxy.upload_image
.. automethod:: openstack.image.v2._proxy.Proxy.download_image
.. automethod:: openstack.image.v2._proxy.Proxy.update_image
.. automethod:: openstack.image.v2._proxy.Proxy.delete_image
.. automethod:: openstack.image.v2._proxy.Proxy.get_image
.. automethod:: openstack.image.v2._proxy.Proxy.find_image
.. automethod:: openstack.image.v2._proxy.Proxy.images
.. automethod:: openstack.image.v2._proxy.Proxy.deactivate_image
.. automethod:: openstack.image.v2._proxy.Proxy.reactivate_image
.. automethod:: openstack.image.v2._proxy.Proxy.add_tag
.. automethod:: openstack.image.v2._proxy.Proxy.remove_tag

Member Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.image.v2._proxy.Proxy
.. automethod:: openstack.image.v2._proxy.Proxy.add_member
.. automethod:: openstack.image.v2._proxy.Proxy.remove_member
.. automethod:: openstack.image.v2._proxy.Proxy.update_member
.. automethod:: openstack.image.v2._proxy.Proxy.get_member
.. automethod:: openstack.image.v2._proxy.Proxy.find_member
.. automethod:: openstack.image.v2._proxy.Proxy.members

openstacksdk-0.11.3/doc/source/user/proxies/image_v1.rst

Image API v1
============

For details on how to use image, see :doc:`/user/guides/image`

.. automodule:: openstack.image.v1._proxy

The Image v1 Class
------------------

The image high-level interface is available through the ``image`` member of a :class:`~openstack.connection.Connection` object. The ``image`` member will only be added if the service is detected.

.. autoclass:: openstack.image.v1._proxy.Proxy
.. automethod:: openstack.image.v1._proxy.Proxy.upload_image
.. automethod:: openstack.image.v1._proxy.Proxy.update_image
.. automethod:: openstack.image.v1._proxy.Proxy.delete_image
.. automethod:: openstack.image.v1._proxy.Proxy.get_image
.. automethod:: openstack.image.v1._proxy.Proxy.find_image
.. automethod:: openstack.image.v1._proxy.Proxy.images

openstacksdk-0.11.3/doc/source/user/proxies/baremetal.rst

Baremetal API
=============

For details on how to use baremetal, see :doc:`/user/guides/baremetal`

.. automodule:: openstack.baremetal.v1._proxy

The Baremetal Class
-------------------

The baremetal high-level interface is available through the ``baremetal`` member of a :class:`~openstack.connection.Connection` object. The ``baremetal`` member will only be added if the service is detected.

Node Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.baremetal.v1._proxy.Proxy
.. automethod:: openstack.baremetal.v1._proxy.Proxy.create_node
.. automethod:: openstack.baremetal.v1._proxy.Proxy.update_node
.. automethod:: openstack.baremetal.v1._proxy.Proxy.delete_node
.. automethod:: openstack.baremetal.v1._proxy.Proxy.get_node
.. automethod:: openstack.baremetal.v1._proxy.Proxy.find_node
.. automethod:: openstack.baremetal.v1._proxy.Proxy.nodes

Port Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.baremetal.v1._proxy.Proxy
.. automethod:: openstack.baremetal.v1._proxy.Proxy.create_port
.. automethod:: openstack.baremetal.v1._proxy.Proxy.update_port
.. automethod:: openstack.baremetal.v1._proxy.Proxy.delete_port
.. automethod:: openstack.baremetal.v1._proxy.Proxy.get_port
.. automethod:: openstack.baremetal.v1._proxy.Proxy.find_port
.. automethod:: openstack.baremetal.v1._proxy.Proxy.ports

Port Group Operations
^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.baremetal.v1._proxy.Proxy
.. automethod:: openstack.baremetal.v1._proxy.Proxy.create_port_group
.. automethod:: openstack.baremetal.v1._proxy.Proxy.update_port_group
.. automethod:: openstack.baremetal.v1._proxy.Proxy.delete_port_group
.. automethod:: openstack.baremetal.v1._proxy.Proxy.get_port_group
.. automethod:: openstack.baremetal.v1._proxy.Proxy.find_port_group
.. automethod:: openstack.baremetal.v1._proxy.Proxy.port_groups

Driver Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.baremetal.v1._proxy.Proxy
.. automethod:: openstack.baremetal.v1._proxy.Proxy.drivers
.. automethod:: openstack.baremetal.v1._proxy.Proxy.get_driver

Chassis Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.baremetal.v1._proxy.Proxy
.. automethod:: openstack.baremetal.v1._proxy.Proxy.create_chassis
.. automethod:: openstack.baremetal.v1._proxy.Proxy.update_chassis
.. automethod:: openstack.baremetal.v1._proxy.Proxy.delete_chassis
.. automethod:: openstack.baremetal.v1._proxy.Proxy.get_chassis
.. automethod:: openstack.baremetal.v1._proxy.Proxy.find_chassis
.. automethod:: openstack.baremetal.v1._proxy.Proxy.chassis

Deprecated Methods
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.baremetal.v1._proxy.Proxy
.. automethod:: openstack.baremetal.v1._proxy.Proxy.create_portgroup
.. automethod:: openstack.baremetal.v1._proxy.Proxy.update_portgroup
.. automethod:: openstack.baremetal.v1._proxy.Proxy.delete_portgroup
.. automethod:: openstack.baremetal.v1._proxy.Proxy.get_portgroup
.. automethod:: openstack.baremetal.v1._proxy.Proxy.find_portgroup
.. automethod:: openstack.baremetal.v1._proxy.Proxy.portgroups

openstacksdk-0.11.3/doc/source/user/proxies/workflow.rst

Workflow API
============

.. automodule:: openstack.workflow.v2._proxy

The Workflow Class
------------------

The workflow high-level interface is available through the ``workflow`` member of a :class:`~openstack.connection.Connection` object. The ``workflow`` member will only be added if the service is detected.

Workflow Operations
^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.workflow.v2._proxy.Proxy
.. automethod:: openstack.workflow.v2._proxy.Proxy.create_workflow
.. automethod:: openstack.workflow.v2._proxy.Proxy.delete_workflow
.. automethod:: openstack.workflow.v2._proxy.Proxy.get_workflow
.. automethod:: openstack.workflow.v2._proxy.Proxy.find_workflow
.. automethod:: openstack.workflow.v2._proxy.Proxy.workflows

Execution Operations
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.workflow.v2._proxy.Proxy
.. automethod:: openstack.workflow.v2._proxy.Proxy.create_execution
.. automethod:: openstack.workflow.v2._proxy.Proxy.delete_execution
.. automethod:: openstack.workflow.v2._proxy.Proxy.get_execution
.. automethod:: openstack.workflow.v2._proxy.Proxy.find_execution
.. automethod:: openstack.workflow.v2._proxy.Proxy.executions

openstacksdk-0.11.3/doc/source/user/proxies/object_store.rst

Object Store API
================

For details on how to use this API, see :doc:`/user/guides/object_store`

.. automodule:: openstack.object_store.v1._proxy

The Object Store Class
----------------------

The Object Store high-level interface is exposed as the ``object_store`` object on :class:`~openstack.connection.Connection` objects.

Account Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.object_store.v1._proxy.Proxy
.. automethod:: openstack.object_store.v1._proxy.Proxy.get_account_metadata
.. automethod:: openstack.object_store.v1._proxy.Proxy.set_account_metadata
.. automethod:: openstack.object_store.v1._proxy.Proxy.delete_account_metadata

Container Operations
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.object_store.v1._proxy.Proxy
.. automethod:: openstack.object_store.v1._proxy.Proxy.create_container
.. automethod:: openstack.object_store.v1._proxy.Proxy.delete_container
.. automethod:: openstack.object_store.v1._proxy.Proxy.containers
.. automethod:: openstack.object_store.v1._proxy.Proxy.get_container_metadata
.. automethod:: openstack.object_store.v1._proxy.Proxy.set_container_metadata
.. automethod:: openstack.object_store.v1._proxy.Proxy.delete_container_metadata

Object Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.object_store.v1._proxy.Proxy
.. automethod:: openstack.object_store.v1._proxy.Proxy.upload_object
.. automethod:: openstack.object_store.v1._proxy.Proxy.download_object
.. automethod:: openstack.object_store.v1._proxy.Proxy.copy_object
.. automethod:: openstack.object_store.v1._proxy.Proxy.delete_object
.. automethod:: openstack.object_store.v1._proxy.Proxy.get_object
.. automethod:: openstack.object_store.v1._proxy.Proxy.objects
.. automethod:: openstack.object_store.v1._proxy.Proxy.get_object_metadata
.. automethod:: openstack.object_store.v1._proxy.Proxy.set_object_metadata
.. automethod:: openstack.object_store.v1._proxy.Proxy.delete_object_metadata

openstacksdk-0.11.3/doc/source/user/proxies/database.rst

Database API
============

For details on how to use database, see :doc:`/user/guides/database`

.. automodule:: openstack.database.v1._proxy

The Database Class
------------------

The database high-level interface is available through the ``database`` member of a :class:`~openstack.connection.Connection` object. The ``database`` member will only be added if the service is detected.

Database Operations
^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.database.v1._proxy.Proxy
.. automethod:: openstack.database.v1._proxy.Proxy.create_database
.. automethod:: openstack.database.v1._proxy.Proxy.delete_database
.. automethod:: openstack.database.v1._proxy.Proxy.get_database
.. automethod:: openstack.database.v1._proxy.Proxy.find_database
.. automethod:: openstack.database.v1._proxy.Proxy.databases

Flavor Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.database.v1._proxy.Proxy
.. automethod:: openstack.database.v1._proxy.Proxy.get_flavor
.. automethod:: openstack.database.v1._proxy.Proxy.find_flavor
.. automethod:: openstack.database.v1._proxy.Proxy.flavors

Instance Operations
^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.database.v1._proxy.Proxy
.. automethod:: openstack.database.v1._proxy.Proxy.create_instance
.. automethod:: openstack.database.v1._proxy.Proxy.update_instance
.. automethod:: openstack.database.v1._proxy.Proxy.delete_instance
.. automethod:: openstack.database.v1._proxy.Proxy.get_instance
.. automethod:: openstack.database.v1._proxy.Proxy.find_instance
.. automethod:: openstack.database.v1._proxy.Proxy.instances

User Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.database.v1._proxy.Proxy
.. automethod:: openstack.database.v1._proxy.Proxy.create_user
.. automethod:: openstack.database.v1._proxy.Proxy.delete_user
.. automethod:: openstack.database.v1._proxy.Proxy.get_user
.. automethod:: openstack.database.v1._proxy.Proxy.find_user
.. automethod:: openstack.database.v1._proxy.Proxy.users

openstacksdk-0.11.3/doc/source/user/proxies/identity_v2.rst

Identity API v2
===============

For details on how to use identity, see :doc:`/user/guides/identity`

.. automodule:: openstack.identity.v2._proxy

The Identity v2 Class
---------------------

The identity high-level interface is available through the ``identity`` member of a :class:`~openstack.connection.Connection` object. The ``identity`` member will only be added if the service is detected.

Extension Operations
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.identity.v2._proxy.Proxy
.. automethod:: openstack.identity.v2._proxy.Proxy.get_extension
.. automethod:: openstack.identity.v2._proxy.Proxy.extensions

User Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.identity.v2._proxy.Proxy
.. automethod:: openstack.identity.v2._proxy.Proxy.create_user
.. automethod:: openstack.identity.v2._proxy.Proxy.update_user
.. automethod:: openstack.identity.v2._proxy.Proxy.delete_user
.. automethod:: openstack.identity.v2._proxy.Proxy.get_user
.. automethod:: openstack.identity.v2._proxy.Proxy.find_user
.. automethod:: openstack.identity.v2._proxy.Proxy.users

Role Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.identity.v2._proxy.Proxy
.. automethod:: openstack.identity.v2._proxy.Proxy.create_role
.. automethod:: openstack.identity.v2._proxy.Proxy.update_role
.. automethod:: openstack.identity.v2._proxy.Proxy.delete_role
.. automethod:: openstack.identity.v2._proxy.Proxy.get_role
.. automethod:: openstack.identity.v2._proxy.Proxy.find_role
.. automethod:: openstack.identity.v2._proxy.Proxy.roles

Tenant Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.identity.v2._proxy.Proxy
.. automethod:: openstack.identity.v2._proxy.Proxy.create_tenant
.. automethod:: openstack.identity.v2._proxy.Proxy.update_tenant
.. automethod:: openstack.identity.v2._proxy.Proxy.delete_tenant
.. automethod:: openstack.identity.v2._proxy.Proxy.get_tenant
.. automethod:: openstack.identity.v2._proxy.Proxy.find_tenant
.. automethod:: openstack.identity.v2._proxy.Proxy.tenants

openstacksdk-0.11.3/doc/source/user/proxies/orchestration.rst

Orchestration API
=================

For details on how to use orchestration, see :doc:`/user/guides/orchestration`

.. automodule:: openstack.orchestration.v1._proxy

The Orchestration Class
-----------------------

The orchestration high-level interface is available through the ``orchestration`` member of a :class:`~openstack.connection.Connection` object. The ``orchestration`` member will only be added if the service is detected.

Stack Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.orchestration.v1._proxy.Proxy
.. automethod:: openstack.orchestration.v1._proxy.Proxy.create_stack
.. automethod:: openstack.orchestration.v1._proxy.Proxy.check_stack
.. automethod:: openstack.orchestration.v1._proxy.Proxy.update_stack
.. automethod:: openstack.orchestration.v1._proxy.Proxy.delete_stack
.. automethod:: openstack.orchestration.v1._proxy.Proxy.find_stack
.. automethod:: openstack.orchestration.v1._proxy.Proxy.get_stack
.. automethod:: openstack.orchestration.v1._proxy.Proxy.get_stack_environment
.. automethod:: openstack.orchestration.v1._proxy.Proxy.get_stack_files
.. automethod:: openstack.orchestration.v1._proxy.Proxy.get_stack_template
.. automethod:: openstack.orchestration.v1._proxy.Proxy.stacks
.. automethod:: openstack.orchestration.v1._proxy.Proxy.validate_template
.. automethod:: openstack.orchestration.v1._proxy.Proxy.resources

Software Configuration Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.orchestration.v1._proxy.Proxy
.. automethod:: openstack.orchestration.v1._proxy.Proxy.create_software_config
.. automethod:: openstack.orchestration.v1._proxy.Proxy.delete_software_config
.. automethod:: openstack.orchestration.v1._proxy.Proxy.get_software_config
.. automethod:: openstack.orchestration.v1._proxy.Proxy.software_configs

Software Deployment Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.orchestration.v1._proxy.Proxy
.. automethod:: openstack.orchestration.v1._proxy.Proxy.create_software_deployment
.. automethod:: openstack.orchestration.v1._proxy.Proxy.update_software_deployment
.. automethod:: openstack.orchestration.v1._proxy.Proxy.delete_software_deployment
.. automethod:: openstack.orchestration.v1._proxy.Proxy.get_software_deployment
.. automethod:: openstack.orchestration.v1._proxy.Proxy.software_deployments

openstacksdk-0.11.3/doc/source/user/proxies/block_storage.rst

Block Storage API
=================

For details on how to use block_storage, see :doc:`/user/guides/block_storage`

.. automodule:: openstack.block_storage.v2._proxy

The BlockStorage Class
----------------------

The block_storage high-level interface is available through the ``block_storage`` member of a :class:`~openstack.connection.Connection` object. The ``block_storage`` member will only be added if the service is detected.

Volume Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.block_storage.v2._proxy.Proxy
.. automethod:: openstack.block_storage.v2._proxy.Proxy.create_volume
.. automethod:: openstack.block_storage.v2._proxy.Proxy.delete_volume
.. automethod:: openstack.block_storage.v2._proxy.Proxy.get_volume
.. automethod:: openstack.block_storage.v2._proxy.Proxy.volumes

Type Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.block_storage.v2._proxy.Proxy
.. automethod:: openstack.block_storage.v2._proxy.Proxy.create_type
.. automethod:: openstack.block_storage.v2._proxy.Proxy.delete_type
.. automethod:: openstack.block_storage.v2._proxy.Proxy.get_type
.. automethod:: openstack.block_storage.v2._proxy.Proxy.types

Snapshot Operations
^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.block_storage.v2._proxy.Proxy
.. automethod:: openstack.block_storage.v2._proxy.Proxy.create_snapshot
.. automethod:: openstack.block_storage.v2._proxy.Proxy.delete_snapshot
.. automethod:: openstack.block_storage.v2._proxy.Proxy.get_snapshot
.. automethod:: openstack.block_storage.v2._proxy.Proxy.snapshots

Stats Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.block_storage.v2._proxy.Proxy
.. automethod:: openstack.block_storage.v2._proxy.Proxy.backend_pools

openstacksdk-0.11.3/doc/source/user/proxies/compute.rst

Compute API
===========

For details on how to use compute, see :doc:`/user/guides/compute`

.. automodule:: openstack.compute.v2._proxy

The Compute Class
-----------------

The compute high-level interface is available through the ``compute`` member of a :class:`~openstack.connection.Connection` object. The ``compute`` member will only be added if the service is detected.

Server Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.create_server
.. automethod:: openstack.compute.v2._proxy.Proxy.update_server
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_server
.. automethod:: openstack.compute.v2._proxy.Proxy.get_server
.. automethod:: openstack.compute.v2._proxy.Proxy.find_server
.. automethod:: openstack.compute.v2._proxy.Proxy.servers
.. automethod:: openstack.compute.v2._proxy.Proxy.get_server_metadata
.. automethod:: openstack.compute.v2._proxy.Proxy.set_server_metadata
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_server_metadata
.. automethod:: openstack.compute.v2._proxy.Proxy.wait_for_server
.. automethod:: openstack.compute.v2._proxy.Proxy.create_server_image
.. automethod:: openstack.compute.v2._proxy.Proxy.backup_server

Network Actions
***************

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.add_fixed_ip_to_server
.. automethod:: openstack.compute.v2._proxy.Proxy.remove_fixed_ip_from_server
.. automethod:: openstack.compute.v2._proxy.Proxy.add_floating_ip_to_server
.. automethod:: openstack.compute.v2._proxy.Proxy.remove_floating_ip_from_server
.. automethod:: openstack.compute.v2._proxy.Proxy.add_security_group_to_server
.. automethod:: openstack.compute.v2._proxy.Proxy.remove_security_group_from_server

Starting, Stopping, etc.
************************

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.start_server
.. automethod:: openstack.compute.v2._proxy.Proxy.stop_server
.. automethod:: openstack.compute.v2._proxy.Proxy.suspend_server
.. automethod:: openstack.compute.v2._proxy.Proxy.resume_server
.. automethod:: openstack.compute.v2._proxy.Proxy.reboot_server
.. automethod:: openstack.compute.v2._proxy.Proxy.shelve_server
.. automethod:: openstack.compute.v2._proxy.Proxy.unshelve_server
.. automethod:: openstack.compute.v2._proxy.Proxy.lock_server
.. automethod:: openstack.compute.v2._proxy.Proxy.unlock_server
.. automethod:: openstack.compute.v2._proxy.Proxy.pause_server
.. automethod:: openstack.compute.v2._proxy.Proxy.unpause_server
.. automethod:: openstack.compute.v2._proxy.Proxy.rescue_server
.. automethod:: openstack.compute.v2._proxy.Proxy.unrescue_server
.. automethod:: openstack.compute.v2._proxy.Proxy.evacuate_server
.. automethod:: openstack.compute.v2._proxy.Proxy.migrate_server
.. automethod:: openstack.compute.v2._proxy.Proxy.get_server_console_output
.. automethod:: openstack.compute.v2._proxy.Proxy.live_migrate_server

Modifying a Server
******************

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.resize_server
.. automethod:: openstack.compute.v2._proxy.Proxy.confirm_server_resize
.. automethod:: openstack.compute.v2._proxy.Proxy.revert_server_resize
.. automethod:: openstack.compute.v2._proxy.Proxy.rebuild_server
.. automethod:: openstack.compute.v2._proxy.Proxy.reset_server_state
.. automethod:: openstack.compute.v2._proxy.Proxy.change_server_password
.. automethod:: openstack.compute.v2._proxy.Proxy.get_server_password

Image Operations
^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.images
.. automethod:: openstack.compute.v2._proxy.Proxy.get_image
.. automethod:: openstack.compute.v2._proxy.Proxy.find_image
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_image
.. automethod:: openstack.compute.v2._proxy.Proxy.get_image_metadata
.. automethod:: openstack.compute.v2._proxy.Proxy.set_image_metadata
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_image_metadata

Flavor Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.create_flavor
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_flavor
.. automethod:: openstack.compute.v2._proxy.Proxy.get_flavor
.. automethod:: openstack.compute.v2._proxy.Proxy.find_flavor
.. automethod:: openstack.compute.v2._proxy.Proxy.flavors

Service Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.services
.. automethod:: openstack.compute.v2._proxy.Proxy.enable_service
.. automethod:: openstack.compute.v2._proxy.Proxy.disable_service
.. automethod:: openstack.compute.v2._proxy.Proxy.force_service_down

Volume Attachment Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.create_volume_attachment
.. automethod:: openstack.compute.v2._proxy.Proxy.update_volume_attachment
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_volume_attachment
.. automethod:: openstack.compute.v2._proxy.Proxy.get_volume_attachment
.. automethod:: openstack.compute.v2._proxy.Proxy.volume_attachments

Keypair Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.create_keypair
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_keypair
.. automethod:: openstack.compute.v2._proxy.Proxy.get_keypair
.. automethod:: openstack.compute.v2._proxy.Proxy.find_keypair
.. automethod:: openstack.compute.v2._proxy.Proxy.keypairs

Server IPs
^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.server_ips

Server Group Operations
^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.create_server_group
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_server_group
.. automethod:: openstack.compute.v2._proxy.Proxy.get_server_group
.. automethod:: openstack.compute.v2._proxy.Proxy.find_server_group
.. automethod:: openstack.compute.v2._proxy.Proxy.server_groups

Server Interface Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.create_server_interface
.. automethod:: openstack.compute.v2._proxy.Proxy.delete_server_interface
.. automethod:: openstack.compute.v2._proxy.Proxy.get_server_interface
.. automethod:: openstack.compute.v2._proxy.Proxy.server_interfaces

Availability Zone Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.availability_zones

Limits Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.get_limits

Hypervisor Operations
^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.get_hypervisor
.. automethod:: openstack.compute.v2._proxy.Proxy.find_hypervisor
.. automethod:: openstack.compute.v2._proxy.Proxy.hypervisors

Extension Operations
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.compute.v2._proxy.Proxy
.. automethod:: openstack.compute.v2._proxy.Proxy.find_extension
.. automethod:: openstack.compute.v2._proxy.Proxy.extensions

openstacksdk-0.11.3/doc/source/user/proxies/network.rst

Network API
===========

For details on how to use network, see :doc:`/user/guides/network`

.. automodule:: openstack.network.v2._proxy

The Network Class
-----------------

The network high-level interface is available through the ``network`` member of a :class:`~openstack.connection.Connection` object. The ``network`` member will only be added if the service is detected.

Network Operations
^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.create_network
.. automethod:: openstack.network.v2._proxy.Proxy.update_network
.. automethod:: openstack.network.v2._proxy.Proxy.delete_network
.. automethod:: openstack.network.v2._proxy.Proxy.get_network
.. automethod:: openstack.network.v2._proxy.Proxy.find_network
.. automethod:: openstack.network.v2._proxy.Proxy.networks
.. automethod:: openstack.network.v2._proxy.Proxy.get_network_ip_availability
.. automethod:: openstack.network.v2._proxy.Proxy.find_network_ip_availability
.. automethod:: openstack.network.v2._proxy.Proxy.network_ip_availabilities
.. automethod:: openstack.network.v2._proxy.Proxy.add_dhcp_agent_to_network
.. automethod:: openstack.network.v2._proxy.Proxy.remove_dhcp_agent_from_network
.. automethod:: openstack.network.v2._proxy.Proxy.dhcp_agent_hosting_networks

Port Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.create_port
.. automethod:: openstack.network.v2._proxy.Proxy.update_port
.. automethod:: openstack.network.v2._proxy.Proxy.delete_port
.. automethod:: openstack.network.v2._proxy.Proxy.get_port
.. automethod:: openstack.network.v2._proxy.Proxy.find_port
.. automethod:: openstack.network.v2._proxy.Proxy.ports
.. automethod:: openstack.network.v2._proxy.Proxy.add_ip_to_port
.. automethod:: openstack.network.v2._proxy.Proxy.remove_ip_from_port

Router Operations
^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.create_router
.. automethod:: openstack.network.v2._proxy.Proxy.update_router
.. automethod:: openstack.network.v2._proxy.Proxy.delete_router
.. automethod:: openstack.network.v2._proxy.Proxy.get_router
.. automethod:: openstack.network.v2._proxy.Proxy.find_router
.. automethod:: openstack.network.v2._proxy.Proxy.routers
.. automethod:: openstack.network.v2._proxy.Proxy.add_gateway_to_router
.. automethod:: openstack.network.v2._proxy.Proxy.remove_gateway_from_router
.. automethod:: openstack.network.v2._proxy.Proxy.add_interface_to_router
.. automethod:: openstack.network.v2._proxy.Proxy.remove_interface_from_router

Floating IP Operations
^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.create_ip
.. automethod:: openstack.network.v2._proxy.Proxy.update_ip
.. automethod:: openstack.network.v2._proxy.Proxy.delete_ip
.. automethod:: openstack.network.v2._proxy.Proxy.get_ip
.. automethod:: openstack.network.v2._proxy.Proxy.find_ip
.. automethod:: openstack.network.v2._proxy.Proxy.find_available_ip
.. automethod:: openstack.network.v2._proxy.Proxy.ips

Pool Operations
^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.create_pool
.. automethod:: openstack.network.v2._proxy.Proxy.update_pool
.. automethod:: openstack.network.v2._proxy.Proxy.delete_pool
.. automethod:: openstack.network.v2._proxy.Proxy.get_pool
.. automethod:: openstack.network.v2._proxy.Proxy.find_pool
.. automethod:: openstack.network.v2._proxy.Proxy.pools
.. automethod:: openstack.network.v2._proxy.Proxy.create_pool_member
.. automethod:: openstack.network.v2._proxy.Proxy.update_pool_member
.. automethod:: openstack.network.v2._proxy.Proxy.delete_pool_member
.. automethod:: openstack.network.v2._proxy.Proxy.get_pool_member
.. automethod:: openstack.network.v2._proxy.Proxy.find_pool_member
.. automethod:: openstack.network.v2._proxy.Proxy.pool_members

Auto Allocated Topology Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.delete_auto_allocated_topology
.. automethod:: openstack.network.v2._proxy.Proxy.get_auto_allocated_topology
.. automethod:: openstack.network.v2._proxy.Proxy.validate_auto_allocated_topology

Security Group Operations
^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
.. automethod:: openstack.network.v2._proxy.Proxy.create_security_group
.. automethod:: openstack.network.v2._proxy.Proxy.update_security_group
.. automethod:: openstack.network.v2._proxy.Proxy.delete_security_group
.. automethod:: openstack.network.v2._proxy.Proxy.get_security_group
.. automethod:: openstack.network.v2._proxy.Proxy.get_security_group_rule
.. automethod:: openstack.network.v2._proxy.Proxy.find_security_group
.. automethod:: openstack.network.v2._proxy.Proxy.find_security_group_rule
.. automethod:: openstack.network.v2._proxy.Proxy.security_group_rules
.. automethod:: openstack.network.v2._proxy.Proxy.security_groups
.. automethod:: openstack.network.v2._proxy.Proxy.security_group_allow_ping
.. automethod:: openstack.network.v2._proxy.Proxy.security_group_open_port
.. automethod:: openstack.network.v2._proxy.Proxy.create_security_group_rule
.. automethod:: openstack.network.v2._proxy.Proxy.delete_security_group_rule

Availability Zone Operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: openstack.network.v2._proxy.Proxy
..
automethod:: openstack.network.v2._proxy.Proxy.availability_zones Address Scope Operations ^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_address_scope .. automethod:: openstack.network.v2._proxy.Proxy.update_address_scope .. automethod:: openstack.network.v2._proxy.Proxy.delete_address_scope .. automethod:: openstack.network.v2._proxy.Proxy.get_address_scope .. automethod:: openstack.network.v2._proxy.Proxy.find_address_scope .. automethod:: openstack.network.v2._proxy.Proxy.address_scopes Quota Operations ^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.update_quota .. automethod:: openstack.network.v2._proxy.Proxy.delete_quota .. automethod:: openstack.network.v2._proxy.Proxy.get_quota .. automethod:: openstack.network.v2._proxy.Proxy.get_quota_default .. automethod:: openstack.network.v2._proxy.Proxy.quotas QoS Operations ^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_qos_policy .. automethod:: openstack.network.v2._proxy.Proxy.update_qos_policy .. automethod:: openstack.network.v2._proxy.Proxy.delete_qos_policy .. automethod:: openstack.network.v2._proxy.Proxy.get_qos_policy .. automethod:: openstack.network.v2._proxy.Proxy.find_qos_policy .. automethod:: openstack.network.v2._proxy.Proxy.qos_policies .. automethod:: openstack.network.v2._proxy.Proxy.get_qos_rule_type .. automethod:: openstack.network.v2._proxy.Proxy.find_qos_rule_type .. automethod:: openstack.network.v2._proxy.Proxy.qos_rule_types .. automethod:: openstack.network.v2._proxy.Proxy.create_qos_minimum_bandwidth_rule .. automethod:: openstack.network.v2._proxy.Proxy.update_qos_minimum_bandwidth_rule .. automethod:: openstack.network.v2._proxy.Proxy.delete_qos_minimum_bandwidth_rule .. automethod:: openstack.network.v2._proxy.Proxy.get_qos_minimum_bandwidth_rule .. 
automethod:: openstack.network.v2._proxy.Proxy.find_qos_minimum_bandwidth_rule .. automethod:: openstack.network.v2._proxy.Proxy.qos_minimum_bandwidth_rules .. automethod:: openstack.network.v2._proxy.Proxy.create_qos_bandwidth_limit_rule .. automethod:: openstack.network.v2._proxy.Proxy.update_qos_bandwidth_limit_rule .. automethod:: openstack.network.v2._proxy.Proxy.delete_qos_bandwidth_limit_rule .. automethod:: openstack.network.v2._proxy.Proxy.get_qos_bandwidth_limit_rule .. automethod:: openstack.network.v2._proxy.Proxy.find_qos_bandwidth_limit_rule .. automethod:: openstack.network.v2._proxy.Proxy.qos_bandwidth_limit_rules .. automethod:: openstack.network.v2._proxy.Proxy.create_qos_dscp_marking_rule .. automethod:: openstack.network.v2._proxy.Proxy.update_qos_dscp_marking_rule .. automethod:: openstack.network.v2._proxy.Proxy.delete_qos_dscp_marking_rule .. automethod:: openstack.network.v2._proxy.Proxy.get_qos_dscp_marking_rule .. automethod:: openstack.network.v2._proxy.Proxy.find_qos_dscp_marking_rule .. automethod:: openstack.network.v2._proxy.Proxy.qos_dscp_marking_rules Agent Operations ^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.delete_agent .. automethod:: openstack.network.v2._proxy.Proxy.update_agent .. automethod:: openstack.network.v2._proxy.Proxy.get_agent .. automethod:: openstack.network.v2._proxy.Proxy.agents .. automethod:: openstack.network.v2._proxy.Proxy.agent_hosted_routers .. automethod:: openstack.network.v2._proxy.Proxy.routers_hosting_l3_agents .. automethod:: openstack.network.v2._proxy.Proxy.network_hosting_dhcp_agents .. automethod:: openstack.network.v2._proxy.Proxy.add_router_to_agent .. automethod:: openstack.network.v2._proxy.Proxy.remove_router_from_agent RBAC Operations ^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_rbac_policy .. 
automethod:: openstack.network.v2._proxy.Proxy.update_rbac_policy .. automethod:: openstack.network.v2._proxy.Proxy.delete_rbac_policy .. automethod:: openstack.network.v2._proxy.Proxy.get_rbac_policy .. automethod:: openstack.network.v2._proxy.Proxy.find_rbac_policy .. automethod:: openstack.network.v2._proxy.Proxy.rbac_policies Listener Operations ^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_listener .. automethod:: openstack.network.v2._proxy.Proxy.update_listener .. automethod:: openstack.network.v2._proxy.Proxy.delete_listener .. automethod:: openstack.network.v2._proxy.Proxy.get_listener .. automethod:: openstack.network.v2._proxy.Proxy.find_listener .. automethod:: openstack.network.v2._proxy.Proxy.listeners Subnet Operations ^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_subnet .. automethod:: openstack.network.v2._proxy.Proxy.update_subnet .. automethod:: openstack.network.v2._proxy.Proxy.delete_subnet .. automethod:: openstack.network.v2._proxy.Proxy.get_subnet .. automethod:: openstack.network.v2._proxy.Proxy.get_subnet_ports .. automethod:: openstack.network.v2._proxy.Proxy.find_subnet .. automethod:: openstack.network.v2._proxy.Proxy.subnets .. automethod:: openstack.network.v2._proxy.Proxy.create_subnet_pool .. automethod:: openstack.network.v2._proxy.Proxy.update_subnet_pool .. automethod:: openstack.network.v2._proxy.Proxy.delete_subnet_pool .. automethod:: openstack.network.v2._proxy.Proxy.get_subnet_pool .. automethod:: openstack.network.v2._proxy.Proxy.find_subnet_pool .. automethod:: openstack.network.v2._proxy.Proxy.subnet_pools Load Balancer Operations ^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_load_balancer .. automethod:: openstack.network.v2._proxy.Proxy.update_load_balancer .. 
automethod:: openstack.network.v2._proxy.Proxy.delete_load_balancer .. automethod:: openstack.network.v2._proxy.Proxy.get_load_balancer .. automethod:: openstack.network.v2._proxy.Proxy.find_load_balancer .. automethod:: openstack.network.v2._proxy.Proxy.load_balancers Health Monitor Operations ^^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_health_monitor .. automethod:: openstack.network.v2._proxy.Proxy.update_health_monitor .. automethod:: openstack.network.v2._proxy.Proxy.delete_health_monitor .. automethod:: openstack.network.v2._proxy.Proxy.get_health_monitor .. automethod:: openstack.network.v2._proxy.Proxy.find_health_monitor .. automethod:: openstack.network.v2._proxy.Proxy.health_monitors Metering Label Operations ^^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_metering_label .. automethod:: openstack.network.v2._proxy.Proxy.update_metering_label .. automethod:: openstack.network.v2._proxy.Proxy.delete_metering_label .. automethod:: openstack.network.v2._proxy.Proxy.get_metering_label .. automethod:: openstack.network.v2._proxy.Proxy.find_metering_label .. automethod:: openstack.network.v2._proxy.Proxy.metering_labels .. automethod:: openstack.network.v2._proxy.Proxy.create_metering_label_rule .. automethod:: openstack.network.v2._proxy.Proxy.update_metering_label_rule .. automethod:: openstack.network.v2._proxy.Proxy.delete_metering_label_rule .. automethod:: openstack.network.v2._proxy.Proxy.get_metering_label_rule .. automethod:: openstack.network.v2._proxy.Proxy.find_metering_label_rule .. automethod:: openstack.network.v2._proxy.Proxy.metering_label_rules Segment Operations ^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_segment .. automethod:: openstack.network.v2._proxy.Proxy.update_segment .. 
automethod:: openstack.network.v2._proxy.Proxy.delete_segment .. automethod:: openstack.network.v2._proxy.Proxy.get_segment .. automethod:: openstack.network.v2._proxy.Proxy.find_segment .. automethod:: openstack.network.v2._proxy.Proxy.segments Flavor Operations ^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_flavor .. automethod:: openstack.network.v2._proxy.Proxy.update_flavor .. automethod:: openstack.network.v2._proxy.Proxy.delete_flavor .. automethod:: openstack.network.v2._proxy.Proxy.get_flavor .. automethod:: openstack.network.v2._proxy.Proxy.find_flavor .. automethod:: openstack.network.v2._proxy.Proxy.flavors Service Profile Operations ^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_service_profile .. automethod:: openstack.network.v2._proxy.Proxy.update_service_profile .. automethod:: openstack.network.v2._proxy.Proxy.delete_service_profile .. automethod:: openstack.network.v2._proxy.Proxy.get_service_profile .. automethod:: openstack.network.v2._proxy.Proxy.find_service_profile .. automethod:: openstack.network.v2._proxy.Proxy.service_profiles .. automethod:: openstack.network.v2._proxy.Proxy.associate_flavor_with_service_profile .. automethod:: openstack.network.v2._proxy.Proxy.disassociate_flavor_from_service_profile Tag Operations ^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.set_tags VPN Operations ^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.create_vpn_service .. automethod:: openstack.network.v2._proxy.Proxy.update_vpn_service .. automethod:: openstack.network.v2._proxy.Proxy.delete_vpn_service .. automethod:: openstack.network.v2._proxy.Proxy.get_vpn_service .. automethod:: openstack.network.v2._proxy.Proxy.find_vpn_service .. 
automethod:: openstack.network.v2._proxy.Proxy.vpn_services Extension Operations ^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.find_extension .. automethod:: openstack.network.v2._proxy.Proxy.extensions Service Provider Operations ^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.network.v2._proxy.Proxy .. automethod:: openstack.network.v2._proxy.Proxy.service_providers openstacksdk-0.11.3/doc/source/user/proxies/identity_v3.rst0000666000175100017510000001332513236151340024024 0ustar zuulzuul00000000000000Identity API v3 =============== For details on how to use identity, see :doc:`/user/guides/identity` .. automodule:: openstack.identity.v3._proxy The Identity v3 Class --------------------- The identity high-level interface is available through the ``identity`` member of a :class:`~openstack.connection.Connection` object. The ``identity`` member will only be added if the service is detected. Credential Operations ^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_credential .. automethod:: openstack.identity.v3._proxy.Proxy.update_credential .. automethod:: openstack.identity.v3._proxy.Proxy.delete_credential .. automethod:: openstack.identity.v3._proxy.Proxy.get_credential .. automethod:: openstack.identity.v3._proxy.Proxy.find_credential .. automethod:: openstack.identity.v3._proxy.Proxy.credentials Domain Operations ^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_domain .. automethod:: openstack.identity.v3._proxy.Proxy.update_domain .. automethod:: openstack.identity.v3._proxy.Proxy.delete_domain .. automethod:: openstack.identity.v3._proxy.Proxy.get_domain .. automethod:: openstack.identity.v3._proxy.Proxy.find_domain .. automethod:: openstack.identity.v3._proxy.Proxy.domains Endpoint Operations ^^^^^^^^^^^^^^^^^^^ .. 
autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_endpoint .. automethod:: openstack.identity.v3._proxy.Proxy.update_endpoint .. automethod:: openstack.identity.v3._proxy.Proxy.delete_endpoint .. automethod:: openstack.identity.v3._proxy.Proxy.get_endpoint .. automethod:: openstack.identity.v3._proxy.Proxy.find_endpoint .. automethod:: openstack.identity.v3._proxy.Proxy.endpoints Group Operations ^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_group .. automethod:: openstack.identity.v3._proxy.Proxy.update_group .. automethod:: openstack.identity.v3._proxy.Proxy.delete_group .. automethod:: openstack.identity.v3._proxy.Proxy.get_group .. automethod:: openstack.identity.v3._proxy.Proxy.find_group .. automethod:: openstack.identity.v3._proxy.Proxy.groups Policy Operations ^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_policy .. automethod:: openstack.identity.v3._proxy.Proxy.update_policy .. automethod:: openstack.identity.v3._proxy.Proxy.delete_policy .. automethod:: openstack.identity.v3._proxy.Proxy.get_policy .. automethod:: openstack.identity.v3._proxy.Proxy.find_policy .. automethod:: openstack.identity.v3._proxy.Proxy.policies Project Operations ^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_project .. automethod:: openstack.identity.v3._proxy.Proxy.update_project .. automethod:: openstack.identity.v3._proxy.Proxy.delete_project .. automethod:: openstack.identity.v3._proxy.Proxy.get_project .. automethod:: openstack.identity.v3._proxy.Proxy.find_project .. automethod:: openstack.identity.v3._proxy.Proxy.projects Region Operations ^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_region .. 
automethod:: openstack.identity.v3._proxy.Proxy.update_region .. automethod:: openstack.identity.v3._proxy.Proxy.delete_region .. automethod:: openstack.identity.v3._proxy.Proxy.get_region .. automethod:: openstack.identity.v3._proxy.Proxy.find_region .. automethod:: openstack.identity.v3._proxy.Proxy.regions Role Operations ^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_role .. automethod:: openstack.identity.v3._proxy.Proxy.update_role .. automethod:: openstack.identity.v3._proxy.Proxy.delete_role .. automethod:: openstack.identity.v3._proxy.Proxy.get_role .. automethod:: openstack.identity.v3._proxy.Proxy.find_role .. automethod:: openstack.identity.v3._proxy.Proxy.roles .. automethod:: openstack.identity.v3._proxy.Proxy.role_assignments .. automethod:: openstack.identity.v3._proxy.Proxy.role_assignments_filter Service Operations ^^^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_service .. automethod:: openstack.identity.v3._proxy.Proxy.update_service .. automethod:: openstack.identity.v3._proxy.Proxy.delete_service .. automethod:: openstack.identity.v3._proxy.Proxy.get_service .. automethod:: openstack.identity.v3._proxy.Proxy.find_service .. automethod:: openstack.identity.v3._proxy.Proxy.services Trust Operations ^^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_trust .. automethod:: openstack.identity.v3._proxy.Proxy.delete_trust .. automethod:: openstack.identity.v3._proxy.Proxy.get_trust .. automethod:: openstack.identity.v3._proxy.Proxy.find_trust .. automethod:: openstack.identity.v3._proxy.Proxy.trusts User Operations ^^^^^^^^^^^^^^^ .. autoclass:: openstack.identity.v3._proxy.Proxy .. automethod:: openstack.identity.v3._proxy.Proxy.create_user .. automethod:: openstack.identity.v3._proxy.Proxy.update_user .. 
automethod:: openstack.identity.v3._proxy.Proxy.delete_user .. automethod:: openstack.identity.v3._proxy.Proxy.get_user .. automethod:: openstack.identity.v3._proxy.Proxy.find_user .. automethod:: openstack.identity.v3._proxy.Proxy.users openstacksdk-0.11.3/doc/source/glossary.rst0000666000175100017510000000712713236151340020762 0ustar zuulzuul00000000000000:orphan: Glossary ======== .. glossary:: :sorted: CLI Command-Line Interface; a textual user interface. compute OpenStack Compute (Nova). container One of the :term:`object-store` resources; a container holds :term:`objects ` being stored. endpoint A base URL used in a REST request. An `authentication endpoint` is specifically the URL given to a user to identify a cloud. A service endpoint is generally obtained from the service catalog. host A physical computer. Contrast with :term:`node` and :term:`server`. identity OpenStack Identity (Keystone). image OpenStack Image (Glance). Also the attribute name of the disk files stored for use by servers. keypair The attribute name of the SSH public key used in the OpenStack Compute API for server authentication. node A logical system, may refer to a :term:`server` (virtual machine) or a :term:`host`. Generally used to describe an OS instance where a specific process is running, e.g. a 'network node' is where the network processes run, and may be directly on a host or in a server. Contrast with :term:`host` and :term:`server`. object A generic term which normally refers to the a Python ``object``. The OpenStack Object Store service (Swift) also uses `object` as the name of the item being stored within a :term:`container`. object-store OpenStack Object Store (Swift). project The name of the owner of resources in an OpenStack cloud. A `project` can map to a customer, account or organization in different OpenStack deployments. Used instead of the deprecated :term:`tenant`. region The attribute name of a partitioning of cloud resources. 
   resource
      A Python object representing an OpenStack resource inside the SDK
      code. Also used to describe the items managed by OpenStack.

   role
      A personality that a user assumes when performing a specific set of
      operations. A `role` includes a set of rights and privileges that a
      user assuming that role inherits. The OpenStack Identity service
      includes the set of roles that a user can assume in the
      :term:`token` that is issued to that user. The individual services
      determine how the roles are interpreted and access granted to
      operations or resources. The OpenStack Identity service treats a
      role as an arbitrary name assigned by the cloud administrator.

   server
      A virtual machine or a bare-metal host managed by the OpenStack
      Compute service. Contrast with :term:`host` and :term:`node`.

   service
      In OpenStack this refers to a service/endpoint in the
      :term:`ServiceCatalog <service catalog>`. It could also be a
      collection of endpoints for different :term:`regions <region>`. A
      service has a type and a name.

   service catalog
      The list of :term:`services <service>` configured at a given
      authentication endpoint available to the authenticated user.

   tenant
      Deprecated in favor of :term:`project`.

   token
      An arbitrary bit of text that is used to access resources. Some
      tokens are `scoped` to determine what resources are accessible with
      it. A token may be revoked at any time and is valid for a finite
      duration.

   volume
      OpenStack Volume (Cinder). Also the attribute name of the virtual
      disks managed by the OpenStack Volume service.

openstacksdk-0.11.3/doc/source/conf.py

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys
import warnings

import openstackdocstheme

sys.path.insert(0, os.path.abspath('../..'))
sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'openstackdocstheme',
    'enforcer'
]

# openstackdocstheme options
repository_name = 'openstack/python-openstacksdk'
bug_project = '760'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'
html_theme = 'openstackdocs'

# TODO(shade) Set this to true once the build-openstack-sphinx-docs job is
# updated to use sphinx-build.
# When True, this will raise an exception that kills sphinx-build.
enforcer_warnings_as_errors = False

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'python-openstacksdk'
copyright = u'2017, Various members of the OpenStack Foundation'

# A few variables have to be set for the log-a-bug feature.
#   gitsha: The SHA checksum of the bug description. Extracted from git log.
#   bug_tag: Tag for categorizing the bug. Must be set manually.
#   bug_project: Launchpad project to file bugs against.
# These variables are passed to the logabug code via html_context.
git_cmd = "/usr/bin/git log | head -n1 | cut -f2 -d' '"
try:
    gitsha = os.popen(git_cmd).read().strip('\n')
except Exception:
    warnings.warn("Can not get git sha.")
    gitsha = "unknown"

bug_tag = "docs"
pwd = os.getcwd()
# html_context allows us to pass arbitrary values into the html template
html_context = {"pwd": pwd,
                "gitsha": gitsha,
                "bug_tag": bug_tag,
                "bug_project": "python-openstacksdk"}

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

autodoc_member_order = "bysource"

# Locations to exclude when looking for source files.
exclude_patterns = []

# -- Options for HTML output ----------------------------------------------

# Don't let openstackdocstheme insert TOCs automatically.
theme_include_auto_toc = False

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]

# Include both the class and __init__ docstrings when describing the class
autoclass_content = "both"

openstacksdk-0.11.3/doc/source/install/index.rst

============
Installation
============

At the command line::

    $ pip install python-openstacksdk

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv python-openstacksdk
    $ pip install python-openstacksdk

openstacksdk-0.11.3/doc/source/releasenotes.rst

=============
Release Notes
=============

Release notes for `python-openstacksdk` can be found at
https://releases.openstack.org/teams/openstacksdk.html

openstacksdk-0.11.3/doc/source/contributor/index.rst

Contributing to the OpenStack SDK
=================================

This section of documentation pertains to those who wish to contribute to
the development of this SDK. If you're looking for documentation on how to
use the SDK to build applications, please see the `user <../users>`_
section.

About the Project
-----------------

The OpenStack SDK is an OpenStack project aimed at providing a complete
software development kit for the programs which make up the OpenStack
community. It is a set of Python-based libraries, documentation, examples,
and tools released under the Apache 2 license.

Contribution Mechanics
----------------------

.. toctree::
   :maxdepth: 2

   contributing

Contacting the Developers
-------------------------

IRC
***

The developers of this project are available in the
`#openstack-sdks `_ channel on Freenode. This channel includes
conversation on SDKs and tools within the general OpenStack community,
including OpenStackClient, as well as occasional talk about SDKs created
for languages outside of Python.

Email
*****

The `openstack-dev `_ mailing list fields questions of all types on
OpenStack. Using the ``[python-openstacksdk]`` filter to begin your email
subject will ensure that the message gets to SDK developers.

Coding Standards
----------------

We are a bit stricter than usual in the coding standards department. It's
a good idea to read through the :doc:`coding ` section.

.. toctree::
   :maxdepth: 2

   coding

Development Environment
-----------------------

The first step towards contributing code and documentation is to set up
your development environment. We use a pretty standard setup, but it is
fully documented in our :doc:`setup ` section.

.. toctree::
   :maxdepth: 2

   setup

Testing
-------

The project contains three test packages, one for unit tests, one for
functional tests and one for examples tests. The ``openstack.tests.unit``
package tests the SDK's features in isolation. The
``openstack.tests.functional`` and ``openstack.tests.examples`` packages
test the SDK's features and examples against an OpenStack cloud.

.. toctree::

   testing

Project Layout
--------------

The project contains a top-level ``openstack`` package, which houses
several modules that form the foundation upon which each service's API is
built. Under the ``openstack`` package are packages for each of those
services, such as ``openstack.compute``.

.. toctree::

   layout

Adding Features
---------------

Does this SDK not do what you need it to do? Is it missing a service? Are
you a developer on another project who wants to add their service? You're
in the right place. Below are examples of how to add new features to the
OpenStack SDK.

.. toctree::
   :maxdepth: 2

   create/resource

.. TODO(briancurtin): document how to create a proxy
.. TODO(briancurtin): document how to create auth plugins

openstacksdk-0.11.3/doc/source/contributor/local.conf

[[local|localrc]]
# Configure passwords and the Swift Hash
MYSQL_PASSWORD=DEVSTACK_PASSWORD
RABBIT_PASSWORD=DEVSTACK_PASSWORD
SERVICE_TOKEN=DEVSTACK_PASSWORD
ADMIN_PASSWORD=DEVSTACK_PASSWORD
SERVICE_PASSWORD=DEVSTACK_PASSWORD
SWIFT_HASH=DEVSTACK_PASSWORD

# Configure the stable OpenStack branches used by DevStack
# For stable branches see
#   http://git.openstack.org/cgit/openstack-dev/devstack/refs/
CINDER_BRANCH=stable/OPENSTACK_VERSION
CEILOMETER_BRANCH=stable/OPENSTACK_VERSION
GLANCE_BRANCH=stable/OPENSTACK_VERSION
HEAT_BRANCH=stable/OPENSTACK_VERSION
HORIZON_BRANCH=stable/OPENSTACK_VERSION
KEYSTONE_BRANCH=stable/OPENSTACK_VERSION
NEUTRON_BRANCH=stable/OPENSTACK_VERSION
NOVA_BRANCH=stable/OPENSTACK_VERSION
SWIFT_BRANCH=stable/OPENSTACK_VERSION
ZAQAR_BRANCH=stable/OPENSTACK_VERSION

# Enable Swift
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Disable Nova Network and enable Neutron
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering

# Enable Zaqar
enable_plugin zaqar https://github.com/openstack/zaqar
enable_service zaqar-server

# Enable Heat
enable_service h-eng
enable_service h-api
enable_service h-api-cfn
enable_service h-api-cw

# Automatically download and register a VM image that Heat can launch
# For more information on Heat and DevStack see
#   https://docs.openstack.org/heat/latest/getting_started/on_devstack.html
IMAGE_URL_SITE="http://download.fedoraproject.org"
IMAGE_URL_PATH="/pub/fedora/linux/releases/25/CloudImages/x86_64/images/"
IMAGE_URL_FILE="Fedora-Cloud-Base-25-1.3.x86_64.qcow2"
IMAGE_URLS+=","$IMAGE_URL_SITE$IMAGE_URL_PATH$IMAGE_URL_FILE

# Logging
LOGDAYS=1
LOGFILE=/opt/stack/logs/stack.sh.log
LOGDIR=/opt/stack/logs
openstacksdk-0.11.3/doc/source/contributor/clouds.yaml0000666000175100017510000000065113236151340023107 0ustar zuulzuul00000000000000clouds:
  test_cloud:
    region_name: RegionOne
    auth:
      auth_url: http://xxx.xxx.xxx.xxx:5000/v2.0/
      username: demo
      password: secrete
      project_name: demo
  example:
    image_name: fedora-20.x86_64
    flavor_name: m1.small
    network_name: private
  rackspace:
    cloud: rackspace
    auth:
      username: joe
      password: joes-password
      project_name: 123123
    region_name: IAD
openstacksdk-0.11.3/doc/source/contributor/layout.txt0000666000175100017510000000033013236151340023002 0ustar zuulzuul00000000000000openstack/
    connection.py
    resource.py
    compute/
        compute_service.py
        v2/
            server.py
            _proxy.py
tests/
    compute/
        v2/
            test_server.py
openstacksdk-0.11.3/doc/source/contributor/setup.rst0000666000175100017510000001130113236151340022616 0ustar zuulzuul00000000000000Creating a Development Environment ================================== Required Tools -------------- Python ****** As the OpenStack SDK is developed in Python, you will need at least one version of Python installed. It is strongly preferred that you have at least one of version 2 and one of version 3 so that your tests are run against both. Our continuous integration system runs against several versions, so ultimately we will have the proper test coverage, but having multiple versions locally results in less time spent in code review when changes unexpectedly break other versions. Python can be downloaded from https://www.python.org/downloads. virtualenv ********** In order to isolate our development environment from the system-based Python installation, we use `virtualenv `_. This allows us to install all of our necessary dependencies without interfering with anything else, and preventing others from interfering with us.
Virtualenv must be installed on your system in order to use it, and it can be had from PyPI, via pip, as follows. Note that you may need to run this as an administrator in some situations.::

    $ apt-get install python-virtualenv  # Debian based platforms
    $ yum install python-virtualenv      # Red Hat based platforms
    $ pip install virtualenv             # Mac OS X and other platforms

You can create a virtualenv in any location. A common usage is to store all of your virtualenvs in the same place, such as under your home directory. To create a virtualenv for the default Python, likely a version 2, run the following::

    $ virtualenv $HOME/envs/sdk

To create an environment for a different version, such as Python 3, run the following::

    $ virtualenv -p python3.4 $HOME/envs/sdk3

When you want to enable your environment so that you can develop inside of it, you *activate* it. To activate an environment, run the /bin/activate script inside of it, like the following::

    $ source $HOME/envs/sdk3/bin/activate
    (sdk3)$

Once you are activated, you will see the environment name in front of your command prompt. In order to exit that environment, run the ``deactivate`` command. tox *** We use `tox `_ as our test runner, which allows us to run the same test commands against multiple versions of Python. Inside any of the virtualenvs you use for working on the SDK, run the following to install ``tox`` into it.::

    (sdk3)$ pip install tox

Git *** The source of the OpenStack SDK is stored in Git. In order to work with our source repository, you must have Git installed on your system. If your system has a package manager, it can likely be had from there. If not, you can find downloads or the source at http://git-scm.com. Getting the Source Code ----------------------- .. TODO(briancurtin): We should try and distill the following document into the minimally necessary parts to include directly in this section.
I've talked to several people who are discouraged by that large of a document to go through before even getting into the project they want to work on. I don't want that to happen to us because we have the potential to be more public facing than a lot of other projects. .. note:: Before checking out the code, please read the OpenStack `Developer's Guide `_ for details on how to use the continuous integration and code review systems that we use. The canonical Git repository is hosted on openstack.org at http://git.openstack.org/cgit/openstack/python-openstacksdk/, with a mirror on GitHub at https://github.com/openstack/python-openstacksdk. Because of how Git works, you can create a local clone from either of those, or your own personal fork.::

    (sdk3)$ git clone https://git.openstack.org/openstack/python-openstacksdk.git
    (sdk3)$ cd python-openstacksdk

Installing Dependencies ----------------------- In order to work with the SDK locally, such as in the interactive interpreter or to run example scripts, you need to install the project's dependencies.::

    (sdk3)$ pip install -r requirements.txt

After the downloads and installs are complete, you'll have a fully functional environment to use the SDK in. Building the Documentation -------------------------- Our documentation is written in reStructured Text and is built using Sphinx. A ``docs`` command is available in our ``tox.ini``, allowing you to build the documentation like you'd run tests. The ``docs`` command is not evaluated by default.::

    (sdk3)$ tox -e docs

That command will cause the documentation, which lives in the ``docs`` folder, to be built. HTML output is the most commonly referenced, which is located in ``docs/build/html``. openstacksdk-0.11.3/doc/source/contributor/create/0000775000175100017510000000000013236151501022170 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/contributor/create/resource.rst0000666000175100017510000001672613236151340024564 0ustar zuulzuul00000000000000..
TODO(shade) Update this guide. Creating a New Resource ======================= This guide will walk you through how to add resources for a service. Naming Conventions ------------------ Above all, names across this project conform to Python's naming standards, as laid out in `PEP 8 `_. The relevant details we need to know are as follows: * Module names are lower case, and separated by underscores if more than one word. For example, ``openstack.object_store`` * Class names are capitalized, with no spacing, and each subsequent word is capitalized in a name. For example, ``ServerMetadata``. * Attributes on classes, including methods, are lower case and separated by underscores. For example, ``allow_list`` or ``get_data``. Services ******** Services in the OpenStack SDK are named after their program name, not their code name. For example, the project often known as "Nova" is always called "compute" within this SDK. This guide walks through creating a service for an OpenStack program called "Fake". Following our guidelines, the code for its service would live under the ``openstack.fake`` namespace. What follows is the creation of a :class:`~openstack.resource.Resource` class for the "Fake" service. Resources ********* Resources are named after the server-side resource, which is set in the ``base_path`` attribute of the resource class. This guide creates a resource class for the ``/fake`` server resource, so the resource module is called ``fake.py`` and the class is called ``Fake``. An Example ---------- ``openstack/fake/fake_service.py``

.. literalinclude:: examples/resource/fake_service.py
   :language: Python
   :linenos:

``openstack/fake/v2/fake.py``

.. literalinclude:: examples/resource/fake.py
   :language: Python
   :linenos:

``fake.Fake`` Attributes ------------------------ Each service's resources inherit from :class:`~openstack.resource.Resource`, so they can override any of the base attributes to fit the way their particular resource operates.
``resource_key`` and ``resources_key`` ************************************** These attributes are set based on how your resource responds with data. The default values for each of these are ``None``, which works fine when your resource returns a JSON body that can be used directly without a top-level key, such as ``{"name": "Ernie Banks", ...}``. However, our ``Fake`` resource returns JSON bodies that have the details of the resource one level deeper, such as ``{"resources": [{"name": "Ernie Banks", ...}, {...}]}``. It does a similar thing with single resources, putting them inside a dictionary keyed on ``"resource"``. By setting ``Fake.resource_key`` on *line 8*, we tell the ``Resource.create``, ``Resource.get``, and ``Resource.update`` methods that we're either sending or receiving a resource that is in a dictionary with that key. By setting ``Fake.resources_key`` on *line 9*, we tell the ``Resource.list`` method that we're expecting to receive multiple resources inside a dictionary with that key. ``base_path`` ************* The ``base_path`` is the URL we're going to use to make requests for this resource. In this case, *line 10* sets ``base_path = "/fake"``, which also corresponds to the name of our class, ``Fake``. Most resources follow this basic formula. Some cases are more complex, where the URL to make requests to has to contain some extra data. The volume service has several resources which make either basic requests or detailed requests, so they use ``base_path = "/volumes/%(detailed)s"``. Before a request is made, if ``detailed = True``, they convert it to a string so the URL becomes ``/volumes/detailed``. If it's ``False``, they only send ``/volumes/``. ``service`` *********** *Line 11* is an instance of the service we're implementing. Each resource ties itself to the service through this setting, so that the proper URL can be constructed.
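The composition of ``base_path`` into a full request URL can be sketched in plain Python. The ``build_url`` helper below is hypothetical — the SDK performs this composition internally using the endpoint from the service catalog — but the ``%(detailed)s``-style placeholder is ordinary Python named string formatting:

```python
def build_url(endpoint, base_path, **path_args):
    """Join a service endpoint with a resource base_path.

    Hypothetical helper for illustration only; openstacksdk does this
    internally when a request is prepared.
    """
    return endpoint.rstrip("/") + (base_path % path_args)

# A static base_path, as used by the Fake resource:
print(build_url("https://cloud.example:8774/v2", "/fake"))
# -> https://cloud.example:8774/v2/fake

# A base_path with a substitution, as in the volume service example:
print(build_url("https://cloud.example:8776/v3",
                "/volumes/%(detailed)s", detailed="detailed"))
# -> https://cloud.example:8776/v3/volumes/detailed
```

The endpoint URLs above are made-up examples; on a real cloud they come from the service catalog.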
In ``fake_service.py``, we specify the valid versions as well as what this service is called in the service catalog. When a request is made for this resource, the Session now knows how to construct the appropriate URL using this ``FakeService`` instance. Supported Operations -------------------- The base :class:`~openstack.resource.Resource` disallows all types of requests by default, requiring each resource to specify which requests they support. On *lines 14-19*, our ``Fake`` resource specifies that it'll work with all of the operations. In order to have the following methods work, you must allow the corresponding value by setting it to ``True``:

+----------------------------------------------+----------------+
| :class:`~openstack.resource.Resource.create` | allow_create   |
+----------------------------------------------+----------------+
| :class:`~openstack.resource.Resource.delete` | allow_delete   |
+----------------------------------------------+----------------+
| :class:`~openstack.resource.Resource.head`   | allow_head     |
+----------------------------------------------+----------------+
| :class:`~openstack.resource.Resource.list`   | allow_list     |
+----------------------------------------------+----------------+
| :class:`~openstack.resource.Resource.get`    | allow_get      |
+----------------------------------------------+----------------+
| :class:`~openstack.resource.Resource.update` | allow_update   |
+----------------------------------------------+----------------+

An additional attribute to set is ``put_update`` if your service uses ``PUT`` requests in order to update a resource. By default, ``PATCH`` requests are used for ``Resource.update``. Properties ---------- .. TODO(shade) Especially this section The way resource classes communicate values between the user and the server is through :class:`~openstack.resource.prop` objects. These act similarly to Python's built-in property objects, but they share only the name - they're not the same.
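To make the analogy concrete, here is a heavily simplified, plain-Python stand-in for a ``prop``-style descriptor. It is illustrative only — the real :class:`~openstack.resource.prop` also handles aliases, headers, and type conversion — but it shows the core idea: a Python attribute name mapped onto the wire name the API expects, with optional type checking:

```python
class prop:
    """Minimal stand-in for openstack.resource.prop (illustration only)."""

    def __init__(self, name, type=None):
        self.name = name  # the exact name the API expects on the wire
        self.type = type  # optional type to validate against

    def __get__(self, obj, owner):
        if obj is None:
            return self
        return obj._attrs.get(self.name)

    def __set__(self, obj, value):
        if self.type is not None and not isinstance(value, self.type):
            raise TypeError("%s must be of type %s"
                            % (self.name, self.type.__name__))
        obj._attrs[self.name] = value


class Fake:
    name = prop("name")
    cool = prop("cool", type=bool)

    def __init__(self):
        self._attrs = {}  # what would become the request body


f = Fake()
f.name = "Ernie Banks"
f.cool = True
print(f._attrs)  # {'name': 'Ernie Banks', 'cool': True}
```

Setting ``f.cool`` to something that is not a ``bool`` raises ``TypeError`` here, whereas the real ``prop`` first tries to convert the value and only then raises.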
Properties are set based on the contents of a response body or headers. Based on what your resource returns, you should set ``prop``\s to map those values to ones on your :class:`~openstack.resource.Resource` object. *Line 22* sets a prop for ``timestamp``, which will cause the ``Fake.timestamp`` attribute to contain the value returned in an ``X-Timestamp`` header, such as from a ``Fake.head`` request. *Line 24* sets a prop for ``name``, which is a value returned in a body, such as from a ``Fake.get`` request. Note from *line 12* that ``name`` is specified as its ``id`` attribute, so when this resource is populated from a response, ``Fake.name`` and ``Fake.id`` are the same value. *Line 26* sets a prop which contains an alias. ``Fake.value`` will be set when a response body contains a ``value``, or when a header contains ``X-Resource-Value``. *Line 28* specifies a type to be checked before sending the value in a request. In this case, we can only set ``Fake.cool`` to either ``True`` or ``False``; otherwise, a ``TypeError`` will be raised if the value can't be converted to the expected type. Documentation ------------- We use Sphinx's ``autodoc`` feature in order to build API documentation for each resource we expose. The attributes we override from :class:`~openstack.resource.Resource` don't need to be documented, but any :class:`~openstack.resource.prop` attributes must be. All you need to do is add a comment *above* the line to document, with a colon following the pound-sign. *Lines 21, 23, 25, and 27-28* are comments which will then appear in the API documentation. As shown in *lines 27 & 28*, these comments can span multiple lines.
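The request/response wrapping that ``resource_key`` and ``resources_key`` control boils down to the following, shown here as hypothetical plain-Python helpers rather than the SDK's actual implementation:

```python
def wrap(attrs, key=None):
    """Wrap a request body the way create/update would send it."""
    return attrs if key is None else {key: attrs}


def unwrap(body, key=None):
    """Pull the resource (or list of resources) out of a response body."""
    return body if key is None else body[key]


# Fake sets resource_key = "resource" and resources_key = "resources":
print(wrap({"name": "Ernie Banks"}, "resource"))
# -> {'resource': {'name': 'Ernie Banks'}}
print(unwrap({"resource": {"name": "Ernie Banks"}}, "resource"))
# -> {'name': 'Ernie Banks'}
print(unwrap({"resources": [{"name": "a"}, {"name": "b"}]}, "resources"))
# -> [{'name': 'a'}, {'name': 'b'}]
# With the default of None, the body is used as-is:
print(unwrap({"name": "Ernie Banks"}))
# -> {'name': 'Ernie Banks'}
```

This is why leaving both keys as ``None`` works for services that return bare JSON bodies, while services that nest their payloads need the keys set.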
openstacksdk-0.11.3/doc/source/contributor/create/examples/0000775000175100017510000000000013236151501024006 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/contributor/create/examples/resource/0000775000175100017510000000000013236151501025635 5ustar zuulzuul00000000000000openstacksdk-0.11.3/doc/source/contributor/create/examples/resource/fake.py0000666000175100017510000000152513236151340027123 0ustar zuulzuul00000000000000# Apache 2 header omitted for brevity

from openstack.fake import fake_service
from openstack import resource


class Fake(resource.Resource):
    resource_key = "resource"
    resources_key = "resources"
    base_path = "/fake"
    service = fake_service.FakeService()

    allow_create = True
    allow_get = True
    allow_update = True
    allow_delete = True
    allow_list = True
    allow_head = True

    #: The transaction date and time.
    timestamp = resource.Header("x-timestamp")
    #: The name of this resource.
    name = resource.Body("name", alternate_id=True)
    #: The value of the resource. Also available in headers.
    value = resource.Body("value", alias="x-resource-value")
    #: Is this resource cool? If so, set it to True.
    #: This is a multi-line comment about cool stuff.
    cool = resource.Body("cool", type=bool)
openstacksdk-0.11.3/doc/source/contributor/create/examples/resource/fake_service.py0000666000175100017510000000056113236151340030642 0ustar zuulzuul00000000000000# Apache 2 header omitted for brevity

from openstack import service_filter


class FakeService(service_filter.ServiceFilter):
    """The fake service."""

    valid_versions = [service_filter.ValidVersion('v2')]

    def __init__(self, version=None):
        """Create a fake service."""
        super(FakeService, self).__init__(service_type='fake',
                                          version=version)
openstacksdk-0.11.3/doc/source/contributor/layout.rst0000666000175100017510000001025413236151340023001 0ustar zuulzuul00000000000000How the SDK is organized ======================== The following diagram shows how the project is laid out. ..
literalinclude:: layout.txt Resource -------- The :class:`openstack.resource.Resource` base class is the building block of any service implementation. ``Resource`` objects correspond to the resources each service's REST API works with, so the :class:`openstack.compute.v2.server.Server` subclass maps to the compute service's ``https://openstack:1234/v2/servers`` resource. The base ``Resource`` contains methods to support the typical `CRUD `_ operations supported by REST APIs, and handles the construction of URLs and calling the appropriate HTTP verb on the given ``Adapter``. Values sent to or returned from the service are implemented as attributes on the ``Resource`` subclass with type :class:`openstack.resource.prop`. The ``prop`` is created with the exact name of what the API expects, and can optionally include a ``type`` to be validated against on requests. You should choose an attribute name that follows PEP-8, regardless of what the server-side expects, as this ``prop`` becomes a mapping between the two.::

    is_public = resource.prop('os-flavor-access:is_public', type=bool)

There are six additional attributes which the ``Resource`` class checks before making requests to the REST API. ``allow_create``, ``allow_retrieve``, ``allow_update``, ``allow_delete``, ``allow_head``, and ``allow_list`` are set to ``True`` or ``False``, and are checked before making the corresponding method call. The ``base_path`` attribute should be set to the URL which corresponds to this resource. Many ``base_path``\s are simple, such as ``"/servers"``. For ``base_path``\s which are composed of non-static information, Python's string replacement is used, e.g., ``base_path = "/servers/%(server_id)s/ips"``. ``resource_key`` and ``resources_key`` are attributes to set when a ``Resource`` returns more than one item in a response, or otherwise requires a key to obtain the response value.
For example, the ``Server`` class sets ``resource_key = "server"`` as an individual ``Server`` is stored in a dictionary keyed with the singular noun, and ``resources_key = "servers"`` as multiple ``Server``\s are stored in a dictionary keyed with the plural noun in the response. Proxy ----- Each service implements a ``Proxy`` class, within the ``openstack/<service>/vX/_proxy.py`` module. For example, the v2 compute service's ``Proxy`` exists in ``openstack/compute/v2/_proxy.py``. This ``Proxy`` class contains a :class:`~keystoneauth1.adapter.Adapter` and provides a higher-level interface for users to work with via a :class:`~openstack.connection.Connection` instance. Rather than requiring users to maintain their own ``Adapter`` and work with lower-level :class:`~openstack.resource.Resource` objects, the ``Proxy`` interface offers a place to make things easier for the caller. Each ``Proxy`` class implements methods which act on the underlying ``Resource`` classes which represent the service. For example::

    def list_flavors(self, **params):
        return flavor.Flavor.list(self.session, **params)

This method is operating on the ``openstack.compute.v2.flavor.Flavor.list`` method. For the time being, it simply passes on the ``Adapter`` maintained by the ``Proxy``, and returns what the underlying ``Resource.list`` method does. The implementations and method signatures of ``Proxy`` methods are currently under construction, as we figure out the best way to implement them in a way which will apply nicely across all of the services. Connection ---------- The :class:`openstack.connection.Connection` class builds atop a :class:`os_client_config.config.CloudRegion` object, and provides a higher level interface constructed of ``Proxy`` objects from each of the services. The ``Connection`` class' primary purpose is to act as a high-level interface to this SDK, managing the lower level connection bits and exposing the ``Resource`` objects through their corresponding `Proxy`_ object.
If you've built proper ``Resource`` objects and implemented methods on the corresponding ``Proxy`` object, the high-level interface to your service should now be exposed. openstacksdk-0.11.3/doc/source/contributor/testing.rst0000666000175100017510000001130413236151340023136 0ustar zuulzuul00000000000000Testing ======= The tests are run with `tox `_ and configured in ``tox.ini``. The test results are tracked by `testr `_ and configured in ``.testr.conf``. Unit Tests ---------- Run *** In order to run the entire unit test suite, simply run the ``tox`` command inside of your source checkout. This will attempt to run every test command listed inside of ``tox.ini``, which includes Python 2.7, 3.4, PyPy, and a PEP 8 check. You should run the full test suite on all versions before submitting changes for review in order to avoid unexpected failures in the continuous integration system.::

    (sdk3)$ tox
    ...
    py34: commands succeeded
    py27: commands succeeded
    pypy: commands succeeded
    pep8: commands succeeded
    congratulations :)

During development, it may be more convenient to run a subset of the tests to keep test time to a minimum. You can choose to run the tests only on one version. A step further is to run only the tests you are working on.::

    (sdk3)$ tox -e py34                # Run the tests on Python 3.4
    (sdk3)$ tox -e py34 TestContainer  # Run only the TestContainer tests on 3.4

Functional Tests ---------------- The functional tests assume that you have a public or private OpenStack cloud that you can run the tests against. The tests must be able to be run against public clouds but first and foremost they must be run against OpenStack. In practice, this means that the tests should initially be run against a stable branch of `DevStack `_. DevStack ******** There are many ways to run and configure DevStack. The link above will show you how to run DevStack a number of ways. You'll need to choose a method you're familiar with and can run in your environment.
Wherever DevStack is running, we need to make sure that python-openstacksdk contributors are using the same configuration. This is the ``local.conf`` file we use to configure DevStack. .. literalinclude:: local.conf Replace ``DEVSTACK_PASSWORD`` with a password of your choice. Replace ``OPENSTACK_VERSION`` with a `stable branch `_ of OpenStack (without the ``stable/`` prefix on the branch name). os-client-config **************** To connect the functional tests to an OpenStack cloud we use `os-client-config `_. To set up os-client-config, create a ``clouds.yaml`` file in the root of your source checkout. This is an example of a minimal configuration for a ``clouds.yaml`` that connects the functional tests to a DevStack instance. Note that one cloud under ``clouds`` must be named ``test_cloud``. .. literalinclude:: clouds.yaml :language: yaml Replace ``xxx.xxx.xxx.xxx`` with the IP address or FQDN of your DevStack instance. You can also create a ``~/.config/openstack/clouds.yaml`` file for your DevStack cloud environment using the following commands. Replace ``DEVSTACK_SOURCE`` with your DevStack source checkout.::

    (sdk3)$ source DEVSTACK_SOURCE/accrc/admin/admin
    (sdk3)$ ./create_yaml.sh

Run *** Functional tests are run against both Python 2 and 3. In order to run the entire functional test suite, run the ``tox -e functional`` and ``tox -e functional3`` commands inside of your source checkout. This will attempt to run every test command under ``/openstack/tests/functional/`` in the source tree. You should run the full functional test suite before submitting changes for review in order to avoid unexpected failures in the continuous integration system.::

    (sdk3)$ tox -e functional
    ...
    functional: commands succeeded
    congratulations :)
    (sdk3)$ tox -e functional3
    ...
functional3: commands succeeded congratulations :) Examples Tests -------------- Similar to the functional tests, the examples tests assume that you have a public or private OpenStack cloud that you can run the tests against. In practice, this means that the tests should initially be run against a stable branch of `DevStack `_. And like the functional tests, the examples tests connect to an OpenStack cloud using `os-client-config `_. See the functional tests instructions for information on setting up DevStack and os-client-config. Run *** In order to run the entire examples test suite, simply run the ``tox -e examples`` command inside of your source checkout. This will attempt to run every test command under ``/openstack/tests/examples/`` in the source tree.::

    (sdk3)$ tox -e examples
    ...
    examples: commands succeeded
    congratulations :)

openstacksdk-0.11.3/doc/source/contributor/coding.rst0000666000175100017510000001117113236151340022726 0ustar zuulzuul00000000000000======================================== OpenStack SDK Developer Coding Standards ======================================== In the beginning, there were no guidelines. And it was good. But that didn't last long. As more and more people added more and more code, we realized that we needed a set of coding standards to make sure that the openstacksdk API at least *attempted* to display some form of consistency. Thus, these coding standards/guidelines were developed. Note that not all of openstacksdk adheres to these standards just yet. Some older code has not been updated because we need to maintain backward compatibility. Some of it just hasn't been changed yet. But be clear, all new code *must* adhere to these guidelines. Below are the patterns that we expect openstacksdk developers to follow. Release Notes ============= openstacksdk uses `reno `_ for managing its release notes.
A new release note should be added to your contribution anytime you add new API calls, fix significant bugs, add new functionality or parameters to existing API calls, or make any other significant changes to the code base that we should draw attention to for the user base. It is *not* necessary to add release notes for minor fixes, such as correction of documentation typos, minor code cleanup or reorganization, or any other change that a user would not notice through normal usage. Exceptions ========== Exceptions should NEVER be wrapped and re-raised inside of a new exception. This removes important debug information from the user. All of the exceptions should be raised correctly the first time. openstack.cloud API Methods =========================== The `openstack.cloud` layer has some specific rules: - When an API call acts on a resource that has both a unique ID and a name, that API call should accept either identifier with a name_or_id parameter. - All resources should adhere to the get/list/search interface that control retrieval of those resources. E.g., `get_image()`, `list_images()`, `search_images()`. - Resources should have `create_RESOURCE()`, `delete_RESOURCE()`, `update_RESOURCE()` API methods (as it makes sense). - For those methods that should behave differently for omitted or None-valued parameters, use the `_utils.valid_kwargs` decorator. Notably: all Neutron `update_*` functions. - Deleting a resource should return True if the delete succeeded, or False if the resource was not found. Returned Resources ------------------ Complex objects returned to the caller must be a `munch.Munch` type. The `openstack._adapter.ShadeAdapter` class makes resources into `munch.Munch`. All objects should be normalized. It is shade's purpose in life to make OpenStack consistent for end users, and this means not trusting the clouds to return consistent objects. 
There should be a normalize function in `openstack/cloud/_normalize.py` that is applied to objects before returning them to the user. See :doc:`../user/model` for further details on object model requirements. Fields should not be in the normalization contract if we cannot commit to providing them to all users. Fields should be renamed in normalization to be consistent with the rest of `openstack.cloud`. For instance, nothing in `openstack.cloud` exposes the legacy OpenStack concept of "tenant" to a user, but instead uses "project" even if the cloud in question uses tenant. Nova vs. Neutron ---------------- - Recognize that not all cloud providers support Neutron, so never assume it will be present. If a task can be handled by either Neutron or Nova, code it to be handled by either. - For methods that accept either a Nova pool or Neutron network, the parameter should just refer to the network, but documentation of it should explain about the pool. See: `create_floating_ip()` and `available_floating_ip()` methods. Tests ===== - New API methods *must* have unit tests! - New unit tests should only mock at the REST layer using `requests_mock`. Any mocking of openstacksdk itself should be considered legacy and to be avoided. Exceptions to this rule can be made when attempting to test the internals of a logical shim where the inputs and output of the method aren't actually impacted by remote content. - Functional tests should be added, when possible. - In functional tests, always use unique names (for resources that have this attribute) and use it for clean up (see next point). - In functional tests, always define cleanup functions to delete data added by your test, should something go wrong. Data removal should be wrapped in a try except block and try to delete as many entries added by the test as possible. openstacksdk-0.11.3/doc/source/contributor/contributing.rst0000666000175100017510000000004713236151340024172 0ustar zuulzuul00000000000000.. 
include:: ../../../CONTRIBUTING.rst openstacksdk-0.11.3/doc/requirements.txt0000666000175100017510000000060413236151340020342 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
sphinx!=1.6.6,>=1.6.2 # BSD
docutils>=0.11 # OSI-Approved Open Source, Public Domain
openstackdocstheme>=1.18.1 # Apache-2.0
beautifulsoup4>=4.6.0 # MIT
reno>=2.5.0 # Apache-2.0
openstacksdk-0.11.3/bindep.txt0000666000175100017510000000061713236151340016317 0ustar zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed by tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.
build-essential [platform:dpkg]
python-dev [platform:dpkg]
python-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
openssl-devel [platform:rpm]
pypy [pypy]
pypy-dev [platform:dpkg pypy]
pypy-devel [platform:rpm pypy]
openstacksdk-0.11.3/playbooks/0000775000175100017510000000000013236151501016311 5ustar zuulzuul00000000000000openstacksdk-0.11.3/playbooks/devstack/0000775000175100017510000000000013236151501020115 5ustar zuulzuul00000000000000openstacksdk-0.11.3/playbooks/devstack/legacy-git.yaml0000666000175100017510000000046013236151340023031 0ustar zuulzuul00000000000000- hosts: all
  tasks:
    - name: Set openstacksdk libraries to master branch before functional tests
      command: git checkout master
      args:
        chdir: "src/git.openstack.org/{{ item }}"
      with_items:
        - openstack-infra/shade
        - openstack/keystoneauth
        - openstack/os-client-config
openstacksdk-0.11.3/SHADE-MERGE-TODO.rst0000666000175100017510000001530113236151364017275 0ustar zuulzuul00000000000000Tasks Needed for rationalizing shade and openstacksdk ====================================================== A large portion of the important things have already been done and landed.
For reference, those are: * shade and os-client-config library content have been merged into the tree. * Use official service-type names from Service Types Authority via os-service-types to refer to services and proxies. * Automatically also add properties to the connection for every known alias for each service-type. * Made openstack.proxy.Proxy a subclass of keystoneauth1.adapter.Adapter. Removed local logic that duplicates keystoneauth logic. This means every proxy also has direct REST primitives available. For example:

.. code-block:: python

    connection = connection.Connection()
    servers = connection.compute.servers()
    server_response = connection.compute.get('/servers')

* Removed the Profile object in favor of openstack.config. * Removed the Session object in favor of using keystoneauth. * Plumbed Proxy use of Adapter through the Adapter subclass from shade that uses the TaskManager to run REST calls. * Finish migrating to Resource2 and Proxy2, rename them to Resource and Proxy. Next steps ========== * Maybe rename self.session and session parameter in all usage in proxy and resource to self.adapter. They are Adapters not Sessions, but that may not mean anything to people. * Migrate unit tests to requests-mock instead of mocking python calls to session. * Investigate removing ServiceFilter and the various Service objects if an acceptable plan can be found for using discovery. * Replace _prepare_request with requests.Session.prepare_request. shade integration ----------------- * Merge OpenStackCloud into Connection. This should result in being able to use the connection to interact with the cloud using all three interfaces. For instance: ..
code-block:: python conn = connection.Connection() servers = conn.list_servers() # High-level resource interface from shade servers = conn.compute.servers() # SDK Service/Object Interface response = conn.compute.get('/servers') # REST passthrough * Invent some terminology that is clear and makes sense to distinguish between the object interface that came originally from python-openstacksdk and the interface that came from shade. * Shift the shade interface methods to use the Object Interface for their operations. It's possible there may be cases where the REST layer needs to be used instead, but we should try to sort those out. * Investigate options and then make a plan as to whether shade methods should return SDK objects or return dicts/munches as they do today. Should we make Resource objects extend dict/munch so they can be used like the shade ones today? Or should we just have the external shade shim library get objects from the high-level SDK 'shade' interface and call to_dict() on them all? * Add support for shade expressing normalization model/contract into Resource, or for just leveraging what's in Resource for shade-layer normalization. * Make a plan for normalization supporting shade users continuing to get shade normalized resource Munch objects from shade API calls, sdk proxy/resource users getting SDK objects, and both of them being able to opt in to "strict" normalization at Connection constructor time. Perhaps making Resource subclass Munch would allow mixed use? Needs investigation. * Investigate auto-generating the bulk of shade's API based on introspection of SDK objects, leaving only the code with extra special logic in the shade layer. Service Proxies --------------- These are all things to think about. * Authenticate at Connection() creation time? Having done that, use the catalog in the token to determine which service proxies to add to the Connection object. 
* Filter the above service list from the token by has_service() from
  openstack.config.
* Add a has_service method to Connection which will BASICALLY just be
  hasattr(self, 'service') - but will look nicer.
* Consider adding magic to Connection for every service that a given cloud
  DOESN'T have that will throw an exception on any attribute access that is
  "cloud doesn't have service blah" rather than simply Attribute Not Found.
  The SDK has a Python API regardless of the services available remotely; it
  would be nice if trimming the existing attribute list wouldn't make it
  impossible for someone to validate their code correctness. It's also
  possible that instead of not having services, we always mount proxy objects
  for every service, but we mount a "NotFound" proxy for each service that
  isn't there.
* Since openstacksdk uses version discovery now, there is always a good path
  to "the" version of a given service. However, a cloud may have more than
  one. Attach the discovered service proxy to connection as today under the
  service type name. Add a property to each service proxy for each version
  the SDK knows about. For instance:

  .. code-block:: python

      connection = openstack.Connection()
      connection.volume     # openstack.volume.v3._proxy
      connection.volume.v2  # openstack.volume.v2._proxy
      connection.volume.v3  # openstack.volume.v3._proxy

  Those versioned proxies should be done as Adapters with min and max version
  set explicitly. This should allow a common pattern for people who just want
  to use the discovered or configured service, or who want to attempt to use
  a specific version of the API if they know what they're doing, and at the
  very least wind up with a properly configured Adapter they can make REST
  calls on. Because:

  .. code-block:: python

      connection = openstack.Connection()
      connection.dns.v2.get('/zones')

  should always work on an OpenStack cloud with designate, even if the SDK
  authors don't know anything about Designate and haven't added Resource or
  Proxy explicitly for it.
* Decide what to do about non-OpenStack services. Do we add base Proxy
  properties to Connection for every service we find in the catalog,
  regardless of official/non-official? If so, do we let someone pass a dict
  of service-type to Proxy to Connection that would let them provide a local
  service we don't know about? If we do that - we should disallow passing in
  overrides for services we DO know about, to discourage people from writing
  local tools that have different Compute behavior, for instance.

Microversions
-------------

* keystoneauth.adapter.Adapter knows how to send microversion headers, and
  get_endpoint_data knows how to fetch supported ranges. As microversion
  support is added to calls, it needs to be on a per-request basis. This has
  implications for both Resource and Proxy, as cloud payloads for data
  mapping can be different on a per-microversion basis.
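On the wire, the per-request microversion support described above comes down
to sending an ``OpenStack-API-Version`` header of the form
``<service-type> <version>``. The following is a simplified, illustrative
sketch of that header format only, not keystoneauth's actual implementation
(the real Adapter code also emits legacy per-service headers such as
``X-OpenStack-Nova-API-Version``, and ``microversion_headers`` here is a
hypothetical helper name):

```python
# Hypothetical helper illustrating the modern microversion header format.
# The real logic lives in keystoneauth1.adapter.Adapter; this sketch only
# shows the OpenStack-API-Version header that is sent per request.

def microversion_headers(service_type, microversion):
    """Build the OpenStack-API-Version header for one request."""
    if microversion == 'latest':
        version = 'latest'
    else:
        # Microversions are '<major>.<minor>' strings, e.g. '2.53'.
        major, minor = str(microversion).split('.')
        version = '%s.%s' % (int(major), int(minor))
    return {'OpenStack-API-Version': '%s %s' % (service_type, version)}


# Asking the compute service for microversion 2.53:
headers = microversion_headers('compute', '2.53')
# headers == {'OpenStack-API-Version': 'compute 2.53'}
```

Because the header names the service-type, each Adapter (and therefore each
proxy) can negotiate its own microversion independently, which is why the
support has to be plumbed through on a per-request rather than per-connection
basis.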
openstacksdk-0.11.3/.zuul.yaml

- job:
    name: openstacksdk-tox-py27-tips
    parent: openstack-tox-py27
    description: |
      Run tox python 27 unittests against master of important libs
    vars:
      tox_install_siblings: true
    # openstacksdk in required-projects so that os-client-config
    # and keystoneauth can add the job as well
    required-projects:
      - openstack-infra/shade
      - openstack/keystoneauth
      - openstack/os-client-config
      - openstack/python-openstacksdk

- job:
    name: openstacksdk-tox-py35-tips
    parent: openstack-tox-py35
    description: |
      Run tox python 35 unittests against master of important libs
    vars:
      tox_install_siblings: true
    # openstacksdk in required-projects so that osc and keystoneauth
    # can add the job as well
    required-projects:
      - openstack-infra/shade
      - openstack/keystoneauth
      - openstack/os-client-config
      - openstack/python-openstacksdk

- project-template:
    name: openstacksdk-tox-tips
    check:
      jobs:
        - openstacksdk-tox-py27-tips
        - openstacksdk-tox-py35-tips
    gate:
      jobs:
        - openstacksdk-tox-py27-tips
        - openstacksdk-tox-py35-tips

- job:
    name: openstacksdk-functional-devstack-base
    parent: devstack-tox-functional-consumer
    description: |
      Base job for devstack-based functional tests
    required-projects:
      # These jobs will DTRT when openstacksdk triggers them, but we want to
      # make sure stable branches of openstacksdk never get cloned by other
      # people, since stable branches of openstacksdk are, well, not actually
      # things.
      - name: openstack-infra/shade
        override-branch: master
      - name: openstack/python-openstacksdk
        override-branch: master
      - name: openstack/os-client-config
        override-branch: master
      - name: openstack/heat
      - name: openstack/swift
    timeout: 9000
    vars:
      devstack_local_conf:
        post-config:
          $CINDER_CONF:
            DEFAULT:
              osapi_max_limit: 6
      devstack_services:
        s-account: true
        s-container: true
        s-object: true
        s-proxy: true
      devstack_plugins:
        heat: https://git.openstack.org/openstack/heat
      tox_environment:
        # Do we really need to set this? It's cargo culted
        PYTHONUNBUFFERED: 'true'
        # Is there a way we can query the localconf variable to get these
        # rather than setting them explicitly?
        OPENSTACKSDK_HAS_DESIGNATE: 0
        OPENSTACKSDK_HAS_HEAT: 1
        OPENSTACKSDK_HAS_MAGNUM: 0
        OPENSTACKSDK_HAS_NEUTRON: 1
        OPENSTACKSDK_HAS_SWIFT: 1
      tox_install_siblings: false
      tox_envlist: functional
      zuul_work_dir: src/git.openstack.org/openstack/python-openstacksdk

- job:
    name: openstacksdk-functional-devstack-legacy
    parent: openstacksdk-functional-devstack-base
    description: |
      Run openstacksdk functional tests against a legacy devstack
    voting: false
    vars:
      devstack_localrc:
        ENABLE_IDENTITY_V2: true
        FLAT_INTERFACE: br_flat
        PUBLIC_INTERFACE: br_pub
      tox_environment:
        OPENSTACKSDK_USE_KEYSTONE_V2: 1
        OPENSTACKSDK_HAS_NEUTRON: 0
    override-branch: stable/newton

- job:
    name: openstacksdk-functional-devstack
    parent: openstacksdk-functional-devstack-base
    description: |
      Run openstacksdk functional tests against a master devstack
    required-projects:
      - openstack/octavia
    vars:
      devstack_localrc:
        Q_SERVICE_PLUGIN_CLASSES: qos
        Q_ML2_PLUGIN_EXT_DRIVERS: qos,port_security
        DISABLE_AMP_IMAGE_BUILD: True
      devstack_local_conf:
        post-config:
          $OCTAVIA_CONF:
            DEFAULT:
              debug: True
            controller_worker:
              amphora_driver: amphora_noop_driver
              compute_driver: compute_noop_driver
              network_driver: network_noop_driver
            certificates:
              cert_manager: local_cert_manager
      devstack_plugins:
        octavia: https://git.openstack.org/openstack/octavia
      devstack_services:
        octavia: true
        o-api: true
        o-cw: true
        o-hm: true
        o-hk: true
        neutron-qos: true
      tox_environment:
        OPENSTACKSDK_HAS_OCTAVIA: 1

- job:
    name: openstacksdk-functional-devstack-python3
    parent: openstacksdk-functional-devstack
    description: |
      Run openstacksdk functional tests using python3 against a master devstack
    vars:
      tox_environment:
        OPENSTACKSDK_TOX_PYTHON: python3

- job:
    name: openstacksdk-functional-devstack-tips
    parent: openstacksdk-functional-devstack
    description: |
      Run openstacksdk functional tests with tips of library dependencies
      against a master devstack.
    required-projects:
      - openstack-infra/shade
      - openstack/keystoneauth
      - openstack/os-client-config
      - openstack/python-openstacksdk
    vars:
      tox_install_siblings: true

- job:
    name: openstacksdk-functional-devstack-tips-python3
    parent: openstacksdk-functional-devstack-tips
    description: |
      Run openstacksdk functional tests with tips of library dependencies
      using python3 against a master devstack.
    vars:
      tox_environment:
        OPENSTACKSDK_TOX_PYTHON: python3

- job:
    name: openstacksdk-functional-devstack-magnum
    parent: openstacksdk-functional-devstack
    description: |
      Run openstacksdk functional tests against a master devstack with magnum
    required-projects:
      - openstack/magnum
      - openstack/python-magnumclient
    vars:
      devstack_plugins:
        magnum: https://git.openstack.org/openstack/magnum
      devstack_localrc:
        MAGNUM_GUEST_IMAGE_URL: https://tarballs.openstack.org/magnum/images/fedora-atomic-f23-dib.qcow2
        MAGNUM_IMAGE_NAME: fedora-atomic-f23-dib
      devstack_services:
        s-account: false
        s-container: false
        s-object: false
        s-proxy: false
      tox_environment:
        OPENSTACKSDK_HAS_SWIFT: 0
        OPENSTACKSDK_HAS_MAGNUM: 1

- project-template:
    name: openstacksdk-functional-tips
    check:
      jobs:
        - openstacksdk-functional-devstack-tips
        - openstacksdk-functional-devstack-tips-python3
    gate:
      jobs:
        - openstacksdk-functional-devstack-tips
        - openstacksdk-functional-devstack-tips-python3

- project:
    templates:
      - openstacksdk-functional-tips
      - openstacksdk-tox-tips
      - osc-tox-unit-tips
    check:
      jobs:
        - build-openstack-sphinx-docs:
            vars:
              sphinx_python: python3
        - openstacksdk-functional-devstack
        - openstacksdk-functional-devstack-magnum:
            voting: false
        - openstacksdk-functional-devstack-python3
        - osc-functional-devstack-tips:
            voting: false
        - neutron-grenade
    gate:
      jobs:
        - build-openstack-sphinx-docs:
            vars:
              sphinx_python: python3
        - openstacksdk-functional-devstack
        - openstacksdk-functional-devstack-python3
        - neutron-grenade