shade-1.7.0/0000775000567000056710000000000012677257023013757 5ustar jenkinsjenkins00000000000000
shade-1.7.0/extras/0000775000567000056710000000000012677257023015265 5ustar jenkinsjenkins00000000000000
shade-1.7.0/extras/delete-network.sh0000664000567000056710000000107412677256557020567 0ustar jenkinsjenkins00000000000000
neutron router-gateway-clear router1
neutron router-interface-delete router1
for subnet in private-subnet ipv6-private-subnet ; do
    neutron router-interface-delete router1 $subnet
    subnet_id=$(neutron subnet-show $subnet -f value -c id)
    neutron port-list | grep $subnet_id | awk '{print $2}' | xargs -n1 neutron port-delete
    neutron subnet-delete $subnet
done
neutron router-delete router1
neutron net-delete private

# Make the public network directly consumable
neutron subnet-update public-subnet --enable-dhcp=True
neutron net-update public --shared=True
shade-1.7.0/extras/run-ansible-tests.sh0000775000567000056710000000511012677256557021213 0ustar jenkinsjenkins00000000000000
#!/bin/bash
#############################################################################
# run-ansible-tests.sh
#
# Script used to set up a tox environment for running Ansible. This is meant
# to be called by tox (via tox.ini). To run the Ansible tests, use:
#
#    tox -e ansible [TAG ...]
# or
#    tox -e ansible -- -c cloudX [TAG ...]
# or to use the development version of Ansible:
#    tox -e ansible -- -d -c cloudX [TAG ...]
#
# USAGE:
#    run-ansible-tests.sh -e ENVDIR [-d] [-c CLOUD] [TAG ...]
#
# PARAMETERS:
#    -d         Use Ansible source repo development branch.
#    -e ENVDIR  Directory of the tox environment to use for testing.
#    -c CLOUD   Name of the cloud to use for testing.
#               Defaults to "devstack-admin".
#    [TAG ...]  Optional list of space-separated tags to control which
#               modules are tested.
#
# EXAMPLES:
#    # Run all Ansible tests
#    run-ansible-tests.sh -e ansible
#
#    # Run auth, keypair, and network tests against cloudX
#    run-ansible-tests.sh -e ansible -c cloudX auth keypair network
#############################################################################

CLOUD="devstack-admin"
ENVDIR=
USE_DEV=0

while getopts "c:de:" opt
do
    case $opt in
    d) USE_DEV=1 ;;
    c) CLOUD=${OPTARG} ;;
    e) ENVDIR=${OPTARG} ;;
    ?) echo "Invalid option: -${OPTARG}"
       exit 1 ;;
    esac
done

if [ -z "${ENVDIR}" ]
then
    echo "Option -e is required"
    exit 1
fi

shift $((OPTIND-1))
TAGS=$( echo "$*" | tr ' ' , )

# We need to source the current tox environment so that Ansible will
# be set up for the correct python environment.
source $ENVDIR/bin/activate

if [ ${USE_DEV} -eq 1 ]
then
    if [ -d ${ENVDIR}/ansible ]
    then
        echo "Using existing Ansible source repo"
    else
        echo "Installing Ansible source repo at $ENVDIR"
        git clone --recursive git://github.com/ansible/ansible.git ${ENVDIR}/ansible
    fi
    source $ENVDIR/ansible/hacking/env-setup
else
    echo "Installing Ansible from pip"
    pip install ansible
fi

# Run the shade Ansible tests
tag_opt=""
if [ ! -z "${TAGS}" ]
then
    tag_opt="--tags ${TAGS}"
fi

# Until we have a module that lets us determine the image we want from
# within a playbook, we have to find the image here and pass it in.
# We use the openstack client instead of nova client since it can use
# clouds.yaml.
IMAGE=`openstack --os-cloud=${CLOUD} image list -f value -c Name | grep -v -e ramdisk -e kernel`
if [ $? -ne 0 ]
then
    echo "Failed to find Cirros image"
    exit 1
fi

ansible-playbook -vvv ./shade/tests/ansible/run.yml -e "cloud=${CLOUD} image=${IMAGE}" ${tag_opt}
shade-1.7.0/shade.egg-info/0000775000567000056710000000000012677257023016535 5ustar jenkinsjenkins00000000000000
shade-1.7.0/shade.egg-info/dependency_links.txt0000664000567000056710000000000112677257023022603 0ustar jenkinsjenkins00000000000000

shade-1.7.0/shade.egg-info/pbr.json0000664000567000056710000000005612677257023020214 0ustar jenkinsjenkins00000000000000
{"git_version": "bdeb25d", "is_release": true}
shade-1.7.0/shade.egg-info/PKG-INFO0000664000567000056710000000550612677257023017640 0ustar jenkinsjenkins00000000000000
Metadata-Version: 1.1
Name: shade
Version: 1.7.0
Summary: Client library for operating OpenStack clouds
Home-page: http://docs.openstack.org/infra/shade/
Author: OpenStack Infrastructure Team
Author-email: openstack-infra@lists.openstack.org
License: UNKNOWN
Description: Introduction
        ============

        shade is a simple client library for operating OpenStack clouds. The
        key word here is *simple*. Clouds can do many many many things - but
        there are probably only about 10 of them that most people care about
        with any regularity. If you want to do complicated things, you should
        probably use the lower level client libraries - or even the REST API
        directly. However, if what you want is to be able to write an
        application that talks to clouds no matter what crazy choices the
        deployer has made in an attempt to be more hipster than their
        self-entitled narcissist peers, then shade is for you.

        shade started its life as some code inside of ansible. ansible has a
        bunch of different OpenStack related modules, and there was a ton of
        duplicated code.
        Eventually, between refactoring that duplication into an internal
        library, and adding logic and features that the OpenStack Infra team
        had developed to run client applications at scale, it turned out that
        we'd written nine-tenths of what we'd need to have a standalone
        library.

        Example
        =======

        Sometimes an example is nice. ::

          import shade

          # Initialize and turn on debug logging
          shade.simple_logging(debug=True)

          # Initialize cloud
          # Cloud configs are read with os-client-config
          cloud = shade.openstack_cloud(cloud='mordred')

          # Upload an image to the cloud
          image = cloud.create_image(
              'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

          # Find a flavor with at least 512M of RAM
          flavor = cloud.get_flavor_by_ram(512)

          # Boot a server, wait for it to boot, and then do whatever is needed
          # to get a public ip for it.
          cloud.create_server(
              'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)

Platform: UNKNOWN
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
shade-1.7.0/shade.egg-info/top_level.txt0000664000567000056710000000000612677257023021263 0ustar jenkinsjenkins00000000000000
shade
shade-1.7.0/shade.egg-info/not-zip-safe0000664000567000056710000000000112677257014020763 0ustar jenkinsjenkins00000000000000

shade-1.7.0/shade.egg-info/entry_points.txt0000664000567000056710000000007612677257023022036 0ustar jenkinsjenkins00000000000000
[console_scripts]
shade-inventory = shade.cmd.inventory:main
shade-1.7.0/shade.egg-info/SOURCES.txt0000664000567000056710000001312112677257023020417 0ustar
jenkinsjenkins00000000000000.coveragerc .mailmap .testr.conf AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE MANIFEST.in README.rst requirements.txt setup.cfg setup.py test-requirements.txt tox.ini doc/source/coding.rst doc/source/conf.py doc/source/contributing.rst doc/source/future.rst doc/source/index.rst doc/source/installation.rst doc/source/releasenotes.rst doc/source/usage.rst extras/delete-network.sh extras/run-ansible-tests.sh releasenotes/notes/add_update_service-28e590a7a7524053.yaml releasenotes/notes/cache-in-use-volumes-c7fa8bb378106fe3.yaml releasenotes/notes/cinderv2-norm-fix-037189c60b43089f.yaml releasenotes/notes/create-stack-fix-12dbb59a48ac7442.yaml releasenotes/notes/create_server_network_fix-c4a56b31d2850a4b.yaml releasenotes/notes/create_service_norm-319a97433d68fa6a.yaml releasenotes/notes/delete-obj-return-a3ecf0415b7a2989.yaml releasenotes/notes/fip_timeout-035c4bb3ff92fa1f.yaml releasenotes/notes/fix-list-networks-a592725df64c306e.yaml releasenotes/notes/fix-update-domain-af47b066ac52eb7f.yaml releasenotes/notes/flavor_fix-a53c6b326dc34a2c.yaml releasenotes/notes/get_object_api-968483adb016bce1.yaml releasenotes/notes/grant-revoke-assignments-231d3f9596a1ae75.yaml releasenotes/notes/list-role-assignments-keystone-v2-b127b12b4860f50c.yaml releasenotes/notes/net_provider-dd64b697476b7094.yaml releasenotes/notes/norm_role_assignments-a13f41768e62d40c.yaml releasenotes/notes/router_ext_gw-b86582317bca8b39.yaml releasenotes/notes/service_enabled_flag-c917b305d3f2e8fd.yaml releasenotes/notes/started-using-reno-242e2b0cd27f9480.yaml releasenotes/notes/swift-upload-lock-d18f3d42b3a0719a.yaml releasenotes/notes/wait-on-image-snapshot-27cd2eacab2fabd8.yaml releasenotes/notes/wait_for_server-8dc8446b7c673d36.yaml shade/__init__.py shade/_log.py shade/_tasks.py shade/_utils.py shade/exc.py shade/inventory.py shade/meta.py shade/openstackcloud.py shade/operatorcloud.py shade/task_manager.py shade.egg-info/PKG-INFO shade.egg-info/SOURCES.txt 
shade.egg-info/dependency_links.txt shade.egg-info/entry_points.txt shade.egg-info/not-zip-safe shade.egg-info/pbr.json shade.egg-info/requires.txt shade.egg-info/top_level.txt shade/cmd/__init__.py shade/cmd/inventory.py shade/tests/__init__.py shade/tests/base.py shade/tests/fakes.py shade/tests/ansible/README.txt shade/tests/ansible/run.yml shade/tests/ansible/hooks/post_test_hook.sh shade/tests/ansible/roles/auth/tasks/main.yml shade/tests/ansible/roles/client_config/tasks/main.yml shade/tests/ansible/roles/image/tasks/main.yml shade/tests/ansible/roles/image/vars/main.yml shade/tests/ansible/roles/keypair/tasks/main.yml shade/tests/ansible/roles/keypair/vars/main.yml shade/tests/ansible/roles/network/tasks/main.yml shade/tests/ansible/roles/network/vars/main.yml shade/tests/ansible/roles/nova_flavor/tasks/main.yml shade/tests/ansible/roles/object/tasks/main.yml shade/tests/ansible/roles/port/tasks/main.yml shade/tests/ansible/roles/port/vars/main.yml shade/tests/ansible/roles/router/tasks/main.yml shade/tests/ansible/roles/router/vars/main.yml shade/tests/ansible/roles/security_group/tasks/main.yml shade/tests/ansible/roles/security_group/vars/main.yml shade/tests/ansible/roles/server/tasks/main.yml shade/tests/ansible/roles/server/vars/main.yaml shade/tests/ansible/roles/subnet/tasks/main.yml shade/tests/ansible/roles/subnet/vars/main.yml shade/tests/ansible/roles/user/tasks/main.yml shade/tests/ansible/roles/user_group/tasks/main.yml shade/tests/ansible/roles/volume/tasks/main.yml shade/tests/functional/__init__.py shade/tests/functional/base.py shade/tests/functional/test_compute.py shade/tests/functional/test_domain.py shade/tests/functional/test_endpoints.py shade/tests/functional/test_flavor.py shade/tests/functional/test_floating_ip.py shade/tests/functional/test_floating_ip_pool.py shade/tests/functional/test_groups.py shade/tests/functional/test_identity.py shade/tests/functional/test_image.py shade/tests/functional/test_inventory.py 
shade/tests/functional/test_network.py shade/tests/functional/test_object.py shade/tests/functional/test_port.py shade/tests/functional/test_range_search.py shade/tests/functional/test_router.py shade/tests/functional/test_services.py shade/tests/functional/test_users.py shade/tests/functional/test_volume.py shade/tests/functional/util.py shade/tests/functional/hooks/post_test_hook.sh shade/tests/unit/__init__.py shade/tests/unit/base.py shade/tests/unit/test__utils.py shade/tests/unit/test_caching.py shade/tests/unit/test_create_server.py shade/tests/unit/test_create_volume_snapshot.py shade/tests/unit/test_delete_server.py shade/tests/unit/test_delete_volume_snapshot.py shade/tests/unit/test_domain_params.py shade/tests/unit/test_domains.py shade/tests/unit/test_endpoints.py shade/tests/unit/test_flavors.py shade/tests/unit/test_floating_ip_common.py shade/tests/unit/test_floating_ip_neutron.py shade/tests/unit/test_floating_ip_nova.py shade/tests/unit/test_floating_ip_pool.py shade/tests/unit/test_groups.py shade/tests/unit/test_identity_roles.py shade/tests/unit/test_image.py shade/tests/unit/test_image_snapshot.py shade/tests/unit/test_inventory.py shade/tests/unit/test_keypair.py shade/tests/unit/test_meta.py shade/tests/unit/test_network.py shade/tests/unit/test_object.py shade/tests/unit/test_operator_noauth.py shade/tests/unit/test_port.py shade/tests/unit/test_project.py shade/tests/unit/test_rebuild_server.py shade/tests/unit/test_role_assignment.py shade/tests/unit/test_security_groups.py shade/tests/unit/test_services.py shade/tests/unit/test_shade.py shade/tests/unit/test_shade_operator.py shade/tests/unit/test_stack.py shade/tests/unit/test_task_manager.py shade/tests/unit/test_users.py shade/tests/unit/test_volume.pyshade-1.7.0/shade.egg-info/requires.txt0000664000567000056710000000065412677257023021142 0ustar jenkinsjenkins00000000000000pbr>=0.11,<2.0 munch decorator jsonpatch ipaddress os-client-config>=1.13.0 requestsexceptions>=1.1.1 six 
keystoneauth1>=1.0.0 netifaces>=0.10.4 python-novaclient>=2.21.0,!=2.27.0,!=2.32.0 python-keystoneclient>=0.11.0 python-glanceclient>=1.0.0 python-cinderclient>=1.3.1 python-neutronclient>=2.3.10 python-troveclient>=1.2.0 python-ironicclient>=0.10.0 python-swiftclient>=2.5.0 python-heatclient>=0.3.0 dogpile.cache>=0.5.3 shade-1.7.0/test-requirements.txt0000664000567000056710000000042112677256557020230 0ustar jenkinsjenkins00000000000000hacking>=0.10.0,<0.11 coverage>=3.6 discover fixtures>=0.3.14 mock>=1.0 python-openstackclient>=2.1.0 python-subunit oslosphinx>=2.2.0 # Apache-2.0 sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 testrepository>=0.0.17 testscenarios>=0.4,<0.5 testtools>=0.9.32 warlock>=1.0.1,<2 reno shade-1.7.0/LICENSE0000664000567000056710000002363612677256557015011 0ustar jenkinsjenkins00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
      In no event and under no legal theory, whether in tort (including
      negligence), contract, or otherwise, unless required by applicable law
      (such as deliberate and grossly negligent acts) or agreed to in
      writing, shall any Contributor be liable to You for damages, including
      any direct, indirect, special, incidental, or consequential damages of
      any character arising as a result of this License or out of the use or
      inability to use the Work (including but not limited to damages for
      loss of goodwill, work stoppage, computer failure or malfunction, or
      any and all other commercial damages or losses), even if such
      Contributor has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing the
      Work or Derivative Works thereof, You may choose to offer, and charge
      a fee for, acceptance of support, warranty, indemnity, or other
      liability obligations and/or rights consistent with this License.
      However, in accepting such obligations, You may act only on Your own
      behalf and on Your sole responsibility, not on behalf of any other
      Contributor, and only if You agree to indemnify, defend, and hold each
      Contributor harmless for any liability incurred by, or claims asserted
      against, such Contributor by reason of your accepting any such
      warranty or additional liability.
shade-1.7.0/PKG-INFO0000664000567000056710000000550612677257023015062 0ustar jenkinsjenkins00000000000000
Metadata-Version: 1.1
Name: shade
Version: 1.7.0
Summary: Client library for operating OpenStack clouds
Home-page: http://docs.openstack.org/infra/shade/
Author: OpenStack Infrastructure Team
Author-email: openstack-infra@lists.openstack.org
License: UNKNOWN
Description: Introduction
        ============

        shade is a simple client library for operating OpenStack clouds. The
        key word here is *simple*. Clouds can do many many many things - but
        there are probably only about 10 of them that most people care about
        with any regularity.
        If you want to do complicated things, you should probably use the
        lower level client libraries - or even the REST API directly.
        However, if what you want is to be able to write an application that
        talks to clouds no matter what crazy choices the deployer has made
        in an attempt to be more hipster than their self-entitled narcissist
        peers, then shade is for you.

        shade started its life as some code inside of ansible. ansible has a
        bunch of different OpenStack related modules, and there was a ton of
        duplicated code. Eventually, between refactoring that duplication
        into an internal library, and adding logic and features that the
        OpenStack Infra team had developed to run client applications at
        scale, it turned out that we'd written nine-tenths of what we'd need
        to have a standalone library.

        Example
        =======

        Sometimes an example is nice. ::

          import shade

          # Initialize and turn on debug logging
          shade.simple_logging(debug=True)

          # Initialize cloud
          # Cloud configs are read with os-client-config
          cloud = shade.openstack_cloud(cloud='mordred')

          # Upload an image to the cloud
          image = cloud.create_image(
              'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

          # Find a flavor with at least 512M of RAM
          flavor = cloud.get_flavor_by_ram(512)

          # Boot a server, wait for it to boot, and then do whatever is needed
          # to get a public ip for it.
          cloud.create_server(
              'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)

Platform: UNKNOWN
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
shade-1.7.0/setup.cfg0000664000567000056710000000162212677257023015601 0ustar jenkinsjenkins00000000000000
[metadata]
name = shade
summary = Client library for operating OpenStack clouds
description-file = README.rst
author = OpenStack Infrastructure Team
author-email = openstack-infra@lists.openstack.org
home-page = http://docs.openstack.org/infra/shade/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.4

[entry_points]
console_scripts =
    shade-inventory = shade.cmd.inventory:main

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[egg_info]
tag_date = 0
tag_build =
tag_svn_revision = 0
shade-1.7.0/.mailmap0000664000567000056710000000020712677256557015412 0ustar jenkinsjenkins00000000000000
# Format is:
#
#
shade-1.7.0/setup.py0000775000567000056710000000131012677256557015502 0ustar jenkinsjenkins00000000000000
#!/usr/bin/env python

# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)
shade-1.7.0/MANIFEST.in0000664000567000056710000000013512677256557015527 0ustar jenkinsjenkins00000000000000
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc
shade-1.7.0/requirements.txt0000664000567000056710000000065712677256557017256 0ustar jenkinsjenkins00000000000000
pbr>=0.11,<2.0
munch
decorator
jsonpatch
ipaddress
os-client-config>=1.13.0
requestsexceptions>=1.1.1
six
keystoneauth1>=1.0.0
netifaces>=0.10.4
python-novaclient>=2.21.0,!=2.27.0,!=2.32.0
python-keystoneclient>=0.11.0
python-glanceclient>=1.0.0
python-cinderclient>=1.3.1
python-neutronclient>=2.3.10
python-troveclient>=1.2.0
python-ironicclient>=0.10.0
python-swiftclient>=2.5.0
python-heatclient>=0.3.0
dogpile.cache>=0.5.3
shade-1.7.0/.coveragerc0000664000567000056710000000012712677256557016113 0ustar jenkinsjenkins00000000000000
[run]
branch = True
source = shade
omit = shade/tests/*

[report]
ignore_errors = True
shade-1.7.0/AUTHORS0000664000567000056710000000272712677257023015037 0ustar jenkinsjenkins00000000000000
Adam Gandelman
Alberto Gireud
Atsushi SAKAI
Caleb Boylan
Cedric Brandily
Clark Boylan
Clayton O'Neill
Clint Byrum
Daniel Wallace
David Shrewsbury
Davide Guerri
Devananda van der Veen
Ghe Rivero
Gregory Haynes
Haikel Guemar
Hideki Saito
Ian Wienand
James E. Blair
Jeremy Stanley
Jon Schlueter
Joshua Harlow
Joshua Hesketh
Julia Kreger
Kyle Mestery
Lars Kellogg-Stedman
Mathieu Bultel
Matthew Treinish
Monty Taylor
Morgan Fainberg
Ricardo Carrillo Cruz
Rosario Di Somma
SamYaple
Spencer Krum
Stefan Andres
Steve Leon
Timothy Chavez
Tristan Cacqueray
Yolanda Robla
matthew wagoner
shade-1.7.0/releasenotes/0000775000567000056710000000000012677257023016450 5ustar jenkinsjenkins00000000000000
shade-1.7.0/releasenotes/notes/0000775000567000056710000000000012677257023017600 5ustar jenkinsjenkins00000000000000
shade-1.7.0/releasenotes/notes/delete-obj-return-a3ecf0415b7a2989.yaml0000664000567000056710000000024212677256557026243 0ustar jenkinsjenkins00000000000000
---
fixes:
  - The delete_object() method was not returning True/False, similar to
    other delete methods. It is now consistent with the other delete APIs.
shade-1.7.0/releasenotes/notes/cinderv2-norm-fix-037189c60b43089f.yaml0000664000567000056710000000012112677256557025744 0ustar jenkinsjenkins00000000000000
---
fixes:
  - Fixed the volume normalization function when used with cinder v2.
shade-1.7.0/releasenotes/notes/swift-upload-lock-d18f3d42b3a0719a.yaml0000664000567000056710000000035012677256557026246 0ustar jenkinsjenkins00000000000000
---
fixes:
  - Fixed an issue where a section of code that was supposed to be resetting
    the SwiftService object was instead resetting the protective mutex
    around the SwiftService object leading to an exception of "__exit__"
shade-1.7.0/releasenotes/notes/cache-in-use-volumes-c7fa8bb378106fe3.yaml0000664000567000056710000000011212677256557026723 0ustar jenkinsjenkins00000000000000
---
fixes:
  - Fixed caching the volume list when volumes are in use.
shade-1.7.0/releasenotes/notes/create_service_norm-319a97433d68fa6a.yaml0000664000567000056710000000013012677256557026654 0ustar jenkinsjenkins00000000000000
---
fixes:
  - The returned data from a create_service() call was not being normalized.
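The delete-obj-return note just above says delete_object() now returns True/False like shade's other delete calls. The sketch below shows that return convention in isolation; FakeStore and the helper function are invented stand-ins for illustration, not shade's actual implementation.

```python
# Sketch of the "delete returns True/False" convention described in the
# delete-obj-return release note. FakeStore is an invented stand-in for an
# object store; this is not shade's real code.

class FakeStore:
    def __init__(self):
        self._objects = {"container/obj": b"data"}

    def exists(self, key):
        return key in self._objects

    def delete(self, key):
        del self._objects[key]


def delete_object(store, key):
    """Return True if the object existed and was deleted, False otherwise."""
    if not store.exists(key):
        return False
    store.delete(key)
    return True


store = FakeStore()
print(delete_object(store, "container/obj"))  # True: object existed
print(delete_object(store, "container/obj"))  # False: already gone
```

A uniform bool return makes delete calls easy to use in conditionals and in idempotent cleanup loops, since "already gone" is not an error.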
shade-1.7.0/releasenotes/notes/wait_for_server-8dc8446b7c673d36.yaml0000664000567000056710000000013612677256557026052 0ustar jenkinsjenkins00000000000000
---
features:
  - New wait_for_server() API call to wait for a server to reach ACTIVE
    status.
shade-1.7.0/releasenotes/notes/fix-list-networks-a592725df64c306e.yaml0000664000567000056710000000007512677256557026253 0ustar jenkinsjenkins00000000000000
---
fixes:
  - Fix for list_networks() ignoring any filters.
shade-1.7.0/releasenotes/notes/fix-update-domain-af47b066ac52eb7f.yaml0000664000567000056710000000010712677256557026364 0ustar jenkinsjenkins00000000000000
---
fixes:
  - Fix for update_domain() where 'name' was not updatable.
shade-1.7.0/releasenotes/notes/started-using-reno-242e2b0cd27f9480.yaml0000664000567000056710000000006312677256557026361 0ustar jenkinsjenkins00000000000000
---
other:
  - Started using reno for release notes.
shade-1.7.0/releasenotes/notes/create_server_network_fix-c4a56b31d2850a4b.yaml0000664000567000056710000000040412677256557030145 0ustar jenkinsjenkins00000000000000
---
fixes:
  - The create_server() API call would not use the supplied 'network'
    parameter if the 'nics' parameter was also supplied, even though it
    would be an empty list. It now uses 'network' if 'nics' is not supplied
    or if it is an empty list.
shade-1.7.0/releasenotes/notes/fip_timeout-035c4bb3ff92fa1f.yaml0000664000567000056710000000021312677256557025367 0ustar jenkinsjenkins00000000000000
---
fixes:
  - When creating a new server, the timeout was not being passed through to
    floating IP creation, which could also time out.
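The fip_timeout note above fixes a case where a caller's timeout was accepted by server creation but never forwarded to floating IP creation, which can also block. A toy sketch of that forwarding pattern follows; the function names are invented for illustration and are not shade's API.

```python
# Illustration of forwarding a caller's timeout into nested blocking calls.
# Function names here are invented stand-ins, not shade's real code.

def create_floating_ip(timeout=180):
    # Stand-in for a call that may wait up to `timeout` seconds for an IP.
    return 'floating-ip (budget=%ss)' % timeout

def create_server(timeout=180):
    # The fix: pass the caller's timeout through rather than letting the
    # inner call silently fall back to its own default.
    return create_floating_ip(timeout=timeout)

print(create_server(timeout=60))  # floating-ip (budget=60s)
```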
shade-1.7.0/releasenotes/notes/grant-revoke-assignments-231d3f9596a1ae75.yaml
---
features:
  - Added granting and revoking of roles to and from groups and users.

shade-1.7.0/releasenotes/notes/wait-on-image-snapshot-27cd2eacab2fabd8.yaml
---
features:
  - Adds a new pair of options to create_image_snapshot(), wait and
    timeout, to have the function wait until the image snapshot being
    created goes into an active state.
  - Adds a new function, wait_for_image(), which will wait for an image
    to go into an active state.

shade-1.7.0/releasenotes/notes/list-role-assignments-keystone-v2-b127b12b4860f50c.yaml
---
features:
  - Implement list_role_assignments for keystone v2, using roles_for_user.

shade-1.7.0/releasenotes/notes/get_object_api-968483adb016bce1.yaml
---
features:
  - Added a new API call, OpenStackCloud.get_object(), to download objects
    from swift.

shade-1.7.0/releasenotes/notes/net_provider-dd64b697476b7094.yaml
---
features:
  - Network provider options are now accepted in create_network().

shade-1.7.0/releasenotes/notes/create-stack-fix-12dbb59a48ac7442.yaml
---
fixes:
  - The create_stack() call was fixed to call the correct iterator method
    and to return the updated stack object when waiting.

shade-1.7.0/releasenotes/notes/router_ext_gw-b86582317bca8b39.yaml
---
fixes:
  - No longer fail in list_router_interfaces() if a router does not have
    the external_gateway_info key.
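The keystone v2 note above implements list_role_assignments on top of roles_for_user. The adaptation can be sketched as follows (hypothetical names; shade's real version returns normalized objects and supports more filters):

```python
def list_role_assignments(users, roles_for_user):
    # Build v3-style assignment records from a v2-style per-user role
    # listing: one {'user', 'role'} record per (user, role) pair.
    assignments = []
    for user in users:
        for role in roles_for_user(user):
            assignments.append({'user': user, 'role': role})
    return assignments
```

The cost is one roles_for_user call per user, so on v2 clouds the listing scales with the number of users rather than being a single API call.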
shade-1.7.0/releasenotes/notes/flavor_fix-a53c6b326dc34a2c.yaml
---
features:
  - Flavors will always contain an 'extra_specs' attribute. Client cruft,
    such as 'links', 'HUMAN_ID', etc., has been removed.
fixes:
  - Setting and unsetting flavor extra specs now works. This had been
    broken since the 1.2.0 release.

shade-1.7.0/releasenotes/notes/norm_role_assignments-a13f41768e62d40c.yaml
---
fixes:
  - Role assignments were being returned as plain dicts instead of Munch
    objects. This has been corrected.

shade-1.7.0/releasenotes/notes/add_update_service-28e590a7a7524053.yaml
---
features:
  - Added the ability to update keystone service information. This feature
    is not available on keystone v2.0. The new function, update_service(),
    allows the user to update the description, name, service type, and
    enabled status of a service.

shade-1.7.0/releasenotes/notes/service_enabled_flag-c917b305d3f2e8fd.yaml
---
fixes:
  - Keystone service descriptions were missing an attribute describing
    whether or not the service was enabled. A new 'enabled' boolean
    attribute has been added to the service data.

shade-1.7.0/shade/_tasks.py
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. from shade import task_manager class UserList(task_manager.Task): def main(self, client): return client.keystone_client.users.list() class UserCreate(task_manager.Task): def main(self, client): return client.keystone_client.users.create(**self.args) class UserDelete(task_manager.Task): def main(self, client): return client.keystone_client.users.delete(**self.args) class UserUpdate(task_manager.Task): def main(self, client): return client.keystone_client.users.update(**self.args) class UserPasswordUpdate(task_manager.Task): def main(self, client): return client.keystone_client.users.update_password(**self.args) class UserGet(task_manager.Task): def main(self, client): return client.keystone_client.users.get(**self.args) class UserAddToGroup(task_manager.Task): def main(self, client): return client.keystone_client.users.add_to_group(**self.args) class UserCheckInGroup(task_manager.Task): def main(self, client): return client.keystone_client.users.check_in_group(**self.args) class UserRemoveFromGroup(task_manager.Task): def main(self, client): return client.keystone_client.users.remove_from_group(**self.args) class ProjectList(task_manager.Task): def main(self, client): return client._project_manager.list() class ProjectCreate(task_manager.Task): def main(self, client): return client._project_manager.create(**self.args) class ProjectDelete(task_manager.Task): def main(self, client): return client._project_manager.delete(**self.args) class ProjectUpdate(task_manager.Task): def main(self, client): return client._project_manager.update(**self.args) class 
FlavorList(task_manager.Task): def main(self, client): return client.nova_client.flavors.list(**self.args) class FlavorGetExtraSpecs(task_manager.RequestTask): result_key = 'extra_specs' def main(self, client): return client._compute_client.get( "/flavors/{id}/os-extra_specs".format(**self.args)) class FlavorSetExtraSpecs(task_manager.RequestTask): result_key = 'extra_specs' def main(self, client): return client._compute_client.post( "/flavors/{id}/os-extra_specs".format(**self.args), json=self.args['json'] ) class FlavorUnsetExtraSpecs(task_manager.RequestTask): def main(self, client): return client._compute_client.delete( "/flavors/{id}/os-extra_specs/{key}".format(**self.args), ) class FlavorCreate(task_manager.Task): def main(self, client): return client.nova_client.flavors.create(**self.args) class FlavorDelete(task_manager.Task): def main(self, client): return client.nova_client.flavors.delete(**self.args) class FlavorGet(task_manager.Task): def main(self, client): return client.nova_client.flavors.get(**self.args) class FlavorAddAccess(task_manager.Task): def main(self, client): return client.nova_client.flavor_access.add_tenant_access( **self.args ) class FlavorRemoveAccess(task_manager.Task): def main(self, client): return client.nova_client.flavor_access.remove_tenant_access( **self.args ) class ServerList(task_manager.Task): def main(self, client): return client.nova_client.servers.list(**self.args) class ServerListSecurityGroups(task_manager.Task): def main(self, client): return client.nova_client.servers.list_security_group(**self.args) class ServerGet(task_manager.Task): def main(self, client): return client.nova_client.servers.get(**self.args) class ServerCreate(task_manager.Task): def main(self, client): return client.nova_client.servers.create(**self.args) class ServerDelete(task_manager.Task): def main(self, client): return client.nova_client.servers.delete(**self.args) class ServerRebuild(task_manager.Task): def main(self, client): return 
client.nova_client.servers.rebuild(**self.args) class HypervisorList(task_manager.Task): def main(self, client): return client.nova_client.hypervisors.list(**self.args) class KeypairList(task_manager.Task): def main(self, client): return client.nova_client.keypairs.list() class KeypairCreate(task_manager.Task): def main(self, client): return client.nova_client.keypairs.create(**self.args) class KeypairDelete(task_manager.Task): def main(self, client): return client.nova_client.keypairs.delete(**self.args) class NovaListExtensions(task_manager.RequestTask): result_key = 'extensions' def main(self, client): return client._compute_client.get('/extensions') class NovaUrlGet(task_manager.RequestTask): def main(self, client): return client._compute_client.get(**self.args) class NetworkList(task_manager.Task): def main(self, client): return client.neutron_client.list_networks(**self.args) class NetworkCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_network(**self.args) class NetworkDelete(task_manager.Task): def main(self, client): return client.neutron_client.delete_network(**self.args) class RouterList(task_manager.Task): def main(self, client): return client.neutron_client.list_routers() class RouterCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_router(**self.args) class RouterUpdate(task_manager.Task): def main(self, client): return client.neutron_client.update_router(**self.args) class RouterDelete(task_manager.Task): def main(self, client): return client.neutron_client.delete_router(**self.args) class RouterAddInterface(task_manager.Task): def main(self, client): return client.neutron_client.add_interface_router(**self.args) class RouterRemoveInterface(task_manager.Task): def main(self, client): client.neutron_client.remove_interface_router(**self.args) class GlanceImageList(task_manager.Task): def main(self, client): return [image for image in self.args['image_gen']] class 
NovaImageList(task_manager.Task): def main(self, client): return client.nova_client.images.list() class ImageSnapshotCreate(task_manager.Task): def main(self, client): return client.nova_client.servers.create_image(**self.args) class ImageCreate(task_manager.Task): def main(self, client): return client.glance_client.images.create(**self.args) class ImageDelete(task_manager.Task): def main(self, client): return client.glance_client.images.delete(**self.args) class ImageTaskCreate(task_manager.Task): def main(self, client): return client.glance_client.tasks.create(**self.args) class ImageTaskGet(task_manager.Task): def main(self, client): return client.glance_client.tasks.get(**self.args) class ImageUpdate(task_manager.Task): def main(self, client): client.glance_client.images.update(**self.args) class ImageUpload(task_manager.Task): def main(self, client): client.glance_client.images.upload(**self.args) class VolumeCreate(task_manager.Task): def main(self, client): return client.cinder_client.volumes.create(**self.args) class VolumeDelete(task_manager.Task): def main(self, client): client.cinder_client.volumes.delete(**self.args) class VolumeList(task_manager.Task): def main(self, client): return client.cinder_client.volumes.list() class VolumeDetach(task_manager.Task): def main(self, client): client.nova_client.volumes.delete_server_volume(**self.args) class VolumeAttach(task_manager.Task): def main(self, client): return client.nova_client.volumes.create_server_volume(**self.args) class VolumeSnapshotCreate(task_manager.Task): def main(self, client): return client.cinder_client.volume_snapshots.create(**self.args) class VolumeSnapshotGet(task_manager.Task): def main(self, client): return client.cinder_client.volume_snapshots.get(**self.args) class VolumeSnapshotList(task_manager.Task): def main(self, client): return client.cinder_client.volume_snapshots.list(**self.args) class VolumeSnapshotDelete(task_manager.Task): def main(self, client): return 
client.cinder_client.volume_snapshots.delete(**self.args) class NeutronSecurityGroupList(task_manager.Task): def main(self, client): return client.neutron_client.list_security_groups() class NeutronSecurityGroupCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_security_group(**self.args) class NeutronSecurityGroupDelete(task_manager.Task): def main(self, client): return client.neutron_client.delete_security_group(**self.args) class NeutronSecurityGroupUpdate(task_manager.Task): def main(self, client): return client.neutron_client.update_security_group(**self.args) class NeutronSecurityGroupRuleCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_security_group_rule(**self.args) class NeutronSecurityGroupRuleDelete(task_manager.Task): def main(self, client): return client.neutron_client.delete_security_group_rule(**self.args) class NovaSecurityGroupList(task_manager.Task): def main(self, client): return client.nova_client.security_groups.list() class NovaSecurityGroupCreate(task_manager.Task): def main(self, client): return client.nova_client.security_groups.create(**self.args) class NovaSecurityGroupDelete(task_manager.Task): def main(self, client): return client.nova_client.security_groups.delete(**self.args) class NovaSecurityGroupUpdate(task_manager.Task): def main(self, client): return client.nova_client.security_groups.update(**self.args) class NovaSecurityGroupRuleCreate(task_manager.Task): def main(self, client): return client.nova_client.security_group_rules.create(**self.args) class NovaSecurityGroupRuleDelete(task_manager.Task): def main(self, client): return client.nova_client.security_group_rules.delete(**self.args) class NeutronFloatingIPList(task_manager.Task): def main(self, client): return client.neutron_client.list_floatingips(**self.args) class NovaFloatingIPList(task_manager.Task): def main(self, client): return client.nova_client.floating_ips.list() class 
NeutronFloatingIPCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_floatingip(**self.args) class NovaFloatingIPCreate(task_manager.Task): def main(self, client): return client.nova_client.floating_ips.create(**self.args) class NeutronFloatingIPDelete(task_manager.Task): def main(self, client): return client.neutron_client.delete_floatingip(**self.args) class NovaFloatingIPDelete(task_manager.Task): def main(self, client): return client.nova_client.floating_ips.delete(**self.args) class NovaFloatingIPAttach(task_manager.Task): def main(self, client): return client.nova_client.servers.add_floating_ip(**self.args) class NovaFloatingIPDetach(task_manager.Task): def main(self, client): return client.nova_client.servers.remove_floating_ip(**self.args) class NeutronFloatingIPUpdate(task_manager.Task): def main(self, client): return client.neutron_client.update_floatingip(**self.args) class FloatingIPPoolList(task_manager.Task): def main(self, client): return client.nova_client.floating_ip_pools.list() class ContainerGet(task_manager.Task): def main(self, client): return client.swift_client.head_container(**self.args) class ContainerCreate(task_manager.Task): def main(self, client): client.swift_client.put_container(**self.args) class ContainerDelete(task_manager.Task): def main(self, client): client.swift_client.delete_container(**self.args) class ContainerUpdate(task_manager.Task): def main(self, client): client.swift_client.post_container(**self.args) class ContainerList(task_manager.Task): def main(self, client): return client.swift_client.get_account(**self.args)[1] class ObjectCapabilities(task_manager.Task): def main(self, client): return client.swift_client.get_capabilities(**self.args) class ObjectDelete(task_manager.Task): def main(self, client): return client.swift_client.delete_object(**self.args) class ObjectCreate(task_manager.Task): def main(self, client): return client.swift_service.upload(**self.args) class 
ObjectUpdate(task_manager.Task): def main(self, client): client.swift_client.post_object(**self.args) class ObjectList(task_manager.Task): def main(self, client): return client.swift_client.get_container(**self.args)[1] class ObjectMetadata(task_manager.Task): def main(self, client): return client.swift_client.head_object(**self.args) class ObjectGet(task_manager.Task): def main(self, client): return client.swift_client.get_object(**self.args) class SubnetCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_subnet(**self.args) class SubnetList(task_manager.Task): def main(self, client): return client.neutron_client.list_subnets() class SubnetDelete(task_manager.Task): def main(self, client): client.neutron_client.delete_subnet(**self.args) class SubnetUpdate(task_manager.Task): def main(self, client): return client.neutron_client.update_subnet(**self.args) class PortList(task_manager.Task): def main(self, client): return client.neutron_client.list_ports(**self.args) class PortCreate(task_manager.Task): def main(self, client): return client.neutron_client.create_port(**self.args) class PortUpdate(task_manager.Task): def main(self, client): return client.neutron_client.update_port(**self.args) class PortDelete(task_manager.Task): def main(self, client): return client.neutron_client.delete_port(**self.args) class MachineCreate(task_manager.Task): def main(self, client): return client.ironic_client.node.create(**self.args) class MachineDelete(task_manager.Task): def main(self, client): return client.ironic_client.node.delete(**self.args) class MachinePatch(task_manager.Task): def main(self, client): return client.ironic_client.node.update(**self.args) class MachinePortGet(task_manager.Task): def main(self, client): return client.ironic_client.port.get(**self.args) class MachinePortGetByAddress(task_manager.Task): def main(self, client): return client.ironic_client.port.get_by_address(**self.args) class 
MachinePortCreate(task_manager.Task): def main(self, client): return client.ironic_client.port.create(**self.args) class MachinePortDelete(task_manager.Task): def main(self, client): return client.ironic_client.port.delete(**self.args) class MachinePortList(task_manager.Task): def main(self, client): return client.ironic_client.port.list() class MachineNodeGet(task_manager.Task): def main(self, client): return client.ironic_client.node.get(**self.args) class MachineNodeList(task_manager.Task): def main(self, client): return client.ironic_client.node.list(**self.args) class MachineNodePortList(task_manager.Task): def main(self, client): return client.ironic_client.node.list_ports(**self.args) class MachineNodeUpdate(task_manager.Task): def main(self, client): return client.ironic_client.node.update(**self.args) class MachineNodeValidate(task_manager.Task): def main(self, client): return client.ironic_client.node.validate(**self.args) class MachineSetMaintenance(task_manager.Task): def main(self, client): return client.ironic_client.node.set_maintenance(**self.args) class MachineSetPower(task_manager.Task): def main(self, client): return client.ironic_client.node.set_power_state(**self.args) class MachineSetProvision(task_manager.Task): def main(self, client): return client.ironic_client.node.set_provision_state(**self.args) class ServiceCreate(task_manager.Task): def main(self, client): return client.keystone_client.services.create(**self.args) class ServiceList(task_manager.Task): def main(self, client): return client.keystone_client.services.list() class ServiceUpdate(task_manager.Task): def main(self, client): return client.keystone_client.services.update(**self.args) class ServiceDelete(task_manager.Task): def main(self, client): return client.keystone_client.services.delete(**self.args) class EndpointCreate(task_manager.Task): def main(self, client): return client.keystone_client.endpoints.create(**self.args) class EndpointList(task_manager.Task): def 
main(self, client): return client.keystone_client.endpoints.list() class EndpointDelete(task_manager.Task): def main(self, client): return client.keystone_client.endpoints.delete(**self.args) class DomainCreate(task_manager.Task): def main(self, client): return client.keystone_client.domains.create(**self.args) class DomainList(task_manager.Task): def main(self, client): return client.keystone_client.domains.list(**self.args) class DomainGet(task_manager.Task): def main(self, client): return client.keystone_client.domains.get(**self.args) class DomainUpdate(task_manager.Task): def main(self, client): return client.keystone_client.domains.update(**self.args) class DomainDelete(task_manager.Task): def main(self, client): return client.keystone_client.domains.delete(**self.args) class GroupList(task_manager.Task): def main(self, client): return client.keystone_client.groups.list() class GroupCreate(task_manager.Task): def main(self, client): return client.keystone_client.groups.create(**self.args) class GroupDelete(task_manager.Task): def main(self, client): return client.keystone_client.groups.delete(**self.args) class GroupUpdate(task_manager.Task): def main(self, client): return client.keystone_client.groups.update(**self.args) class RoleList(task_manager.Task): def main(self, client): return client.keystone_client.roles.list() class RoleCreate(task_manager.Task): def main(self, client): return client.keystone_client.roles.create(**self.args) class RoleDelete(task_manager.Task): def main(self, client): return client.keystone_client.roles.delete(**self.args) class RoleAddUser(task_manager.Task): def main(self, client): return client.keystone_client.roles.add_user_role(**self.args) class RoleGrantUser(task_manager.Task): def main(self, client): return client.keystone_client.roles.grant(**self.args) class RoleRemoveUser(task_manager.Task): def main(self, client): return client.keystone_client.roles.remove_user_role(**self.args) class RoleRevokeUser(task_manager.Task): 
    def main(self, client):
        return client.keystone_client.roles.revoke(**self.args)


class RoleAssignmentList(task_manager.Task):
    def main(self, client):
        return client.keystone_client.role_assignments.list(**self.args)


class RolesForUser(task_manager.Task):
    def main(self, client):
        return client.keystone_client.roles.roles_for_user(**self.args)


class StackList(task_manager.Task):
    def main(self, client):
        return client.heat_client.stacks.list()


class StackCreate(task_manager.Task):
    def main(self, client):
        return client.heat_client.stacks.create(**self.args)


class StackDelete(task_manager.Task):
    def main(self, client):
        return client.heat_client.stacks.delete(self.args['id'])

shade-1.7.0/shade/inventory.py
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
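_tasks.py above is a long list of thin Task subclasses whose main() forwards a stored kwargs dict to one client call; a task manager can then run, log, or rate-limit them all uniformly. A self-contained sketch of the pattern (the Task base and client here are minimal stand-ins, not shade's real task_manager):

```python
class Task:
    """Minimal stand-in for shade's task_manager.Task."""

    def __init__(self, **kwargs):
        # Every subclass stores its call arguments the same way.
        self.args = kwargs

    def main(self, client):
        raise NotImplementedError

    def run(self, client):
        # shade's TaskManager wraps this call with scheduling/logging.
        return self.main(client)


class StackDelete(Task):
    # Mirrors the real StackDelete above: forward one stored argument.
    def main(self, client):
        return client.stacks_delete(self.args['id'])


class FakeHeatClient:
    """Toy client recording which stacks were deleted."""

    def __init__(self):
        self.deleted = []

    def stacks_delete(self, stack_id):
        self.deleted.append(stack_id)
        return True
```

Because every task carries its arguments and knows which client call to make, the manager needs no per-operation knowledge, which is why the file is almost entirely three-line classes.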
import functools

import os_client_config

import shade
from shade import _utils


class OpenStackInventory(object):

    # Put this here so the capability can be detected with
    # hasattr on the class
    extra_config = None

    def __init__(
            self, config_files=None, refresh=False, private=False,
            config_key=None, config_defaults=None, cloud=None):
        if config_files is None:
            config_files = []
        config = os_client_config.config.OpenStackConfig(
            config_files=os_client_config.config.CONFIG_FILES + config_files)
        self.extra_config = config.get_extra_config(
            config_key, config_defaults)

        if cloud is None:
            self.clouds = [
                shade.OpenStackCloud(cloud_config=cloud_config)
                for cloud_config in config.get_all_clouds()
            ]
        else:
            try:
                self.clouds = [
                    shade.OpenStackCloud(
                        cloud_config=config.get_one_cloud(cloud))
                ]
            except os_client_config.exceptions.OpenStackConfigException as e:
                raise shade.OpenStackCloudException(e)

        if private:
            for cloud in self.clouds:
                cloud.private = True

        # Handle manual invalidation of entire persistent cache
        if refresh:
            for cloud in self.clouds:
                cloud._cache.invalidate()

    def list_hosts(self, expand=True, fail_on_cloud_config=True):
        hostvars = []

        for cloud in self.clouds:
            try:
                # Cycle on servers
                for server in cloud.list_servers(detailed=expand):
                    hostvars.append(server)
            except shade.OpenStackCloudException:
                # Don't fail on one particular cloud as others may work
                if fail_on_cloud_config:
                    raise

        return hostvars

    def search_hosts(self, name_or_id=None, filters=None, expand=True):
        hosts = self.list_hosts(expand=expand)
        return _utils._filter_list(hosts, name_or_id, filters)

    def get_host(self, name_or_id, filters=None, expand=True):
        if expand:
            func = self.search_hosts
        else:
            func = functools.partial(self.search_hosts, expand=False)
        return _utils._get_entity(func, name_or_id, filters)

shade-1.7.0/shade/_log.py
# Copyright (c) 2015 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging


class NullHandler(logging.Handler):
    def emit(self, record):
        pass


def setup_logging(name):
    log = logging.getLogger(name)
    if len(log.handlers) == 0:
        h = NullHandler()
        log.addHandler(h)
    return log

shade-1.7.0/shade/openstackcloud.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import hashlib
import ipaddress
import operator
import os

import os_client_config
import os_client_config.defaults
import six
import threading
import time
import warnings

from dogpile import cache
import requestsexceptions

import cinderclient.client
import cinderclient.exceptions as cinder_exceptions
import glanceclient
import glanceclient.exc
import heatclient.client
from heatclient.common import template_utils
import keystoneauth1.exceptions
import keystoneclient.client
import neutronclient.neutron.client
import novaclient.client
import novaclient.exceptions as nova_exceptions
import swiftclient.client
import swiftclient.service
import swiftclient.exceptions as swift_exceptions
import troveclient.client

from shade.exc import *  # noqa
from shade import _log
from shade import meta
from shade import task_manager
from shade import _tasks
from shade import _utils

OBJECT_MD5_KEY = 'x-object-meta-x-shade-md5'
OBJECT_SHA256_KEY = 'x-object-meta-x-shade-sha256'
IMAGE_MD5_KEY = 'owner_specified.shade.md5'
IMAGE_SHA256_KEY = 'owner_specified.shade.sha256'

# Rackspace returns this for intermittent import errors
IMAGE_ERROR_396 = "Image cannot be imported. Error code: '396'"

DEFAULT_OBJECT_SEGMENT_SIZE = 1073741824  # 1GB
# This halves the current default for Swift
DEFAULT_MAX_FILE_SIZE = (5 * 1024 * 1024 * 1024 + 2) / 2
DEFAULT_SERVER_AGE = 5
DEFAULT_PORT_AGE = 5

OBJECT_CONTAINER_ACLS = {
    'public': ".r:*,.rlistings",
    'private': '',
}


def _no_pending_volumes(volumes):
    '''If there are any volumes not in a steady state, don't cache'''
    for volume in volumes:
        if volume['status'] not in ('available', 'error', 'in-use'):
            return False
    return True


def _no_pending_images(images):
    '''If there are any images not in a steady state, don't cache'''
    for image in images:
        if image.status not in ('active', 'deleted', 'killed'):
            return False
    return True


def _no_pending_stacks(stacks):
    '''If there are any stacks not in a steady state, don't cache'''
    for stack in stacks:
        status = stack['stack_status']
        if '_COMPLETE' not in status and '_FAILED' not in status:
            return False
    return True


class OpenStackCloud(object):
    """Represent a connection to an OpenStack Cloud.

    OpenStackCloud is the entry point for all cloud operations, regardless
    of which OpenStack service those operations may ultimately come from.
    The operations on an OpenStackCloud are resource oriented rather than
    REST API operation oriented. For instance, one will request a Floating
    IP and that Floating IP will be actualized either via neutron or via
    nova depending on how this particular cloud has decided to arrange
    itself.

    :param TaskManager manager: Optional task manager to use for running
        OpenStack API tasks. Unless you're doing rate limiting client
        side, you almost certainly don't need this. (optional)
    :param bool log_inner_exceptions: Send wrapped exceptions to the error
        log. Defaults to false, because there are a number of wrapped
        exceptions that are noise for normal usage. It's possible that for
        a user that has python logging configured properly, it's desirable
        to have all of the wrapped exceptions be emitted to the error log.
        This flag will enable that behavior.
:param CloudConfig cloud_config: Cloud config object from os-client-config In the future, this will be the only way to pass in cloud configuration, but is being phased in currently. """ def __init__( self, cloud_config=None, manager=None, log_inner_exceptions=False, **kwargs): if log_inner_exceptions: OpenStackCloudException.log_inner_exceptions = True self.log = _log.setup_logging('shade') if not cloud_config: config = os_client_config.OpenStackConfig() cloud_config = config.get_one_cloud(**kwargs) self.name = cloud_config.name self.auth = cloud_config.get_auth_args() self.region_name = cloud_config.region_name self.default_interface = cloud_config.get_interface() self.private = cloud_config.config.get('private', False) self.api_timeout = cloud_config.config['api_timeout'] self.image_api_use_tasks = cloud_config.config['image_api_use_tasks'] self.secgroup_source = cloud_config.config['secgroup_source'] self.force_ipv4 = cloud_config.force_ipv4 self._external_network_name_or_id = cloud_config.config.get( 'external_network', None) self._use_external_network = cloud_config.config.get( 'use_external_network', True) self._internal_network_name_or_id = cloud_config.config.get( 'internal_network', None) self._use_internal_network = cloud_config.config.get( 'use_internal_network', True) if manager is not None: self.manager = manager else: self.manager = task_manager.TaskManager( name=':'.join([self.name, self.region_name]), client=self) (self.verify, self.cert) = cloud_config.get_requests_verify_args() # Turn off urllib3 warnings about insecure certs if we have # explicitly configured requests to tell it we do not want # cert verification if not self.verify: self.log.debug( "Turning off Insecure SSL warnings since verify=False") category = requestsexceptions.InsecureRequestWarning if category: # InsecureRequestWarning references a Warning class or is None warnings.filterwarnings('ignore', category=category) self._servers = [] self._servers_time = 0 self._servers_lock = 
threading.Lock() self._ports = [] self._ports_time = 0 self._ports_lock = threading.Lock() self._networks_lock = threading.Lock() self._reset_network_caches() cache_expiration_time = int(cloud_config.get_cache_expiration_time()) cache_class = cloud_config.get_cache_class() cache_arguments = cloud_config.get_cache_arguments() if cache_class != 'dogpile.cache.null': self._cache = cache.make_region( function_key_generator=self._make_cache_key ).configure( cache_class, expiration_time=cache_expiration_time, arguments=cache_arguments) self._SERVER_AGE = DEFAULT_SERVER_AGE self._PORT_AGE = DEFAULT_PORT_AGE else: def _fake_invalidate(unused): pass class _FakeCache(object): def invalidate(self): pass # Don't cache list_servers if we're not caching things. # Replace this with a more specific cache configuration # soon. self._SERVER_AGE = 0 self._PORT_AGE = 0 self._cache = _FakeCache() # Undecorate cache decorated methods. Otherwise the call stacks # wind up being stupidly long and hard to debug for method in _utils._decorated_methods: meth_obj = getattr(self, method, None) if not meth_obj: continue if (hasattr(meth_obj, 'invalidate') and hasattr(meth_obj, 'func')): new_func = functools.partial(meth_obj.func, self) new_func.invalidate = _fake_invalidate setattr(self, method, new_func) # If server expiration time is set explicitly, use that. Otherwise # fall back to whatever it was before self._SERVER_AGE = cloud_config.get_cache_resource_expiration( 'server', self._SERVER_AGE) self._PORT_AGE = cloud_config.get_cache_resource_expiration( 'port', self._PORT_AGE) self._container_cache = dict() self._file_hash_cache = dict() self._keystone_session = None self._cinder_client = None self._glance_client = None self._glance_endpoint = None self._heat_client = None self._keystone_client = None self._neutron_client = None self._nova_client = None self._swift_client = None self._swift_service = None # Lock used to reset swift client. 
        # Since the swift client does not support keystone sessions, we
        # have to make a new client in order to get new auth prior to
        # operations, otherwise long-running sessions will fail.
        self._swift_client_lock = threading.Lock()
        self._swift_service_lock = threading.Lock()
        self._trove_client = None
        self._raw_clients = {}

        self._local_ipv6 = _utils.localhost_supports_ipv6()

        self.cloud_config = cloud_config

    def _make_cache_key(self, namespace, fn):
        fname = fn.__name__
        if namespace is None:
            name_key = self.name
        else:
            name_key = '%s:%s' % (self.name, namespace)

        def generate_key(*args, **kwargs):
            # Cache keys must be strings, so coerce positional args.
            arg_key = ','.join(str(arg) for arg in args)
            kw_keys = sorted(kwargs.keys())
            kwargs_key = ','.join(
                ['%s:%s' % (k, kwargs[k]) for k in kw_keys if k != 'cache'])
            ans = "_".join(
                [str(name_key), fname, arg_key, kwargs_key])
            return ans
        return generate_key

    def _get_client(
            self, service_key, client_class, interface_key=None,
            pass_version_arg=True, **kwargs):
        try:
            client = self.cloud_config.get_legacy_client(
                service_key=service_key, client_class=client_class,
                interface_key=interface_key,
                pass_version_arg=pass_version_arg,
                **kwargs)
        except Exception:
            self.log.debug(
                "Couldn't construct {service} object".format(
                    service=service_key), exc_info=True)
            raise
        if client is None:
            raise OpenStackCloudException(
                "Failed to instantiate {service} client."
" This could mean that your credentials are wrong.".format( service=service_key)) return client def _get_raw_client(self, service_key): return self.cloud_config.get_session_client(service_key) @property def _compute_client(self): if 'compute' not in self._raw_clients: self._raw_clients['compute'] = self._get_raw_client('compute') return self._raw_clients['compute'] @property def nova_client(self): if self._nova_client is None: self._nova_client = self._get_client( 'compute', novaclient.client.Client) return self._nova_client @property def keystone_session(self): if self._keystone_session is None: try: self._keystone_session = self.cloud_config.get_session() except Exception as e: raise OpenStackCloudException( "Error authenticating to keystone: %s " % str(e)) return self._keystone_session @property def keystone_client(self): if self._keystone_client is None: self._keystone_client = self._get_client( 'identity', keystoneclient.client.Client) return self._keystone_client @property def service_catalog(self): return self.keystone_session.auth.get_access( self.keystone_session).service_catalog.catalog @property def auth_token(self): # Keystone's session will reuse a token if it is still valid. # We don't need to track validity here, just get_token() each time. return self.keystone_session.get_token() @property def _project_manager(self): # Keystone v2 calls this attribute tenants # Keystone v3 calls it projects # Yay for usable APIs! 
        if self.cloud_config.get_api_version('identity').startswith('2'):
            return self.keystone_client.tenants
        return self.keystone_client.projects

    def _get_project_param_dict(self, name_or_id):
        project_dict = dict()
        if name_or_id:
            project = self.get_project(name_or_id)
            if not project:
                return project_dict
            if self.cloud_config.get_api_version('identity') == '3':
                project_dict['default_project'] = project['id']
            else:
                project_dict['tenant_id'] = project['id']
        return project_dict

    def _get_domain_param_dict(self, domain_id):
        """Get a usable domain."""
        # Keystone v3 requires domains for user and project creation. v2 does
        # not. However, keystone v2 does not allow user creation by non-admin
        # users, so we can raise an error to the user that does not need to
        # mention API versions.
        if self.cloud_config.get_api_version('identity') == '3':
            if not domain_id:
                raise OpenStackCloudException(
                    "User creation requires an explicit domain_id argument.")
            else:
                return {'domain': domain_id}
        else:
            return {}

    def _get_identity_params(self, domain_id=None, project=None):
        """Get the domain and project/tenant parameters if needed.

        Keystone v2 and v3 are divergent enough that we need to pass or
        not pass project or tenant_id or domain or nothing in a sane manner.
        """
        ret = {}
        ret.update(self._get_domain_param_dict(domain_id))
        ret.update(self._get_project_param_dict(project))
        return ret

    def range_search(self, data, filters):
        """Perform integer range searches across a list of dictionaries.

        Given a list of dictionaries, search across the list using the given
        dictionary keys and a range of integer values for each key. Only
        dictionaries that match ALL search filters across the entire original
        data set will be returned.

        It is not a requirement that each dictionary contain the key used
        for searching. Those without the key will be considered non-matching.

        The range values must be string values and are either a set of digits
        representing an integer for matching, or a range operator followed by
        a set of digits representing an integer for matching. If a range
        operator is not given, exact value matching will be used. Valid
        operators are one of: <, >, <=, >=

        :param list data: List of dictionaries to be searched.
        :param dict filters: Dict describing the one or more range searches
            to perform. If more than one search is given, the result will be
            the members of the original data set that match ALL searches.
            An example of filtering by multiple ranges::

                {"vcpus": "<=5", "ram": "<=2048", "disk": "1"}

        :returns: A list subset of the original data set.
        :raises: OpenStackCloudException on invalid range expressions.
        """
        filtered = []

        for key, range_value in filters.items():
            # We always want to operate on the full data set so that
            # calculations for minimum and maximum are correct.
            results = _utils.range_filter(data, key, range_value)

            if not filtered:
                # First set of results
                filtered = results
            else:
                # The combination of all searches should be the intersection
                # of all result sets from each search. So adjust the current
                # set of filtered data by computing its intersection with the
                # latest result set.
                filtered = [r for r in results for f in filtered if r == f]

        return filtered

    @_utils.cache_on_arguments()
    def list_projects(self):
        """List Keystone Projects.

        :returns: a list of dicts containing the project description.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        try:
            projects = self.manager.submitTask(_tasks.ProjectList())
        except Exception as e:
            self.log.debug("Failed to list projects", exc_info=True)
            raise OpenStackCloudException(str(e))
        return projects

    def search_projects(self, name_or_id=None, filters=None):
        """Search Keystone projects.

        :param name_or_id: project name or id.
        :param filters: a dict containing additional filters to use.

        :returns: a list of dicts containing the projects.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        projects = self.list_projects()
        return _utils._filter_list(projects, name_or_id, filters)

    def get_project(self, name_or_id, filters=None):
        """Get exactly one Keystone project.

        :param name_or_id: project name or id.
        :param filters: a dict containing additional filters to use.

        :returns: a single dict containing the project description, or None
            if no matching project is found.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        return _utils._get_entity(self.search_projects, name_or_id, filters)

    def update_project(self, name_or_id, description=None, enabled=True):
        with _utils.shade_exceptions(
                "Error in updating project {project}".format(
                    project=name_or_id)):
            proj = self.get_project(name_or_id)
            if not proj:
                raise OpenStackCloudException(
                    "Project %s not found." % name_or_id)
            params = {}
            if self.cloud_config.get_api_version('identity') == '3':
                params['project'] = proj['id']
            else:
                params['tenant_id'] = proj['id']
            project = self.manager.submitTask(_tasks.ProjectUpdate(
                description=description, enabled=enabled, **params))
        self.list_projects.invalidate(self)
        return project

    def create_project(
            self, name, description=None, domain_id=None, enabled=True):
        """Create a project."""
        with _utils.shade_exceptions(
                "Error in creating project {project}".format(project=name)):
            params = self._get_domain_param_dict(domain_id)
            if self.cloud_config.get_api_version('identity') == '3':
                params['name'] = name
            else:
                params['tenant_name'] = name
            project = self.manager.submitTask(_tasks.ProjectCreate(
                project_name=name, description=description, enabled=enabled,
                **params))
        self.list_projects.invalidate(self)
        return project

    def delete_project(self, name_or_id):
        with _utils.shade_exceptions(
                "Error in deleting project {project}".format(
                    project=name_or_id)):
            project = self.update_project(name_or_id, enabled=False)
            params = {}
            if self.cloud_config.get_api_version('identity') == '3':
                params['project'] = project['id']
            else:
                params['tenant'] = project['id']
            self.manager.submitTask(_tasks.ProjectDelete(**params))

    @_utils.cache_on_arguments()
    def list_users(self):
        """List Keystone Users.

        :returns: a list of dicts containing the user description.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        with _utils.shade_exceptions("Failed to list users"):
            users = self.manager.submitTask(_tasks.UserList())
        return _utils.normalize_users(users)

    def search_users(self, name_or_id=None, filters=None):
        """Search Keystone users.

        :param string name_or_id: user name or id.
        :param dict filters: a dict containing additional filters to use.

        :returns: a list of dicts containing the users.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        users = self.list_users()
        return _utils._filter_list(users, name_or_id, filters)

    def get_user(self, name_or_id, filters=None):
        """Get exactly one Keystone user.

        :param string name_or_id: user name or id.
        :param dict filters: a dict containing additional filters to use.

        :returns: a single dict containing the user description.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        return _utils._get_entity(self.search_users, name_or_id, filters)

    def get_user_by_id(self, user_id, normalize=True):
        """Get a Keystone user by ID.

        :param string user_id: user ID
        :param bool normalize: Flag to control dict normalization

        :returns: a single dict containing the user description
        """
        with _utils.shade_exceptions(
                "Error getting user with ID {user_id}".format(
                    user_id=user_id)):
            user = self.manager.submitTask(_tasks.UserGet(user=user_id))
        if user and normalize:
            return _utils.normalize_users([user])[0]
        return user

    # NOTE(Shrews): Keystone v2 supports updating only name, email and
    # enabled.
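That note is what drives the kwarg handling in `update_user` below: v3-only arguments are dropped before talking to a v2 keystone, and `domain_id` is renamed to `domain` for the v3 update task. A standalone sketch of that preparation step (the function name is illustrative, not part of shade):

```python
def prepare_user_update_kwargs(identity_api_version, **kwargs):
    """Filter update kwargs for the detected identity API version."""
    if identity_api_version != '3':
        # Do not pass v3-only args to a v2 keystone.
        for v3_only in ('domain_id', 'description', 'default_project'):
            kwargs.pop(v3_only, None)
    elif 'domain_id' in kwargs:
        # The update call takes 'domain' rather than 'domain_id'.
        kwargs['domain'] = kwargs.pop('domain_id')
    return kwargs
```

So a caller can always pass the full v3-style argument set and let the helper discard or rename whatever the active keystone version cannot accept.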
@_utils.valid_kwargs('name', 'email', 'enabled', 'domain_id', 'password', 'description', 'default_project') def update_user(self, name_or_id, **kwargs): self.list_users.invalidate(self) user = self.get_user(name_or_id) # normalized dict won't work kwargs['user'] = self.get_user_by_id(user['id'], normalize=False) if self.cloud_config.get_api_version('identity') != '3': # Do not pass v3 args to a v2 keystone. kwargs.pop('domain_id', None) kwargs.pop('description', None) kwargs.pop('default_project', None) password = kwargs.pop('password', None) if password is not None: with _utils.shade_exceptions( "Error updating password for {user}".format( user=name_or_id)): user = self.manager.submitTask(_tasks.UserPasswordUpdate( user=kwargs['user'], password=password)) elif 'domain_id' in kwargs: # The incoming parameter is domain_id in order to match the # parameter name in create_user(), but UserUpdate() needs it # to be domain. kwargs['domain'] = kwargs.pop('domain_id') with _utils.shade_exceptions("Error in updating user {user}".format( user=name_or_id)): user = self.manager.submitTask(_tasks.UserUpdate(**kwargs)) self.list_users.invalidate(self) return _utils.normalize_users([user])[0] def create_user( self, name, password=None, email=None, default_project=None, enabled=True, domain_id=None): """Create a user.""" with _utils.shade_exceptions("Error in creating user {user}".format( user=name)): identity_params = self._get_identity_params( domain_id, default_project) user = self.manager.submitTask(_tasks.UserCreate( name=name, password=password, email=email, enabled=enabled, **identity_params)) self.list_users.invalidate(self) return _utils.normalize_users([user])[0] def delete_user(self, name_or_id): self.list_users.invalidate(self) user = self.get_user(name_or_id) if not user: self.log.debug( "User {0} not found for deleting".format(name_or_id)) return False # normalized dict won't work user = self.get_user_by_id(user['id'], normalize=False) with 
_utils.shade_exceptions("Error in deleting user {user}".format( user=name_or_id)): self.manager.submitTask(_tasks.UserDelete(user=user)) self.list_users.invalidate(self) return True def _get_user_and_group(self, user_name_or_id, group_name_or_id): user = self.get_user(user_name_or_id) if not user: raise OpenStackCloudException( 'User {user} not found'.format(user=user_name_or_id)) group = self.get_group(group_name_or_id) if not group: raise OpenStackCloudException( 'Group {user} not found'.format(user=group_name_or_id)) return (user, group) def add_user_to_group(self, name_or_id, group_name_or_id): """Add a user to a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) with _utils.shade_exceptions( "Error adding user {user} to group {group}".format( user=name_or_id, group=group_name_or_id) ): self.manager.submitTask( _tasks.UserAddToGroup(user=user['id'], group=group['id']) ) def is_user_in_group(self, name_or_id, group_name_or_id): """Check to see if a user is in a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :returns: True if user is in the group, False otherwise :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) try: return self.manager.submitTask( _tasks.UserCheckInGroup(user=user['id'], group=group['id']) ) except keystoneauth1.exceptions.http.NotFound: # Because the keystone API returns either True or raises an # exception, which is awesome. 
return False except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Error adding user {user} to group {group}: {err}".format( user=name_or_id, group=group_name_or_id, err=str(e)) ) def remove_user_from_group(self, name_or_id, group_name_or_id): """Remove a user from a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) with _utils.shade_exceptions( "Error removing user {user} from group {group}".format( user=name_or_id, group=group_name_or_id) ): self.manager.submitTask( _tasks.UserRemoveFromGroup(user=user['id'], group=group['id']) ) @property def glance_client(self): if self._glance_client is None: self._glance_client = self._get_client( 'image', glanceclient.Client) return self._glance_client @property def heat_client(self): if self._heat_client is None: self._heat_client = self._get_client( 'orchestration', heatclient.client.Client) return self._heat_client def get_template_contents( self, template_file=None, template_url=None, template_object=None, files=None): try: return template_utils.get_template_contents( template_file=template_file, template_url=template_url, template_object=template_object, files=files) except Exception as e: raise OpenStackCloudException( "Error in processing template files: %s" % str(e)) @property def swift_client(self): with self._swift_client_lock: if self._swift_client is None: self._swift_client = self._get_client( 'object-store', swiftclient.client.Connection) return self._swift_client def _get_swift_kwargs(self): auth_version = self.cloud_config.get_api_version('identity') auth_args = self.cloud_config.config.get('auth', {}) os_options = {'auth_version': auth_version} if auth_version == '2.0': os_options['os_tenant_name'] = auth_args.get('project_name') os_options['os_tenant_id'] = 
auth_args.get('project_id') else: os_options['os_project_name'] = auth_args.get('project_name') os_options['os_project_id'] = auth_args.get('project_id') for key in ( 'username', 'password', 'auth_url', 'user_id', 'project_domain_id', 'project_domain_name', 'user_domain_id', 'user_domain_name'): os_options['os_{key}'.format(key=key)] = auth_args.get(key) return os_options @property def swift_service(self): with self._swift_service_lock: if self._swift_service is None: with _utils.shade_exceptions("Error constructing " "swift client"): endpoint = self.get_session_endpoint( service_key='object-store') options = dict(os_auth_token=self.auth_token, os_storage_url=endpoint, os_region_name=self.region_name) options.update(self._get_swift_kwargs()) self._swift_service = swiftclient.service.SwiftService( options=options) return self._swift_service @property def cinder_client(self): if self._cinder_client is None: self._cinder_client = self._get_client( 'volume', cinderclient.client.Client) return self._cinder_client @property def trove_client(self): if self._trove_client is None: self._trove_client = self._get_client( 'database', troveclient.client.Client) return self._trove_client @property def neutron_client(self): if self._neutron_client is None: self._neutron_client = self._get_client( 'network', neutronclient.neutron.client.Client) return self._neutron_client def create_stack( self, name, template_file=None, template_url=None, template_object=None, files=None, rollback=True, wait=False, timeout=180, environment_files=None, **parameters): envfiles, env = template_utils.process_multiple_environments_and_files( env_paths=environment_files) tpl_files, template = template_utils.get_template_contents( template_file=template_file, template_url=template_url, template_object=template_object, files=files) params = dict( stack_name=name, disable_rollback=not rollback, parameters=parameters, template=template, files=dict(list(tpl_files.items()) + list(envfiles.items())), 
environment=env, ) with _utils.shade_exceptions("Error creating stack {name}".format( name=name)): stack = self.manager.submitTask(_tasks.StackCreate(**params)) if not wait: return stack for count in _utils._iterate_timeout( timeout, "Timed out waiting for heat stack to finish"): stack = self.get_stack(name) if stack: return stack def delete_stack(self, name_or_id): """Delete a Heat Stack :param string name_or_id: Stack name or id. :returns: True if delete succeeded, False if the stack was not found. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ stack = self.get_stack(name_or_id) if stack is None: self.log.debug("Stack %s not found for deleting" % name_or_id) return False with _utils.shade_exceptions("Failed to delete stack {id}".format( id=stack['id'])): self.manager.submitTask(_tasks.StackDelete(id=stack['id'])) return True def get_name(self): return self.name def get_region(self): return self.region_name def get_flavor_name(self, flavor_id): flavor = self.get_flavor(flavor_id) if flavor: return flavor['name'] return None def get_flavor_by_ram(self, ram, include=None): """Get a flavor based on amount of RAM available. Finds the flavor with the least amount of RAM that is at least as much as the specified amount. If `include` is given, further filter based on matching flavor name. :param int ram: Minimum amount of RAM. :param string include: If given, will return a flavor whose name contains this string as a substring. 
""" flavors = self.list_flavors() for flavor in sorted(flavors, key=operator.itemgetter('ram')): if (flavor['ram'] >= ram and (not include or include in flavor['name'])): return flavor raise OpenStackCloudException( "Could not find a flavor with {ram} and '{include}'".format( ram=ram, include=include)) def get_session_endpoint(self, service_key): try: return self.cloud_config.get_session_endpoint(service_key) except keystoneauth1.exceptions.catalog.EndpointNotFound as e: self.log.debug( "Endpoint not found in %s cloud: %s", self.name, str(e)) endpoint = None except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Error getting {service} endpoint on {cloud}:{region}:" " {error}".format( service=service_key, cloud=self.name, region=self.region_name, error=str(e))) return endpoint def has_service(self, service_key): if not self.cloud_config.config.get('has_%s' % service_key, True): self.log.debug( "Disabling {service_key} entry in catalog per config".format( service_key=service_key)) return False try: endpoint = self.get_session_endpoint(service_key) except OpenStackCloudException: return False if endpoint: return True else: return False @_utils.cache_on_arguments() def _nova_extensions(self): extensions = set() with _utils.shade_exceptions("Error fetching extension list for nova"): for extension in self.manager.submitTask( _tasks.NovaListExtensions()): extensions.add(extension['alias']) return extensions def _has_nova_extension(self, extension_name): return extension_name in self._nova_extensions() def search_keypairs(self, name_or_id=None, filters=None): keypairs = self.list_keypairs() return _utils._filter_list(keypairs, name_or_id, filters) def search_networks(self, name_or_id=None, filters=None): """Search OpenStack networks :param name_or_id: Name or id of the desired network. :param filters: a dict containing additional filters to use. e.g. 
{'router:external': True} :returns: a list of dicts containing the network description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ networks = self.list_networks(filters) return _utils._filter_list(networks, name_or_id, filters) def search_routers(self, name_or_id=None, filters=None): """Search OpenStack routers :param name_or_id: Name or id of the desired router. :param filters: a dict containing additional filters to use. e.g. {'admin_state_up': True} :returns: a list of dicts containing the router description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ routers = self.list_routers(filters) return _utils._filter_list(routers, name_or_id, filters) def search_subnets(self, name_or_id=None, filters=None): """Search OpenStack subnets :param name_or_id: Name or id of the desired subnet. :param filters: a dict containing additional filters to use. e.g. {'enable_dhcp': True} :returns: a list of dicts containing the subnet description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ subnets = self.list_subnets(filters) return _utils._filter_list(subnets, name_or_id, filters) def search_ports(self, name_or_id=None, filters=None): """Search OpenStack ports :param name_or_id: Name or id of the desired port. :param filters: a dict containing additional filters to use. e.g. {'device_id': '2711c67a-b4a7-43dd-ace7-6187b791c3f0'} :returns: a list of dicts containing the port description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ # If port caching is enabled, do not push the filter down to # neutron; get all the ports (potentially from the cache) and # filter locally. 
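The caching-vs-pushdown decision described in that comment amounts to: when the port cache is enabled, fetch the port list unfiltered (so the cached list stays reusable for any query) and apply the filters locally. A minimal sketch, with illustrative stand-in callables for the real methods:

```python
def search_ports_sketch(list_ports, filter_list, port_age, name_or_id,
                        filters):
    """Decide whether to push filters down to the API or filter locally.

    list_ports and filter_list are stand-ins for the real shade methods;
    port_age is the configured cache lifetime (0 disables caching).
    """
    # With caching on, do not push filters down: a filtered fetch would
    # leave the cache holding only a subset of the ports.
    pushdown = None if port_age else filters
    ports = list_ports(pushdown)
    return filter_list(ports, name_or_id, filters)
```

With caching enabled, the API call sees no filters at all and every narrowing happens client-side; with caching off, the same filters are handed straight to the list call.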
if self._PORT_AGE: pushdown_filters = None else: pushdown_filters = filters ports = self.list_ports(pushdown_filters) return _utils._filter_list(ports, name_or_id, filters) def search_volumes(self, name_or_id=None, filters=None): volumes = self.list_volumes() return _utils._filter_list( volumes, name_or_id, filters) def search_volume_snapshots(self, name_or_id=None, filters=None): volumesnapshots = self.list_volume_snapshots() return _utils._filter_list( volumesnapshots, name_or_id, filters) def search_flavors(self, name_or_id=None, filters=None): flavors = self.list_flavors() return _utils._filter_list(flavors, name_or_id, filters) def search_security_groups(self, name_or_id=None, filters=None): groups = self.list_security_groups() return _utils._filter_list(groups, name_or_id, filters) def search_servers(self, name_or_id=None, filters=None, detailed=False): servers = self.list_servers(detailed=detailed) return _utils._filter_list(servers, name_or_id, filters) def search_images(self, name_or_id=None, filters=None): images = self.list_images() return _utils._filter_list(images, name_or_id, filters) def search_floating_ip_pools(self, name=None, filters=None): pools = self.list_floating_ip_pools() return _utils._filter_list(pools, name, filters) # Note (dguerri): when using Neutron, this can be optimized using # server-side search. # There are some cases in which such optimization is not possible (e.g. # nested attributes or list of objects) so we need to use the client-side # filtering when we can't do otherwise. # The same goes for all neutron-related search/get methods! def search_floating_ips(self, id=None, filters=None): floating_ips = self.list_floating_ips() return _utils._filter_list(floating_ips, id, filters) def search_stacks(self, name_or_id=None, filters=None): """Search Heat stacks. :param name_or_id: Name or id of the desired stack. :param filters: a dict containing additional filters to use. e.g. 
{'stack_status': 'CREATE_COMPLETE'} :returns: a list of dict containing the stack description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ stacks = self.list_stacks() return _utils._filter_list(stacks, name_or_id, filters) def list_keypairs(self): """List all available keypairs. :returns: A list of keypair dicts. """ with _utils.shade_exceptions("Error fetching keypair list"): return self.manager.submitTask(_tasks.KeypairList()) def list_networks(self, filters=None): """List all available networks. :param filters: (optional) dict of filter conditions to push down :returns: A list of network dicts. """ # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} with _utils.neutron_exceptions("Error fetching network list"): return self.manager.submitTask( _tasks.NetworkList(**filters))['networks'] def list_routers(self, filters=None): """List all available routers. :param filters: (optional) dict of filter conditions to push down :returns: A list of router dicts. """ # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} with _utils.neutron_exceptions("Error fetching router list"): return self.manager.submitTask( _tasks.RouterList(**filters))['routers'] def list_subnets(self, filters=None): """List all available subnets. :param filters: (optional) dict of filter conditions to push down :returns: A list of subnet dicts. """ # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} with _utils.neutron_exceptions("Error fetching subnet list"): return self.manager.submitTask( _tasks.SubnetList(**filters))['subnets'] def list_ports(self, filters=None): """List all available ports. :param filters: (optional) dict of filter conditions to push down :returns: A list of port dicts. """ # If pushdown filters are specified, bypass local caching. 
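The time-stamped cache guarded by a non-blocking lock that `list_ports` and `list_servers` use can be sketched as a standalone class. This is an illustration of the pattern, not shade's implementation; only the very first call (empty cache) blocks, and later refreshes let losing threads serve stale data rather than wait:

```python
import threading
import time


class TimedListCache(object):
    """Cache a fetched list, refreshing at most one thread at a time."""

    def __init__(self, fetch, max_age):
        self._fetch = fetch        # callable returning the fresh list
        self._max_age = max_age    # seconds before data is considered stale
        self._data = []
        self._time = 0
        self._lock = threading.Lock()

    def get(self):
        if (time.time() - self._time) >= self._max_age:
            # Block only when there is no data at all; otherwise a
            # non-blocking acquire lets other threads fall through and
            # return the old data while one thread refreshes.
            if self._lock.acquire(len(self._data) == 0):
                try:
                    self._data = self._fetch()
                    self._time = time.time()
                finally:
                    self._lock.release()
        return self._data
```

The design choice here is to favor availability over freshness: concurrent callers never stack up behind the API call once an initial result exists.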
if filters: return self._list_ports(filters) # Translate None from search interface to empty {} for kwargs below filters = {} if (time.time() - self._ports_time) >= self._PORT_AGE: # Since we're using cached data anyway, we don't need to # have more than one thread actually submit the list # ports task. Let the first one submit it while holding # a lock, and the non-blocking acquire method will cause # subsequent threads to just skip this and use the old # data until it succeeds. # For the first time, when there is no data, make the call # blocking. if self._ports_lock.acquire(len(self._ports) == 0): try: self._ports = self._list_ports(filters) self._ports_time = time.time() finally: self._ports_lock.release() return self._ports def _list_ports(self, filters): with _utils.neutron_exceptions("Error fetching port list"): return self.manager.submitTask( _tasks.PortList(**filters))['ports'] @_utils.cache_on_arguments(should_cache_fn=_no_pending_volumes) def list_volumes(self, cache=True): """List all available volumes. :returns: A list of volume dicts. """ if not cache: warnings.warn('cache argument to list_volumes is deprecated. Use ' 'invalidate instead.') with _utils.shade_exceptions("Error fetching volume list"): return _utils.normalize_volumes( self.manager.submitTask(_tasks.VolumeList())) @_utils.cache_on_arguments() def list_flavors(self, get_extra=True): """List all available flavors. :returns: A list of flavor dicts. 
""" with _utils.shade_exceptions("Error fetching flavor list"): flavors = self.manager.submitTask( _tasks.FlavorList(is_public=None)) with _utils.shade_exceptions("Error fetching flavor extra specs"): for flavor in flavors: if 'OS-FLV-WITH-EXT-SPECS:extra_specs' in flavor: flavor.extra_specs = flavor.get( 'OS-FLV-WITH-EXT-SPECS:extra_specs') elif get_extra: flavor.extra_specs = self.manager.submitTask( _tasks.FlavorGetExtraSpecs(id=flavor.id)) return _utils.normalize_flavors(flavors) @_utils.cache_on_arguments(should_cache_fn=_no_pending_stacks) def list_stacks(self): """List all Heat stacks. :returns: a list of dict containing the stack description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ with _utils.shade_exceptions("Error fetching stack list"): stacks = self.manager.submitTask(_tasks.StackList()) return _utils.normalize_stacks(stacks) def list_server_security_groups(self, server): """List all security groups associated with the given server. :returns: A list of security group dicts. """ # Don't even try if we're a cloud that doesn't have them if self.secgroup_source not in ('nova', 'neutron'): return [] with _utils.shade_exceptions(): groups = self.manager.submitTask( _tasks.ServerListSecurityGroups(server=server['id'])) return _utils.normalize_nova_secgroups(groups) def list_security_groups(self): """List all available security groups. :returns: A list of security group dicts. """ # Handle neutron security groups if self.secgroup_source == 'neutron': # Neutron returns dicts, so no need to convert objects here. 
with _utils.neutron_exceptions( "Error fetching security group list"): return self.manager.submitTask( _tasks.NeutronSecurityGroupList())['security_groups'] # Handle nova security groups elif self.secgroup_source == 'nova': with _utils.shade_exceptions("Error fetching security group list"): groups = self.manager.submitTask( _tasks.NovaSecurityGroupList()) return _utils.normalize_nova_secgroups(groups) # Security groups not supported else: raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) def list_servers(self, detailed=False): """List all available servers. :returns: A list of server dicts. """ if (time.time() - self._servers_time) >= self._SERVER_AGE: # Since we're using cached data anyway, we don't need to # have more than one thread actually submit the list # servers task. Let the first one submit it while holding # a lock, and the non-blocking acquire method will cause # subsequent threads to just skip this and use the old # data until it succeeds. # For the first time, when there is no data, make the call # blocking. if self._servers_lock.acquire(len(self._servers) == 0): try: self._servers = self._list_servers(detailed=detailed) self._servers_time = time.time() finally: self._servers_lock.release() return self._servers def _list_servers(self, detailed=False): with _utils.shade_exceptions( "Error fetching server list on {cloud}:{region}:".format( cloud=self.name, region=self.region_name)): servers = _utils.normalize_servers( self.manager.submitTask(_tasks.ServerList()), cloud_name=self.name, region_name=self.region_name) if detailed: return [ meta.get_hostvars_from_server(self, server) for server in servers ] else: return [ meta.add_server_interfaces(self, server) for server in servers ] @_utils.cache_on_arguments(should_cache_fn=_no_pending_images) def list_images(self, filter_deleted=True): """Get available glance images. :param filter_deleted: Control whether deleted images are returned. :returns: A list of glance images. 
""" # First, try to actually get images from glance, it's more efficient images = [] try: # Creates a generator - does not actually talk to the cloud API # hardcoding page size for now. We'll have to get MUCH smarter # if we want to deal with page size per unit of rate limiting image_gen = self.glance_client.images.list(page_size=1000) # Deal with the generator to make a list image_list = self.manager.submitTask( _tasks.GlanceImageList(image_gen=image_gen)) except glanceclient.exc.HTTPInternalServerError: # We didn't have glance, let's try nova # If this doesn't work - we just let the exception propagate with _utils.shade_exceptions("Error fetching image list"): image_list = self.manager.submitTask(_tasks.NovaImageList()) except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Error fetching image list: %s" % e) for image in image_list: # The cloud might return DELETED for invalid images. # While that's cute and all, that's an implementation detail. if not filter_deleted: images.append(image) elif image.status != 'DELETED': images.append(image) return images def list_floating_ip_pools(self): """List all available floating IP pools. :returns: A list of floating IP pool dicts. """ if not self._has_nova_extension('os-floating-ip-pools'): raise OpenStackCloudUnavailableExtension( 'Floating IP pools extension is not available on target cloud') with _utils.shade_exceptions("Error fetching floating IP pool list"): return self.manager.submitTask(_tasks.FloatingIPPoolList()) def list_floating_ips(self): """List all available floating IPs. :returns: A list of floating IP dicts. """ if self.has_service('network'): try: return _utils.normalize_neutron_floating_ips( self._neutron_list_floating_ips()) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'{msg}'. 
Trying with Nova.".format(msg=str(e))) # Fall-through, trying with Nova floating_ips = self._nova_list_floating_ips() return _utils.normalize_nova_floating_ips(floating_ips) def _neutron_list_floating_ips(self): with _utils.neutron_exceptions("error fetching floating IPs list"): return self.manager.submitTask( _tasks.NeutronFloatingIPList())['floatingips'] def _nova_list_floating_ips(self): with _utils.shade_exceptions("Error fetching floating IPs list"): return self.manager.submitTask(_tasks.NovaFloatingIPList()) def use_external_network(self): return self._use_external_network def use_internal_network(self): return self._use_internal_network def _reset_network_caches(self): # Variables to prevent us from going through the network finding # logic again if we've done it once. This is different from just # the cached value, since "None" is a valid value to find. with self._networks_lock: self._external_networks = [] self._internal_networks = [] self._external_network_stamp = False self._internal_network_stamp = False def _get_network( self, name_or_id, use_network_func, network_cache, network_stamp, filters): if not use_network_func(): return [] if network_cache: return network_cache if network_stamp: return [] if not self.has_service('network'): return [] if name_or_id: ext_net = self.get_network(name_or_id) if not ext_net: raise OpenStackCloudException( "Network {network} was provided for external" " access and that network could not be found".format( network=name_or_id)) else: return [] try: # TODO(mordred): Rackspace exposes neutron but it does not # work. I think that overriding what the service catalog # reports should be a thing os-client-config should handle # in a vendor profile - but for now it does not. That means # this search_networks can just totally fail. If it does though, # that's fine, clearly the neutron introspection is not going # to work. 
return self.search_networks(filters=filters) except OpenStackCloudException: pass return [] def get_external_networks(self): """Return the networks that are configured to route northbound. :returns: A list of network dicts if one is found """ if self._networks_lock.acquire(): try: _all_networks = self._get_network( self._external_network_name_or_id, self.use_external_network, self._external_networks, self._external_network_stamp, filters=None) # Filter locally because we have an or condition _external_networks = [] for network in _all_networks: if (('router:external' in network and network['router:external']) or 'provider:network_type' in network): _external_networks.append(network) self._external_networks = _external_networks self._external_network_stamp = True finally: self._networks_lock.release() return self._external_networks def get_internal_networks(self): """Return the networks that are configured to not route northbound. :returns: A list of network dicts if one is found """ # Just router:external False is not enough. if self._networks_lock.acquire(): try: _all_networks = self._get_network( self._internal_network_name_or_id, self.use_internal_network, self._internal_networks, self._internal_network_stamp, filters={ 'router:external': False, }) _internal_networks = [] for network in _all_networks: if 'provider:network_type' not in network: _internal_networks.append(network) self._internal_networks = _internal_networks self._internal_network_stamp = True finally: self._networks_lock.release() return self._internal_networks def get_keypair(self, name_or_id, filters=None): """Get a keypair by name or ID. :param name_or_id: Name or ID of the keypair. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A keypair dict or None if no matching keypair is found. 
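        A rough standalone sketch of how such nested filters are interpreted
        (an illustrative re-implementation of subset matching, not the helper
        itself):

        ```python
        def dict_matches(filters, data):
            """Return True if ``data`` satisfies ``filters``, recursing into
            nested dicts so sub-dictionaries are matched key by key."""
            if not data:
                return False
            for key, value in filters.items():
                if isinstance(value, dict):
                    if not dict_matches(value, data.get(key)):
                        return False
                elif data.get(key) != value:
                    return False
            return True
        ```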
""" return _utils._get_entity(self.search_keypairs, name_or_id, filters) def get_network(self, name_or_id, filters=None): """Get a network by name or ID. :param name_or_id: Name or ID of the network. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A network dict or None if no matching network is found. """ return _utils._get_entity(self.search_networks, name_or_id, filters) def get_router(self, name_or_id, filters=None): """Get a router by name or ID. :param name_or_id: Name or ID of the router. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A router dict or None if no matching router is found. """ return _utils._get_entity(self.search_routers, name_or_id, filters) def get_subnet(self, name_or_id, filters=None): """Get a subnet by name or ID. :param name_or_id: Name or ID of the subnet. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A subnet dict or None if no matching subnet is found. """ return _utils._get_entity(self.search_subnets, name_or_id, filters) def get_port(self, name_or_id, filters=None): """Get a port by name or ID. :param name_or_id: Name or ID of the port. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A port dict or None if no matching port is found. 
""" return _utils._get_entity(self.search_ports, name_or_id, filters) def get_volume(self, name_or_id, filters=None): """Get a volume by name or ID. :param name_or_id: Name or ID of the volume. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A volume dict or None if no matching volume is found. """ return _utils._get_entity(self.search_volumes, name_or_id, filters) def get_flavor(self, name_or_id, filters=None): """Get a flavor by name or ID. :param name_or_id: Name or ID of the flavor. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A flavor dict or None if no matching flavor is found. """ return _utils._get_entity(self.search_flavors, name_or_id, filters) def get_security_group(self, name_or_id, filters=None): """Get a security group by name or ID. :param name_or_id: Name or ID of the security group. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A security group dict or None if no matching security group is found. """ return _utils._get_entity( self.search_security_groups, name_or_id, filters) def get_server(self, name_or_id=None, filters=None, detailed=False): """Get a server by name or ID. :param name_or_id: Name or ID of the server. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A server dict or None if no matching server is found. 
""" searchfunc = functools.partial(self.search_servers, detailed=detailed) return _utils._get_entity(searchfunc, name_or_id, filters) def get_server_by_id(self, id): return meta.add_server_interfaces(self, _utils.normalize_server( self.manager.submitTask(_tasks.ServerGet(server=id)), cloud_name=self.name, region_name=self.region_name)) def get_image(self, name_or_id, filters=None): """Get an image by name or ID. :param name_or_id: Name or ID of the image. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: An image dict or None if no matching image is found. """ return _utils._get_entity(self.search_images, name_or_id, filters) def download_image(self, name_or_id, output_path=None, output_file=None): """Download an image from glance by name or ID :param str name_or_id: Name or ID of the image. :param output_path: the output path to write the image to. Either this or output_file must be specified :param output_file: a file object (or file-like object) to write the image data to. Only write() will be called on this object. 
Either this or output_path must be specified :raises: OpenStackCloudException in the event download_image is called without exactly one of either output_path or output_file :raises: OpenStackCloudResourceNotFound if no images are found matching the name or id provided """ if output_path is None and output_file is None: raise OpenStackCloudException('No output specified, an output path' ' or file object is necessary to ' 'write the image data to') elif output_path is not None and output_file is not None: raise OpenStackCloudException('Both an output path and file object' ' were provided, however only one ' 'can be used at once') image = self.search_images(name_or_id) if len(image) == 0: raise OpenStackCloudResourceNotFound( "No images with name or id %s were found" % name_or_id) image_contents = self.glance_client.images.data(image[0]['id']) with _utils.shade_exceptions("Unable to download image"): if output_path: with open(output_path, 'wb') as fd: for chunk in image_contents: fd.write(chunk) return elif output_file: for chunk in image_contents: output_file.write(chunk) return def get_floating_ip(self, id, filters=None): """Get a floating IP by ID :param id: ID of the floating IP. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A floating IP dict or None if no matching floating IP is found. """ return _utils._get_entity(self.search_floating_ips, id, filters) def get_stack(self, name_or_id, filters=None): """Get exactly one Heat stack. :param name_or_id: Name or id of the desired stack. :param filters: a dict containing additional filters to use. e.g. {'stack_status': 'CREATE_COMPLETE'} :returns: a dict containing the stack description :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call or if multiple matches are found. 
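        The exactly-one semantics can be sketched standalone
        (``RuntimeError`` stands in for ``OpenStackCloudException`` and
        ``search_func`` is any of the ``search_*`` methods):

        ```python
        def get_entity(search_func, name_or_id, filters=None):
            """Reduce a search to one hit: None for no matches, the single
            match itself, and an error when the name is ambiguous."""
            matches = search_func(name_or_id, filters)
            if not matches:
                return None
            if len(matches) > 1:
                raise RuntimeError(
                    "Multiple matches found for %s" % name_or_id)
            return matches[0]
        ```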
""" return _utils._get_entity( self.search_stacks, name_or_id, filters) def create_keypair(self, name, public_key): """Create a new keypair. :param name: Name of the keypair being created. :param public_key: Public key for the new keypair. :raises: OpenStackCloudException on operation error. """ with _utils.shade_exceptions("Unable to create keypair {name}".format( name=name)): return self.manager.submitTask(_tasks.KeypairCreate( name=name, public_key=public_key)) def delete_keypair(self, name): """Delete a keypair. :param name: Name of the keypair to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ try: self.manager.submitTask(_tasks.KeypairDelete(key=name)) except nova_exceptions.NotFound: self.log.debug("Keypair %s not found for deleting" % name) return False except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Unable to delete keypair %s: %s" % (name, e)) return True def create_network(self, name, shared=False, admin_state_up=True, external=False, provider=None, project_id=None): """Create a network. :param string name: Name of the network being created. :param bool shared: Set the network as shared. :param bool admin_state_up: Set the network administrative state to up. :param bool external: Whether this network is externally accessible. :param dict provider: A dict of network provider options. Example:: { 'network_type': 'vlan', 'segmentation_id': 'vlan1' } :param string project_id: Specify the project ID this network will be created on (admin-only). :returns: The network object. :raises: OpenStackCloudException on operation error. 
""" network = { 'name': name, 'shared': shared, 'admin_state_up': admin_state_up, } if project_id is not None: network['tenant_id'] = project_id if provider: if not isinstance(provider, dict): raise OpenStackCloudException( "Parameter 'provider' must be a dict") # Only pass what we know for attr in ('physical_network', 'network_type', 'segmentation_id'): if attr in provider: arg = "provider:" + attr network[arg] = provider[attr] # Do not send 'router:external' unless it is explicitly # set since sending it *might* cause "Forbidden" errors in # some situations. It defaults to False in the client, anyway. if external: network['router:external'] = True with _utils.neutron_exceptions( "Error creating network {0}".format(name)): net = self.manager.submitTask( _tasks.NetworkCreate(body=dict({'network': network}))) # Reset cache so the new network is picked up self._reset_network_caches() return net['network'] def delete_network(self, name_or_id): """Delete a network. :param name_or_id: Name or ID of the network being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ network = self.get_network(name_or_id) if not network: self.log.debug("Network %s not found for deleting" % name_or_id) return False with _utils.neutron_exceptions( "Error deleting network {0}".format(name_or_id)): self.manager.submitTask( _tasks.NetworkDelete(network=network['id'])) # Reset cache so the deleted network is removed self._reset_network_caches() return True def _build_external_gateway_info(self, ext_gateway_net_id, enable_snat, ext_fixed_ips): info = {} if ext_gateway_net_id: info['network_id'] = ext_gateway_net_id # Only send enable_snat if it is different from the Neutron # default of True. Sending it can cause a policy violation error # on some clouds. 
if enable_snat is not None and not enable_snat: info['enable_snat'] = False if ext_fixed_ips: info['external_fixed_ips'] = ext_fixed_ips if info: return info return None def add_router_interface(self, router, subnet_id=None, port_id=None): """Attach a subnet to an internal router interface. Either a subnet ID or port ID must be specified for the internal interface. Supplying both will result in an error. :param dict router: The dict object of the router being changed :param string subnet_id: The ID of the subnet to use for the interface :param string port_id: The ID of the port to use for the interface :returns: A dict with the router id (id), subnet ID (subnet_id), port ID (port_id) and tenant ID (tenant_id). :raises: OpenStackCloudException on operation error. """ body = {} if subnet_id: body['subnet_id'] = subnet_id if port_id: body['port_id'] = port_id with _utils.neutron_exceptions( "Error attaching interface to router {0}".format(router['id']) ): return self.manager.submitTask( _tasks.RouterAddInterface(router=router['id'], body=body) ) def remove_router_interface(self, router, subnet_id=None, port_id=None): """Detach a subnet from an internal router interface. If you specify both subnet and port ID, the subnet ID must correspond to the subnet ID of the first IP address on the port specified by the port ID. Otherwise an error occurs. :param dict router: The dict object of the router being changed :param string subnet_id: The ID of the subnet to use for the interface :param string port_id: The ID of the port to use for the interface :returns: None on success :raises: OpenStackCloudException on operation error. 
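        The request body is built from whichever identifiers are supplied;
        as a standalone sketch:

        ```python
        def build_interface_body(subnet_id=None, port_id=None):
            """Build a router interface request body from the supplied
            subnet and/or port identifiers."""
            body = {}
            if subnet_id:
                body['subnet_id'] = subnet_id
            if port_id:
                body['port_id'] = port_id
            return body
        ```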
""" body = {} if subnet_id: body['subnet_id'] = subnet_id if port_id: body['port_id'] = port_id with _utils.neutron_exceptions( "Error detaching interface from router {0}".format(router['id']) ): return self.manager.submitTask( _tasks.RouterRemoveInterface(router=router['id'], body=body) ) def list_router_interfaces(self, router, interface_type=None): """List all interfaces for a router. :param dict router: A router dict object. :param string interface_type: One of None, "internal", or "external". Controls whether all, internal interfaces or external interfaces are returned. :returns: A list of port dict objects. """ ports = self.search_ports(filters={'device_id': router['id']}) if interface_type: filtered_ports = [] if ('external_gateway_info' in router and 'external_fixed_ips' in router['external_gateway_info']): ext_fixed = \ router['external_gateway_info']['external_fixed_ips'] else: ext_fixed = [] # Compare the subnets (subnet_id, ip_address) on the ports with # the subnets making up the router external gateway. Those ports # that match are the external interfaces, and those that don't # are internal. for port in ports: matched_ext = False for port_subnet in port['fixed_ips']: for router_external_subnet in ext_fixed: if port_subnet == router_external_subnet: matched_ext = True if interface_type == 'internal' and not matched_ext: filtered_ports.append(port) elif interface_type == 'external' and matched_ext: filtered_ports.append(port) return filtered_ports return ports def create_router(self, name=None, admin_state_up=True, ext_gateway_net_id=None, enable_snat=None, ext_fixed_ips=None): """Create a logical router. :param string name: The router name. :param bool admin_state_up: The administrative state of the router. :param string ext_gateway_net_id: Network ID for the external gateway. :param bool enable_snat: Enable Source NAT (SNAT) attribute. :param list ext_fixed_ips: List of dictionaries of desired IP and/or subnet on the external network. 
Example:: [ { "subnet_id": "8ca37218-28ff-41cb-9b10-039601ea7e6b", "ip_address": "192.168.10.2" } ] :returns: The router object. :raises: OpenStackCloudException on operation error. """ router = { 'admin_state_up': admin_state_up } if name: router['name'] = name ext_gw_info = self._build_external_gateway_info( ext_gateway_net_id, enable_snat, ext_fixed_ips ) if ext_gw_info: router['external_gateway_info'] = ext_gw_info with _utils.neutron_exceptions( "Error creating router {0}".format(name)): new_router = self.manager.submitTask( _tasks.RouterCreate(body=dict(router=router))) return new_router['router'] def update_router(self, name_or_id, name=None, admin_state_up=None, ext_gateway_net_id=None, enable_snat=None, ext_fixed_ips=None): """Update an existing logical router. :param string name_or_id: The name or UUID of the router to update. :param string name: The new router name. :param bool admin_state_up: The administrative state of the router. :param string ext_gateway_net_id: The network ID for the external gateway. :param bool enable_snat: Enable Source NAT (SNAT) attribute. :param list ext_fixed_ips: List of dictionaries of desired IP and/or subnet on the external network. Example:: [ { "subnet_id": "8ca37218-28ff-41cb-9b10-039601ea7e6b", "ip_address": "192.168.10.2" } ] :returns: The router object. :raises: OpenStackCloudException on operation error. """ router = {} if name: router['name'] = name if admin_state_up is not None: router['admin_state_up'] = admin_state_up ext_gw_info = self._build_external_gateway_info( ext_gateway_net_id, enable_snat, ext_fixed_ips ) if ext_gw_info: router['external_gateway_info'] = ext_gw_info if not router: self.log.debug("No router data to update") return curr_router = self.get_router(name_or_id) if not curr_router: raise OpenStackCloudException( "Router %s not found." 
% name_or_id) with _utils.neutron_exceptions( "Error updating router {0}".format(name_or_id)): new_router = self.manager.submitTask( _tasks.RouterUpdate( router=curr_router['id'], body=dict(router=router))) return new_router['router'] def delete_router(self, name_or_id): """Delete a logical router. If a name, instead of a unique UUID, is supplied, it is possible that we could find more than one matching router since names are not required to be unique. An error will be raised in this case. :param name_or_id: Name or ID of the router being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ router = self.get_router(name_or_id) if not router: self.log.debug("Router %s not found for deleting" % name_or_id) return False with _utils.neutron_exceptions( "Error deleting router {0}".format(name_or_id)): self.manager.submitTask( _tasks.RouterDelete(router=router['id'])) return True def get_image_exclude(self, name_or_id, exclude): for image in self.search_images(name_or_id): if exclude: if exclude not in image.name: return image else: return image return None def get_image_name(self, image_id, exclude=None): image = self.get_image_exclude(image_id, exclude) if image: return image.name return None def get_image_id(self, image_name, exclude=None): image = self.get_image_exclude(image_name, exclude) if image: return image.id return None def create_image_snapshot( self, name, server, wait=False, timeout=3600, **metadata): image_id = str(self.manager.submitTask(_tasks.ImageSnapshotCreate( image_name=name, server=server, metadata=metadata))) self.list_images.invalidate(self) image = self.get_image(image_id) if not wait: return image return self.wait_for_image(image, timeout=timeout) def wait_for_image(self, image, timeout=3600): image_id = image['id'] for count in _utils._iterate_timeout( timeout, "Timeout waiting for image to snapshot"): self.list_images.invalidate(self) image = self.get_image(image_id) if not 
image: continue if image['status'] == 'active': return image elif image['status'] == 'error': raise OpenStackCloudException( 'Image {image} hit error state'.format(image=image_id)) def delete_image(self, name_or_id, wait=False, timeout=3600): image = self.get_image(name_or_id) with _utils.shade_exceptions("Error in deleting image"): # Note that in v1, the param name is image, but in v2, # it's image_id glance_api_version = self.cloud_config.get_api_version('image') if glance_api_version == '2': self.manager.submitTask( _tasks.ImageDelete(image_id=image.id)) elif glance_api_version == '1': self.manager.submitTask( _tasks.ImageDelete(image=image.id)) self.list_images.invalidate(self) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the image to be deleted."): self._cache.invalidate() if self.get_image(image.id) is None: return def _get_name_and_filename(self, name): # See if name points to an existing file if os.path.exists(name): # Neat. Easy enough return (os.path.splitext(os.path.basename(name))[0], name) # Try appending the disk format name_with_ext = '.'.join(( name, self.cloud_config.config['image_format'])) if os.path.exists(name_with_ext): return (os.path.basename(name), name_with_ext) raise OpenStackCloudException( 'No filename parameter was given to create_image,' ' and {name} was not the path to an existing file.' ' Please provide either a path to an existing file' ' or a name and a filename'.format(name=name)) def create_image( self, name, filename=None, container='images', md5=None, sha256=None, disk_format=None, container_format=None, disable_vendor_agent=True, wait=False, timeout=3600, **kwargs): """Upload an image to Glance. :param str name: Name of the image to create. If it is a pathname of an image, the name will be constructed from the extensionless basename of the path. :param str filename: The path to the file to upload, if needed. 
(optional, defaults to None)
        :param str container: Name of the container in swift where images
            should be uploaded for import if the cloud requires such a thing.
            (optional, defaults to 'images')
        :param str md5: md5 sum of the image file. If not given, an md5 will
            be calculated.
        :param str sha256: sha256 sum of the image file. If not given, a
            sha256 will be calculated.
        :param str disk_format: The disk format the image is in. (optional,
            defaults to the os-client-config config value for this cloud)
        :param str container_format: The container format the image is in.
            (optional, defaults to the os-client-config config value for this
            cloud)
        :param bool disable_vendor_agent: Whether or not to append metadata
            flags to the image to inform the cloud in question to not expect a
            vendor agent to be running. (optional, defaults to True)
        :param bool wait: If true, waits for image to be created. Defaults to
            true - however, be aware that one of the upload methods is always
            synchronous.
        :param timeout: Seconds to wait for image creation. None is forever.

        Additional kwargs will be passed to the image creation as additional
        metadata for the image.
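        When no ``filename`` is given, ``name`` may itself be a path; the
        resolution rule can be sketched with the filesystem check injected
        for clarity (``image_format`` plays the role of the configured
        default, e.g. ``qcow2``):

        ```python
        import os.path

        def resolve_name_and_filename(name, image_format,
                                      exists=os.path.exists):
            """Split a bare path into (image name, filename), trying the
            name as-is first and then with the disk format appended."""
            if exists(name):
                return (os.path.splitext(os.path.basename(name))[0], name)
            name_with_ext = '.'.join((name, image_format))
            if exists(name_with_ext):
                return (os.path.basename(name), name_with_ext)
            raise ValueError(
                'No filename given and %s is not the path to an existing'
                ' file' % name)
        ```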
:returns: A ``munch.Munch`` of the Image object

        :raises: OpenStackCloudException if there are problems uploading
        """
        if not disk_format:
            disk_format = self.cloud_config.config['image_format']
        # If there is no filename, see if name is actually the filename
        if not filename:
            name, filename = self._get_name_and_filename(name)
        if not container_format:
            if disk_format == 'vhd':
                container_format = 'ovf'
            else:
                container_format = 'bare'
        if not md5 or not sha256:
            (md5, sha256) = self._get_file_hashes(filename)
        current_image = self.get_image(name)
        if (current_image and current_image.get(IMAGE_MD5_KEY, '') == md5
                and current_image.get(IMAGE_SHA256_KEY, '') == sha256):
            self.log.debug(
                "image {name} exists and is up to date".format(name=name))
            return current_image
        kwargs[IMAGE_MD5_KEY] = md5
        kwargs[IMAGE_SHA256_KEY] = sha256
        if disable_vendor_agent:
            kwargs.update(self.cloud_config.config['disable_vendor_agent'])
        # We can never have nice things. Glance v1 took "is_public" as a
        # boolean. Glance v2 takes "visibility". If the user gives us
        # is_public, we know what they mean. If they give us visibility, they
        # know what they mean.
if self.cloud_config.get_api_version('image') == '2': if 'is_public' in kwargs: is_public = kwargs.pop('is_public') if is_public: kwargs['visibility'] = 'public' else: kwargs['visibility'] = 'private' try: # This makes me want to die inside if self.image_api_use_tasks: return self._upload_image_task( name, filename, container, current_image=current_image, wait=wait, timeout=timeout, **kwargs) else: image_kwargs = dict(properties=kwargs) if disk_format: image_kwargs['disk_format'] = disk_format if container_format: image_kwargs['container_format'] = container_format return self._upload_image_put(name, filename, **image_kwargs) except OpenStackCloudException: self.log.debug("Image creation failed", exc_info=True) raise except Exception as e: raise OpenStackCloudException( "Image creation failed: {message}".format(message=str(e))) def _upload_image_put_v2(self, name, image_data, **image_kwargs): if 'properties' in image_kwargs: img_props = image_kwargs.pop('properties') for k, v in iter(img_props.items()): image_kwargs[k] = str(v) # some MUST be integer for k in ('min_disk', 'min_ram'): if k in image_kwargs: image_kwargs[k] = int(image_kwargs[k]) image = self.manager.submitTask(_tasks.ImageCreate( name=name, **image_kwargs)) self.manager.submitTask(_tasks.ImageUpload( image_id=image.id, image_data=image_data)) return image def _upload_image_put_v1(self, name, image_data, **image_kwargs): image = self.manager.submitTask(_tasks.ImageCreate( name=name, **image_kwargs)) self.manager.submitTask(_tasks.ImageUpdate( image=image, data=image_data)) return image def _upload_image_put(self, name, filename, **image_kwargs): image_data = open(filename, 'rb') # Because reasons and crying bunnies if self.cloud_config.get_api_version('image') == '2': image = self._upload_image_put_v2(name, image_data, **image_kwargs) else: image = self._upload_image_put_v1(name, image_data, **image_kwargs) self._cache.invalidate() return self.get_image(image.id) def _upload_image_task( self, name, 
filename, container, current_image, wait, timeout, **image_properties): # get new client sessions with self._swift_client_lock: self._swift_client = None with self._swift_service_lock: self._swift_service = None self.create_object( container, name, filename, md5=image_properties.get('md5', None), sha256=image_properties.get('sha256', None)) if not current_image: current_image = self.get_image(name) # TODO(mordred): Can we do something similar to what nodepool does # using glance properties to not delete then upload but instead make a # new "good" image and then mark the old one as "bad" # self.glance_client.images.delete(current_image) task_args = dict( type='import', input=dict( import_from='{container}/{name}'.format( container=container, name=name), image_properties=dict(name=name))) glance_task = self.manager.submitTask( _tasks.ImageTaskCreate(**task_args)) self.list_images.invalidate(self) if wait: image_id = None for count in _utils._iterate_timeout( timeout, "Timeout waiting for the image to import."): try: if image_id is None: status = self.manager.submitTask( _tasks.ImageTaskGet(task_id=glance_task.id)) except glanceclient.exc.HTTPServiceUnavailable: # Intermittent failure - catch and try again continue if status.status == 'success': image_id = status.result['image_id'] try: image = self.get_image(image_id) except glanceclient.exc.HTTPServiceUnavailable: # Intermittent failure - catch and try again continue if image is None: continue self.update_image_properties( image=image, **image_properties) return self.get_image(status.result['image_id']) if status.status == 'failure': if status.message == IMAGE_ERROR_396: glance_task = self.manager.submitTask( _tasks.ImageTaskCreate(**task_args)) self.list_images.invalidate(self) else: raise OpenStackCloudException( "Image creation failed: {message}".format( message=status.message), extra_data=status) else: return glance_task def update_image_properties( self, image=None, name_or_id=None, **properties): if image is 
None: image = self.get_image(name_or_id) img_props = {} for k, v in iter(properties.items()): if v and k in ['ramdisk', 'kernel']: v = self.get_image_id(v) k = '{0}_id'.format(k) img_props[k] = v # This makes me want to die inside if self.cloud_config.get_api_version('image') == '2': return self._update_image_properties_v2(image, img_props) else: return self._update_image_properties_v1(image, img_props) def _update_image_properties_v2(self, image, properties): img_props = {} for k, v in iter(properties.items()): if image.get(k, None) != v: img_props[k] = str(v) if not img_props: return False self.manager.submitTask(_tasks.ImageUpdate( image_id=image.id, **img_props)) self.list_images.invalidate(self) return True def _update_image_properties_v1(self, image, properties): img_props = {} for k, v in iter(properties.items()): if image.properties.get(k, None) != v: img_props[k] = v if not img_props: return False self.manager.submitTask(_tasks.ImageUpdate( image=image, properties=img_props)) self.list_images.invalidate(self) return True def create_volume( self, size, wait=True, timeout=None, image=None, **kwargs): """Create a volume. :param size: Size, in GB of the volume to create. :param name: (optional) Name for the volume. :param description: (optional) Name for the volume. :param wait: If true, waits for volume to be created. :param timeout: Seconds to wait for volume creation. None is forever. :param image: (optional) Image name, id or object from which to create the volume :param kwargs: Keyword arguments as expected for cinder client. :returns: The created volume object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
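        The wait behaviour follows the same polling pattern used throughout
        this class (a simplified stand-in for ``_utils._iterate_timeout``;
        ``get_status`` is a hypothetical callable returning the current
        volume status):

        ```python
        import time

        def wait_for_status(get_status, timeout=60, interval=0.01,
                            good='available', bad='error'):
            """Poll until the resource reaches a terminal status or time
            runs out."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                status = get_status()
                if status == good:
                    return status
                if status == bad:
                    raise RuntimeError('resource entered error state')
                time.sleep(interval)
            raise TimeoutError('timeout waiting for resource')
        ```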
""" if image: image_obj = self.get_image(image) if not image_obj: raise OpenStackCloudException( "Image {image} was requested as the basis for a new" " volume, but was not found on the cloud".format( image=image)) kwargs['imageRef'] = image_obj['id'] kwargs = self._get_volume_kwargs(kwargs) with _utils.shade_exceptions("Error in creating volume"): volume = self.manager.submitTask(_tasks.VolumeCreate( size=size, **kwargs)) self.list_volumes.invalidate(self) if volume['status'] == 'error': raise OpenStackCloudException("Error in creating volume") if wait: vol_id = volume['id'] for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume to be available."): volume = self.get_volume(vol_id) if not volume: continue if volume['status'] == 'available': return volume if volume['status'] == 'error': raise OpenStackCloudException( "Error in creating volume, please check logs") return _utils.normalize_volumes([volume])[0] def delete_volume(self, name_or_id=None, wait=True, timeout=None): """Delete a volume. :param name_or_id: Name or unique ID of the volume. :param wait: If true, waits for volume to be deleted. :param timeout: Seconds to wait for volume deletion. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ self.list_volumes.invalidate(self) volume = self.get_volume(name_or_id) if not volume: self.log.debug( "Volume {name_or_id} does not exist".format( name_or_id=name_or_id), exc_info=True) return False with _utils.shade_exceptions("Error in deleting volume"): try: self.manager.submitTask( _tasks.VolumeDelete(volume=volume['id'])) except cinder_exceptions.NotFound: self.log.debug( "Volume {id} not found when deleting. 
Ignoring.".format( id=volume['id'])) return False self.list_volumes.invalidate(self) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume to be deleted."): if not self.get_volume(volume['id']): break return True def get_volumes(self, server, cache=True): volumes = [] for volume in self.list_volumes(cache=cache): for attach in volume['attachments']: if attach['server_id'] == server['id']: volumes.append(volume) return volumes def get_volume_id(self, name_or_id): volume = self.get_volume(name_or_id) if volume: return volume['id'] return None def volume_exists(self, name_or_id): return self.get_volume(name_or_id) is not None def get_volume_attach_device(self, volume, server_id): """Return the device name a volume is attached to for a server. This can also be used to verify if a volume is attached to a particular server. :param volume: Volume dict :param server_id: ID of server to check :returns: Device name if attached, None if volume is not attached. """ for attach in volume['attachments']: if server_id == attach['server_id']: return attach['device'] return None def detach_volume(self, server, volume, wait=True, timeout=None): """Detach a volume from a server. :param server: The server dict to detach from. :param volume: The volume dict to detach. :param wait: If true, waits for volume to be detached. :param timeout: Seconds to wait for volume detachment. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
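The attachment scan in get_volume_attach_device above is self-contained enough to sketch standalone. The dicts below are hypothetical stand-ins for the volume structures shade returns, and `volume_attach_device` is an illustrative name, not a shade helper:

```python
# Standalone sketch of the attachment-scan logic in
# get_volume_attach_device: walk a volume's attachments and return
# the device name for a matching server, or None if not attached.
def volume_attach_device(volume, server_id):
    for attach in volume.get('attachments', []):
        if attach['server_id'] == server_id:
            return attach['device']
    return None

volume = {'id': 'v1', 'attachments': [
    {'server_id': 's1', 'device': '/dev/vdb'}]}
print(volume_attach_device(volume, 's1'))  # /dev/vdb
print(volume_attach_device(volume, 's2'))  # None
```

Returning None (rather than raising) is what lets callers such as detach_volume and attach_volume use the result as a simple attached/not-attached check.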
""" dev = self.get_volume_attach_device(volume, server['id']) if not dev: raise OpenStackCloudException( "Volume %s is not attached to server %s" % (volume['id'], server['id']) ) with _utils.shade_exceptions( "Error detaching volume {volume} from server {server}".format( volume=volume['id'], server=server['id'])): self.manager.submitTask( _tasks.VolumeDetach(attachment_id=volume['id'], server_id=server['id'])) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for volume %s to detach." % volume['id']): try: vol = self.get_volume(volume['id']) except Exception: self.log.debug( "Error getting volume info %s" % volume['id'], exc_info=True) continue if vol['status'] == 'available': return if vol['status'] == 'error': raise OpenStackCloudException( "Error in detaching volume %s" % volume['id'] ) def attach_volume(self, server, volume, device=None, wait=True, timeout=None): """Attach a volume to a server. This will attach a volume, described by the passed in volume dict (as returned by get_volume()), to the server described by the passed in server dict (as returned by get_server()) on the named device on the server. If the volume is already attached to the server, or generally not available, then an exception is raised. To re-attach to a server, but under a different device, the user must detach it first. :param server: The server dict to attach to. :param volume: The volume dict to attach. :param device: The device name where the volume will attach. :param wait: If true, waits for volume to be attached. :param timeout: Seconds to wait for volume attachment. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
""" dev = self.get_volume_attach_device(volume, server['id']) if dev: raise OpenStackCloudException( "Volume %s already attached to server %s on device %s" % (volume['id'], server['id'], dev) ) if volume['status'] != 'available': raise OpenStackCloudException( "Volume %s is not available. Status is '%s'" % (volume['id'], volume['status']) ) with _utils.shade_exceptions( "Error attaching volume {volume_id} to server " "{server_id}".format(volume_id=volume['id'], server_id=server['id'])): vol = self.manager.submitTask( _tasks.VolumeAttach(volume_id=volume['id'], server_id=server['id'], device=device)) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for volume %s to attach." % volume['id']): try: self.list_volumes.invalidate(self) vol = self.get_volume(volume['id']) except Exception: self.log.debug( "Error getting volume info %s" % volume['id'], exc_info=True) continue if self.get_volume_attach_device(vol, server['id']): break # TODO(Shrews) check to see if a volume can be in error status # and also attached. If so, we should move this # above the get_volume_attach_device call if vol['status'] == 'error': raise OpenStackCloudException( "Error in attaching volume %s" % volume['id'] ) return vol def _get_volume_kwargs(self, kwargs): name = kwargs.pop('name', kwargs.pop('display_name', None)) description = kwargs.pop('description', kwargs.pop('display_description', None)) if name: if self.cloud_config.get_api_version('volume').startswith('2'): kwargs['name'] = name else: kwargs['display_name'] = name if description: if self.cloud_config.get_api_version('volume').startswith('2'): kwargs['description'] = description else: kwargs['display_description'] = description return kwargs @_utils.valid_kwargs('name', 'display_name', 'description', 'display_description') def create_volume_snapshot(self, volume_id, force=False, wait=True, timeout=None, **kwargs): """Create a volume. :param volume_id: the id of the volume to snapshot. 
:param force: If set to True, the snapshot will be created even if the volume is attached to an instance; if False, it will not be. :param name: name of the snapshot, one will be generated if one is not provided :param description: description of the snapshot, one will be generated if one is not provided :param wait: If true, waits for volume snapshot to be created. :param timeout: Seconds to wait for volume snapshot creation. None is forever. :returns: The created volume snapshot object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ kwargs = self._get_volume_kwargs(kwargs) with _utils.shade_exceptions( "Error creating snapshot of volume {volume_id}".format( volume_id=volume_id)): snapshot = self.manager.submitTask( _tasks.VolumeSnapshotCreate( volume_id=volume_id, force=force, **kwargs)) if wait: snapshot_id = snapshot['id'] for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume snapshot to be available." ): snapshot = self.get_volume_snapshot_by_id(snapshot_id) if snapshot['status'] == 'available': break if snapshot['status'] == 'error': raise OpenStackCloudException( "Error in creating volume snapshot, please check logs") return _utils.normalize_volumes([snapshot])[0] def get_volume_snapshot_by_id(self, snapshot_id): """Takes a snapshot_id and gets a dict of the snapshot that matches that id. Note: This is more efficient than get_volume_snapshot. :param snapshot_id: ID of the volume snapshot. """ with _utils.shade_exceptions( "Error getting snapshot {snapshot_id}".format( snapshot_id=snapshot_id)): snapshot = self.manager.submitTask( _tasks.VolumeSnapshotGet( snapshot_id=snapshot_id ) ) return _utils.normalize_volumes([snapshot])[0] def get_volume_snapshot(self, name_or_id, filters=None): """Get a volume snapshot by name or ID. :param name_or_id: Name or ID of the volume snapshot. :param dict filters: A dictionary of metadata to use for further filtering.
Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A volume snapshot dict or None if no matching volume snapshot is found. """ return _utils._get_entity(self.search_volume_snapshots, name_or_id, filters) def list_volume_snapshots(self, detailed=True, search_opts=None): """List all volume snapshots. :returns: A list of volume snapshot dicts. """ with _utils.shade_exceptions("Error getting a list of snapshots"): return _utils.normalize_volumes( self.manager.submitTask( _tasks.VolumeSnapshotList( detailed=detailed, search_opts=search_opts))) def delete_volume_snapshot(self, name_or_id=None, wait=False, timeout=None): """Delete a volume snapshot. :param name_or_id: Name or unique ID of the volume snapshot. :param wait: If true, waits for volume snapshot to be deleted. :param timeout: Seconds to wait for volume snapshot deletion. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error.
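The nested-dict filter matching described in the get_volume_snapshot docstring can be sketched standalone. shade's real implementation lives in `_utils._filter_list`; this `matches` helper is only an illustration of the idea:

```python
# Standalone sketch of nested-dict filter matching: a filter value
# that is itself a dict is matched recursively against the
# corresponding sub-dict of the item.
def matches(item, filters):
    for key, value in filters.items():
        if isinstance(value, dict):
            if not matches(item.get(key, {}), value):
                return False
        elif item.get(key) != value:
            return False
    return True

snap = {'name': 'nightly', 'metadata': {'tier': 'gold'}}
print(matches(snap, {'metadata': {'tier': 'gold'}}))  # True
print(matches(snap, {'name': 'other'}))               # False
```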
""" volumesnapshot = self.get_volume_snapshot(name_or_id) if not volumesnapshot: return False with _utils.shade_exceptions("Error in deleting volume snapshot"): self.manager.submitTask( _tasks.VolumeSnapshotDelete( snapshot=volumesnapshot['id'] ) ) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume snapshot to be deleted."): if not self.get_volume_snapshot(volumesnapshot['id']): break return True def get_server_id(self, name_or_id): server = self.get_server(name_or_id) if server: return server['id'] return None def get_server_private_ip(self, server): return meta.get_server_private_ip(server, self) def get_server_public_ip(self, server): return meta.get_server_external_ipv4(self, server) def get_server_meta(self, server): # TODO(mordred) remove once ansible has moved to Inventory interface server_vars = meta.get_hostvars_from_server(self, server) groups = meta.get_groups_from_server(self, server, server_vars) return dict(server_vars=server_vars, groups=groups) def get_openstack_vars(self, server): return meta.get_hostvars_from_server(self, server) def _expand_server_vars(self, server): # Used by nodepool # TODO(mordred) remove after these make it into what we # actually want the API to be. return meta.expand_server_vars(self, server) def available_floating_ip(self, network=None, server=None): """Get a floating IP from a network or a pool. Return the first available floating IP or allocate a new one. :param network: Nova pool name or Neutron network name or id. :param server: Server the IP is for if known :returns: a (normalized) structure with a floating IP address description. """ if self.has_service('network'): try: f_ips = _utils.normalize_neutron_floating_ips( self._neutron_available_floating_ips( network=network, server=server)) return f_ips[0] except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'{msg}'. 
Trying with Nova.".format(msg=str(e))) # Fall-through, trying with Nova f_ips = _utils.normalize_nova_floating_ips( self._nova_available_floating_ips(pool=network) ) return f_ips[0] def _neutron_available_floating_ips( self, network=None, project_id=None, server=None): """Get a floating IP from a Neutron network. Return a list of available floating IPs or allocate a new one and return it in a list of 1 element. :param network: A single Neutron network name or id, or a list of them. :param server: (server) Server the Floating IP is for :returns: a list of floating IP addresses. :raises: ``OpenStackCloudResourceNotFound``, if an external network that meets the specified criteria cannot be found. """ if project_id is None: # Make sure we are only listing floatingIPs allocated the current # tenant. This is the default behaviour of Nova project_id = self.keystone_session.get_project_id() with _utils.neutron_exceptions("unable to get available floating IPs"): if network: if isinstance(network, six.string_types): network = [network] # Use given list to get first matching external network floating_network_id = None for net in network: for ext_net in self.get_external_networks(): if net in (ext_net['name'], ext_net['id']): floating_network_id = ext_net['id'] break if floating_network_id: break if floating_network_id is None: raise OpenStackCloudResourceNotFound( "unable to find external network {net}".format( net=network) ) else: # Get first existing external network networks = self.get_external_networks() if not networks: raise OpenStackCloudResourceNotFound( "unable to find an external network") floating_network_id = networks[0]['id'] filters = { 'port_id': None, 'floating_network_id': floating_network_id, 'tenant_id': project_id } floating_ips = self._neutron_list_floating_ips() available_ips = _utils._filter_list( floating_ips, name_or_id=None, filters=filters) if available_ips: return available_ips # No available IP found or we didn't try # allocate a new Floating IP 
f_ip = self._neutron_create_floating_ip( network_name_or_id=floating_network_id, server=server) return [f_ip] def _nova_available_floating_ips(self, pool=None): """Get available floating IPs from a floating IP pool. Return a list of available floating IPs or allocate a new one and return it in a list of 1 element. :param pool: Nova floating IP pool name. :returns: a list of floating IP addresses. :raises: ``OpenStackCloudResourceNotFound``, if a floating IP pool is not specified and cannot be found. """ with _utils.shade_exceptions( "Unable to create floating IP in pool {pool}".format( pool=pool)): if pool is None: pools = self.list_floating_ip_pools() if not pools: raise OpenStackCloudResourceNotFound( "unable to find a floating ip pool") pool = pools[0]['name'] filters = { 'instance_id': None, 'pool': pool } floating_ips = self._nova_list_floating_ips() available_ips = _utils._filter_list( floating_ips, name_or_id=None, filters=filters) if available_ips: return available_ips # No available IP found or we did not try. # Allocate a new Floating IP f_ip = self._nova_create_floating_ip(pool=pool) return [f_ip] def create_floating_ip(self, network=None, server=None): """Allocate a new floating IP from a network or a pool. :param network: Nova pool name or Neutron network name or id. :param server: (optional) Server dict for the server to create the IP for and to which it should be attached :returns: a floating IP address :raises: ``OpenStackCloudException``, on operation error. """ if self.has_service('network'): try: f_ips = _utils.normalize_neutron_floating_ips( [self._neutron_create_floating_ip( network_name_or_id=network, server=server)] ) return f_ips[0] except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'{msg}'. 
Trying with Nova.".format(msg=str(e))) # Fall-through, trying with Nova # Else, we are using Nova network f_ips = _utils.normalize_nova_floating_ips( [self._nova_create_floating_ip(pool=network)]) return f_ips[0] def _neutron_create_floating_ip( self, network_name_or_id=None, server=None): with _utils.neutron_exceptions( "unable to create floating IP for net " "{0}".format(network_name_or_id)): if network_name_or_id: network = self.get_network(network_name_or_id) if not network: raise OpenStackCloudResourceNotFound( "unable to find network for floating ips with id " "{0}".format(network_name_or_id)) networks = [network] else: networks = self.get_external_networks() if not networks: raise OpenStackCloudResourceNotFound( "Unable to find an external network in this cloud" " which makes getting a floating IP impossible") kwargs = { 'floating_network_id': networks[0]['id'], } if server: (port, fixed_address) = self._get_free_fixed_port(server) if port: kwargs['port_id'] = port['id'] kwargs['fixed_ip_address'] = fixed_address return self.manager.submitTask(_tasks.NeutronFloatingIPCreate( body={'floatingip': kwargs}))['floatingip'] def _nova_create_floating_ip(self, pool=None): with _utils.shade_exceptions( "Unable to create floating IP in pool {pool}".format( pool=pool)): if pool is None: pools = self.list_floating_ip_pools() if not pools: raise OpenStackCloudResourceNotFound( "unable to find a floating ip pool") pool = pools[0]['name'] pool_ip = self.manager.submitTask( _tasks.NovaFloatingIPCreate(pool=pool)) return pool_ip def delete_floating_ip(self, floating_ip_id): """Deallocate a floating IP from a tenant. :param floating_ip_id: a floating IP address id. :returns: True if the IP address has been deleted, False if the IP address was not found. :raises: ``OpenStackCloudException``, on operation error.
""" if self.has_service('network'): try: return self._neutron_delete_floating_ip(floating_ip_id) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'{msg}'. Trying with Nova.".format(msg=str(e))) # Fall-through, trying with Nova # Else, we are using Nova network return self._nova_delete_floating_ip(floating_ip_id) def _neutron_delete_floating_ip(self, floating_ip_id): try: with _utils.neutron_exceptions("unable to delete floating IP"): self.manager.submitTask( _tasks.NeutronFloatingIPDelete(floatingip=floating_ip_id)) except OpenStackCloudResourceNotFound: return False return True def _nova_delete_floating_ip(self, floating_ip_id): try: self.manager.submitTask( _tasks.NovaFloatingIPDelete(floating_ip=floating_ip_id)) except nova_exceptions.NotFound: return False except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Unable to delete floating IP id {fip_id}: {msg}".format( fip_id=floating_ip_id, msg=str(e))) return True def _attach_ip_to_server( self, server, floating_ip, fixed_address=None, wait=False, timeout=60, skip_attach=False): """Attach a floating IP to a server. :param server: Server dict :param floating_ip: Floating IP dict to attach :param fixed_address: (optional) fixed address to which attach the floating IP to. :param wait: (optional) Wait for the address to appear as assigned to the server in Nova. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param skip_attach: (optional) Skip the actual attach and just do the wait. Defaults to False. :returns: The server dict :raises: OpenStackCloudException, on operation error. 
""" # Short circuit if we're asking to attach an IP that's already # attached ext_ip = meta.get_server_ip(server, ext_tag='floating') if ext_ip == floating_ip['floating_ip_address']: return server if self.has_service('network'): if not skip_attach: try: self._neutron_attach_ip_to_server( server=server, floating_ip=floating_ip, fixed_address=fixed_address) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'{msg}'. Trying with Nova.".format(msg=str(e))) # Fall-through, trying with Nova else: # Nova network self._nova_attach_ip_to_server( server_id=server['id'], floating_ip_id=floating_ip['id'], fixed_address=fixed_address) if wait: # Wait for the address to be assigned to the server server_id = server['id'] for _ in _utils._iterate_timeout( timeout, "Timeout waiting for the floating IP to be attached."): server = self.get_server_by_id(server_id) ext_ip = meta.get_server_ip(server, ext_tag='floating') if ext_ip == floating_ip['floating_ip_address']: return server return server def _get_free_fixed_port(self, server, fixed_address=None): # If we are caching port lists, we may not find the port for # our server if the list is old. Try for at least 2 cache # periods if that is the case. if self._PORT_AGE: timeout = self._PORT_AGE * 2 else: timeout = None for count in _utils._iterate_timeout( timeout, "Timeout waiting for port to show up in list", wait=self._PORT_AGE): try: ports = self.search_ports(filters={'device_id': server['id']}) break except OpenStackCloudTimeout: ports = None if not ports: return (None, None) port = None if not fixed_address: # We're assuming one, because we have no idea what to do with # more than one. 
# TODO(mordred) Fix this for real by allowing a configurable # NAT destination setting port = ports[0] # Select the first available IPv4 address for address in port.get('fixed_ips', list()): try: ip = ipaddress.ip_address(address['ip_address']) except Exception: continue if ip.version == 4: fixed_address = address['ip_address'] return port, fixed_address raise OpenStackCloudException( "unable to find a free fixed IPv4 address for server " "{0}".format(server['id'])) # unfortunately a port can have more than one fixed IP: # we can't use the search_ports filtering for fixed_address as # they are contained in a list. e.g. # # "fixed_ips": [ # { # "subnet_id": "008ba151-0b8c-4a67-98b5-0d2b87666062", # "ip_address": "172.24.4.2" # } # ] # # Search fixed_address for p in ports: for fixed_ip in p['fixed_ips']: if fixed_address == fixed_ip['ip_address']: return (p, fixed_address) return (None, None) def _neutron_attach_ip_to_server( self, server, floating_ip, fixed_address=None): with _utils.neutron_exceptions( "unable to bind a floating ip to server " "{0}".format(server['id'])): # Find an available port (port, fixed_address) = self._get_free_fixed_port( server, fixed_address=fixed_address) if not port: raise OpenStackCloudException( "unable to find a port for server {0}".format( server['id'])) floating_ip_args = {'port_id': port['id']} if fixed_address is not None: floating_ip_args['fixed_ip_address'] = fixed_address return self.manager.submitTask(_tasks.NeutronFloatingIPUpdate( floatingip=floating_ip['id'], body={'floatingip': floating_ip_args} ))['floatingip'] def _nova_attach_ip_to_server(self, server_id, floating_ip_id, fixed_address=None): with _utils.shade_exceptions( "Error attaching IP {ip} to instance {id}".format( ip=floating_ip_id, id=server_id)): f_ip = self.get_floating_ip(id=floating_ip_id) return self.manager.submitTask(_tasks.NovaFloatingIPAttach( server=server_id, address=f_ip['floating_ip_address'], fixed_address=fixed_address)) def
detach_ip_from_server(self, server_id, floating_ip_id): """Detach a floating IP from a server. :param server_id: ID of a server. :param floating_ip_id: ID of the floating IP to detach. :returns: True if the IP has been detached, or False if the IP wasn't attached to any server. :raises: ``OpenStackCloudException``, on operation error. """ if self.has_service('network'): try: return self._neutron_detach_ip_from_server( server_id=server_id, floating_ip_id=floating_ip_id) except OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'{msg}'. Trying with Nova.".format(msg=str(e))) # Fall-through, trying with Nova # Nova network return self._nova_detach_ip_from_server( server_id=server_id, floating_ip_id=floating_ip_id) def _neutron_detach_ip_from_server(self, server_id, floating_ip_id): with _utils.neutron_exceptions( "unable to detach a floating ip from server " "{0}".format(server_id)): f_ip = self.get_floating_ip(id=floating_ip_id) if f_ip is None or not f_ip['attached']: return False self.manager.submitTask(_tasks.NeutronFloatingIPUpdate( floatingip=floating_ip_id, body={'floatingip': {'port_id': None}})) return True def _nova_detach_ip_from_server(self, server_id, floating_ip_id): try: f_ip = self.get_floating_ip(id=floating_ip_id) if f_ip is None: raise OpenStackCloudException( "unable to find floating IP {0}".format(floating_ip_id)) self.manager.submitTask(_tasks.NovaFloatingIPDetach( server=server_id, address=f_ip['floating_ip_address'])) except nova_exceptions.Conflict as e: self.log.debug( "nova floating IP detach failed: {msg}".format(msg=str(e)), exc_info=True) return False except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Error detaching IP {ip} from instance {id}: {msg}".format( ip=floating_ip_id, id=server_id, msg=str(e))) return True def _add_ip_from_pool( self, server, network, fixed_address=None, reuse=True, wait=False, timeout=60): """Add a floating IP to a server from a
given pool This method reuses available IPs, when possible, or allocates new IPs to the current tenant. The floating IP is attached to the given fixed address or to the first server port/fixed address :param server: Server dict :param network: Nova pool name or Neutron network name or id. :param fixed_address: a fixed address :param reuse: Try to reuse existing IPs. Defaults to True. :param wait: (optional) Wait for the address to appear as assigned to the server in Nova. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :returns: the updated server dict """ if reuse: f_ip = self.available_floating_ip(network=network) else: f_ip = self.create_floating_ip(network=network) return self._attach_ip_to_server( server=server, floating_ip=f_ip, fixed_address=fixed_address, wait=wait, timeout=timeout) def add_ip_list( self, server, ips, wait=False, timeout=60, fixed_address=None): """Attach a list of IPs to a server. :param server: a server object :param ips: list of floating IP addresses or a single address :param wait: (optional) Wait for the address to appear as assigned to the server in Nova. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param fixed_address: (optional) Fixed address of the server to attach the IP to :returns: The updated server dict :raises: ``OpenStackCloudException``, on operation error. """ if isinstance(ips, list): ip = ips[0] else: ip = ips f_ip = self.get_floating_ip( id=None, filters={'floating_ip_address': ip}) return self._attach_ip_to_server( server=server, floating_ip=f_ip, wait=wait, timeout=timeout, fixed_address=fixed_address) def add_auto_ip(self, server, wait=False, timeout=60, reuse=True): """Add a floating IP to a server. This method is intended for basic usage. For advanced network architecture (e.g. multiple external networks or servers with multiple interfaces), use other floating IP methods.
This method can reuse available IPs, or allocate new IPs to the current project. :param server: a server dictionary. :param reuse: Whether or not to attempt to reuse IPs, defaults to True. :param wait: (optional) Wait for the address to appear as assigned to the server in Nova. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :returns: Floating IP address attached to server. """ server = self._add_auto_ip( server, wait=wait, timeout=timeout, reuse=reuse) return self.get_server_public_ip(server) def _add_auto_ip(self, server, wait=False, timeout=60, reuse=True): skip_attach = False if reuse: f_ip = self.available_floating_ip() else: f_ip = self.create_floating_ip(server=server) if server: # This gets passed in for both nova and neutron # but is only meaningful for the neutron logic branch skip_attach = True return self._attach_ip_to_server( server=server, floating_ip=f_ip, wait=wait, timeout=timeout, skip_attach=skip_attach) def add_ips_to_server( self, server, auto_ip=True, ips=None, ip_pool=None, wait=False, timeout=60, reuse=True, fixed_address=None): if ip_pool: server = self._add_ip_from_pool( server, ip_pool, reuse=reuse, wait=wait, timeout=timeout, fixed_address=fixed_address) elif ips: server = self.add_ip_list( server, ips, wait=wait, timeout=timeout, fixed_address=fixed_address) elif auto_ip: if not self.get_server_public_ip(server): server = self._add_auto_ip( server, wait=wait, timeout=timeout, reuse=reuse) return server def _get_boot_from_volume_kwargs( self, image, boot_from_volume, boot_volume, volume_size, terminate_volume, volumes, kwargs): if boot_volume or boot_from_volume or volumes: kwargs.setdefault('block_device_mapping_v2', []) else: return kwargs # If we have boot_from_volume but no root volume, then we're # booting an image from volume if boot_volume: volume = self.get_volume(boot_volume) if not volume: raise
OpenStackCloudException( 'Volume {boot_volume} is not a valid volume' ' in {cloud}:{region}'.format( boot_volume=boot_volume, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '0', 'delete_on_termination': terminate_volume, 'destination_type': 'volume', 'uuid': volume['id'], 'source_type': 'volume', } kwargs['block_device_mapping_v2'].append(block_mapping) kwargs['image'] = None elif boot_from_volume: if hasattr(image, 'id'): image_obj = image else: image_obj = self.get_image(image) if not image_obj: raise OpenStackCloudException( 'Image {image} is not a valid image in' ' {cloud}:{region}'.format( image=image, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '0', 'delete_on_termination': terminate_volume, 'destination_type': 'volume', 'uuid': image_obj['id'], 'source_type': 'image', 'volume_size': volume_size, } kwargs['image'] = None kwargs['block_device_mapping_v2'].append(block_mapping) for volume in volumes: volume_obj = self.get_volume(volume) if not volume_obj: raise OpenStackCloudException( 'Volume {volume} is not a valid volume' ' in {cloud}:{region}'.format( volume=volume, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': None, 'delete_on_termination': False, 'destination_type': 'volume', 'uuid': volume_obj['id'], 'source_type': 'volume', } kwargs['block_device_mapping_v2'].append(block_mapping) if boot_volume or boot_from_volume or volumes: self.list_volumes.invalidate(self) return kwargs @_utils.valid_kwargs( 'meta', 'files', 'userdata', 'reservation_id', 'return_raw', 'min_count', 'max_count', 'security_groups', 'key_name', 'availability_zone', 'block_device_mapping', 'block_device_mapping_v2', 'nics', 'scheduler_hints', 'config_drive', 'admin_pass', 'disk_config') def create_server( self, name, image, flavor, auto_ip=True, ips=None, ip_pool=None, root_volume=None, terminate_volume=False, wait=False, timeout=180, reuse_ips=True, network=None, boot_from_volume=False, 
volume_size='50', boot_volume=None, volumes=None, **kwargs): """Create a virtual server instance. :param name: Something to name the server. :param image: Image dict or id to boot with. :param flavor: Flavor dict or id to boot onto. :param auto_ip: Whether to take actions to find a routable IP for the server. (defaults to True) :param ips: List of IPs to attach to the server (defaults to None) :param ip_pool: Name of the network or floating IP pool to get an address from. (defaults to None) :param root_volume: Name or id of a volume to boot from (defaults to None - deprecated, use boot_volume) :param boot_volume: Name or id of a volume to boot from (defaults to None) :param terminate_volume: If booting from a volume, whether it should be deleted when the server is destroyed. (defaults to False) :param volumes: (optional) A list of volumes to attach to the server :param meta: (optional) A dict of arbitrary key/value metadata to store for this server. Both keys and values must be <=255 characters. :param files: (optional, deprecated) A dict of files to overwrite on the server upon boot. Keys are file names (e.g. ``/etc/passwd``) and values are the file contents (either as a string or as a file-like object). A maximum of five entries is allowed, and each file must be 10k or less. :param reservation_id: a UUID for the set of servers being requested. :param min_count: (optional extension) The minimum number of servers to launch. :param max_count: (optional extension) The maximum number of servers to launch. :param security_groups: A list of security group names :param userdata: user data to pass to be exposed by the metadata server; this can be a file-like object or a string. :param key_name: (optional extension) name of previously created keypair to inject into the instance. :param availability_zone: Name of the availability zone for instance placement. :param block_device_mapping: (optional) A dict of block device mappings for this server.
        :param block_device_mapping_v2: (optional) A dict of block device
                                        mappings for this server.
        :param nics: (optional extension) an ordered list of nics to be
                     added to this server, with information about
                     connected networks, fixed IPs, port etc.
        :param scheduler_hints: (optional extension) arbitrary key-value pairs
                                specified by the client to help boot an
                                instance
        :param config_drive: (optional extension) value for config drive
                             either boolean, or volume-id
        :param disk_config: (optional extension) control how the disk is
                            partitioned when the server is created. possible
                            values are 'AUTO' or 'MANUAL'.
        :param admin_pass: (optional extension) add a user supplied admin
                           password.
        :param wait: (optional) Wait for the address to appear as assigned
                     to the server in Nova. Defaults to False.
        :param timeout: (optional) Seconds to wait, defaults to 180. See the
                        ``wait`` parameter.
        :param reuse_ips: (optional) Whether to attempt to reuse pre-existing
                          floating ips should a floating IP be
                          needed (defaults to True)
        :param network: (optional) Network name or id to attach the server to.
                        Mutually exclusive with the nics parameter.
        :param boot_from_volume: Whether to boot from volume. 'boot_volume'
                                 implies True, but boot_from_volume=True with
                                 no boot_volume is valid and will create a
                                 volume from the image and use that.
        :param volume_size: When booting an image from volume, how big should
                            the created volume be? Defaults to 50.
        :returns: A dict representing the created server.
        :raises: OpenStackCloudException on operation error.
        """
        # nova cli calls this boot_volume. Let's be the same
        if volumes is None:
            volumes = []
        if root_volume and not boot_volume:
            boot_volume = root_volume

        if 'nics' in kwargs and not isinstance(kwargs['nics'], list):
            if isinstance(kwargs['nics'], dict):
                # Be nice and help the user out
                kwargs['nics'] = [kwargs['nics']]
            else:
                raise OpenStackCloudException(
                    'nics parameter to create_server takes a list of dicts.'
                    ' Got: {nics}'.format(nics=kwargs['nics']))

        if network and ('nics' not in kwargs or not kwargs['nics']):
            network_obj = self.get_network(name_or_id=network)
            if not network_obj:
                raise OpenStackCloudException(
                    'Network {network} is not a valid network in'
                    ' {cloud}:{region}'.format(
                        network=network,
                        cloud=self.name, region=self.region_name))
            kwargs['nics'] = [{'net-id': network_obj['id']}]

        kwargs['image'] = image
        kwargs = self._get_boot_from_volume_kwargs(
            image=image, boot_from_volume=boot_from_volume,
            boot_volume=boot_volume, volume_size=str(volume_size),
            terminate_volume=terminate_volume,
            volumes=volumes, kwargs=kwargs)

        with _utils.shade_exceptions("Error in creating instance"):
            server = self.manager.submitTask(_tasks.ServerCreate(
                name=name, flavor=flavor, **kwargs))
            admin_pass = server.get('adminPass') or kwargs.get('admin_pass')
            if not wait:
                # This is a direct get task call to skip the list_servers
                # cache which has absolutely no chance of containing the
                # new server.
                # Only do this if we're not going to wait for the server
                # to complete booting, because the only reason we do it
                # is to get a server record that is the return value from
                # get/list rather than the return value of create. If we're
                # going to do the wait loop below, this is a waste of a call
                server = self.get_server_by_id(server.id)
                if server.status == 'ERROR':
                    raise OpenStackCloudException(
                        "Error in creating the server.")

        if wait:
            server = self.wait_for_server(
                server, auto_ip=auto_ip, ips=ips, ip_pool=ip_pool,
                reuse=reuse_ips, timeout=timeout)

        server.adminPass = admin_pass
        return server

    def wait_for_server(
            self, server, auto_ip=True, ips=None, ip_pool=None,
            reuse=True, timeout=180):
        """
        Wait for a server to reach ACTIVE status.
        """
        server_id = server['id']
        timeout_message = "Timeout waiting for the server to come up."
        start_time = time.time()
        # There is no point in iterating faster than the list_servers cache
        for count in _utils._iterate_timeout(
                timeout, timeout_message, wait=self._SERVER_AGE):
            try:
                # Use the get_server call so that the list_servers
                # cache can be leveraged
                server = self.get_server(server_id)
            except Exception:
                continue
            if not server:
                continue

            # We have more work to do, but the details of that are
            # hidden from the user. So, calculate remaining timeout
            # and pass it down into the IP stack.
            remaining_timeout = timeout - int(time.time() - start_time)
            if remaining_timeout <= 0:
                raise OpenStackCloudTimeout(timeout_message)

            server = self.get_active_server(
                server=server, reuse=reuse,
                auto_ip=auto_ip, ips=ips, ip_pool=ip_pool,
                wait=True, timeout=remaining_timeout)

            if server is not None and server['status'] == 'ACTIVE':
                return server

    def get_active_server(
            self, server, auto_ip=True, ips=None, ip_pool=None,
            reuse=True, wait=False, timeout=180):

        if server['status'] == 'ERROR':
            if 'fault' in server and 'message' in server['fault']:
                raise OpenStackCloudException(
                    "Error in creating the server: {reason}".format(
                        reason=server['fault']['message']),
                    extra_data=dict(server=server))
            raise OpenStackCloudException(
                "Error in creating the server",
                extra_data=dict(server=server))

        if server['status'] == 'ACTIVE':
            if 'addresses' in server and server['addresses']:
                return self.add_ips_to_server(
                    server, auto_ip, ips, ip_pool, reuse=reuse,
                    wait=wait, timeout=timeout)

            self.log.debug(
                'Server {server} reached ACTIVE state without'
                ' being allocated an IP address.'
                ' Deleting server.'.format(server=server['id']))
            try:
                self._delete_server(
                    server=server, wait=wait, timeout=timeout)
            except Exception as e:
                raise OpenStackCloudException(
                    'Server reached ACTIVE state without being'
                    ' allocated an IP address AND then could not'
                    ' be deleted: {0}'.format(e),
                    extra_data=dict(server=server))
            raise OpenStackCloudException(
                'Server reached ACTIVE state without being'
                ' allocated an IP address.',
                extra_data=dict(server=server))
        return None

    def rebuild_server(self, server_id, image_id, admin_pass=None,
                       wait=False, timeout=180):
        with _utils.shade_exceptions("Error in rebuilding instance"):
            server = self.manager.submitTask(_tasks.ServerRebuild(
                server=server_id, image=image_id, password=admin_pass))
        if wait:
            admin_pass = server.get('adminPass') or admin_pass
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for server {0} to "
                    "rebuild.".format(server_id)):
                try:
                    server = self.get_server_by_id(server_id)
                except Exception:
                    continue

                if server['status'] == 'ACTIVE':
                    server.adminPass = admin_pass
                    return server

                if server['status'] == 'ERROR':
                    raise OpenStackCloudException(
                        "Error in rebuilding the server",
                        extra_data=dict(server=server))
        return server

    def delete_server(
            self, name_or_id, wait=False, timeout=180, delete_ips=False):
        """Delete a server instance.

        :param bool wait: If true, waits for server to be deleted.
        :param int timeout: Seconds to wait for server deletion.
        :param bool delete_ips: If true, deletes any floating IPs
            associated with the instance.

        :returns: True if delete succeeded, False otherwise if the
            server does not exist.
        :raises: OpenStackCloudException on operation error.
        """
        server = self.get_server(name_or_id)
        if not server:
            return False

        # This portion of the code is intentionally left as a separate
        # private method in order to avoid an unnecessary API call to get
        # a server we already have.
        return self._delete_server(
            server, wait=wait, timeout=timeout, delete_ips=delete_ips)

    def _delete_server(
            self, server, wait=False, timeout=180, delete_ips=False):
        if not server:
            return False

        if delete_ips:
            floating_ip = meta.get_server_ip(server, ext_tag='floating')
            if floating_ip:
                ips = self.search_floating_ips(filters={
                    'floating_ip_address': floating_ip})
                if len(ips) != 1:
                    raise OpenStackCloudException(
                        "Tried to delete floating ip {floating_ip}"
                        " associated with server {id} but there was"
                        " an error finding it. Something is exceptionally"
                        " broken.".format(
                            floating_ip=floating_ip,
                            id=server['id']))
                self.delete_floating_ip(ips[0]['id'])

        try:
            self.manager.submitTask(
                _tasks.ServerDelete(server=server['id']))
        except nova_exceptions.NotFound:
            return False
        except OpenStackCloudException:
            raise
        except Exception as e:
            raise OpenStackCloudException(
                "Error in deleting server: {0}".format(e))

        if not wait:
            return True

        for count in _utils._iterate_timeout(
                timeout,
                "Timed out waiting for server to get deleted.",
                wait=self._SERVER_AGE):
            try:
                server = self.get_server_by_id(server['id'])
                if not server:
                    break
            except nova_exceptions.NotFound:
                break
            except OpenStackCloudException:
                raise
            except Exception as e:
                raise OpenStackCloudException(
                    "Error in deleting server: {0}".format(e))

        if self.has_service('volume'):
            # If the server has volume attachments, or if it has booted
            # from volume, deleting it will change volume state
            if (not server['image'] or not server['image']['id']
                    or self.get_volume(server)):
                self.list_volumes.invalidate(self)

        # Reset the list servers cache time so that the next list server
        # call gets a new list
        self._servers_time = self._servers_time - self._SERVER_AGE
        return True

    def list_containers(self, full_listing=True):
        try:
            return self.manager.submitTask(_tasks.ContainerList(
                full_listing=full_listing))
        except swift_exceptions.ClientException as e:
            raise OpenStackCloudException(
                "Container list failed: %s (%s/%s)" % (
                    e.http_reason, e.http_host,
                    e.http_path))

    def get_container(self, name, skip_cache=False):
        if skip_cache or name not in self._container_cache:
            try:
                container = self.manager.submitTask(
                    _tasks.ContainerGet(container=name))
                self._container_cache[name] = container
            except swift_exceptions.ClientException as e:
                if e.http_status == 404:
                    return None
                raise OpenStackCloudException(
                    "Container fetch failed: %s (%s/%s)" % (
                        e.http_reason, e.http_host, e.http_path))
        return self._container_cache[name]

    def create_container(self, name, public=False):
        container = self.get_container(name)
        if container:
            return container
        try:
            self.manager.submitTask(
                _tasks.ContainerCreate(container=name))
            if public:
                self.set_container_access(name, 'public')
            return self.get_container(name, skip_cache=True)
        except swift_exceptions.ClientException as e:
            raise OpenStackCloudException(
                "Container creation failed: %s (%s/%s)" % (
                    e.http_reason, e.http_host, e.http_path))

    def delete_container(self, name):
        try:
            self.manager.submitTask(
                _tasks.ContainerDelete(container=name))
        except swift_exceptions.ClientException as e:
            if e.http_status == 404:
                return
            raise OpenStackCloudException(
                "Container deletion failed: %s (%s/%s)" % (
                    e.http_reason, e.http_host, e.http_path))

    def update_container(self, name, headers):
        try:
            self.manager.submitTask(
                _tasks.ContainerUpdate(container=name, headers=headers))
        except swift_exceptions.ClientException as e:
            raise OpenStackCloudException(
                "Container update failed: %s (%s/%s)" % (
                    e.http_reason, e.http_host, e.http_path))

    def set_container_access(self, name, access):
        if access not in OBJECT_CONTAINER_ACLS:
            raise OpenStackCloudException(
                "Invalid container access specified: %s. Must be one of %s"
                % (access, list(OBJECT_CONTAINER_ACLS.keys())))
        header = {'x-container-read': OBJECT_CONTAINER_ACLS[access]}
        self.update_container(name, header)

    def get_container_access(self, name):
        container = self.get_container(name, skip_cache=True)
        if not container:
            raise OpenStackCloudException("Container not found: %s" % name)
        acl = container.get('x-container-read', '')
        try:
            return [p for p, a in OBJECT_CONTAINER_ACLS.items()
                    if acl == a].pop()
        except IndexError:
            raise OpenStackCloudException(
                "Could not determine container access for ACL: %s." % acl)

    def _get_file_hashes(self, filename):
        if filename not in self._file_hash_cache:
            self.log.debug(
                'Calculating hashes for {filename}'.format(filename=filename))
            md5 = hashlib.md5()
            sha256 = hashlib.sha256()
            with open(filename, 'rb') as file_obj:
                for chunk in iter(lambda: file_obj.read(8192), b''):
                    md5.update(chunk)
                    sha256.update(chunk)
            self._file_hash_cache[filename] = dict(
                md5=md5.hexdigest(), sha256=sha256.hexdigest())
            self.log.debug(
                "Image file {filename} md5:{md5} sha256:{sha256}".format(
                    filename=filename,
                    md5=self._file_hash_cache[filename]['md5'],
                    sha256=self._file_hash_cache[filename]['sha256']))
        return (self._file_hash_cache[filename]['md5'],
                self._file_hash_cache[filename]['sha256'])

    @_utils.cache_on_arguments()
    def get_object_capabilities(self):
        return self.manager.submitTask(_tasks.ObjectCapabilities())

    def get_object_segment_size(self, segment_size):
        '''get a segment size that will work given capabilities'''
        if segment_size is None:
            segment_size = DEFAULT_OBJECT_SEGMENT_SIZE
        try:
            caps = self.get_object_capabilities()
        except swift_exceptions.ClientException as e:
            if e.http_status == 412:
                server_max_file_size = DEFAULT_MAX_FILE_SIZE
                self.log.info(
                    "Swift capabilities not supported. "
                    "Using default max file size.")
            else:
                raise OpenStackCloudException(
                    "Could not determine capabilities")
        else:
            server_max_file_size = caps.get('swift', {}).get('max_file_size',
                                                             0)

        if segment_size > server_max_file_size:
            return server_max_file_size
        return segment_size

    def is_object_stale(
            self, container, name, filename, file_md5=None, file_sha256=None):

        metadata = self.get_object_metadata(container, name)
        if not metadata:
            self.log.debug(
                "swift stale check, no object: {container}/{name}".format(
                    container=container, name=name))
            return True

        if file_md5 is None or file_sha256 is None:
            (file_md5, file_sha256) = self._get_file_hashes(filename)

        if metadata.get(OBJECT_MD5_KEY, '') != file_md5:
            self.log.debug(
                "swift md5 mismatch: {filename}!={container}/{name}".format(
                    filename=filename, container=container, name=name))
            return True
        if metadata.get(OBJECT_SHA256_KEY, '') != file_sha256:
            self.log.debug(
                "swift sha256 mismatch: {filename}!={container}/{name}".format(
                    filename=filename, container=container, name=name))
            return True

        self.log.debug(
            "swift object up to date: {container}/{name}".format(
                container=container, name=name))
        return False

    def create_object(
            self, container, name, filename=None,
            md5=None, sha256=None,
            segment_size=None, **headers):
        """Create a file object

        :param container: The name of the container to store the file in.
            This container will be created if it does not exist already.
        :param name: Name for the object within the container.
        :param filename: The path to the local file whose contents will be
            uploaded.
        :param md5: A hexadecimal md5 of the file. (Optional), if it is known
            and can be passed here, it will save repeating the expensive md5
            process. It is assumed to be accurate.
        :param sha256: A hexadecimal sha256 of the file. (Optional) See md5.
        :param segment_size: Break the uploaded object into segments of this
            many bytes.
            (Optional) Shade will attempt to discover the maximum value for
            this from the server if it is not specified, or will use a
            reasonable default.
        :param headers: These will be passed through to the object creation
            API as HTTP Headers.
        :raises: ``OpenStackCloudException`` on operation error.
        """
        if not filename:
            filename = name

        segment_size = self.get_object_segment_size(segment_size)

        if not md5 or not sha256:
            (md5, sha256) = self._get_file_hashes(filename)
        headers[OBJECT_MD5_KEY] = md5
        headers[OBJECT_SHA256_KEY] = sha256
        header_list = sorted([':'.join([k, v]) for (k, v) in headers.items()])

        # On some clouds this is not necessary. On others it is. I'm confused.
        self.create_container(container)

        if self.is_object_stale(container, name, filename, md5, sha256):
            self.log.debug(
                "swift uploading {filename} to {container}/{name}".format(
                    filename=filename, container=container, name=name))
            upload = swiftclient.service.SwiftUploadObject(
                source=filename, object_name=name)
            for r in self.manager.submitTask(_tasks.ObjectCreate(
                    container=container, objects=[upload],
                    options=dict(header=header_list,
                                 segment_size=segment_size))):
                if not r['success']:
                    raise OpenStackCloudException(
                        'Failed at action ({action}) [{error}]:'.format(**r))

    def list_objects(self, container, full_listing=True):
        try:
            return self.manager.submitTask(_tasks.ObjectList(
                container=container, full_listing=full_listing))
        except swift_exceptions.ClientException as e:
            raise OpenStackCloudException(
                "Object list failed: %s (%s/%s)" % (
                    e.http_reason, e.http_host, e.http_path))

    def delete_object(self, container, name):
        """Delete an object from a container.

        :param string container: Name of the container holding the object.
        :param string name: Name of the object to delete.

        :returns: True if delete succeeded, False if the object was not
            found.
        :raises: OpenStackCloudException on operation error.
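        Example (illustrative usage sketch only; assumes ``cloud`` is an
        already-configured ``OpenStackCloud`` instance and the container
        and object names are hypothetical)::

            if cloud.delete_object('backups', 'db.tar.gz'):
                print('deleted')
            else:
                print('object did not exist')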
""" if not self.get_object_metadata(container, name): return False try: self.manager.submitTask(_tasks.ObjectDelete( container=container, obj=name)) except swift_exceptions.ClientException as e: raise OpenStackCloudException( "Object deletion failed: %s (%s/%s)" % ( e.http_reason, e.http_host, e.http_path)) return True def get_object_metadata(self, container, name): try: return self.manager.submitTask(_tasks.ObjectMetadata( container=container, obj=name)) except swift_exceptions.ClientException as e: if e.http_status == 404: return None raise OpenStackCloudException( "Object metadata fetch failed: %s (%s/%s)" % ( e.http_reason, e.http_host, e.http_path)) def get_object(self, container, obj, query_string=None, resp_chunk_size=None): """Get the headers and body of an object from swift :param string container: name of the container. :param string obj: name of the object. :param string query_string: query args for uri. (delimiter, prefix, etc.) :param int resp_chunk_size: chunk size of data to read. :returns: Tuple (headers, body) of the object, or None if the object is not found (404) :raises: OpenStackCloudException on operation error. """ try: return self.manager.submitTask(_tasks.ObjectGet( container=container, obj=obj, query_string=query_string, resp_chunk_size=resp_chunk_size)) except swift_exceptions.ClientException as e: if e.http_status == 404: return None raise OpenStackCloudException( "Object fetch failed: %s (%s/%s)" % ( e.http_reason, e.http_host, e.http_path)) def create_subnet(self, network_name_or_id, cidr, ip_version=4, enable_dhcp=False, subnet_name=None, tenant_id=None, allocation_pools=None, gateway_ip=None, disable_gateway_ip=False, dns_nameservers=None, host_routes=None, ipv6_ra_mode=None, ipv6_address_mode=None): """Create a subnet on a specified network. :param string network_name_or_id: The unique name or ID of the attached network. If a non-unique name is supplied, an exception is raised. :param string cidr: The CIDR. 
        :param int ip_version:
           The IP version, which is 4 or 6.
        :param bool enable_dhcp:
           Set to ``True`` if DHCP is enabled and ``False`` if disabled.
           Default is ``False``.
        :param string subnet_name:
           The name of the subnet.
        :param string tenant_id:
           The ID of the tenant who owns the network. Only administrative
           users can specify a tenant ID other than their own.
        :param list allocation_pools:
           A list of dictionaries of the start and end addresses for the
           allocation pools. For example::

             [
               {
                 "start": "192.168.199.2",
                 "end": "192.168.199.254"
               }
             ]

        :param string gateway_ip:
           The gateway IP address. When you specify both allocation_pools
           and gateway_ip, you must ensure that the gateway IP does not
           overlap with the specified allocation pools.
        :param bool disable_gateway_ip:
           Set to ``True`` if gateway IP address is disabled and ``False``
           if enabled. It is not allowed with gateway_ip.
           Default is ``False``.
        :param list dns_nameservers:
           A list of DNS name servers for the subnet. For example::

             [ "8.8.8.7", "8.8.8.8" ]

        :param list host_routes:
           A list of host route dictionaries for the subnet. For example::

             [
               {
                 "destination": "0.0.0.0/0",
                 "nexthop": "123.456.78.9"
               },
               {
                 "destination": "192.168.0.0/24",
                 "nexthop": "192.168.0.1"
               }
             ]

        :param string ipv6_ra_mode:
           IPv6 Router Advertisement mode. Valid values are:
           'dhcpv6-stateful', 'dhcpv6-stateless', or 'slaac'.
        :param string ipv6_address_mode:
           IPv6 address mode. Valid values are: 'dhcpv6-stateful',
           'dhcpv6-stateless', or 'slaac'.

        :returns: The new subnet object.
        :raises: OpenStackCloudException on operation error.
        """
        network = self.get_network(network_name_or_id)
        if not network:
            raise OpenStackCloudException(
                "Network %s not found." % network_name_or_id)

        if disable_gateway_ip and gateway_ip:
            raise OpenStackCloudException(
                'arg:disable_gateway_ip is not allowed with arg:gateway_ip')

        # The body of the neutron message for the subnet we wish to create.
        # This includes attributes that are required or have defaults.
        subnet = {
            'network_id': network['id'],
            'cidr': cidr,
            'ip_version': ip_version,
            'enable_dhcp': enable_dhcp
        }

        # Add optional attributes to the message.
        if subnet_name:
            subnet['name'] = subnet_name
        if tenant_id:
            subnet['tenant_id'] = tenant_id
        if allocation_pools:
            subnet['allocation_pools'] = allocation_pools
        if gateway_ip:
            subnet['gateway_ip'] = gateway_ip
        if disable_gateway_ip:
            subnet['gateway_ip'] = None
        if dns_nameservers:
            subnet['dns_nameservers'] = dns_nameservers
        if host_routes:
            subnet['host_routes'] = host_routes
        if ipv6_ra_mode:
            subnet['ipv6_ra_mode'] = ipv6_ra_mode
        if ipv6_address_mode:
            subnet['ipv6_address_mode'] = ipv6_address_mode

        with _utils.neutron_exceptions(
                "Error creating subnet on network "
                "{0}".format(network_name_or_id)):
            new_subnet = self.manager.submitTask(
                _tasks.SubnetCreate(body=dict(subnet=subnet)))

        return new_subnet['subnet']

    def delete_subnet(self, name_or_id):
        """Delete a subnet.

        If a name, instead of a unique UUID, is supplied, it is possible
        that we could find more than one matching subnet since names are
        not required to be unique. An error will be raised in this case.

        :param name_or_id: Name or ID of the subnet being deleted.

        :returns: True if delete succeeded, False otherwise.
        :raises: OpenStackCloudException on operation error.
        """
        subnet = self.get_subnet(name_or_id)
        if not subnet:
            self.log.debug("Subnet %s not found for deleting" % name_or_id)
            return False

        with _utils.neutron_exceptions(
                "Error deleting subnet {0}".format(name_or_id)):
            self.manager.submitTask(
                _tasks.SubnetDelete(subnet=subnet['id']))
        return True

    def update_subnet(self, name_or_id, subnet_name=None, enable_dhcp=None,
                      gateway_ip=None, disable_gateway_ip=None,
                      allocation_pools=None, dns_nameservers=None,
                      host_routes=None):
        """Update an existing subnet.

        :param string name_or_id:
           Name or ID of the subnet to update.
        :param string subnet_name:
           The new name of the subnet.
        :param bool enable_dhcp:
           Set to ``True`` if DHCP is enabled and ``False`` if disabled.
        :param string gateway_ip:
           The gateway IP address. When you specify both allocation_pools
           and gateway_ip, you must ensure that the gateway IP does not
           overlap with the specified allocation pools.
        :param bool disable_gateway_ip:
           Set to ``True`` if gateway IP address is disabled and ``False``
           if enabled. It is not allowed with gateway_ip.
           Default is ``False``.
        :param list allocation_pools:
           A list of dictionaries of the start and end addresses for the
           allocation pools. For example::

             [
               {
                 "start": "192.168.199.2",
                 "end": "192.168.199.254"
               }
             ]

        :param list dns_nameservers:
           A list of DNS name servers for the subnet. For example::

             [ "8.8.8.7", "8.8.8.8" ]

        :param list host_routes:
           A list of host route dictionaries for the subnet. For example::

             [
               {
                 "destination": "0.0.0.0/0",
                 "nexthop": "123.456.78.9"
               },
               {
                 "destination": "192.168.0.0/24",
                 "nexthop": "192.168.0.1"
               }
             ]

        :returns: The updated subnet object.
        :raises: OpenStackCloudException on operation error.
        """
        subnet = {}
        if subnet_name:
            subnet['name'] = subnet_name
        if enable_dhcp is not None:
            subnet['enable_dhcp'] = enable_dhcp
        if gateway_ip:
            subnet['gateway_ip'] = gateway_ip
        if disable_gateway_ip:
            subnet['gateway_ip'] = None
        if allocation_pools:
            subnet['allocation_pools'] = allocation_pools
        if dns_nameservers:
            subnet['dns_nameservers'] = dns_nameservers
        if host_routes:
            subnet['host_routes'] = host_routes

        if not subnet:
            self.log.debug("No subnet data to update")
            return

        if disable_gateway_ip and gateway_ip:
            raise OpenStackCloudException(
                'arg:disable_gateway_ip is not allowed with arg:gateway_ip')

        curr_subnet = self.get_subnet(name_or_id)
        if not curr_subnet:
            raise OpenStackCloudException(
                "Subnet %s not found."
                % name_or_id)

        with _utils.neutron_exceptions(
                "Error updating subnet {0}".format(name_or_id)):
            new_subnet = self.manager.submitTask(
                _tasks.SubnetUpdate(
                    subnet=curr_subnet['id'], body=dict(subnet=subnet)))
        return new_subnet['subnet']

    @_utils.valid_kwargs('name', 'admin_state_up', 'mac_address', 'fixed_ips',
                         'subnet_id', 'ip_address', 'security_groups',
                         'allowed_address_pairs', 'extra_dhcp_opts',
                         'device_owner', 'device_id')
    def create_port(self, network_id, **kwargs):
        """Create a port

        :param network_id: The ID of the network. (Required)
        :param name: A symbolic name for the port. (Optional)
        :param admin_state_up: The administrative status of the port,
            which is up (true, default) or down (false). (Optional)
        :param mac_address: The MAC address. (Optional)
        :param fixed_ips: List of ip_addresses and subnet_ids. See subnet_id
            and ip_address. (Optional)
            For example::

              [
                {
                  "ip_address": "10.29.29.13",
                  "subnet_id": "a78484c4-c380-4b47-85aa-21c51a2d8cbd"
                }, ...
              ]

        :param subnet_id: If you specify only a subnet ID, OpenStack
            Networking allocates an available IP from that subnet to
            the port. (Optional)
            If you specify both a subnet ID and an IP address, OpenStack
            Networking tries to allocate the specified address to the port.
        :param ip_address: If you specify both a subnet ID and an IP address,
            OpenStack Networking tries to allocate the specified address to
            the port.
        :param security_groups: List of security group UUIDs. (Optional)
        :param allowed_address_pairs: Allowed address pairs list (Optional)
            For example::

              [
                {
                  "ip_address": "23.23.23.1",
                  "mac_address": "fa:16:3e:c4:cd:3f"
                }, ...
              ]

        :param extra_dhcp_opts: Extra DHCP options. (Optional).
            For example::

              [
                {
                  "opt_name": "opt name1",
                  "opt_value": "value1"
                }, ...
              ]

        :param device_owner: The ID of the entity that uses this port.
            For example, a DHCP agent. (Optional)
        :param device_id: The ID of the device that uses this port.
            For example, a virtual server. (Optional)

        :returns: a dictionary describing the created port.
        :raises: ``OpenStackCloudException`` on operation error.
        """
        kwargs['network_id'] = network_id

        with _utils.neutron_exceptions(
                "Error creating port for network {0}".format(network_id)):
            return self.manager.submitTask(
                _tasks.PortCreate(body={'port': kwargs}))['port']

    @_utils.valid_kwargs('name', 'admin_state_up', 'fixed_ips',
                         'security_groups', 'allowed_address_pairs',
                         'extra_dhcp_opts', 'device_owner')
    def update_port(self, name_or_id, **kwargs):
        """Update a port

        Note: to unset an attribute use None value. To leave an attribute
        untouched just omit it.

        :param name_or_id: name or id of the port to update. (Required)
        :param name: A symbolic name for the port. (Optional)
        :param admin_state_up: The administrative status of the port,
            which is up (true) or down (false). (Optional)
        :param fixed_ips: List of ip_addresses and subnet_ids. (Optional)
            If you specify only a subnet ID, OpenStack Networking allocates
            an available IP from that subnet to the port.
            If you specify both a subnet ID and an IP address, OpenStack
            Networking tries to allocate the specified address to the port.
            For example::

              [
                {
                  "ip_address": "10.29.29.13",
                  "subnet_id": "a78484c4-c380-4b47-85aa-21c51a2d8cbd"
                }, ...
              ]

        :param security_groups: List of security group UUIDs. (Optional)
        :param allowed_address_pairs: Allowed address pairs list (Optional)
            For example::

              [
                {
                  "ip_address": "23.23.23.1",
                  "mac_address": "fa:16:3e:c4:cd:3f"
                }, ...
              ]

        :param extra_dhcp_opts: Extra DHCP options. (Optional).
            For example::

              [
                {
                  "opt_name": "opt name1",
                  "opt_value": "value1"
                }, ...
              ]

        :param device_owner: The ID of the entity that uses this port.
            For example, a DHCP agent. (Optional)

        :returns: a dictionary describing the updated port.
        :raises: OpenStackCloudException on operation error.
""" port = self.get_port(name_or_id=name_or_id) if port is None: raise OpenStackCloudException( "failed to find port '{port}'".format(port=name_or_id)) with _utils.neutron_exceptions( "Error updating port {0}".format(name_or_id)): return self.manager.submitTask( _tasks.PortUpdate( port=port['id'], body={'port': kwargs}))['port'] def delete_port(self, name_or_id): """Delete a port :param name_or_id: id or name of the port to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ port = self.get_port(name_or_id=name_or_id) if port is None: self.log.debug("Port %s not found for deleting" % name_or_id) return False with _utils.neutron_exceptions( "Error deleting port {0}".format(name_or_id)): self.manager.submitTask(_tasks.PortDelete(port=port['id'])) return True def create_security_group(self, name, description): """Create a new security group :param string name: A name for the security group. :param string description: Describes the security group. :returns: A dict representing the new security group. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudUnavailableFeature if security groups are not supported on this cloud. 
""" if self.secgroup_source == 'neutron': with _utils.neutron_exceptions( "Error creating security group {0}".format(name)): group = self.manager.submitTask( _tasks.NeutronSecurityGroupCreate( body=dict(security_group=dict(name=name, description=description)) ) ) return group['security_group'] elif self.secgroup_source == 'nova': with _utils.shade_exceptions( "Failed to create security group '{name}'".format( name=name)): group = self.manager.submitTask( _tasks.NovaSecurityGroupCreate( name=name, description=description ) ) return _utils.normalize_nova_secgroups([group])[0] # Security groups not supported else: raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) def delete_security_group(self, name_or_id): """Delete a security group :param string name_or_id: The name or unique ID of the security group. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudUnavailableFeature if security groups are not supported on this cloud. """ secgroup = self.get_security_group(name_or_id) if secgroup is None: self.log.debug('Security group %s not found for deleting' % name_or_id) return False if self.secgroup_source == 'neutron': with _utils.neutron_exceptions( "Error deleting security group {0}".format(name_or_id)): self.manager.submitTask( _tasks.NeutronSecurityGroupDelete( security_group=secgroup['id'] ) ) return True elif self.secgroup_source == 'nova': with _utils.shade_exceptions( "Failed to delete security group '{group}'".format( group=name_or_id)): self.manager.submitTask( _tasks.NovaSecurityGroupDelete(group=secgroup['id']) ) return True # Security groups not supported else: raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) @_utils.valid_kwargs('name', 'description') def update_security_group(self, name_or_id, **kwargs): """Update a security group :param string name_or_id: Name or ID of the security group to update. 
        :param string name: New name for the security group.
        :param string description: New description for the security group.

        :returns: A dictionary describing the updated security group.

        :raises: OpenStackCloudException on operation error.
        """
        secgroup = self.get_security_group(name_or_id)

        if secgroup is None:
            raise OpenStackCloudException(
                "Security group %s not found." % name_or_id)

        if self.secgroup_source == 'neutron':
            with _utils.neutron_exceptions(
                    "Error updating security group {0}".format(name_or_id)):
                group = self.manager.submitTask(
                    _tasks.NeutronSecurityGroupUpdate(
                        security_group=secgroup['id'],
                        body={'security_group': kwargs})
                )
            return group['security_group']

        elif self.secgroup_source == 'nova':
            with _utils.shade_exceptions(
                    "Failed to update security group '{group}'".format(
                        group=name_or_id)):
                group = self.manager.submitTask(
                    _tasks.NovaSecurityGroupUpdate(
                        group=secgroup['id'], **kwargs)
                )
            return _utils.normalize_nova_secgroups([group])[0]

        # Security groups not supported
        else:
            raise OpenStackCloudUnavailableFeature(
                "Unavailable feature: security groups"
            )

    def create_security_group_rule(self,
                                   secgroup_name_or_id,
                                   port_range_min=None,
                                   port_range_max=None,
                                   protocol=None,
                                   remote_ip_prefix=None,
                                   remote_group_id=None,
                                   direction='ingress',
                                   ethertype='IPv4'):
        """Create a new security group rule

        :param string secgroup_name_or_id:
            The security group name or ID to associate with this security
            group rule. If a non-unique group name is given, an exception
            is raised.
        :param int port_range_min:
            The minimum port number in the range that is matched by the
            security group rule. If the protocol is TCP or UDP, this value
            must be less than or equal to the port_range_max attribute
            value. If nova is used by the cloud provider for security
            groups, then a value of None will be transformed to -1.
        :param int port_range_max:
            The maximum port number in the range that is matched by the
            security group rule. The port_range_min attribute constrains
            the port_range_max attribute.
If nova is used by the cloud provider for security groups, then a value of None will be transformed to -1. :param string protocol: The protocol that is matched by the security group rule. Valid values are None, tcp, udp, and icmp. :param string remote_ip_prefix: The remote IP prefix to be associated with this security group rule. This attribute matches the specified IP prefix as the source IP address of the IP packet. :param string remote_group_id: The remote group ID to be associated with this security group rule. :param string direction: Ingress or egress: The direction in which the security group rule is applied. For a compute instance, an ingress security group rule is applied to incoming (ingress) traffic for that instance. An egress rule is applied to traffic leaving the instance. :param string ethertype: Must be IPv4 or IPv6, and addresses represented in CIDR must match the ingress or egress rules. :returns: A dict representing the new security group rule. :raises: OpenStackCloudException on operation error. """ secgroup = self.get_security_group(secgroup_name_or_id) if not secgroup: raise OpenStackCloudException( "Security group %s not found." % secgroup_name_or_id) if self.secgroup_source == 'neutron': # NOTE: Nova accepts -1 port numbers, but Neutron accepts None # as the equivalent value. rule_def = { 'security_group_id': secgroup['id'], 'port_range_min': None if port_range_min == -1 else port_range_min, 'port_range_max': None if port_range_max == -1 else port_range_max, 'protocol': protocol, 'remote_ip_prefix': remote_ip_prefix, 'remote_group_id': remote_group_id, 'direction': direction, 'ethertype': ethertype } with _utils.neutron_exceptions( "Error creating security group rule"): rule = self.manager.submitTask( _tasks.NeutronSecurityGroupRuleCreate( body={'security_group_rule': rule_def}) ) return rule['security_group_rule'] elif self.secgroup_source == 'nova': # NOTE: Neutron accepts None for protocol. Nova does not. 
if protocol is None: raise OpenStackCloudException('Protocol must be specified') if direction == 'egress': self.log.debug( 'Rule creation failed: Nova does not support egress rules' ) raise OpenStackCloudException('No support for egress rules') # NOTE: Neutron accepts None for ports, but Nova requires -1 # as the equivalent value for ICMP. # # For TCP/UDP, if both are None, Neutron allows this and Nova # represents this as all ports (1-65535). Nova does not accept # None values, so to hide this difference, we will automatically # convert to the full port range. If only a single port value is # specified, it will error as normal. if protocol == 'icmp': if port_range_min is None: port_range_min = -1 if port_range_max is None: port_range_max = -1 elif protocol in ['tcp', 'udp']: if port_range_min is None and port_range_max is None: port_range_min = 1 port_range_max = 65535 with _utils.shade_exceptions( "Failed to create security group rule"): rule = self.manager.submitTask( _tasks.NovaSecurityGroupRuleCreate( parent_group_id=secgroup['id'], ip_protocol=protocol, from_port=port_range_min, to_port=port_range_max, cidr=remote_ip_prefix, group_id=remote_group_id ) ) return _utils.normalize_nova_secgroup_rules([rule])[0] # Security groups not supported else: raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) def delete_security_group_rule(self, rule_id): """Delete a security group rule :param string rule_id: The unique ID of the security group rule. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudUnavailableFeature if security groups are not supported on this cloud. 
""" if self.secgroup_source == 'neutron': try: with _utils.neutron_exceptions( "Error deleting security group rule " "{0}".format(rule_id)): self.manager.submitTask( _tasks.NeutronSecurityGroupRuleDelete( security_group_rule=rule_id) ) except OpenStackCloudResourceNotFound: return False return True elif self.secgroup_source == 'nova': try: self.manager.submitTask( _tasks.NovaSecurityGroupRuleDelete(rule=rule_id) ) except nova_exceptions.NotFound: return False except OpenStackCloudException: raise except Exception as e: raise OpenStackCloudException( "Failed to delete security group rule {id}: {msg}".format( id=rule_id, msg=str(e))) return True # Security groups not supported else: raise OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) shade-1.7.0/shade/tests/0000775000567000056710000000000012677257023016205 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/unit/0000775000567000056710000000000012677257023017164 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/unit/test_create_volume_snapshot.py0000664000567000056710000001123412677256557025362 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_create_volume_snapshot ---------------------------------- Tests for the `create_volume_snapshot` command. 
""" from mock import patch import os_client_config from shade import _utils from shade import meta from shade import OpenStackCloud from shade.tests import base, fakes from shade.exc import (OpenStackCloudException, OpenStackCloudTimeout) class TestCreateVolumeSnapshot(base.TestCase): def setUp(self): super(TestCreateVolumeSnapshot, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) @patch.object(OpenStackCloud, 'cinder_client') def test_create_volume_snapshot_wait(self, mock_cinder): """ Test that create_volume_snapshot with a wait returns the volume snapshot when its status changes to "available". """ build_snapshot = fakes.FakeVolumeSnapshot('1234', 'creating', 'foo', 'derpysnapshot') fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') mock_cinder.volume_snapshots.create.return_value = build_snapshot mock_cinder.volume_snapshots.get.return_value = fake_snapshot mock_cinder.volume_snapshots.list.return_value = [ build_snapshot, fake_snapshot] self.assertEqual( _utils.normalize_volumes( [meta.obj_to_dict(fake_snapshot)])[0], self.client.create_volume_snapshot(volume_id='1234', wait=True) ) mock_cinder.volume_snapshots.create.assert_called_with( force=False, volume_id='1234' ) mock_cinder.volume_snapshots.get.assert_called_with( snapshot_id=meta.obj_to_dict(build_snapshot)['id'] ) @patch.object(OpenStackCloud, 'cinder_client') def test_create_volume_snapshot_with_timeout(self, mock_cinder): """ Test that a timeout while waiting for the volume snapshot to create raises an exception in create_volume_snapshot. 
""" build_snapshot = fakes.FakeVolumeSnapshot('1234', 'creating', 'foo', 'derpysnapshot') mock_cinder.volume_snapshots.create.return_value = build_snapshot mock_cinder.volume_snapshots.get.return_value = build_snapshot mock_cinder.volume_snapshots.list.return_value = [build_snapshot] self.assertRaises( OpenStackCloudTimeout, self.client.create_volume_snapshot, volume_id='1234', wait=True, timeout=1) mock_cinder.volume_snapshots.create.assert_called_with( force=False, volume_id='1234' ) mock_cinder.volume_snapshots.get.assert_called_with( snapshot_id=meta.obj_to_dict(build_snapshot)['id'] ) @patch.object(OpenStackCloud, 'cinder_client') def test_create_volume_snapshot_with_error(self, mock_cinder): """ Test that a error status while waiting for the volume snapshot to create raises an exception in create_volume_snapshot. """ build_snapshot = fakes.FakeVolumeSnapshot('1234', 'creating', 'bar', 'derpysnapshot') error_snapshot = fakes.FakeVolumeSnapshot('1234', 'error', 'blah', 'derpysnapshot') mock_cinder.volume_snapshots.create.return_value = build_snapshot mock_cinder.volume_snapshots.get.return_value = error_snapshot mock_cinder.volume_snapshots.list.return_value = [error_snapshot] self.assertRaises( OpenStackCloudException, self.client.create_volume_snapshot, volume_id='1234', wait=True, timeout=5) mock_cinder.volume_snapshots.create.assert_called_with( force=False, volume_id='1234' ) mock_cinder.volume_snapshots.get.assert_called_with( snapshot_id=meta.obj_to_dict(build_snapshot)['id'] ) shade-1.7.0/shade/tests/unit/test_groups.py0000664000567000056710000000461612677256557022136 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import shade from shade.tests.unit import base from shade.tests import fakes class TestGroups(base.TestCase): def setUp(self): super(TestGroups, self).setUp() self.cloud = shade.operator_cloud(validate=False) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_groups(self, mock_keystone): self.cloud.list_groups() mock_keystone.groups.list.assert_called_once_with() @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_get_group(self, mock_keystone): self.cloud.get_group('1234') mock_keystone.groups.list.assert_called_once_with() @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_delete_group(self, mock_keystone): mock_keystone.groups.list.return_value = [ fakes.FakeGroup('1234', 'name', 'desc') ] self.assertTrue(self.cloud.delete_group('1234')) mock_keystone.groups.list.assert_called_once_with() mock_keystone.groups.delete.assert_called_once_with( group='1234' ) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_create_group(self, mock_keystone): self.cloud.create_group('test-group', 'test desc') mock_keystone.groups.create.assert_called_once_with( name='test-group', description='test desc', domain=None ) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_update_group(self, mock_keystone): mock_keystone.groups.list.return_value = [ fakes.FakeGroup('1234', 'name', 'desc') ] self.cloud.update_group('1234', 'test-group', 'test desc') mock_keystone.groups.list.assert_called_once_with() mock_keystone.groups.update.assert_called_once_with( group='1234', 
name='test-group', description='test desc' ) shade-1.7.0/shade/tests/unit/test_object.py0000664000567000056710000003607112677256557022065 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import os_client_config from os_client_config import cloud_config from swiftclient import service as swift_service from swiftclient import exceptions as swift_exc import testtools import shade import shade.openstackcloud from shade import exc from shade import OpenStackCloud from shade.tests.unit import base class TestObject(base.TestCase): def setUp(self): super(TestObject, self).setUp() config = os_client_config.OpenStackConfig() self.cloud = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) @mock.patch.object(cloud_config.CloudConfig, 'get_session') def test_swift_client_no_endpoint(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = None get_session_mock.return_value = session_mock e = self.assertRaises( exc.OpenStackCloudException, lambda: self.cloud.swift_client) self.assertIn( 'Failed to instantiate object-store client.', str(e)) @mock.patch.object(shade.OpenStackCloud, 'auth_token') @mock.patch.object(shade.OpenStackCloud, 'get_session_endpoint') def test_swift_service(self, endpoint_mock, auth_mock): endpoint_mock.return_value = 'slayer' auth_mock.return_value = 'zulu' 
self.assertIsInstance(self.cloud.swift_service, swift_service.SwiftService) endpoint_mock.assert_called_with(service_key='object-store') @mock.patch.object(shade.OpenStackCloud, 'get_session_endpoint') def test_swift_service_no_endpoint(self, endpoint_mock): endpoint_mock.side_effect = KeyError e = self.assertRaises(exc.OpenStackCloudException, lambda: self.cloud.swift_service) self.assertIn( 'Error constructing swift client', str(e)) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_object_segment_size(self, swift_mock): swift_mock.get_capabilities.return_value = {'swift': {'max_file_size': 1000}} self.assertEqual(900, self.cloud.get_object_segment_size(900)) self.assertEqual(1000, self.cloud.get_object_segment_size(1000)) self.assertEqual(1000, self.cloud.get_object_segment_size(1100)) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_object_segment_size_http_412(self, swift_mock): swift_mock.get_capabilities.side_effect = swift_exc.ClientException( "Precondition failed", http_status=412) self.assertEqual(shade.openstackcloud.DEFAULT_OBJECT_SEGMENT_SIZE, self.cloud.get_object_segment_size(None)) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_create_container(self, mock_swift): """Test creating a (private) container""" name = 'test_container' mock_swift.head_container.return_value = None self.cloud.create_container(name) expected_head_container_calls = [ # once for exist test mock.call(container=name), # once for the final return mock.call(container=name, skip_cache=True) ] self.assertTrue(expected_head_container_calls, mock_swift.head_container.call_args_list) mock_swift.put_container.assert_called_once_with(container=name) # Because the default is 'private', we shouldn't be calling update self.assertFalse(mock_swift.post_container.called) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_create_container_public(self, mock_swift): """Test creating a public container""" name = 
'test_container' mock_swift.head_container.return_value = None self.cloud.create_container(name, public=True) expected_head_container_calls = [ # once for exist test mock.call(container=name), # once for the final return mock.call(container=name, skip_cache=True) ] self.assertTrue(expected_head_container_calls, mock_swift.head_container.call_args_list) mock_swift.put_container.assert_called_once_with(container=name) mock_swift.post_container.assert_called_once_with( container=name, headers={'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS['public']} ) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_create_container_exists(self, mock_swift): """Test creating a container that already exists""" name = 'test_container' fake_container = dict(id='1', name='name') mock_swift.head_container.return_value = fake_container container = self.cloud.create_container(name) mock_swift.head_container.assert_called_once_with(container=name) self.assertEqual(fake_container, container) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_delete_container(self, mock_swift): name = 'test_container' self.cloud.delete_container(name) mock_swift.delete_container.assert_called_once_with(container=name) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_delete_container_404(self, mock_swift): """No exception when deleting a container that does not exist""" name = 'test_container' mock_swift.delete_container.side_effect = swift_exc.ClientException( 'ERROR', http_status=404) self.cloud.delete_container(name) mock_swift.delete_container.assert_called_once_with(container=name) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_delete_container_error(self, mock_swift): """Non-404 swift error re-raised as OSCE""" mock_swift.delete_container.side_effect = swift_exc.ClientException( 'ERROR') self.assertRaises(shade.OpenStackCloudException, self.cloud.delete_container, '') @mock.patch.object(shade.OpenStackCloud, 
'swift_client') def test_update_container(self, mock_swift): name = 'test_container' headers = {'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS['public']} self.cloud.update_container(name, headers) mock_swift.post_container.assert_called_once_with( container=name, headers=headers) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_update_container_error(self, mock_swift): """Swift error re-raised as OSCE""" mock_swift.post_container.side_effect = swift_exc.ClientException( 'ERROR') self.assertRaises(shade.OpenStackCloudException, self.cloud.update_container, '', '') @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_set_container_access_public(self, mock_swift): name = 'test_container' self.cloud.set_container_access(name, 'public') mock_swift.post_container.assert_called_once_with( container=name, headers={'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS['public']}) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_set_container_access_private(self, mock_swift): name = 'test_container' self.cloud.set_container_access(name, 'private') mock_swift.post_container.assert_called_once_with( container=name, headers={'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS['private']}) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_set_container_access_invalid(self, mock_swift): self.assertRaises(shade.OpenStackCloudException, self.cloud.set_container_access, '', 'invalid') @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_container(self, mock_swift): fake_container = { 'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS['public'] } mock_swift.head_container.return_value = fake_container access = self.cloud.get_container_access('foo') self.assertEqual('public', access) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_container_invalid(self, mock_swift): fake_container = {'x-container-read': 'invalid'} 
mock_swift.head_container.return_value = fake_container with testtools.ExpectedException( exc.OpenStackCloudException, "Could not determine container access for ACL: invalid" ): self.cloud.get_container_access('foo') @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_container_access_not_found(self, mock_swift): name = 'invalid_container' mock_swift.head_container.return_value = None with testtools.ExpectedException( exc.OpenStackCloudException, "Container not found: %s" % name ): self.cloud.get_container_access(name) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_list_containers(self, mock_swift): containers = [dict(id='1', name='containter1')] mock_swift.get_account.return_value = ('response_headers', containers) ret = self.cloud.list_containers() mock_swift.get_account.assert_called_once_with(full_listing=True) self.assertEqual(containers, ret) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_list_containers_not_full(self, mock_swift): containers = [dict(id='1', name='containter1')] mock_swift.get_account.return_value = ('response_headers', containers) ret = self.cloud.list_containers(full_listing=False) mock_swift.get_account.assert_called_once_with(full_listing=False) self.assertEqual(containers, ret) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_list_containers_exception(self, mock_swift): mock_swift.get_account.side_effect = swift_exc.ClientException("ERROR") self.assertRaises(exc.OpenStackCloudException, self.cloud.list_containers) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_list_objects(self, mock_swift): objects = [dict(id='1', name='object1')] mock_swift.get_container.return_value = ('response_headers', objects) ret = self.cloud.list_objects('container_name') mock_swift.get_container.assert_called_once_with( container='container_name', full_listing=True) self.assertEqual(objects, ret) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def 
test_list_objects_not_full(self, mock_swift): objects = [dict(id='1', name='object1')] mock_swift.get_container.return_value = ('response_headers', objects) ret = self.cloud.list_objects('container_name', full_listing=False) mock_swift.get_container.assert_called_once_with( container='container_name', full_listing=False) self.assertEqual(objects, ret) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_list_objects_exception(self, mock_swift): mock_swift.get_container.side_effect = swift_exc.ClientException( "ERROR") self.assertRaises(exc.OpenStackCloudException, self.cloud.list_objects, 'container_name') @mock.patch.object(shade.OpenStackCloud, 'get_object_metadata') @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_delete_object(self, mock_swift, mock_get_meta): container_name = 'container_name' object_name = 'object_name' mock_get_meta.return_value = {'object': object_name} self.assertTrue(self.cloud.delete_object(container_name, object_name)) mock_get_meta.assert_called_once_with(container_name, object_name) mock_swift.delete_object.assert_called_once_with( container=container_name, obj=object_name ) @mock.patch.object(shade.OpenStackCloud, 'get_object_metadata') @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_delete_object_not_found(self, mock_swift, mock_get_meta): container_name = 'container_name' object_name = 'object_name' mock_get_meta.return_value = None self.assertFalse(self.cloud.delete_object(container_name, object_name)) mock_get_meta.assert_called_once_with(container_name, object_name) self.assertFalse(mock_swift.delete_object.called) @mock.patch.object(shade.OpenStackCloud, 'get_object_metadata') @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_delete_object_exception(self, mock_swift, mock_get_meta): container_name = 'container_name' object_name = 'object_name' mock_get_meta.return_value = {'object': object_name} mock_swift.delete_object.side_effect = swift_exc.ClientException( 
"ERROR") self.assertRaises(shade.OpenStackCloudException, self.cloud.delete_object, container_name, object_name) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_object(self, mock_swift): fake_resp = ({'headers': 'yup'}, 'test body') mock_swift.get_object.return_value = fake_resp container_name = 'container_name' object_name = 'object_name' resp = self.cloud.get_object(container_name, object_name) self.assertEqual(fake_resp, resp) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_object_not_found(self, mock_swift): mock_swift.get_object.side_effect = swift_exc.ClientException( 'ERROR', http_status=404) container_name = 'container_name' object_name = 'object_name' self.assertIsNone(self.cloud.get_object(container_name, object_name)) mock_swift.get_object.assert_called_once_with( container=container_name, obj=object_name, query_string=None, resp_chunk_size=None) @mock.patch.object(shade.OpenStackCloud, 'swift_client') def test_get_object_exception(self, mock_swift): mock_swift.get_object.side_effect = swift_exc.ClientException("ERROR") container_name = 'container_name' object_name = 'object_name' self.assertRaises(shade.OpenStackCloudException, self.cloud.get_object, container_name, object_name) shade-1.7.0/shade/tests/unit/test_services.py0000664000567000056710000001702212677256557022435 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" test_cloud_services ---------------------------------- Tests Keystone services commands. """ from mock import patch import os_client_config from shade import _utils from shade import meta from shade import OpenStackCloudException from shade.exc import OpenStackCloudUnavailableFeature from shade import OperatorCloud from shade.tests.fakes import FakeService from shade.tests.unit import base class CloudServices(base.TestCase): mock_services = [ {'id': 'id1', 'name': 'service1', 'type': 'type1', 'service_type': 'type1', 'description': 'desc1', 'enabled': True}, {'id': 'id2', 'name': 'service2', 'type': 'type2', 'service_type': 'type2', 'description': 'desc2', 'enabled': True}, {'id': 'id3', 'name': 'service3', 'type': 'type2', 'service_type': 'type2', 'description': 'desc3', 'enabled': True}, {'id': 'id4', 'name': 'service4', 'type': 'type3', 'service_type': 'type3', 'description': 'desc4', 'enabled': True} ] def setUp(self): super(CloudServices, self).setUp() config = os_client_config.OpenStackConfig() self.client = OperatorCloud(cloud_config=config.get_one_cloud( validate=False)) self.mock_ks_services = [FakeService(**kwa) for kwa in self.mock_services] @patch.object(_utils, 'normalize_keystone_services') @patch.object(OperatorCloud, 'keystone_client') @patch.object(os_client_config.cloud_config.CloudConfig, 'get_api_version') def test_create_service_v2(self, mock_api_version, mock_keystone_client, mock_norm): mock_api_version.return_value = '2.0' kwargs = { 'name': 'a service', 'type': 'network', 'description': 'This is a test service' } self.client.create_service(**kwargs) kwargs['service_type'] = kwargs.pop('type') mock_keystone_client.services.create.assert_called_with(**kwargs) self.assertTrue(mock_norm.called) @patch.object(_utils, 'normalize_keystone_services') @patch.object(OperatorCloud, 'keystone_client') @patch.object(os_client_config.cloud_config.CloudConfig, 'get_api_version') def test_create_service_v3(self, mock_api_version, mock_keystone_client, 
mock_norm): mock_api_version.return_value = '3' kwargs = { 'name': 'a v3 service', 'type': 'cinderv2', 'description': 'This is a test service', 'enabled': False } self.client.create_service(**kwargs) mock_keystone_client.services.create.assert_called_with(**kwargs) self.assertTrue(mock_norm.called) @patch.object(os_client_config.cloud_config.CloudConfig, 'get_api_version') def test_update_service_v2(self, mock_api_version): mock_api_version.return_value = '2.0' # NOTE(SamYaple): Update service only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.client.update_service, 'service_id', name='new name') @patch.object(_utils, 'normalize_keystone_services') @patch.object(OperatorCloud, 'keystone_client') @patch.object(os_client_config.cloud_config.CloudConfig, 'get_api_version') def test_update_service_v3(self, mock_api_version, mock_keystone_client, mock_norm): mock_api_version.return_value = '3' kwargs = { 'name': 'updated_name', 'type': 'updated_type', 'service_type': 'updated_type', 'description': 'updated_name', 'enabled': False } service_obj = FakeService(id='id1', **kwargs) mock_keystone_client.services.update.return_value = service_obj self.client.update_service('id1', **kwargs) del kwargs['service_type'] mock_keystone_client.services.update.assert_called_once_with( service='id1', **kwargs ) mock_norm.assert_called_once_with([meta.obj_to_dict(service_obj)]) @patch.object(OperatorCloud, 'keystone_client') def test_list_services(self, mock_keystone_client): mock_keystone_client.services.list.return_value = \ self.mock_ks_services services = self.client.list_services() mock_keystone_client.services.list.assert_called_with() self.assertItemsEqual(self.mock_services, services) @patch.object(OperatorCloud, 'keystone_client') def test_get_service(self, mock_keystone_client): mock_keystone_client.services.list.return_value = \ self.mock_ks_services # Search by id service = self.client.get_service(name_or_id='id4') # test we are getting exactly 1 
element self.assertEqual(service, self.mock_services[3]) # Search by name service = self.client.get_service(name_or_id='service2') # test we are getting exactly 1 element self.assertEqual(service, self.mock_services[1]) # Not found service = self.client.get_service(name_or_id='blah!') self.assertIs(None, service) # Multiple matches # test we are getting an Exception self.assertRaises(OpenStackCloudException, self.client.get_service, name_or_id=None, filters={'type': 'type2'}) @patch.object(OperatorCloud, 'keystone_client') def test_search_services(self, mock_keystone_client): mock_keystone_client.services.list.return_value = \ self.mock_ks_services # Search by id services = self.client.search_services(name_or_id='id4') # test we are getting exactly 1 element self.assertEqual(1, len(services)) self.assertEqual(services, [self.mock_services[3]]) # Search by name services = self.client.search_services(name_or_id='service2') # test we are getting exactly 1 element self.assertEqual(1, len(services)) self.assertEqual(services, [self.mock_services[1]]) # Not found services = self.client.search_services(name_or_id='blah!') self.assertEqual(0, len(services)) # Multiple matches services = self.client.search_services( filters={'type': 'type2'}) # test we are getting exactly 2 elements self.assertEqual(2, len(services)) self.assertEqual(services, [self.mock_services[1], self.mock_services[2]]) @patch.object(OperatorCloud, 'keystone_client') def test_delete_service(self, mock_keystone_client): mock_keystone_client.services.list.return_value = \ self.mock_ks_services # Delete by name self.client.delete_service(name_or_id='service3') mock_keystone_client.services.delete.assert_called_with(id='id3') # Delete by id self.client.delete_service('id1') mock_keystone_client.services.delete.assert_called_with(id='id1') shade-1.7.0/shade/tests/unit/test_network.py0000664000567000056710000001306312677256557022304 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, 
Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import testtools import shade from shade.tests.unit import base class TestNetwork(base.TestCase): @mock.patch.object(shade.OpenStackCloud, 'neutron_client') def test_create_network(self, mock_neutron): self.cloud.create_network("netname") mock_neutron.create_network.assert_called_with( body=dict( network=dict( name='netname', shared=False, admin_state_up=True ) ) ) @mock.patch.object(shade.OpenStackCloud, 'neutron_client') def test_create_network_specific_tenant(self, mock_neutron): self.cloud.create_network("netname", project_id="project_id_value") mock_neutron.create_network.assert_called_with( body=dict( network=dict( name='netname', shared=False, admin_state_up=True, tenant_id="project_id_value", ) ) ) @mock.patch.object(shade.OpenStackCloud, 'neutron_client') def test_create_network_external(self, mock_neutron): self.cloud.create_network("netname", external=True) mock_neutron.create_network.assert_called_with( body=dict( network={ 'name': 'netname', 'shared': False, 'admin_state_up': True, 'router:external': True } ) ) @mock.patch.object(shade.OpenStackCloud, 'neutron_client') def test_create_network_provider(self, mock_neutron): provider_opts = {'physical_network': 'mynet', 'network_type': 'vlan', 'segmentation_id': 'vlan1'} self.cloud.create_network("netname", provider=provider_opts) mock_neutron.create_network.assert_called_once_with( body=dict( network={ 'name': 'netname', 'shared': False, 'admin_state_up': True, 'provider:physical_network': 
                        provider_opts['physical_network'],
                    'provider:network_type':
                        provider_opts['network_type'],
                    'provider:segmentation_id':
                        provider_opts['segmentation_id'],
                }
            )
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_network_provider_ignored_value(self, mock_neutron):
        provider_opts = {'physical_network': 'mynet',
                         'network_type': 'vlan',
                         'segmentation_id': 'vlan1',
                         'should_not_be_passed': 1}
        self.cloud.create_network("netname", provider=provider_opts)
        mock_neutron.create_network.assert_called_once_with(
            body=dict(
                network={
                    'name': 'netname',
                    'shared': False,
                    'admin_state_up': True,
                    'provider:physical_network':
                        provider_opts['physical_network'],
                    'provider:network_type':
                        provider_opts['network_type'],
                    'provider:segmentation_id':
                        provider_opts['segmentation_id'],
                }
            )
        )

    def test_create_network_provider_wrong_type(self):
        provider_opts = "invalid"
        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Parameter 'provider' must be a dict"
        ):
            self.cloud.create_network("netname", provider=provider_opts)

    @mock.patch.object(shade.OpenStackCloud, 'get_network')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_network(self, mock_neutron, mock_get):
        mock_get.return_value = dict(id='net-id', name='test-net')
        self.assertTrue(self.cloud.delete_network('test-net'))
        mock_get.assert_called_once_with('test-net')
        mock_neutron.delete_network.assert_called_once_with(network='net-id')

    @mock.patch.object(shade.OpenStackCloud, 'get_network')
    def test_delete_network_not_found(self, mock_get):
        mock_get.return_value = None
        self.assertFalse(self.cloud.delete_network('test-net'))
        mock_get.assert_called_once_with('test-net')

    @mock.patch.object(shade.OpenStackCloud, 'get_network')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_network_exception(self, mock_neutron, mock_get):
        mock_get.return_value = dict(id='net-id', name='test-net')
        mock_neutron.delete_network.side_effect = Exception()
        with
testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Error deleting network test-net"
        ):
            self.cloud.delete_network('test-net')
        mock_get.assert_called_once_with('test-net')
        mock_neutron.delete_network.assert_called_once_with(network='net-id')
shade-1.7.0/shade/tests/unit/test_floating_ip_neutron.py0000664000567000056710000004526512677256557024661 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_floating_ip_neutron
----------------------------------

Tests Floating IP resource methods for Neutron
"""

from mock import patch
import os_client_config

from neutronclient.common import exceptions as n_exc

from shade import _utils
from shade import exc
from shade import meta
from shade import OpenStackCloud
from shade.tests import fakes
from shade.tests.unit import base


class TestFloatingIP(base.TestCase):
    mock_floating_ip_list_rep = {
        'floatingips': [
            {
                'router_id': 'd23abc8d-2991-4a55-ba98-2aaea84cc72f',
                'tenant_id': '4969c491a3c74ee4af974e6d800c62de',
                'floating_network_id': '376da547-b977-4cfe-9cba-275c80debf57',
                'fixed_ip_address': '192.0.2.29',
                'floating_ip_address': '203.0.113.29',
                'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ab',
                'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda7',
                'status': 'ACTIVE'
            },
            {
                'router_id': None,
                'tenant_id': '4969c491a3c74ee4af974e6d800c62de',
                'floating_network_id': '376da547-b977-4cfe-9cba-275c80debf57',
                'fixed_ip_address': None,
'floating_ip_address': '203.0.113.30', 'port_id': None, 'id': '61cea855-49cb-4846-997d-801b70c71bdd', 'status': 'DOWN' } ] } mock_floating_ip_new_rep = { 'floatingip': { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': None, 'router_id': None, 'status': 'ACTIVE', 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } } mock_get_network_rep = { 'status': 'ACTIVE', 'subnets': [ '54d6f61d-db07-451c-9ab3-b9609b6b6f0b' ], 'name': 'my-network', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': '4fd44f30292945e481c7b8a0c8908869', 'provider:network_type': 'local', 'router:external': True, 'shared': True, 'id': 'my-network-id', 'provider:segmentation_id': None } mock_search_ports_rep = [ { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': 'first-port', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': '70c1db1f-b701-45bd-96e0-a313ee3430b3', 'tenant_id': '', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'compute:None', 'mac_address': 'fa:16:3e:58:42:ed', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '008ba151-0b8c-4a67-98b5-0d2b87666062', 'ip_address': u'172.24.4.2' } ], 'id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'security_groups': [], 'device_id': 'server_id' } ] def assertAreInstances(self, elements, elem_type): for e in elements: self.assertIsInstance(e, elem_type) def setUp(self): super(TestFloatingIP, self).setUp() # floating_ip_source='neutron' is default for OpenStackCloud() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) self.fake_server = meta.obj_to_dict( fakes.FakeServer( 'server-id', '', 'ACTIVE', addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': '192.0.2.129', u'version': 4, 
u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42'}]})) self.floating_ip = _utils.normalize_neutron_floating_ips( self.mock_floating_ip_list_rep['floatingips'])[0] @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_list_floating_ips(self, mock_has_service, mock_neutron_client): mock_has_service.return_value = True mock_neutron_client.list_floatingips.return_value = \ self.mock_floating_ip_list_rep floating_ips = self.client.list_floating_ips() mock_neutron_client.list_floatingips.assert_called_with() self.assertIsInstance(floating_ips, list) self.assertAreInstances(floating_ips, dict) self.assertEqual(2, len(floating_ips)) @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_search_floating_ips(self, mock_has_service, mock_neutron_client): mock_has_service.return_value = True mock_neutron_client.list_floatingips.return_value = \ self.mock_floating_ip_list_rep floating_ips = self.client.search_floating_ips( filters={'attached': False}) mock_neutron_client.list_floatingips.assert_called_with() self.assertIsInstance(floating_ips, list) self.assertAreInstances(floating_ips, dict) self.assertEqual(1, len(floating_ips)) @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_get_floating_ip(self, mock_has_service, mock_neutron_client): mock_has_service.return_value = True mock_neutron_client.list_floatingips.return_value = \ self.mock_floating_ip_list_rep floating_ip = self.client.get_floating_ip( id='2f245a7b-796b-4f26-9cf9-9e82d248fda7') mock_neutron_client.list_floatingips.assert_called_with() self.assertIsInstance(floating_ip, dict) self.assertEqual('203.0.113.29', floating_ip['floating_ip_address']) @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_get_floating_ip_not_found( self, mock_has_service, mock_neutron_client): mock_has_service.return_value = True 
mock_neutron_client.list_floatingips.return_value = \ self.mock_floating_ip_list_rep floating_ip = self.client.get_floating_ip(id='non-existent') self.assertIsNone(floating_ip) @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'search_networks') @patch.object(OpenStackCloud, 'has_service') def test_create_floating_ip( self, mock_has_service, mock_search_networks, mock_neutron_client): mock_has_service.return_value = True mock_search_networks.return_value = [self.mock_get_network_rep] mock_neutron_client.create_floatingip.return_value = \ self.mock_floating_ip_new_rep ip = self.client.create_floating_ip(network='my-network') mock_neutron_client.create_floatingip.assert_called_with( body={'floatingip': {'floating_network_id': 'my-network-id'}} ) self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) @patch.object(_utils, 'normalize_neutron_floating_ips') @patch.object(OpenStackCloud, '_neutron_available_floating_ips') @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'keystone_session') def test_available_floating_ip_neutron(self, mock_keystone, mock_has_service, mock__neutron_call, mock_normalize): """ Test the correct path is taken when using neutron. 
""" # force neutron path mock_has_service.return_value = True mock__neutron_call.return_value = [] self.client.available_floating_ip(network='netname') mock_has_service.assert_called_once_with('network') mock__neutron_call.assert_called_once_with(network='netname', server=None) mock_normalize.assert_called_once_with([]) @patch.object(_utils, '_filter_list') @patch.object(OpenStackCloud, '_neutron_create_floating_ip') @patch.object(OpenStackCloud, '_neutron_list_floating_ips') @patch.object(OpenStackCloud, 'get_external_networks') @patch.object(OpenStackCloud, 'keystone_session') def test__neutron_available_floating_ips( self, mock_keystone_session, mock_get_ext_nets, mock__neutron_list_fips, mock__neutron_create_fip, mock__filter_list): """ Test without specifying a network name. """ mock_keystone_session.get_project_id.return_value = 'proj-id' mock_get_ext_nets.return_value = [self.mock_get_network_rep] mock__neutron_list_fips.return_value = [] mock__filter_list.return_value = [] # Test if first network is selected if no network is given self.client._neutron_available_floating_ips() mock_keystone_session.get_project_id.assert_called_once_with() mock_get_ext_nets.assert_called_once_with() mock__neutron_list_fips.assert_called_once_with() mock__filter_list.assert_called_once_with( [], name_or_id=None, filters={'port_id': None, 'floating_network_id': self.mock_get_network_rep['id'], 'tenant_id': 'proj-id'} ) mock__neutron_create_fip.assert_called_once_with( network_name_or_id=self.mock_get_network_rep['id'], server=None ) @patch.object(_utils, '_filter_list') @patch.object(OpenStackCloud, '_neutron_create_floating_ip') @patch.object(OpenStackCloud, '_neutron_list_floating_ips') @patch.object(OpenStackCloud, 'get_external_networks') @patch.object(OpenStackCloud, 'keystone_session') def test__neutron_available_floating_ips_network( self, mock_keystone_session, mock_get_ext_nets, mock__neutron_list_fips, mock__neutron_create_fip, mock__filter_list): """ Test with 
specifying a network name. """ mock_keystone_session.get_project_id.return_value = 'proj-id' mock_get_ext_nets.return_value = [self.mock_get_network_rep] mock__neutron_list_fips.return_value = [] mock__filter_list.return_value = [] self.client._neutron_available_floating_ips( network=self.mock_get_network_rep['name'] ) mock_keystone_session.get_project_id.assert_called_once_with() mock_get_ext_nets.assert_called_once_with() mock__neutron_list_fips.assert_called_once_with() mock__filter_list.assert_called_once_with( [], name_or_id=None, filters={'port_id': None, 'floating_network_id': self.mock_get_network_rep['id'], 'tenant_id': 'proj-id'} ) mock__neutron_create_fip.assert_called_once_with( network_name_or_id=self.mock_get_network_rep['id'], server=None ) @patch.object(OpenStackCloud, 'get_external_networks') @patch.object(OpenStackCloud, 'keystone_session') def test__neutron_available_floating_ips_invalid_network( self, mock_keystone_session, mock_get_ext_nets): """ Test with an invalid network name. 
""" mock_keystone_session.get_project_id.return_value = 'proj-id' mock_get_ext_nets.return_value = [] self.assertRaises(exc.OpenStackCloudException, self.client._neutron_available_floating_ips, network='INVALID') @patch.object(OpenStackCloud, 'nova_client') @patch.object(OpenStackCloud, 'keystone_session') @patch.object(OpenStackCloud, '_neutron_create_floating_ip') @patch.object(OpenStackCloud, '_attach_ip_to_server') @patch.object(OpenStackCloud, 'has_service') def test_auto_ip_pool_no_reuse( self, mock_has_service, mock_attach_ip_to_server, mock__neutron_create_floating_ip, mock_keystone_session, mock_nova_client): mock_has_service.return_value = True mock__neutron_create_floating_ip.return_value = \ self.mock_floating_ip_list_rep['floatingips'][0] mock_keystone_session.get_project_id.return_value = \ '4969c491a3c74ee4af974e6d800c62df' self.client.add_ips_to_server( dict(id='1234'), ip_pool='my-network', reuse=False) mock__neutron_create_floating_ip.assert_called_once_with( network_name_or_id='my-network', server=None) mock_attach_ip_to_server.assert_called_once_with( server={'id': '1234'}, fixed_address=None, floating_ip=self.floating_ip, wait=False, timeout=60) @patch.object(OpenStackCloud, 'keystone_session') @patch.object(OpenStackCloud, '_neutron_create_floating_ip') @patch.object(OpenStackCloud, '_neutron_list_floating_ips') @patch.object(OpenStackCloud, 'search_networks') @patch.object(OpenStackCloud, 'has_service') def test_available_floating_ip_new( self, mock_has_service, mock_search_networks, mock__neutron_list_floating_ips, mock__neutron_create_floating_ip, mock_keystone_session): mock_has_service.return_value = True mock_search_networks.return_value = [self.mock_get_network_rep] mock__neutron_list_floating_ips.return_value = [] mock__neutron_create_floating_ip.return_value = \ self.mock_floating_ip_new_rep['floatingip'] mock_keystone_session.get_project_id.return_value = \ '4969c491a3c74ee4af974e6d800c62df' ip = 
self.client.available_floating_ip(network='my-network') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_delete_floating_ip_existing( self, mock_has_service, mock_neutron_client, mock_get_floating_ip): mock_has_service.return_value = True mock_get_floating_ip.return_value = { 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda7', } mock_neutron_client.delete_floatingip.return_value = None ret = self.client.delete_floating_ip( floating_ip_id='2f245a7b-796b-4f26-9cf9-9e82d248fda7') mock_neutron_client.delete_floatingip.assert_called_with( floatingip='2f245a7b-796b-4f26-9cf9-9e82d248fda7' ) self.assertTrue(ret) @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_delete_floating_ip_not_found( self, mock_has_service, mock_neutron_client): mock_has_service.return_value = True mock_neutron_client.delete_floatingip.side_effect = \ n_exc.NotFound() ret = self.client.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertFalse(ret) @patch.object(OpenStackCloud, 'search_ports') @patch.object(OpenStackCloud, 'neutron_client') @patch.object(OpenStackCloud, 'has_service') def test_attach_ip_to_server( self, mock_has_service, mock_neutron_client, mock_search_ports): mock_has_service.return_value = True mock_search_ports.return_value = self.mock_search_ports_rep mock_neutron_client.list_floatingips.return_value = \ self.mock_floating_ip_list_rep self.client._attach_ip_to_server( server=self.fake_server, floating_ip=self.floating_ip) mock_neutron_client.update_floatingip.assert_called_with( floatingip=self.mock_floating_ip_list_rep['floatingips'][0]['id'], body={ 'floatingip': { 'port_id': self.mock_search_ports_rep[0]['id'], 'fixed_ip_address': self.mock_search_ports_rep[0][ 'fixed_ips'][0]['ip_address'] } } ) 
    @patch.object(OpenStackCloud, 'get_floating_ip')
    @patch.object(OpenStackCloud, 'neutron_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_detach_ip_from_server(
            self, mock_has_service, mock_neutron_client,
            mock_get_floating_ip):
        mock_has_service.return_value = True
        mock_get_floating_ip.return_value = \
            _utils.normalize_neutron_floating_ips(
                self.mock_floating_ip_list_rep['floatingips'])[0]

        self.client.detach_ip_from_server(
            server_id='server-id',
            floating_ip_id='2f245a7b-796b-4f26-9cf9-9e82d248fda7')

        mock_neutron_client.update_floatingip.assert_called_with(
            floatingip='2f245a7b-796b-4f26-9cf9-9e82d248fda7',
            body={
                'floatingip': {
                    'port_id': None
                }
            }
        )

    @patch.object(OpenStackCloud, '_attach_ip_to_server')
    @patch.object(OpenStackCloud, 'available_floating_ip')
    @patch.object(OpenStackCloud, 'has_service')
    def test_add_ip_from_pool(
            self, mock_has_service, mock_available_floating_ip,
            mock_attach_ip_to_server):
        mock_has_service.return_value = True
        mock_available_floating_ip.return_value = \
            _utils.normalize_neutron_floating_ips([
                self.mock_floating_ip_new_rep['floatingip']])[0]
        mock_attach_ip_to_server.return_value = self.fake_server

        server = self.client._add_ip_from_pool(
            server=self.fake_server,
            network='network-name',
            fixed_address='192.0.2.129')

        self.assertEqual(server, self.fake_server)
shade-1.7.0/shade/tests/unit/test_port.py0000664000567000056710000002412112677256557021574 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. """ test_port ---------------------------------- Test port resource (managed by neutron) """ from mock import patch import os_client_config from shade import OpenStackCloud from shade.exc import OpenStackCloudException from shade.tests.unit import base class TestPort(base.TestCase): mock_neutron_port_create_rep = { 'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'test-port-name', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'test-net-id', 'tenant_id': 'test-tenant-id', 'binding:vif_details': {}, 'binding:vnic_type': 'normal', 'binding:vif_type': 'unbound', 'device_owner': '', 'mac_address': '50:1c:0d:e4:f0:0d', 'binding:profile': {}, 'fixed_ips': [ { 'subnet_id': 'test-subnet-id', 'ip_address': '29.29.29.29' } ], 'id': 'test-port-id', 'security_groups': [], 'device_id': '' } } mock_neutron_port_update_rep = { 'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'test-port-name-updated', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'test-net-id', 'tenant_id': 'test-tenant-id', 'binding:vif_details': {}, 'binding:vnic_type': 'normal', 'binding:vif_type': 'unbound', 'device_owner': '', 'mac_address': '50:1c:0d:e4:f0:0d', 'binding:profile': {}, 'fixed_ips': [ { 'subnet_id': 'test-subnet-id', 'ip_address': '29.29.29.29' } ], 'id': 'test-port-id', 'security_groups': [], 'device_id': '' } } mock_neutron_port_list_rep = { 'ports': [ { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': 'first-port', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': '70c1db1f-b701-45bd-96e0-a313ee3430b3', 'tenant_id': '', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'network:router_gateway', 'mac_address': 'fa:16:3e:58:42:ed', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': 
'008ba151-0b8c-4a67-98b5-0d2b87666062', 'ip_address': '172.24.4.2' } ], 'id': 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', 'security_groups': [], 'device_id': '9ae135f4-b6e0-4dad-9e91-3c223e385824' }, { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': '', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'f27aa545-cbdd-4907-b0c6-c9e8b039dcc2', 'tenant_id': 'd397de8a63f341818f198abb0966f6f3', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'network:router_interface', 'mac_address': 'fa:16:3e:bb:3c:e4', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '288bf4a1-51ba-43b6-9d0a-520e9005db17', 'ip_address': '10.0.0.1' } ], 'id': 'f71a6703-d6de-4be1-a91a-a570ede1d159', 'security_groups': [], 'device_id': '9ae135f4-b6e0-4dad-9e91-3c223e385824' } ] } def setUp(self): super(TestPort, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) @patch.object(OpenStackCloud, 'neutron_client') def test_create_port(self, mock_neutron_client): mock_neutron_client.create_port.return_value = \ self.mock_neutron_port_create_rep port = self.client.create_port( network_id='test-net-id', name='test-port-name', admin_state_up=True) mock_neutron_client.create_port.assert_called_with( body={'port': dict(network_id='test-net-id', name='test-port-name', admin_state_up=True)}) self.assertEqual(self.mock_neutron_port_create_rep['port'], port) def test_create_port_parameters(self): """Test that we detect invalid arguments passed to create_port""" self.assertRaises( TypeError, self.client.create_port, network_id='test-net-id', nome='test-port-name', stato_amministrativo_porta=True) @patch.object(OpenStackCloud, 'neutron_client') def test_create_port_exception(self, mock_neutron_client): mock_neutron_client.create_port.side_effect = Exception('blah') self.assertRaises( 
OpenStackCloudException, self.client.create_port, network_id='test-net-id', name='test-port-name', admin_state_up=True) @patch.object(OpenStackCloud, 'neutron_client') def test_update_port(self, mock_neutron_client): mock_neutron_client.list_ports.return_value = \ self.mock_neutron_port_list_rep mock_neutron_client.update_port.return_value = \ self.mock_neutron_port_update_rep port = self.client.update_port( name_or_id='d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', name='test-port-name-updated') mock_neutron_client.update_port.assert_called_with( port='d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', body={'port': dict(name='test-port-name-updated')}) self.assertEqual(self.mock_neutron_port_update_rep['port'], port) def test_update_port_parameters(self): """Test that we detect invalid arguments passed to update_port""" self.assertRaises( TypeError, self.client.update_port, name_or_id='test-port-id', nome='test-port-name-updated') @patch.object(OpenStackCloud, 'neutron_client') def test_update_port_exception(self, mock_neutron_client): mock_neutron_client.list_ports.return_value = \ self.mock_neutron_port_list_rep mock_neutron_client.update_port.side_effect = Exception('blah') self.assertRaises( OpenStackCloudException, self.client.update_port, name_or_id='d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', name='test-port-name-updated') @patch.object(OpenStackCloud, 'neutron_client') def test_list_ports(self, mock_neutron_client): mock_neutron_client.list_ports.return_value = \ self.mock_neutron_port_list_rep ports = self.client.list_ports() mock_neutron_client.list_ports.assert_called_with() self.assertItemsEqual(self.mock_neutron_port_list_rep['ports'], ports) @patch.object(OpenStackCloud, 'neutron_client') def test_list_ports_exception(self, mock_neutron_client): mock_neutron_client.list_ports.side_effect = Exception('blah') self.assertRaises(OpenStackCloudException, self.client.list_ports) @patch.object(OpenStackCloud, 'neutron_client') def test_search_ports_by_id(self, 
                                  mock_neutron_client):
        mock_neutron_client.list_ports.return_value = \
            self.mock_neutron_port_list_rep
        ports = self.client.search_ports(
            name_or_id='f71a6703-d6de-4be1-a91a-a570ede1d159')
        mock_neutron_client.list_ports.assert_called_with()
        self.assertEqual(1, len(ports))
        self.assertEqual('fa:16:3e:bb:3c:e4', ports[0]['mac_address'])

    @patch.object(OpenStackCloud, 'neutron_client')
    def test_search_ports_by_name(self, mock_neutron_client):
        mock_neutron_client.list_ports.return_value = \
            self.mock_neutron_port_list_rep
        ports = self.client.search_ports(name_or_id='first-port')
        mock_neutron_client.list_ports.assert_called_with()
        self.assertEqual(1, len(ports))
        self.assertEqual('fa:16:3e:58:42:ed', ports[0]['mac_address'])

    @patch.object(OpenStackCloud, 'neutron_client')
    def test_search_ports_not_found(self, mock_neutron_client):
        mock_neutron_client.list_ports.return_value = \
            self.mock_neutron_port_list_rep
        ports = self.client.search_ports(name_or_id='non-existent')
        mock_neutron_client.list_ports.assert_called_with()
        self.assertEqual(0, len(ports))

    @patch.object(OpenStackCloud, 'neutron_client')
    def test_delete_port(self, mock_neutron_client):
        mock_neutron_client.list_ports.return_value = \
            self.mock_neutron_port_list_rep
        self.client.delete_port(name_or_id='first-port')
        mock_neutron_client.delete_port.assert_called_with(
            port='d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b')
shade-1.7.0/shade/tests/unit/test_users.py0000664000567000056710000001600712677256557021755 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. import mock import munch import os_client_config as occ import testtools import shade from shade.tests import fakes from shade.tests.unit import base class TestUsers(base.TestCase): def setUp(self): super(TestUsers, self).setUp() self.cloud = shade.operator_cloud(validate=False) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_create_user_v2(self, mock_keystone, mock_api_version): mock_api_version.return_value = '2' name = 'Mickey Mouse' email = 'mickey@disney.com' password = 'mice-rule' fake_user = fakes.FakeUser('1', email, name) mock_keystone.users.create.return_value = fake_user user = self.cloud.create_user(name=name, email=email, password=password) mock_keystone.users.create.assert_called_once_with( name=name, password=password, email=email, enabled=True, ) self.assertEqual(name, user.name) self.assertEqual(email, user.email) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_create_user_v3(self, mock_keystone, mock_api_version): mock_api_version.return_value = '3' name = 'Mickey Mouse' email = 'mickey@disney.com' password = 'mice-rule' domain_id = '456' fake_user = fakes.FakeUser('1', email, name) mock_keystone.users.create.return_value = fake_user user = self.cloud.create_user(name=name, email=email, password=password, domain_id=domain_id) mock_keystone.users.create.assert_called_once_with( name=name, password=password, email=email, enabled=True, domain=domain_id ) self.assertEqual(name, user.name) self.assertEqual(email, user.email) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_update_user_password_v2(self, mock_keystone, mock_api_version): mock_api_version.return_value = '2' name = 'Mickey 
Mouse' email = 'mickey@disney.com' password = 'mice-rule' domain_id = '1' user = {'id': '1', 'name': name, 'email': email} fake_user = fakes.FakeUser(**user) munch_fake_user = munch.Munch(user) mock_keystone.users.list.return_value = [fake_user] mock_keystone.users.get.return_value = fake_user mock_keystone.users.update.return_value = fake_user mock_keystone.users.update_password.return_value = fake_user user = self.cloud.update_user(name, name=name, email=email, password=password, domain_id=domain_id) mock_keystone.users.update.assert_called_once_with( user=munch_fake_user, name=name, email=email) mock_keystone.users.update_password.assert_called_once_with( user=munch_fake_user, password=password) self.assertEqual(name, user.name) self.assertEqual(email, user.email) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_create_user_v3_no_domain(self, mock_keystone, mock_api_version): mock_api_version.return_value = '3' name = 'Mickey Mouse' email = 'mickey@disney.com' password = 'mice-rule' with testtools.ExpectedException( shade.OpenStackCloudException, "User creation requires an explicit domain_id argument." 
): self.cloud.create_user(name=name, email=email, password=password) @mock.patch.object(shade.OpenStackCloud, 'get_user_by_id') @mock.patch.object(shade.OpenStackCloud, 'get_user') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_delete_user(self, mock_keystone, mock_get_user, mock_get_by_id): mock_get_user.return_value = dict(id='123') fake_user = fakes.FakeUser('123', 'email', 'name') mock_get_by_id.return_value = fake_user self.assertTrue(self.cloud.delete_user('name')) mock_get_by_id.assert_called_once_with('123', normalize=False) mock_keystone.users.delete.assert_called_once_with(user=fake_user) @mock.patch.object(shade.OpenStackCloud, 'get_user') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_delete_user_not_found(self, mock_keystone, mock_get_user): mock_get_user.return_value = None self.assertFalse(self.cloud.delete_user('name')) self.assertFalse(mock_keystone.users.delete.called) @mock.patch.object(shade.OpenStackCloud, 'get_user') @mock.patch.object(shade.OperatorCloud, 'get_group') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_add_user_to_group(self, mock_keystone, mock_group, mock_user): mock_user.return_value = munch.Munch(dict(id=1)) mock_group.return_value = munch.Munch(dict(id=2)) self.cloud.add_user_to_group("user", "group") mock_keystone.users.add_to_group.assert_called_once_with( user=1, group=2 ) @mock.patch.object(shade.OpenStackCloud, 'get_user') @mock.patch.object(shade.OperatorCloud, 'get_group') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_is_user_in_group(self, mock_keystone, mock_group, mock_user): mock_user.return_value = munch.Munch(dict(id=1)) mock_group.return_value = munch.Munch(dict(id=2)) mock_keystone.users.check_in_group.return_value = True self.assertTrue(self.cloud.is_user_in_group("user", "group")) mock_keystone.users.check_in_group.assert_called_once_with( user=1, group=2 ) @mock.patch.object(shade.OpenStackCloud, 'get_user') 
    @mock.patch.object(shade.OperatorCloud, 'get_group')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_remove_user_from_group(self, mock_keystone, mock_group,
                                    mock_user):
        mock_user.return_value = munch.Munch(dict(id=1))
        mock_group.return_value = munch.Munch(dict(id=2))

        self.cloud.remove_user_from_group("user", "group")

        mock_keystone.users.remove_from_group.assert_called_once_with(
            user=1, group=2
        )
shade-1.7.0/shade/tests/unit/test_shade_operator.py0000664000567000056710000012524212677256557023605 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*-

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
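Every test file in this archive leans on the same pattern: `patch.object` replaces a client attribute on the cloud class, the method under test runs against the mock, and the test asserts on the recorded call. The sketch below is an editor's minimal, self-contained illustration of that pattern, not shade code: `FakeCloud` and `delete_widget` are hypothetical names, and it uses the stdlib `unittest.mock` rather than the external `mock` package these py2-era tests import (the API is the same).

```python
# Standalone sketch of the patch-call-assert pattern used throughout these
# tests. FakeCloud/delete_widget are illustrative stand-ins, not shade's API.
from unittest import mock


class FakeCloud(object):
    client = None  # replaced by the patch below, like neutron_client/keystone_client

    def delete_widget(self, widget_id):
        # Delegates to the (patched) client, mirroring shade's delete_* methods.
        return self.client.delete(widget=widget_id)


with mock.patch.object(FakeCloud, 'client') as mock_client:
    mock_client.delete.return_value = True
    result = FakeCloud().delete_widget('widget-1')
    # The mock records the call, so we can assert on its exact arguments.
    mock_client.delete.assert_called_once_with(widget='widget-1')
```

Because the patch targets the class attribute, every instance created inside the `with` block sees the mock, which is why the shade tests can decorate a test method once and construct the cloud object wherever they like.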
from keystoneauth1 import plugin as ksa_plugin
import mock
import testtools

from os_client_config import cloud_config

import shade
from shade import exc
from shade import meta
from shade.tests import fakes
from shade.tests.unit import base


class TestShadeOperator(base.TestCase):

    def setUp(self):
        super(TestShadeOperator, self).setUp()
        self.cloud = shade.operator_cloud(validate=False)

    def test_operator_cloud(self):
        self.assertIsInstance(self.cloud, shade.OperatorCloud)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_get_machine(self, mock_client):
        node = fakes.FakeMachine(id='00000000-0000-0000-0000-000000000000',
                                 name='bigOlFaker')
        mock_client.node.get.return_value = node
        machine = self.cloud.get_machine('bigOlFaker')
        mock_client.node.get.assert_called_with(node_id='bigOlFaker')
        self.assertEqual(meta.obj_to_dict(node), machine)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_get_machine_by_mac(self, mock_client):
        class port_value:
            node_uuid = '00000000-0000-0000-0000-000000000000'
            address = '00:00:00:00:00:00'

        class node_value:
            uuid = '00000000-0000-0000-0000-000000000000'

        expected_value = dict(
            uuid='00000000-0000-0000-0000-000000000000')

        mock_client.port.get_by_address.return_value = port_value
        mock_client.node.get.return_value = node_value

        machine = self.cloud.get_machine_by_mac('00:00:00:00:00:00')

        mock_client.port.get_by_address.assert_called_with(
            address='00:00:00:00:00:00')
        mock_client.node.get.assert_called_with(
            node_id='00000000-0000-0000-0000-000000000000')
        self.assertEqual(machine, expected_value)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_list_machines(self, mock_client):
        m1 = fakes.FakeMachine(1, 'fake_machine1')
        mock_client.node.list.return_value = [m1]
        machines = self.cloud.list_machines()
        self.assertTrue(mock_client.node.list.called)
        self.assertEqual(meta.obj_to_dict(m1), machines[0])

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_validate_node(self, mock_client):
        node_uuid = '123'
        self.cloud.validate_node(node_uuid)
        mock_client.node.validate.assert_called_once_with(
            node_uuid=node_uuid
        )

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_list_nics(self, mock_client):
        port1 = fakes.FakeMachinePort(1, "aa:bb:cc:dd", "node1")
        port2 = fakes.FakeMachinePort(2, "dd:cc:bb:aa", "node2")
        port_list = [port1, port2]
        port_dict_list = meta.obj_list_to_dict(port_list)

        mock_client.port.list.return_value = port_list
        nics = self.cloud.list_nics()

        self.assertTrue(mock_client.port.list.called)
        self.assertEqual(port_dict_list, nics)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_list_nics_failure(self, mock_client):
        mock_client.port.list.side_effect = Exception()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.list_nics)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_list_nics_for_machine(self, mock_client):
        mock_client.node.list_ports.return_value = []
        self.cloud.list_nics_for_machine("123")
        mock_client.node.list_ports.assert_called_with(node_id="123")

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_list_nics_for_machine_failure(self, mock_client):
        mock_client.node.list_ports.side_effect = Exception()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.list_nics_for_machine, None)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_patch_machine(self, mock_client):
        node_id = 'node01'
        patch = []
        patch.append({'op': 'remove', 'path': '/instance_info'})
        self.cloud.patch_machine(node_id, patch)
        self.assertTrue(mock_client.node.update.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_no_action(self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            name = 'node01'

        expected_machine = dict(
            uuid='00000000-0000-0000-0000-000000000000',
            name='node01'
        )
        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine('node01')

        self.assertIsNone(update_dict['changes'])
        self.assertFalse(mock_patch.called)
        self.assertDictEqual(expected_machine, update_dict['node'])

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_no_action_name(
            self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            name = 'node01'

        expected_machine = dict(
            uuid='00000000-0000-0000-0000-000000000000',
            name='node01'
        )
        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine('node01', name='node01')

        self.assertIsNone(update_dict['changes'])
        self.assertFalse(mock_patch.called)
        self.assertDictEqual(expected_machine, update_dict['node'])

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_action_name(self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            name = 'evil'

        expected_patch = [dict(op='replace', path='/name', value='good')]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine('evil', name='good')

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/name', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_name(self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            name = 'evil'

        expected_patch = [dict(op='replace', path='/name', value='good')]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine('evil', name='good')
        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/name', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_chassis_uuid(
            self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            chassis_uuid = None

        expected_patch = [
            dict(
                op='replace',
                path='/chassis_uuid',
                value='00000000-0000-0000-0000-000000000001'
            )]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine(
            '00000000-0000-0000-0000-000000000000',
            chassis_uuid='00000000-0000-0000-0000-000000000001')

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/chassis_uuid', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_driver(self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            driver = None

        expected_patch = [
            dict(
                op='replace',
                path='/driver',
                value='fake'
            )]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine(
            '00000000-0000-0000-0000-000000000000',
            driver='fake'
        )

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/driver', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_driver_info(
            self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            driver_info = None

        expected_patch = [
            dict(
                op='replace',
                path='/driver_info',
                value=dict(var='fake')
            )]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine(
            '00000000-0000-0000-0000-000000000000',
            driver_info=dict(var="fake")
        )

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/driver_info', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_instance_info(
            self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            instance_info = None

        expected_patch = [
            dict(
                op='replace',
                path='/instance_info',
                value=dict(var='fake')
            )]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine(
            '00000000-0000-0000-0000-000000000000',
            instance_info=dict(var="fake")
        )

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/instance_info', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_instance_uuid(
            self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            instance_uuid = None

        expected_patch = [
            dict(
                op='replace',
                path='/instance_uuid',
                value='00000000-0000-0000-0000-000000000002'
            )]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine(
            '00000000-0000-0000-0000-000000000000',
            instance_uuid='00000000-0000-0000-0000-000000000002'
        )

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/instance_uuid', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'patch_machine')
    def test_update_machine_patch_update_properties(
            self, mock_patch, mock_client):
        class client_return_value:
            uuid = '00000000-0000-0000-0000-000000000000'
            properties = None

        expected_patch = [
            dict(
                op='replace',
                path='/properties',
                value=dict(var='fake')
            )]

        mock_client.node.get.return_value = client_return_value

        update_dict = self.cloud.update_machine(
            '00000000-0000-0000-0000-000000000000',
            properties=dict(var="fake")
        )

        self.assertIsNotNone(update_dict['changes'])
        self.assertEqual('/properties', update_dict['changes'][0])
        self.assertTrue(mock_patch.called)
        mock_patch.assert_called_with(
            '00000000-0000-0000-0000-000000000000',
            expected_patch)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_fail_active(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class active_machine:
            uuid = machine_uuid
            provision_state = "active"

        mock_client.node.get.return_value = active_machine
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.inspect_machine,
            machine_uuid,
            wait=True,
            timeout=1)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_failed(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class inspect_failed_machine:
            uuid = machine_uuid
            provision_state = "inspect failed"
            last_error = "kaboom"

        mock_client.node.get.return_value = inspect_failed_machine
        self.cloud.inspect_machine(machine_uuid)
        self.assertTrue(mock_client.node.set_provision_state.called)
        self.assertEqual(
            mock_client.node.set_provision_state.call_count, 1)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_manageable(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class manageable_machine:
            uuid = machine_uuid
            provision_state = "manageable"

        mock_client.node.get.return_value = manageable_machine
        self.cloud.inspect_machine(machine_uuid)
        self.assertEqual(
            mock_client.node.set_provision_state.call_count, 1)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_available(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class available_machine:
            uuid = machine_uuid
            provision_state = "available"

        class manageable_machine:
            uuid = machine_uuid
            provision_state = "manageable"

        class inspecting_machine:
            uuid = machine_uuid
            provision_state = "inspecting"

        mock_client.node.get.side_effect = iter([
            available_machine,
            available_machine,
            manageable_machine,
            manageable_machine,
            inspecting_machine])
        self.cloud.inspect_machine(machine_uuid)
        self.assertTrue(mock_client.node.set_provision_state.called)
        self.assertEqual(
            mock_client.node.set_provision_state.call_count, 3)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_available_wait(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class available_machine:
            uuid = machine_uuid
            provision_state = "available"

        class manageable_machine:
            uuid = machine_uuid
            provision_state = "manageable"

        class inspecting_machine:
            uuid = machine_uuid
            provision_state = "inspecting"

        mock_client.node.get.side_effect = iter([
            available_machine,
            available_machine,
            manageable_machine,
            inspecting_machine,
            manageable_machine,
            available_machine,
            available_machine])

        expected_return_value = dict(
            uuid=machine_uuid,
            provision_state="available"
        )

        return_value = self.cloud.inspect_machine(
            machine_uuid, wait=True, timeout=1)
        self.assertTrue(mock_client.node.set_provision_state.called)
        self.assertEqual(
            mock_client.node.set_provision_state.call_count, 3)
        self.assertDictEqual(expected_return_value, return_value)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_wait(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class manageable_machine:
            uuid = machine_uuid
            provision_state = "manageable"

        class inspecting_machine:
            uuid = machine_uuid
            provision_state = "inspecting"

        expected_return_value = dict(
            uuid=machine_uuid,
            provision_state="manageable"
        )
        mock_client.node.get.side_effect = iter([
            manageable_machine,
            inspecting_machine,
            inspecting_machine,
            manageable_machine,
            manageable_machine])

        return_value = self.cloud.inspect_machine(
            machine_uuid, wait=True, timeout=1)
        self.assertDictEqual(expected_return_value, return_value)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_inspect_machine_inspect_failed(self, mock_client):
        machine_uuid = '00000000-0000-0000-0000-000000000000'

        class manageable_machine:
            uuid = machine_uuid
            provision_state = "manageable"
            last_error = None

        class inspecting_machine:
            uuid = machine_uuid
            provision_state = "inspecting"
            last_error = None

        class inspect_failed_machine:
            uuid = machine_uuid
            provision_state = "inspect failed"
            last_error = "kaboom"

        mock_client.node.get.side_effect = iter([
            manageable_machine,
            inspecting_machine,
            inspect_failed_machine])
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.inspect_machine,
            machine_uuid,
            wait=True,
            timeout=1)
        self.assertEqual(
            mock_client.node.set_provision_state.call_count, 1)
        self.assertEqual(mock_client.node.get.call_count, 3)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_register_machine(self, mock_client):
        class fake_node:
            uuid = "00000000-0000-0000-0000-000000000000"
            provision_state = "available"
            reservation = None
            last_error = None

        expected_return_value = dict(
            uuid="00000000-0000-0000-0000-000000000000",
            provision_state="available",
            reservation=None,
            last_error=None
        )

        mock_client.node.create.return_value = fake_node
        mock_client.node.get.return_value = fake_node
        nics = [{'mac': '00:00:00:00:00:00'}]

        return_value = self.cloud.register_machine(nics)

        self.assertDictEqual(expected_return_value, return_value)
        self.assertTrue(mock_client.node.create.called)
        self.assertTrue(mock_client.port.create.called)
        self.assertFalse(mock_client.node.get.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'node_set_provision_state')
    def test_register_machine_enroll(
            self, mock_set_state, mock_client):
        machine_uuid = "00000000-0000-0000-0000-000000000000"

        class fake_node_init_state:
            uuid = machine_uuid
            provision_state = "enroll"
            reservation = None
            last_error = None

        class fake_node_post_manage:
            uuid = machine_uuid
            provision_state = "enroll"
            reservation = "do you have a flag?"
            last_error = None

        class fake_node_post_manage_done:
            uuid = machine_uuid
            provision_state = "manage"
            reservation = None
            last_error = None

        class fake_node_post_provide:
            uuid = machine_uuid
            provision_state = "available"
            reservation = None
            last_error = None

        class fake_node_post_enroll_failure:
            uuid = machine_uuid
            provision_state = "enroll"
            reservation = None
            last_error = "insufficient lolcats"

        expected_return_value = dict(
            uuid=machine_uuid,
            provision_state="available",
            reservation=None,
            last_error=None
        )

        mock_client.node.get.side_effect = iter([
            fake_node_init_state,
            fake_node_post_manage,
            fake_node_post_manage_done,
            fake_node_post_provide])
        mock_client.node.create.return_value = fake_node_init_state
        nics = [{'mac': '00:00:00:00:00:00'}]
        return_value = self.cloud.register_machine(nics)

        self.assertDictEqual(expected_return_value, return_value)
        self.assertTrue(mock_client.node.create.called)
        self.assertTrue(mock_client.port.create.called)
        self.assertTrue(mock_client.node.get.called)
        mock_client.reset_mock()

        mock_client.node.get.side_effect = iter([
            fake_node_init_state,
            fake_node_post_manage,
            fake_node_post_manage_done,
            fake_node_post_provide])
        return_value = self.cloud.register_machine(nics, wait=True)

        self.assertDictEqual(expected_return_value, return_value)
        self.assertTrue(mock_client.node.create.called)
        self.assertTrue(mock_client.port.create.called)
        self.assertTrue(mock_client.node.get.called)
        mock_client.reset_mock()

        mock_client.node.get.side_effect = iter([
            fake_node_init_state,
            fake_node_post_manage,
            fake_node_post_enroll_failure])
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.register_machine,
            nics)
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.register_machine,
            nics,
            wait=True)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade.OperatorCloud, 'node_set_provision_state')
    def test_register_machine_enroll_timeout(
            self, mock_set_state, mock_client):
        machine_uuid = "00000000-0000-0000-0000-000000000000"

        class fake_node_init_state:
            uuid = machine_uuid
            provision_state = "enroll"
            reservation = "do you have a flag?"
            last_error = None

        mock_client.node.get.return_value = fake_node_init_state
        mock_client.node.create.return_value = fake_node_init_state
        nics = [{'mac': '00:00:00:00:00:00'}]
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.register_machine,
            nics,
            lock_timeout=0.001)
        self.assertTrue(mock_client.node.create.called)
        self.assertTrue(mock_client.port.create.called)
        self.assertTrue(mock_client.node.get.called)
        mock_client.node.get.reset_mock()
        mock_client.node.create.reset_mock()
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.register_machine,
            nics,
            wait=True,
            timeout=0.001)
        self.assertTrue(mock_client.node.create.called)
        self.assertTrue(mock_client.port.create.called)
        self.assertTrue(mock_client.node.get.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_register_machine_port_create_failed(self, mock_client):
        class fake_node:
            uuid = "00000000-0000-0000-0000-000000000000"
            provision_state = "available"
            reservation = None
            last_error = None

        nics = [{'mac': '00:00:00:00:00:00'}]
        mock_client.node.create.return_value = fake_node
        mock_client.port.create.side_effect = (
            exc.OpenStackCloudException("Error"))
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.register_machine,
                          nics)
        self.assertTrue(mock_client.node.create.called)
        self.assertTrue(mock_client.port.create.called)
        self.assertTrue(mock_client.node.delete.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_unregister_machine(self, mock_client):
        class fake_node:
            provision_state = 'available'

        class fake_port:
            uuid = '00000000-0000-0000-0000-000000000001'

        mock_client.port.get_by_address.return_value = fake_port
        mock_client.node.get.return_value = fake_node
        nics = [{'mac': '00:00:00:00:00:00'}]
        uuid = "00000000-0000-0000-0000-000000000000"
        self.cloud.unregister_machine(nics, uuid)
        self.assertTrue(mock_client.node.delete.called)
        self.assertTrue(mock_client.port.get_by_address.called)
        self.assertTrue(mock_client.port.delete.called)
        mock_client.port.get_by_address.assert_called_with(
            address='00:00:00:00:00:00')
        mock_client.port.delete.assert_called_with(
            port_id='00000000-0000-0000-0000-000000000001')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_unregister_machine_unavailable(self, mock_client):
        invalid_states = ['active', 'cleaning', 'clean wait', 'clean failed']
        nics = [{'mac': '00:00:00:00:00:00'}]
        uuid = "00000000-0000-0000-0000-000000000000"
        for state in invalid_states:
            class fake_node:
                provision_state = state

            mock_client.node.get.return_value = fake_node
            self.assertRaises(
                exc.OpenStackCloudException,
                self.cloud.unregister_machine,
                nics,
                uuid)
            self.assertFalse(mock_client.node.delete.called)
            self.assertFalse(mock_client.port.delete.called)
            self.assertFalse(mock_client.port.get_by_address.called)
            self.assertTrue(mock_client.node.get.called)
            mock_client.node.reset_mock()

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_unregister_machine_timeout(self, mock_client):
        class fake_node:
            provision_state = 'available'

        mock_client.node.get.return_value = fake_node
        nics = [{'mac': '00:00:00:00:00:00'}]
        uuid = "00000000-0000-0000-0000-000000000000"
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.unregister_machine,
            nics,
            uuid,
            wait=True,
            timeout=0.001)
        self.assertTrue(mock_client.node.delete.called)
        self.assertTrue(mock_client.port.delete.called)
        self.assertTrue(mock_client.port.get_by_address.called)
        self.assertTrue(mock_client.node.get.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_machine_maintenance_state(self, mock_client):
        mock_client.node.set_maintenance.return_value = None
        node_id = 'node01'
        reason = 'no reason'
        self.cloud.set_machine_maintenance_state(node_id, True, reason=reason)
        mock_client.node.set_maintenance.assert_called_with(
            node_id='node01',
            state='true',
            maint_reason='no reason')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_machine_maintenance_state_false(self, mock_client):
        mock_client.node.set_maintenance.return_value = None
        node_id = 'node01'
        self.cloud.set_machine_maintenance_state(node_id, False)
        mock_client.node.set_maintenance.assert_called_with(
            node_id='node01', state='false')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_remove_machine_from_maintenance(self, mock_client):
        mock_client.node.set_maintenance.return_value = None
        node_id = 'node01'
        self.cloud.remove_machine_from_maintenance(node_id)
        mock_client.node.set_maintenance.assert_called_with(
            node_id='node01', state='false')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_machine_power_on(self, mock_client):
        mock_client.node.set_power_state.return_value = None
        node_id = 'node01'
        return_value = self.cloud.set_machine_power_on(node_id)
        self.assertIsNone(return_value)
        mock_client.node.set_power_state.assert_called_with(
            node_id='node01',
            state='on')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_machine_power_off(self, mock_client):
        mock_client.node.set_power_state.return_value = None
        node_id = 'node01'
        return_value = self.cloud.set_machine_power_off(node_id)
        self.assertIsNone(return_value)
        mock_client.node.set_power_state.assert_called_with(
            node_id='node01',
            state='off')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_machine_power_reboot(self, mock_client):
        mock_client.node.set_power_state.return_value = None
        node_id = 'node01'
        return_value = self.cloud.set_machine_power_reboot(node_id)
        self.assertIsNone(return_value)
        mock_client.node.set_power_state.assert_called_with(
            node_id='node01',
            state='reboot')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_machine_power_reboot_failure(self, mock_client):
        mock_client.node.set_power_state.return_value = 'failure'
        self.assertRaises(shade.OpenStackCloudException,
                          self.cloud.set_machine_power_reboot,
                          'node01')
        mock_client.node.set_power_state.assert_called_with(
            node_id='node01',
            state='reboot')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_node_set_provision_state(self, mock_client):
        class active_node_state:
            provision_state = "active"

        active_return_value = dict(
            provision_state="active")
        mock_client.node.set_provision_state.return_value = None
        mock_client.node.get.return_value = active_node_state
        node_id = 'node01'
        return_value = self.cloud.node_set_provision_state(
            node_id,
            'active',
            configdrive='http://127.0.0.1/file.iso')

        self.assertEqual(active_return_value, return_value)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node01',
            state='active',
            configdrive='http://127.0.0.1/file.iso')
        self.assertTrue(mock_client.node.get.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_node_set_provision_state_wait_timeout(self, mock_client):
        class deploying_node_state:
            provision_state = "deploying"

        class active_node_state:
            provision_state = "active"

        class manageable_node_state:
            provision_state = "manageable"

        class available_node_state:
            provision_state = "available"

        active_return_value = dict(
            provision_state="active")
        mock_client.node.get.return_value = active_node_state
        mock_client.node.set_provision_state.return_value = None
        node_id = 'node01'
        return_value = self.cloud.node_set_provision_state(
            node_id,
            'active',
            configdrive='http://127.0.0.1/file.iso',
            wait=True)

        self.assertEqual(active_return_value, return_value)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node01',
            state='active',
            configdrive='http://127.0.0.1/file.iso')
        self.assertTrue(mock_client.node.get.called)
        mock_client.reset_mock()
        mock_client.node.get.return_value = deploying_node_state
        self.assertRaises(
            shade.OpenStackCloudException,
            self.cloud.node_set_provision_state,
            node_id,
            'active',
            configdrive='http://127.0.0.1/file.iso',
            wait=True,
            timeout=0.001)
        self.assertTrue(mock_client.node.get.called)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node01',
            state='active',
            configdrive='http://127.0.0.1/file.iso')

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_node_set_provision_state_wait_provide(self, mock_client):
        class manageable_node_state:
            provision_state = "manageable"

        class available_node_state:
            provision_state = "available"

        node_provide_return_value = dict(
            provision_state="available")

        mock_client.node.get.side_effect = iter([
            manageable_node_state,
            available_node_state])
        return_value = self.cloud.node_set_provision_state(
            'test_node', 'provide', wait=True)

        self.assertEqual(mock_client.node.get.call_count, 2)
        self.assertDictEqual(node_provide_return_value, return_value)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade._utils, '_iterate_timeout')
    def test_activate_node(self, mock_timeout, mock_client):
        mock_client.node.set_provision_state.return_value = None
        node_id = 'node02'
        return_value = self.cloud.activate_node(
            node_id, configdrive='http://127.0.0.1/file.iso')
        self.assertIsNone(return_value)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node02',
            state='active',
            configdrive='http://127.0.0.1/file.iso')
        self.assertFalse(mock_timeout.called)
    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_activate_node_timeout(self, mock_client):
        class active_node_state:
            provision_state = 'active'

        class available_node_state:
            provision_state = 'available'

        mock_client.node.get.side_effect = iter([
            available_node_state,
            active_node_state])
        mock_client.node.set_provision_state.return_value = None
        node_id = 'node04'
        return_value = self.cloud.activate_node(
            node_id,
            configdrive='http://127.0.0.1/file.iso',
            wait=True,
            timeout=2)
        self.assertIsNone(return_value)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node04',
            state='active',
            configdrive='http://127.0.0.1/file.iso')
        self.assertEqual(mock_client.node.get.call_count, 2)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    @mock.patch.object(shade._utils, '_iterate_timeout')
    def test_deactivate_node(self, mock_timeout, mock_client):
        mock_client.node.set_provision_state.return_value = None
        node_id = 'node03'
        return_value = self.cloud.deactivate_node(
            node_id, wait=False)
        self.assertIsNone(return_value)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node03',
            state='deleted',
            configdrive=None)
        self.assertFalse(mock_timeout.called)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_deactivate_node_timeout(self, mock_client):
        class active_node_state:
            provision_state = 'active'

        class deactivated_node_state:
            provision_state = 'available'

        mock_client.node.get.side_effect = iter([
            active_node_state,
            deactivated_node_state])
        mock_client.node.set_provision_state.return_value = None
        node_id = 'node03'
        return_value = self.cloud.deactivate_node(
            node_id, wait=True, timeout=2)
        self.assertIsNone(return_value)
        mock_client.node.set_provision_state.assert_called_with(
            node_uuid='node03',
            state='deleted',
            configdrive=None)
        self.assertEqual(mock_client.node.get.call_count, 2)

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_set_node_instance_info(self, mock_client):
        uuid = 'aaa'
        patch = [{'op': 'add', 'foo': 'bar'}]
        self.cloud.set_node_instance_info(uuid, patch)
        mock_client.node.update.assert_called_with(
            node_id=uuid, patch=patch
        )

    @mock.patch.object(shade.OperatorCloud, 'ironic_client')
    def test_purge_node_instance_info(self, mock_client):
        uuid = 'aaa'
        expected_patch = [{'op': 'remove', 'path': '/instance_info'}]
        self.cloud.purge_node_instance_info(uuid)
        mock_client.node.update.assert_called_with(
            node_id=uuid, patch=expected_patch
        )

    @mock.patch.object(shade.OpenStackCloud, 'glance_client')
    def test_get_image_name(self, glance_mock):
        class Image(object):
            id = '22'
            name = '22 name'
            status = 'success'

        fake_image = Image()
        glance_mock.images.list.return_value = [fake_image]
        self.assertEqual('22 name', self.cloud.get_image_name('22'))
        self.assertEqual('22 name', self.cloud.get_image_name('22 name'))

    @mock.patch.object(shade.OpenStackCloud, 'glance_client')
    def test_get_image_id(self, glance_mock):
        class Image(object):
            id = '22'
            name = '22 name'
            status = 'success'

        fake_image = Image()
        glance_mock.images.list.return_value = [fake_image]
        self.assertEqual('22', self.cloud.get_image_id('22'))
        self.assertEqual('22', self.cloud.get_image_id('22 name'))

    @mock.patch.object(cloud_config.CloudConfig, 'get_endpoint')
    def test_get_session_endpoint_provided(self, fake_get_endpoint):
        fake_get_endpoint.return_value = 'http://fake.url'
        self.assertEqual(
            'http://fake.url', self.cloud.get_session_endpoint('image'))

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_session(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = 'http://fake.url'
        get_session_mock.return_value = session_mock
        self.assertEqual(
            'http://fake.url', self.cloud.get_session_endpoint('image'))

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_exception(self, get_session_mock):
        class FakeException(Exception):
            pass

        def side_effect(*args, **kwargs):
            raise FakeException("No service")

        session_mock = mock.Mock()
        session_mock.get_endpoint.side_effect = side_effect
        get_session_mock.return_value = session_mock
        self.cloud.name = 'testcloud'
        self.cloud.region_name = 'testregion'
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "Error getting image endpoint on testcloud:testregion:"
                " No service"):
            self.cloud.get_session_endpoint("image")

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_unavailable(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = None
        get_session_mock.return_value = session_mock
        image_endpoint = self.cloud.get_session_endpoint("image")
        self.assertIsNone(image_endpoint)

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_identity(self, get_session_mock):
        session_mock = mock.Mock()
        get_session_mock.return_value = session_mock
        self.cloud.get_session_endpoint('identity')
        session_mock.get_endpoint.assert_called_with(
            interface=ksa_plugin.AUTH_INTERFACE)

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_has_service_no(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = None
        get_session_mock.return_value = session_mock
        self.assertFalse(self.cloud.has_service("image"))

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_has_service_yes(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = 'http://fake.url'
        get_session_mock.return_value = session_mock
        self.assertTrue(self.cloud.has_service("image"))

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_hypervisors(self, mock_nova):
        '''This test verifies that calling list_hypervisors results in a
        call to nova client.'''
        mock_nova.hypervisors.list.return_value = [
            fakes.FakeHypervisor('1', 'testserver1'),
            fakes.FakeHypervisor('2', 'testserver2'),
        ]

        r = self.cloud.list_hypervisors()
mock_nova.hypervisors.list.assert_called_once_with() self.assertEquals(2, len(r)) self.assertEquals('testserver1', r[0]['hypervisor_hostname']) self.assertEquals('testserver2', r[1]['hypervisor_hostname']) shade-1.7.0/shade/tests/unit/test_create_server.py0000664000567000056710000003010112677256557023434 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_create_server ---------------------------------- Tests for the `create_server` command. """ from mock import patch, Mock import mock import os_client_config from shade import _utils from shade import meta from shade import OpenStackCloud from shade.exc import (OpenStackCloudException, OpenStackCloudTimeout) from shade.tests import base, fakes class TestCreateServer(base.TestCase): def setUp(self): super(TestCreateServer, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) self.client._SERVER_AGE = 0 def test_create_server_with_create_exception(self): """ Test that an exception in the novaclient create raises an exception in create_server. 
""" with patch("shade.OpenStackCloud"): config = { "servers.create.side_effect": Exception("exception"), } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudException, self.client.create_server, 'server-name', 'image-id', 'flavor-id') def test_create_server_with_get_exception(self): """ Test that an exception when attempting to get the server instance via the novaclient raises an exception in create_server. """ with patch("shade.OpenStackCloud"): config = { "servers.create.return_value": Mock(status="BUILD"), "servers.get.side_effect": Exception("exception") } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudException, self.client.create_server, 'server-name', 'image-id', 'flavor-id') def test_create_server_with_server_error(self): """ Test that a server error before we return or begin waiting for the server instance spawn raises an exception in create_server. """ build_server = fakes.FakeServer('1234', '', 'BUILD') error_server = fakes.FakeServer('1234', '', 'ERROR') with patch("shade.OpenStackCloud"): config = { "servers.create.return_value": build_server, "servers.get.return_value": error_server, } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudException, self.client.create_server, 'server-name', 'image-id', 'flavor-id') def test_create_server_wait_server_error(self): """ Test that a server error while waiting for the server to spawn raises an exception in create_server. 
""" with patch("shade.OpenStackCloud"): build_server = fakes.FakeServer('1234', '', 'BUILD') error_server = fakes.FakeServer('1234', '', 'ERROR') config = { "servers.create.return_value": build_server, "servers.get.return_value": build_server, "servers.list.side_effect": [ [build_server], [error_server]] } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudException, self.client.create_server, 'server-name', 'image-id', 'flavor-id', wait=True) def test_create_server_with_timeout(self): """ Test that a timeout while waiting for the server to spawn raises an exception in create_server. """ with patch("shade.OpenStackCloud"): fake_server = fakes.FakeServer('1234', '', 'BUILD') config = { "servers.create.return_value": fake_server, "servers.get.return_value": fake_server, "servers.list.return_value": [fake_server], } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudTimeout, self.client.create_server, 'server-name', 'image-id', 'flavor-id', wait=True, timeout=1) def test_create_server_no_wait(self): """ Test that create_server with no wait and no exception in the novaclient create call returns the server instance. 
""" with patch("shade.OpenStackCloud"): fake_server = fakes.FakeServer('1234', '', 'BUILD') config = { "servers.create.return_value": fake_server, "servers.get.return_value": fake_server } OpenStackCloud.nova_client = Mock(**config) self.assertEqual( _utils.normalize_server( meta.obj_to_dict(fake_server), cloud_name=self.client.name, region_name=self.client.region_name), self.client.create_server( name='server-name', image='image=id', flavor='flavor-id')) def test_create_server_with_admin_pass_no_wait(self): """ Test that a server with an admin_pass passed returns the password """ with patch("shade.OpenStackCloud"): fake_server = fakes.FakeServer('1234', '', 'BUILD') fake_create_server = fakes.FakeServer('1234', '', 'BUILD', adminPass='ooBootheiX0edoh') config = { "servers.create.return_value": fake_create_server, "servers.get.return_value": fake_server } OpenStackCloud.nova_client = Mock(**config) self.assertEqual( _utils.normalize_server( meta.obj_to_dict(fake_create_server), cloud_name=self.client.name, region_name=self.client.region_name), self.client.create_server( name='server-name', image='image=id', flavor='flavor-id', admin_pass='ooBootheiX0edoh')) @patch.object(OpenStackCloud, "wait_for_server") @patch.object(OpenStackCloud, "nova_client") def test_create_server_with_admin_pass_wait(self, mock_nova, mock_wait): """ Test that a server with an admin_pass passed returns the password """ fake_server = fakes.FakeServer('1234', '', 'BUILD') fake_server_with_pass = fakes.FakeServer('1234', '', 'BUILD', adminPass='ooBootheiX0edoh') mock_nova.servers.create.return_value = fake_server mock_nova.servers.get.return_value = fake_server # The wait returns non-password server mock_wait.return_value = _utils.normalize_server( meta.obj_to_dict(fake_server), None, None) server = self.client.create_server( name='server-name', image='image-id', flavor='flavor-id', admin_pass='ooBootheiX0edoh', wait=True) # Assert that we did wait self.assertTrue(mock_wait.called) # Even with 
the wait, we should still get back a passworded server self.assertEqual( server, _utils.normalize_server(meta.obj_to_dict(fake_server_with_pass), None, None) ) @patch.object(OpenStackCloud, "get_active_server") @patch.object(OpenStackCloud, "get_server") def test_wait_for_server(self, mock_get_server, mock_get_active_server): """ Test that waiting for a server returns the server instance when its status changes to "ACTIVE". """ building_server = {'id': 'fake_server_id', 'status': 'BUILDING'} active_server = {'id': 'fake_server_id', 'status': 'ACTIVE'} mock_get_server.side_effect = iter([building_server, active_server]) mock_get_active_server.side_effect = iter([ building_server, active_server]) server = self.client.wait_for_server(building_server) self.assertEqual(2, mock_get_server.call_count) mock_get_server.assert_has_calls([ mock.call(building_server['id']), mock.call(active_server['id']), ]) self.assertEqual(2, mock_get_active_server.call_count) mock_get_active_server.assert_has_calls([ mock.call(server=building_server, reuse=True, auto_ip=True, ips=None, ip_pool=None, wait=True, timeout=mock.ANY), mock.call(server=active_server, reuse=True, auto_ip=True, ips=None, ip_pool=None, wait=True, timeout=mock.ANY), ]) self.assertEqual('ACTIVE', server['status']) @patch.object(OpenStackCloud, 'wait_for_server') @patch.object(OpenStackCloud, 'nova_client') def test_create_server_wait(self, mock_nova, mock_wait): """ Test that create_server with a wait actually does the wait. """ fake_server = {'id': 'fake_server_id', 'status': 'BUILDING'} mock_nova.servers.create.return_value = fake_server self.client.create_server( 'server-name', 'image-id', 'flavor-id', wait=True), mock_wait.assert_called_once_with( fake_server, auto_ip=True, ips=None, ip_pool=None, reuse=True, timeout=180 ) @patch('time.sleep') def test_create_server_no_addresses(self, mock_sleep): """ Test that create_server with a wait throws an exception if the server doesn't have addresses. 
""" with patch("shade.OpenStackCloud"): build_server = fakes.FakeServer('1234', '', 'BUILD') fake_server = fakes.FakeServer('1234', '', 'ACTIVE') config = { "servers.create.return_value": build_server, "servers.get.return_value": [build_server, None], "servers.list.side_effect": [ [build_server], [fake_server]], "servers.delete.return_value": None, } OpenStackCloud.nova_client = Mock(**config) self.client._SERVER_AGE = 0 with patch.object(OpenStackCloud, "add_ips_to_server", return_value=fake_server): self.assertRaises( OpenStackCloudException, self.client.create_server, 'server-name', 'image-id', 'flavor-id', wait=True) @patch('shade.OpenStackCloud.nova_client') @patch('shade.OpenStackCloud.get_network') def test_create_server_network_with_no_nics(self, mock_get_network, mock_nova): """ Verify that if 'network' is supplied, and 'nics' is not, that we attempt to get the network for the server. """ self.client.create_server('server-name', 'image-id', 'flavor-id', network='network-name') mock_get_network.assert_called_once_with(name_or_id='network-name') @patch('shade.OpenStackCloud.nova_client') @patch('shade.OpenStackCloud.get_network') def test_create_server_network_with_empty_nics(self, mock_get_network, mock_nova): """ Verify that if 'network' is supplied, along with an empty 'nics' list, it's treated the same as if 'nics' were not included. """ self.client.create_server('server-name', 'image-id', 'flavor-id', network='network-name', nics=[]) mock_get_network.assert_called_once_with(name_or_id='network-name') shade-1.7.0/shade/tests/unit/test_endpoints.py0000664000567000056710000002066412677256557022623 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_cloud_endpoints
----------------------------------

Tests Keystone endpoints commands.
"""

from mock import patch
import os_client_config

from shade import OperatorCloud
from shade.exc import OpenStackCloudException
from shade.tests.fakes import FakeEndpoint
from shade.tests.fakes import FakeEndpointv3
from shade.tests.unit import base


class TestCloudEndpoints(base.TestCase):
    mock_endpoints = [
        {'id': 'id1', 'service_id': 'sid1', 'region': 'region1',
         'publicurl': 'purl1', 'internalurl': None, 'adminurl': None},
        {'id': 'id2', 'service_id': 'sid2', 'region': 'region1',
         'publicurl': 'purl2', 'internalurl': None, 'adminurl': None},
        {'id': 'id3', 'service_id': 'sid3', 'region': 'region2',
         'publicurl': 'purl3', 'internalurl': 'iurl3', 'adminurl': 'aurl3'}
    ]

    mock_endpoints_v3 = [
        {'id': 'id1_v3', 'service_id': 'sid1', 'region': 'region1',
         'url': 'url1', 'interface': 'public'},
        {'id': 'id2_v3', 'service_id': 'sid1', 'region': 'region1',
         'url': 'url2', 'interface': 'admin'},
        {'id': 'id3_v3', 'service_id': 'sid1', 'region': 'region1',
         'url': 'url3', 'interface': 'internal'}
    ]

    def setUp(self):
        super(TestCloudEndpoints, self).setUp()
        config = os_client_config.OpenStackConfig()
        self.client = OperatorCloud(cloud_config=config.get_one_cloud(
            validate=False))
        self.mock_ks_endpoints = \
            [FakeEndpoint(**kwa) for kwa in self.mock_endpoints]
        self.mock_ks_endpoints_v3 = \
            [FakeEndpointv3(**kwa) for kwa in self.mock_endpoints_v3]

    @patch.object(OperatorCloud, 'list_services')
    @patch.object(OperatorCloud, 'keystone_client')
    @patch.object(os_client_config.cloud_config.CloudConfig,
                  'get_api_version')
    def test_create_endpoint_v2(self, mock_api_version,
                                mock_keystone_client,
                                mock_list_services):
        mock_api_version.return_value = '2.0'
        mock_list_services.return_value = [
            {
                'id': 'service_id1',
                'name': 'service1',
                'type': 'type1',
                'description': 'desc1'
            }
        ]
        mock_keystone_client.endpoints.create.return_value = \
            self.mock_ks_endpoints[2]

        endpoints = self.client.create_endpoint(
            service_name_or_id='service1',
            region='mock_region',
            public_url='mock_public_url',
            internal_url='mock_internal_url',
            admin_url='mock_admin_url'
        )
        mock_keystone_client.endpoints.create.assert_called_with(
            service_id='service_id1',
            region='mock_region',
            publicurl='mock_public_url',
            internalurl='mock_internal_url',
            adminurl='mock_admin_url',
        )

        # test keys and values are correct
        for k, v in self.mock_endpoints[2].items():
            self.assertEqual(v, endpoints[0].get(k))

        # test v3 semantics on v2.0 endpoint
        mock_keystone_client.endpoints.create.return_value = \
            self.mock_ks_endpoints[0]
        self.assertRaises(OpenStackCloudException,
                          self.client.create_endpoint,
                          service_name_or_id='service1',
                          interface='mock_admin_url',
                          url='admin')
        endpoints_3on2 = self.client.create_endpoint(
            service_name_or_id='service1',
            region='mock_region',
            interface='public',
            url='mock_public_url'
        )

        # test keys and values are correct
        for k, v in self.mock_endpoints[0].items():
            self.assertEqual(v, endpoints_3on2[0].get(k))

    @patch.object(OperatorCloud, 'list_services')
    @patch.object(OperatorCloud, 'keystone_client')
    @patch.object(os_client_config.cloud_config.CloudConfig,
                  'get_api_version')
    def test_create_endpoint_v3(self, mock_api_version,
                                mock_keystone_client,
                                mock_list_services):
        mock_api_version.return_value = '3'
        mock_list_services.return_value = [
            {
                'id': 'service_id1',
                'name': 'service1',
                'type': 'type1',
                'description': 'desc1'
            }
        ]
        mock_keystone_client.endpoints.create.return_value = \
            self.mock_ks_endpoints_v3[0]

        endpoints = self.client.create_endpoint(
            service_name_or_id='service1',
            region='mock_region',
            url='mock_url',
            interface='mock_interface',
            enabled=False
        )
        mock_keystone_client.endpoints.create.assert_called_with(
            service='service_id1',
            region='mock_region',
            url='mock_url',
            interface='mock_interface',
            enabled=False
        )

        # test keys and values are correct
        for k, v in self.mock_endpoints_v3[0].items():
            self.assertEqual(v, endpoints[0].get(k))

        # test v2.0 semantics on v3 endpoint
        mock_keystone_client.endpoints.create.side_effect = \
            self.mock_ks_endpoints_v3
        endpoints_2on3 = self.client.create_endpoint(
            service_name_or_id='service1',
            region='mock_region',
            public_url='mock_public_url',
            internal_url='mock_internal_url',
            admin_url='mock_admin_url',
        )

        # Three endpoints should be returned, public, internal, and admin
        self.assertEqual(len(endpoints_2on3), 3)

        # test keys and values are correct
        for count in range(len(endpoints_2on3)):
            for k, v in self.mock_endpoints_v3[count].items():
                self.assertEqual(v, endpoints_2on3[count].get(k))

    @patch.object(OperatorCloud, 'keystone_client')
    def test_list_endpoints(self, mock_keystone_client):
        mock_keystone_client.endpoints.list.return_value = \
            self.mock_ks_endpoints

        endpoints = self.client.list_endpoints()
        mock_keystone_client.endpoints.list.assert_called_with()

        # test we are getting exactly len(self.mock_endpoints) elements
        self.assertEqual(len(self.mock_endpoints), len(endpoints))

        # test keys and values are correct
        for mock_endpoint in self.mock_endpoints:
            found = False
            for e in endpoints:
                if e['id'] == mock_endpoint['id']:
                    found = True
                    for k, v in mock_endpoint.items():
                        self.assertEqual(v, e.get(k))
                    break
            self.assertTrue(
                found, msg="endpoint {id} not found!".format(
                    id=mock_endpoint['id']))

    @patch.object(OperatorCloud, 'keystone_client')
    def test_search_endpoints(self, mock_keystone_client):
        mock_keystone_client.endpoints.list.return_value = \
            self.mock_ks_endpoints

        # Search by id
        endpoints = self.client.search_endpoints(id='id3')
        # test we are getting exactly 1 element
        self.assertEqual(1, len(endpoints))
        for k, v in self.mock_endpoints[2].items():
            self.assertEqual(v, endpoints[0].get(k))

        # Not found
        endpoints = self.client.search_endpoints(id='blah!')
        self.assertEqual(0, len(endpoints))

        # Multiple matches
        endpoints = self.client.search_endpoints(
            filters={'region': 'region1'})
        # test we are getting exactly 2 elements
        self.assertEqual(2, len(endpoints))

    @patch.object(OperatorCloud, 'keystone_client')
    def test_delete_endpoint(self, mock_keystone_client):
        mock_keystone_client.endpoints.list.return_value = \
            self.mock_ks_endpoints

        # Delete by id
        self.client.delete_endpoint(id='id2')
        mock_keystone_client.endpoints.delete.assert_called_with(id='id2')

shade-1.7.0/shade/tests/unit/test_floating_ip_pool.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" test_floating_ip_pool ---------------------------------- Test floating IP pool resource (managed by nova) """ from mock import patch import os_client_config from shade import OpenStackCloud from shade import OpenStackCloudException from shade.tests.unit import base from shade.tests.fakes import FakeFloatingIPPool class TestFloatingIPPool(base.TestCase): mock_pools = [ {'id': 'pool1_id', 'name': 'pool1'}, {'id': 'pool2_id', 'name': 'pool2'}] def setUp(self): super(TestFloatingIPPool, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) @patch.object(OpenStackCloud, '_has_nova_extension') @patch.object(OpenStackCloud, 'nova_client') def test_list_floating_ip_pools( self, mock_nova_client, mock__has_nova_extension): mock_nova_client.floating_ip_pools.list.return_value = [ FakeFloatingIPPool(**p) for p in self.mock_pools ] mock__has_nova_extension.return_value = True floating_ip_pools = self.client.list_floating_ip_pools() self.assertItemsEqual(floating_ip_pools, self.mock_pools) @patch.object(OpenStackCloud, '_has_nova_extension') @patch.object(OpenStackCloud, 'nova_client') def test_list_floating_ip_pools_exception( self, mock_nova_client, mock__has_nova_extension): mock_nova_client.floating_ip_pools.list.side_effect = \ Exception('whatever') mock__has_nova_extension.return_value = True self.assertRaises( OpenStackCloudException, self.client.list_floating_ip_pools) shade-1.7.0/shade/tests/unit/test_security_groups.py0000664000567000056710000003644412677256557024071 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from novaclient import exceptions as nova_exc from neutronclient.common import exceptions as neutron_exc import shade from shade import meta from shade.tests.unit import base from shade.tests import fakes neutron_grp_obj = fakes.FakeSecgroup( id='1', name='neutron-sec-group', description='Test Neutron security group', rules=[ dict(id='1', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0') ] ) nova_grp_obj = fakes.FakeSecgroup( id='2', name='nova-sec-group', description='Test Nova security group #1', rules=[ dict(id='2', from_port=8000, to_port=8001, ip_protocol='tcp', ip_range=dict(cidr='0.0.0.0/0'), parent_group_id=None) ] ) # Neutron returns dicts instead of objects, so the dict versions should # be used as expected return values from neutron API methods. 
neutron_grp_dict = meta.obj_to_dict(neutron_grp_obj)
nova_grp_dict = meta.obj_to_dict(nova_grp_obj)


class TestSecurityGroups(base.TestCase):

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_security_groups_neutron(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        self.cloud.list_security_groups()
        self.assertTrue(mock_neutron.list_security_groups.called)
        self.assertFalse(mock_nova.security_groups.list.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_security_groups_nova(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = 'nova'
        self.cloud.list_security_groups()
        self.assertFalse(mock_neutron.list_security_groups.called)
        self.assertTrue(mock_nova.security_groups.list.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_security_groups_none(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = None
        self.assertRaises(shade.OpenStackCloudUnavailableFeature,
                          self.cloud.list_security_groups)
        self.assertFalse(mock_neutron.list_security_groups.called)
        self.assertFalse(mock_nova.security_groups.list.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_security_group_neutron(self, mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        neutron_return = dict(security_groups=[neutron_grp_dict])
        mock_neutron.list_security_groups.return_value = neutron_return
        self.cloud.delete_security_group('1')
        mock_neutron.delete_security_group.assert_called_once_with(
            security_group='1'
        )

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_security_group_nova(self, mock_nova):
        self.cloud.secgroup_source = 'nova'
        nova_return = [nova_grp_obj]
        mock_nova.security_groups.list.return_value = nova_return
        self.cloud.delete_security_group('2')
        mock_nova.security_groups.delete.assert_called_once_with(
            group='2'
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_security_group_neutron_not_found(self, mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        neutron_return = dict(security_groups=[neutron_grp_dict])
        mock_neutron.list_security_groups.return_value = neutron_return
        self.cloud.delete_security_group('doesNotExist')
        self.assertFalse(mock_neutron.delete_security_group.called)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_security_group_nova_not_found(self, mock_nova):
        self.cloud.secgroup_source = 'nova'
        nova_return = [nova_grp_obj]
        mock_nova.security_groups.list.return_value = nova_return
        self.cloud.delete_security_group('doesNotExist')
        self.assertFalse(mock_nova.security_groups.delete.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_security_group_none(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = None
        self.assertRaises(shade.OpenStackCloudUnavailableFeature,
                          self.cloud.delete_security_group,
                          'doesNotExist')
        self.assertFalse(mock_neutron.delete_security_group.called)
        self.assertFalse(mock_nova.security_groups.delete.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_security_group_neutron(self, mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        group_name = self.getUniqueString()
        group_desc = 'security group from test_create_security_group_neutron'
        self.cloud.create_security_group(group_name, group_desc)
        mock_neutron.create_security_group.assert_called_once_with(
            body=dict(security_group=dict(name=group_name,
                                          description=group_desc))
        )

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_security_group_nova(self, mock_nova):
        group_name = self.getUniqueString()
        group_desc = 'security group from test_create_security_group_neutron'
        new_group = fakes.FakeSecgroup(id='2',
                                       name=group_name,
                                       description=group_desc,
                                       rules=[])
        mock_nova.security_groups.create.return_value = new_group
        self.cloud.secgroup_source = 'nova'

        r = self.cloud.create_security_group(group_name, group_desc)
        mock_nova.security_groups.create.assert_called_once_with(
            name=group_name, description=group_desc
        )
        self.assertEqual(group_name, r['name'])
        self.assertEqual(group_desc, r['description'])

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_security_group_none(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = None
        self.assertRaises(shade.OpenStackCloudUnavailableFeature,
                          self.cloud.create_security_group,
                          '', '')
        self.assertFalse(mock_neutron.create_security_group.called)
        self.assertFalse(mock_nova.security_groups.create.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_update_security_group_neutron(self, mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        neutron_return = dict(security_groups=[neutron_grp_dict])
        mock_neutron.list_security_groups.return_value = neutron_return
        self.cloud.update_security_group(neutron_grp_obj.id, name='new_name')
        mock_neutron.update_security_group.assert_called_once_with(
            security_group=neutron_grp_dict['id'],
            body={'security_group': {'name': 'new_name'}}
        )

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_update_security_group_nova(self, mock_nova):
        new_name = self.getUniqueString()
        self.cloud.secgroup_source = 'nova'
        nova_return = [nova_grp_obj]
        update_return = copy.deepcopy(nova_grp_obj)
        update_return.name = new_name
        mock_nova.security_groups.list.return_value = nova_return
        mock_nova.security_groups.update.return_value = update_return

        r = self.cloud.update_security_group(nova_grp_obj.id, name=new_name)
        mock_nova.security_groups.update.assert_called_once_with(
            group=nova_grp_obj.id, name=new_name
        )
        self.assertEqual(r['name'], new_name)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_update_security_group_bad_kwarg(self, mock_nova, mock_neutron):
        self.assertRaises(TypeError,
                          self.cloud.update_security_group,
                          'doesNotExist', bad_arg='')
        self.assertFalse(mock_neutron.create_security_group.called)
        self.assertFalse(mock_nova.security_groups.create.called)

    @mock.patch.object(shade.OpenStackCloud, 'get_security_group')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_security_group_rule_neutron(self, mock_neutron, mock_get):
        self.cloud.secgroup_source = 'neutron'
        args = dict(
            port_range_min=-1,
            port_range_max=40000,
            protocol='tcp',
            remote_ip_prefix='0.0.0.0/0',
            remote_group_id='456',
            direction='egress',
            ethertype='IPv6'
        )
        mock_get.return_value = {'id': 'abc'}
        self.cloud.create_security_group_rule(secgroup_name_or_id='abc',
                                              **args)

        # For neutron, -1 port should be converted to None
        args['port_range_min'] = None
        args['security_group_id'] = 'abc'

        mock_neutron.create_security_group_rule.assert_called_once_with(
            body={'security_group_rule': args}
        )

    @mock.patch.object(shade.OpenStackCloud, 'get_security_group')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_security_group_rule_nova(self, mock_nova, mock_get):
        self.cloud.secgroup_source = 'nova'

        new_rule = fakes.FakeNovaSecgroupRule(
            id='xyz', from_port=1, to_port=2000, ip_protocol='tcp',
            cidr='1.2.3.4/32')
        mock_nova.security_group_rules.create.return_value = new_rule
        mock_get.return_value = {'id': 'abc'}

        self.cloud.create_security_group_rule(
            'abc', port_range_min=1, port_range_max=2000, protocol='tcp',
            remote_ip_prefix='1.2.3.4/32', remote_group_id='123')

        mock_nova.security_group_rules.create.assert_called_once_with(
            parent_group_id='abc', ip_protocol='tcp', from_port=1,
            to_port=2000, cidr='1.2.3.4/32', group_id='123'
        )

    @mock.patch.object(shade.OpenStackCloud, 'get_security_group')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_security_group_rule_nova_no_ports(self,
                                                      mock_nova, mock_get):
        self.cloud.secgroup_source = 'nova'

        new_rule = fakes.FakeNovaSecgroupRule(
            id='xyz', from_port=1, to_port=65535, ip_protocol='tcp',
            cidr='1.2.3.4/32')
        mock_nova.security_group_rules.create.return_value = new_rule
        mock_get.return_value = {'id': 'abc'}

        self.cloud.create_security_group_rule(
            'abc', protocol='tcp',
            remote_ip_prefix='1.2.3.4/32', remote_group_id='123')

        mock_nova.security_group_rules.create.assert_called_once_with(
            parent_group_id='abc', ip_protocol='tcp', from_port=1,
            to_port=65535, cidr='1.2.3.4/32', group_id='123'
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_security_group_rule_none(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = None
        self.assertRaises(shade.OpenStackCloudUnavailableFeature,
                          self.cloud.create_security_group_rule,
                          '')
        self.assertFalse(mock_neutron.create_security_group.called)
        self.assertFalse(mock_nova.security_groups.create.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_security_group_rule_neutron(self, mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        r = self.cloud.delete_security_group_rule('xyz')
        mock_neutron.delete_security_group_rule.assert_called_once_with(
            security_group_rule='xyz')
        self.assertTrue(r)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_security_group_rule_nova(self, mock_nova):
        self.cloud.secgroup_source = 'nova'
        r = self.cloud.delete_security_group_rule('xyz')
        mock_nova.security_group_rules.delete.assert_called_once_with(
            rule='xyz')
        self.assertTrue(r)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_security_group_rule_none(self, mock_nova, mock_neutron):
        self.cloud.secgroup_source = None
        self.assertRaises(shade.OpenStackCloudUnavailableFeature,
                          self.cloud.delete_security_group_rule,
                          '')
        self.assertFalse(mock_neutron.create_security_group.called)
        self.assertFalse(mock_nova.security_groups.create.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_security_group_rule_not_found(self,
                                                  mock_nova,
                                                  mock_neutron):
        self.cloud.secgroup_source = 'neutron'
        mock_neutron.delete_security_group_rule.side_effect = (
            neutron_exc.NotFound()
        )
        r = self.cloud.delete_security_group('doesNotExist')
        self.assertFalse(r)

        self.cloud.secgroup_source = 'nova'
        mock_neutron.security_group_rules.delete.side_effect = (
            nova_exc.NotFound("uh oh")
        )
        r = self.cloud.delete_security_group('doesNotExist')
        self.assertFalse(r)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_nova_egress_security_group_rule(self, mock_nova):
        self.cloud.secgroup_source = 'nova'
        mock_nova.security_groups.list.return_value = [nova_grp_obj]
        self.assertRaises(shade.OpenStackCloudException,
                          self.cloud.create_security_group_rule,
                          secgroup_name_or_id='nova-sec-group',
                          direction='egress')

    @mock.patch.object(shade._utils, 'normalize_nova_secgroups')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_server_security_groups(self, mock_nova, mock_norm):
        server = dict(id='server_id')
        self.cloud.list_server_security_groups(server)
        mock_nova.servers.list_security_group.assert_called_once_with(
            server='server_id'
        )
        self.assertTrue(mock_norm.called)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_server_security_groups_bad_source(self, mock_nova):
        self.cloud.secgroup_source = 'invalid'
        server = dict(id='server_id')
        ret = self.cloud.list_server_security_groups(server)
        self.assertEqual([], ret)
        self.assertFalse(mock_nova.servers.list_security_group.called)

shade-1.7.0/shade/tests/unit/test_meta.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import warlock from neutronclient.common import exceptions as neutron_exceptions import shade from shade import _utils from shade import meta from shade.tests import fakes from shade.tests.unit import base PRIVATE_V4 = '198.51.100.3' PUBLIC_V4 = '192.0.2.99' PUBLIC_V6 = '2001:0db8:face:0da0:face::0b00:1c' # rfc3849 class FakeCloud(object): region_name = 'test-region' name = 'test-name' private = False force_ipv4 = False service_val = True _unused = "useless" _local_ipv6 = True def get_flavor_name(self, id): return 'test-flavor-name' def get_image_name(self, id): return 'test-image-name' def get_volumes(self, server): return [] def has_service(self, service_name): return self.service_val def use_internal_network(self): return True def use_external_network(self): return True def get_internal_networks(self): return [] def get_external_networks(self): return [] def list_server_security_groups(self, server): return [] standard_fake_server = fakes.FakeServer( id='test-id-0', name='test-id-0', status='ACTIVE', metadata={'group': 'test-group'}, addresses={'private': [{'OS-EXT-IPS:type': 'fixed', 'addr': PRIVATE_V4, 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'addr': PUBLIC_V4, 'version': 4}]}, flavor={'id': '101'}, image={'id': '471c2475-da2f-47ac-aba5-cb4aa3d546f5'}, accessIPv4='', accessIPv6='', ) class TestMeta(base.TestCase): def test_find_nova_addresses_key_name(self): # Note 198.51.100.0/24 is TEST-NET-2 from 
rfc5737 addrs = {'public': [{'addr': '198.51.100.1', 'version': 4}], 'private': [{'addr': '192.0.2.5', 'version': 4}]} self.assertEqual( ['198.51.100.1'], meta.find_nova_addresses(addrs, key_name='public')) self.assertEqual([], meta.find_nova_addresses(addrs, key_name='foo')) def test_find_nova_addresses_ext_tag(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses(addrs, ext_tag='fixed')) self.assertEqual([], meta.find_nova_addresses(addrs, ext_tag='foo')) def test_find_nova_addresses_key_name_and_ext_tag(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed')) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='public', ext_tag='foo')) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='bar', ext_tag='fixed')) def test_find_nova_addresses_all(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed', version=4)) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed', version=6)) def test_get_server_ip(self): srv = meta.obj_to_dict(standard_fake_server) self.assertEqual( PRIVATE_V4, meta.get_server_ip(srv, ext_tag='fixed')) self.assertEqual( PUBLIC_V4, meta.get_server_ip(srv, ext_tag='floating')) @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'search_networks') def test_get_server_private_ip( self, mock_search_networks, mock_has_service): mock_has_service.return_value = True mock_search_networks.return_value = [{ 'id': 'test-net-id', 'name': 'test-net-name' }] srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', addresses={'private': 
[{'OS-EXT-IPS:type': 'fixed', 'addr': PRIVATE_V4, 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'addr': PUBLIC_V4, 'version': 4}]} )) self.assertEqual( PRIVATE_V4, meta.get_server_private_ip(srv, self.cloud)) mock_has_service.assert_called_with('network') mock_search_networks.assert_called_with( filters={'router:external': False} ) @mock.patch.object(shade.OpenStackCloud, 'list_server_security_groups') @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'search_networks') def test_get_server_private_ip_devstack( self, mock_search_networks, mock_has_service, mock_get_flavor_name, mock_get_image_name, mock_get_volumes, mock_list_server_security_groups): mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_has_service.return_value = True mock_get_volumes.return_value = [] mock_search_networks.return_value = [ { 'id': 'test_pnztt_net', 'name': 'test_pnztt_net' }, { 'id': 'private', 'name': 'private', }, ] srv = self.cloud.get_openstack_vars(meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': PRIVATE_V4, u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42' }]} ))) self.assertEqual(PRIVATE_V4, srv['private_v4']) mock_has_service.assert_called_with('volume') mock_search_networks.assert_called_with( filters={'router:external': False} ) @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'search_networks') def test_get_server_external_ipv4_neutron( self, mock_search_networks, mock_has_service): # Testing Clouds with 
Neutron mock_has_service.return_value = True mock_search_networks.return_value = [{ 'id': 'test-net-id', 'name': 'test-net', 'router:external': True, }] srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PUBLIC_V4, 'version': 4}]}, )) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'search_networks') def test_get_server_external_provider_ipv4_neutron( self, mock_search_networks, mock_has_service): # Testing Clouds with Neutron mock_has_service.return_value = True mock_search_networks.return_value = [{ 'id': 'test-net-id', 'name': 'test-net', 'provider:network_type': 'vlan', }] srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PUBLIC_V4, 'version': 4}]}, )) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'search_networks') def test_get_server_external_none_ipv4_neutron( self, mock_search_networks, mock_has_service): # Testing Clouds with Neutron mock_has_service.return_value = True mock_search_networks.return_value = [{ 'id': 'test-net-id', 'name': 'test-net', 'router:external': False, }] srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PUBLIC_V4, 'version': 4}]}, )) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(None, ip) def test_get_server_external_ipv4_neutron_accessIPv4(self): srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', accessIPv4=PUBLIC_V4)) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) def 
test_get_server_external_ipv4_neutron_accessIPv6(self): srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', accessIPv6=PUBLIC_V6)) ip = meta.get_server_external_ipv6(server=srv) self.assertEqual(PUBLIC_V6, ip) @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'search_networks') @mock.patch.object(shade.OpenStackCloud, 'search_ports') @mock.patch.object(meta, 'get_server_ip') def test_get_server_external_ipv4_neutron_exception( self, mock_get_server_ip, mock_search_ports, mock_search_networks, mock_has_service): # Testing Clouds with a non working Neutron mock_has_service.return_value = True mock_search_networks.return_value = [] mock_search_ports.side_effect = neutron_exceptions.NotFound() mock_get_server_ip.return_value = PUBLIC_V4 srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE')) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) self.assertTrue(mock_get_server_ip.called) @mock.patch.object(shade.OpenStackCloud, 'has_service') def test_get_server_external_ipv4_nova_public( self, mock_has_service): # Testing Clouds w/o Neutron and a network named public mock_has_service.return_value = False srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', addresses={'public': [{'addr': PUBLIC_V4, 'version': 4}]})) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(meta, 'get_server_ip') def test_get_server_external_ipv4_nova_none( self, mock_get_server_ip, mock_has_service): # Testing Clouds w/o Neutron and a globally routable IP mock_has_service.return_value = False mock_get_server_ip.return_value = None srv = meta.obj_to_dict(fakes.FakeServer( id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{'addr': PRIVATE_V4}]})) ip = 
meta.get_server_external_ipv4(cloud=self.cloud, server=srv)
        self.assertIsNone(ip)
        self.assertTrue(mock_get_server_ip.called)

    def test_get_server_external_ipv6(self):
        srv = meta.obj_to_dict(fakes.FakeServer(
            id='test-id', name='test-name', status='ACTIVE',
            addresses={
                'test-net': [
                    {'addr': PUBLIC_V4, 'version': 4},
                    {'addr': PUBLIC_V6, 'version': 6}
                ]
            }
        ))
        ip = meta.get_server_external_ipv6(srv)
        self.assertEqual(PUBLIC_V6, ip)

    def test_get_groups_from_server(self):
        server_vars = {'flavor': 'test-flavor',
                       'image': 'test-image',
                       'az': 'test-az'}
        self.assertEqual(
            ['test-name',
             'test-region',
             'test-name_test-region',
             'test-group',
             'instance-test-id-0',
             'meta-group_test-group',
             'test-az',
             'test-region_test-az',
             'test-name_test-region_test-az'],
            meta.get_groups_from_server(
                FakeCloud(),
                meta.obj_to_dict(standard_fake_server),
                server_vars
            )
        )

    def test_obj_list_to_dict(self):
        """Test conversion of a list of objects to a list of dictionaries"""
        class obj0(object):
            value = 0

        class obj1(object):
            value = 1

        list = [obj0, obj1]
        new_list = meta.obj_list_to_dict(list)
        self.assertEqual(new_list[0]['value'], 0)
        self.assertEqual(new_list[1]['value'], 1)

    @mock.patch.object(FakeCloud, 'list_server_security_groups')
    def test_get_security_groups(self, mock_list_server_security_groups):
        '''This test verifies that calling get_hostvars_from_server
        ultimately calls list_server_security_groups, and that the return
        value from list_server_security_groups ends up in
        server['security_groups'].'''
        mock_list_server_security_groups.return_value = [
            {'name': 'testgroup', 'id': '1'}]

        server = meta.obj_to_dict(standard_fake_server)
        hostvars = meta.get_hostvars_from_server(FakeCloud(), server)

        mock_list_server_security_groups.assert_called_once_with(server)
        self.assertEqual('testgroup',
                         hostvars['security_groups'][0]['name'])

    @mock.patch.object(shade.meta, 'get_server_external_ipv6')
    @mock.patch.object(shade.meta, 'get_server_external_ipv4')
    def test_basic_hostvars(
            self, mock_get_server_external_ipv4,
mock_get_server_external_ipv6): mock_get_server_external_ipv4.return_value = PUBLIC_V4 mock_get_server_external_ipv6.return_value = PUBLIC_V6 hostvars = meta.get_hostvars_from_server( FakeCloud(), _utils.normalize_server( meta.obj_to_dict(standard_fake_server), cloud_name='CLOUD_NAME', region_name='REGION_NAME')) self.assertNotIn('links', hostvars) self.assertEqual(PRIVATE_V4, hostvars['private_v4']) self.assertEqual(PUBLIC_V4, hostvars['public_v4']) self.assertEqual(PUBLIC_V6, hostvars['public_v6']) self.assertEqual(PUBLIC_V6, hostvars['interface_ip']) self.assertEquals('REGION_NAME', hostvars['region']) self.assertEquals('CLOUD_NAME', hostvars['cloud']) self.assertEquals("test-image-name", hostvars['image']['name']) self.assertEquals(standard_fake_server.image['id'], hostvars['image']['id']) self.assertNotIn('links', hostvars['image']) self.assertEquals(standard_fake_server.flavor['id'], hostvars['flavor']['id']) self.assertEquals("test-flavor-name", hostvars['flavor']['name']) self.assertNotIn('links', hostvars['flavor']) # test having volumes # test volume exception self.assertEquals([], hostvars['volumes']) @mock.patch.object(shade.meta, 'get_server_external_ipv6') @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_ipv4_hostvars( self, mock_get_server_external_ipv4, mock_get_server_external_ipv6): mock_get_server_external_ipv4.return_value = PUBLIC_V4 mock_get_server_external_ipv6.return_value = PUBLIC_V6 fake_cloud = FakeCloud() fake_cloud.force_ipv4 = True hostvars = meta.get_hostvars_from_server( fake_cloud, meta.obj_to_dict(standard_fake_server)) self.assertEqual(PUBLIC_V4, hostvars['interface_ip']) @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_private_interface_ip(self, mock_get_server_external_ipv4): mock_get_server_external_ipv4.return_value = PUBLIC_V4 cloud = FakeCloud() cloud.private = True hostvars = meta.get_hostvars_from_server( cloud, meta.obj_to_dict(standard_fake_server)) self.assertEqual(PRIVATE_V4, 
hostvars['interface_ip']) @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_image_string(self, mock_get_server_external_ipv4): mock_get_server_external_ipv4.return_value = PUBLIC_V4 server = standard_fake_server server.image = 'fake-image-id' hostvars = meta.get_hostvars_from_server( FakeCloud(), meta.obj_to_dict(server)) self.assertEquals('fake-image-id', hostvars['image']['id']) def test_az(self): server = standard_fake_server server.__dict__['OS-EXT-AZ:availability_zone'] = 'az1' hostvars = _utils.normalize_server( meta.obj_to_dict(server), cloud_name='', region_name='') self.assertEquals('az1', hostvars['az']) def test_has_volume(self): mock_cloud = mock.MagicMock() fake_volume = fakes.FakeVolume( id='volume1', status='available', name='Volume 1 Display Name', attachments=[{'device': '/dev/sda0'}]) fake_volume_dict = meta.obj_to_dict(fake_volume) mock_cloud.get_volumes.return_value = [fake_volume_dict] hostvars = meta.get_hostvars_from_server( mock_cloud, meta.obj_to_dict(standard_fake_server)) self.assertEquals('volume1', hostvars['volumes'][0]['id']) self.assertEquals('/dev/sda0', hostvars['volumes'][0]['device']) def test_has_no_volume_service(self): fake_cloud = FakeCloud() fake_cloud.service_val = False hostvars = meta.get_hostvars_from_server( fake_cloud, meta.obj_to_dict(standard_fake_server)) self.assertEquals([], hostvars['volumes']) def test_unknown_volume_exception(self): mock_cloud = mock.MagicMock() class FakeException(Exception): pass def side_effect(*args): raise FakeException("No Volumes") mock_cloud.get_volumes.side_effect = side_effect self.assertRaises( FakeException, meta.get_hostvars_from_server, mock_cloud, meta.obj_to_dict(standard_fake_server)) def test_obj_to_dict(self): cloud = FakeCloud() cloud.server = standard_fake_server cloud_dict = meta.obj_to_dict(cloud) self.assertEqual(FakeCloud.name, cloud_dict['name']) self.assertNotIn('_unused', cloud_dict) self.assertNotIn('get_flavor_name', cloud_dict) 
self.assertNotIn('server', cloud_dict)
        self.assertTrue(hasattr(cloud_dict, 'name'))
        self.assertEqual(cloud_dict.name, cloud_dict['name'])

    def test_obj_to_dict_subclass(self):
        class FakeObjDict(dict):
            additional = 1
        obj = FakeObjDict(foo='bar')
        obj_dict = meta.obj_to_dict(obj)
        self.assertIn('additional', obj_dict)
        self.assertIn('foo', obj_dict)
        self.assertEqual(obj_dict['additional'], 1)
        self.assertEqual(obj_dict['foo'], 'bar')

    def test_warlock_to_dict(self):
        schema = {
            'name': 'Test',
            'properties': {
                'id': {'type': 'string'},
                'name': {'type': 'string'},
                '_unused': {'type': 'string'},
            }
        }
        test_model = warlock.model_factory(schema)
        test_obj = test_model(
            id='471c2475-da2f-47ac-aba5-cb4aa3d546f5',
            name='test-image')
        test_dict = meta.obj_to_dict(test_obj)
        self.assertNotIn('_unused', test_dict)
        self.assertEqual('test-image', test_dict['name'])
        self.assertTrue(hasattr(test_dict, 'name'))
        self.assertEqual(test_dict.name, test_dict['name'])

shade-1.7.0/shade/tests/unit/test_caching.py
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tempfile from glanceclient.v2 import shell import mock import os_client_config as occ import testtools import warlock import shade.openstackcloud from shade import _utils from shade import exc from shade import meta from shade.tests import fakes from shade.tests.unit import base # Mock out the gettext function so that the task schema can be copypasta def _(msg): return msg _TASK_PROPERTIES = { "id": { "description": _("An identifier for the task"), "pattern": _('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}' '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'), "type": "string" }, "type": { "description": _("The type of task represented by this content"), "enum": [ "import", ], "type": "string" }, "status": { "description": _("The current status of this task"), "enum": [ "pending", "processing", "success", "failure" ], "type": "string" }, "input": { "description": _("The parameters required by task, JSON blob"), "type": ["null", "object"], }, "result": { "description": _("The result of current task, JSON blob"), "type": ["null", "object"], }, "owner": { "description": _("An identifier for the owner of this task"), "type": "string" }, "message": { "description": _("Human-readable informative message only included" " when appropriate (usually on failure)"), "type": "string", }, "expires_at": { "description": _("Datetime when this resource would be" " subject to removal"), "type": ["null", "string"] }, "created_at": { "description": _("Datetime when this resource was created"), "type": "string" }, "updated_at": { "description": _("Datetime when this resource was updated"), "type": "string" }, 'self': {'type': 'string'}, 'schema': {'type': 'string'} } _TASK_SCHEMA = dict( name='Task', properties=_TASK_PROPERTIES, additionalProperties=False, ) class TestMemoryCache(base.TestCase): CLOUD_CONFIG = { 'cache': { 'max_age': 90, 'class': 'dogpile.cache.memory', 'expiration': { 'server': 1, }, }, 'clouds': { '_test_cloud_': { 'auth': { 'auth_url': 
'http://198.51.100.1:35357/v2.0', 'username': '_test_user_', 'password': '_test_pass_', 'project_name': '_test_project_', }, 'region_name': '_test_region_', }, }, } def test_openstack_cloud(self): self.assertIsInstance(self.cloud, shade.OpenStackCloud) @mock.patch('shade.OpenStackCloud.keystone_client') def test_list_projects_v3(self, keystone_mock): project = fakes.FakeProject('project_a') keystone_mock.projects.list.return_value = [project] self.cloud.cloud_config.config['identity_api_version'] = '3' self.assertEqual( meta.obj_list_to_dict([project]), self.cloud.list_projects()) project_b = fakes.FakeProject('project_b') keystone_mock.projects.list.return_value = [project, project_b] self.assertEqual( meta.obj_list_to_dict([project]), self.cloud.list_projects()) self.cloud.list_projects.invalidate(self.cloud) self.assertEqual( meta.obj_list_to_dict([project, project_b]), self.cloud.list_projects()) @mock.patch('shade.OpenStackCloud.keystone_client') def test_list_projects_v2(self, keystone_mock): project = fakes.FakeProject('project_a') keystone_mock.tenants.list.return_value = [project] self.cloud.cloud_config.config['identity_api_version'] = '2' self.assertEqual( meta.obj_list_to_dict([project]), self.cloud.list_projects()) project_b = fakes.FakeProject('project_b') keystone_mock.tenants.list.return_value = [project, project_b] self.assertEqual( meta.obj_list_to_dict([project]), self.cloud.list_projects()) self.cloud.list_projects.invalidate(self.cloud) self.assertEqual( meta.obj_list_to_dict([project, project_b]), self.cloud.list_projects()) @mock.patch('shade.OpenStackCloud.cinder_client') def test_list_volumes(self, cinder_mock): fake_volume = fakes.FakeVolume('volume1', 'available', 'Volume 1 Display Name') fake_volume_dict = _utils.normalize_volumes( [meta.obj_to_dict(fake_volume)])[0] cinder_mock.volumes.list.return_value = [fake_volume] self.assertEqual([fake_volume_dict], self.cloud.list_volumes()) fake_volume2 = fakes.FakeVolume('volume2', 'available', 
'Volume 2 Display Name') fake_volume2_dict = _utils.normalize_volumes( [meta.obj_to_dict(fake_volume2)])[0] cinder_mock.volumes.list.return_value = [fake_volume, fake_volume2] self.assertEqual([fake_volume_dict], self.cloud.list_volumes()) self.cloud.list_volumes.invalidate(self.cloud) self.assertEqual([fake_volume_dict, fake_volume2_dict], self.cloud.list_volumes()) @mock.patch('shade.OpenStackCloud.cinder_client') def test_list_volumes_creating_invalidates(self, cinder_mock): fake_volume = fakes.FakeVolume('volume1', 'creating', 'Volume 1 Display Name') fake_volume_dict = _utils.normalize_volumes( [meta.obj_to_dict(fake_volume)])[0] cinder_mock.volumes.list.return_value = [fake_volume] self.assertEqual([fake_volume_dict], self.cloud.list_volumes()) fake_volume2 = fakes.FakeVolume('volume2', 'available', 'Volume 2 Display Name') fake_volume2_dict = _utils.normalize_volumes( [meta.obj_to_dict(fake_volume2)])[0] cinder_mock.volumes.list.return_value = [fake_volume, fake_volume2] self.assertEqual([fake_volume_dict, fake_volume2_dict], self.cloud.list_volumes()) @mock.patch.object(shade.OpenStackCloud, 'cinder_client') def test_create_volume_invalidates(self, cinder_mock): fake_volb4 = fakes.FakeVolume('volume1', 'available', 'Volume 1 Display Name') fake_volb4_dict = _utils.normalize_volumes( [meta.obj_to_dict(fake_volb4)])[0] cinder_mock.volumes.list.return_value = [fake_volb4] self.assertEqual([fake_volb4_dict], self.cloud.list_volumes()) volume = dict(display_name='junk_vol', size=1, display_description='test junk volume') fake_vol = fakes.FakeVolume('12345', 'creating', '') fake_vol_dict = meta.obj_to_dict(fake_vol) fake_vol_dict = _utils.normalize_volumes( [meta.obj_to_dict(fake_vol)])[0] cinder_mock.volumes.create.return_value = fake_vol cinder_mock.volumes.list.return_value = [fake_volb4, fake_vol] def creating_available(): def now_available(): fake_vol.status = 'available' fake_vol_dict['status'] = 'available' return mock.DEFAULT 
cinder_mock.volumes.list.side_effect = now_available return mock.DEFAULT cinder_mock.volumes.list.side_effect = creating_available self.cloud.create_volume(wait=True, timeout=None, **volume) self.assertTrue(cinder_mock.volumes.create.called) self.assertEqual(3, cinder_mock.volumes.list.call_count) # If cache was not invalidated, we would not see our own volume here # because the first volume was available and thus would already be # cached. self.assertEqual([fake_volb4_dict, fake_vol_dict], self.cloud.list_volumes()) # And now delete and check same thing since list is cached as all # available fake_vol.status = 'deleting' fake_vol_dict = meta.obj_to_dict(fake_vol) def deleting_gone(): def now_gone(): cinder_mock.volumes.list.return_value = [fake_volb4] return mock.DEFAULT cinder_mock.volumes.list.side_effect = now_gone return mock.DEFAULT cinder_mock.volumes.list.return_value = [fake_volb4, fake_vol] cinder_mock.volumes.list.side_effect = deleting_gone cinder_mock.volumes.delete.return_value = fake_vol_dict self.cloud.delete_volume('12345') self.assertEqual([fake_volb4_dict], self.cloud.list_volumes()) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_users(self, keystone_mock): fake_user = fakes.FakeUser('999', '', '') keystone_mock.users.list.return_value = [fake_user] users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual('999', users[0]['id']) self.assertEqual('', users[0]['name']) self.assertEqual('', users[0]['email']) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_modify_user_invalidates_cache(self, keystone_mock): fake_user = fakes.FakeUser('abc123', 'abc123@domain.test', 'abc123 name') # first cache an empty list keystone_mock.users.list.return_value = [] self.assertEqual([], self.cloud.list_users()) # now add one keystone_mock.users.list.return_value = [fake_user] keystone_mock.users.create.return_value = fake_user created = self.cloud.create_user(name='abc123 name', 
email='abc123@domain.test') self.assertEqual('abc123', created['id']) self.assertEqual('abc123 name', created['name']) self.assertEqual('abc123@domain.test', created['email']) # Cache should have been invalidated users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual('abc123', users[0]['id']) self.assertEqual('abc123 name', users[0]['name']) self.assertEqual('abc123@domain.test', users[0]['email']) # Update and check to see if it is updated fake_user2 = fakes.FakeUser('abc123', 'abc123-changed@domain.test', 'abc123 name') fake_user2_dict = meta.obj_to_dict(fake_user2) keystone_mock.users.update.return_value = fake_user2 keystone_mock.users.list.return_value = [fake_user2] keystone_mock.users.get.return_value = fake_user2_dict self.cloud.update_user('abc123', email='abc123-changed@domain.test') keystone_mock.users.update.assert_called_with( user=fake_user2_dict, email='abc123-changed@domain.test') users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual('abc123', users[0]['id']) self.assertEqual('abc123 name', users[0]['name']) self.assertEqual('abc123-changed@domain.test', users[0]['email']) # Now delete and ensure it disappears keystone_mock.users.list.return_value = [] self.cloud.delete_user('abc123') self.assertEqual([], self.cloud.list_users()) self.assertTrue(keystone_mock.users.delete.was_called) @mock.patch.object(shade.OpenStackCloud, '_compute_client') @mock.patch.object(shade.OpenStackCloud, 'nova_client') def test_list_flavors(self, nova_mock, mock_compute): nova_mock.flavors.list.return_value = [] nova_mock.flavors.api.client.get.return_value = {} mock_response = mock.Mock() mock_response.json.return_value = dict(extra_specs={}) mock_response.headers.get.return_value = 'request-id' mock_compute.get.return_value = mock_response self.assertEqual([], self.cloud.list_flavors()) fake_flavor = fakes.FakeFlavor( '555', 'vanilla', 100, dict( x_openstack_request_ids=['request-id'])) fake_flavor_dict = 
_utils.normalize_flavors( [meta.obj_to_dict(fake_flavor)] )[0] nova_mock.flavors.list.return_value = [fake_flavor] self.cloud.list_flavors.invalidate(self.cloud) self.assertEqual([fake_flavor_dict], self.cloud.list_flavors()) @mock.patch.object(shade.OpenStackCloud, 'glance_client') def test_list_images(self, glance_mock): glance_mock.images.list.return_value = [] self.assertEqual([], self.cloud.list_images()) fake_image = fakes.FakeImage('22', '22 name', 'success') fake_image_dict = meta.obj_to_dict(fake_image) glance_mock.images.list.return_value = [fake_image] self.cloud.list_images.invalidate(self.cloud) self.assertEqual([fake_image_dict], self.cloud.list_images()) @mock.patch.object(shade.OpenStackCloud, 'glance_client') def test_list_images_ignores_unsteady_status(self, glance_mock): steady_image = fakes.FakeImage('68', 'Jagr', 'active') steady_image_dict = meta.obj_to_dict(steady_image) for status in ('queued', 'saving', 'pending_delete'): active_image = fakes.FakeImage(self.getUniqueString(), self.getUniqueString(), status) glance_mock.images.list.return_value = [active_image] active_image_dict = meta.obj_to_dict(active_image) self.assertEqual([active_image_dict], self.cloud.list_images()) glance_mock.images.list.return_value = [active_image, steady_image] # Should expect steady_image to appear if active wasn't cached self.assertEqual([active_image_dict, steady_image_dict], self.cloud.list_images()) @mock.patch.object(shade.OpenStackCloud, 'glance_client') def test_list_images_caches_steady_status(self, glance_mock): steady_image = fakes.FakeImage('91', 'Federov', 'active') first_image = None for status in ('active', 'deleted', 'killed'): active_image = fakes.FakeImage(self.getUniqueString(), self.getUniqueString(), status) active_image_dict = meta.obj_to_dict(active_image) if not first_image: first_image = active_image_dict glance_mock.images.list.return_value = [active_image] self.assertEqual([first_image], self.cloud.list_images()) 
glance_mock.images.list.return_value = [active_image, steady_image] # because we skipped the create_image code path, no invalidation # was done, so we _SHOULD_ expect steady state images to cache and # therefore we should _not_ expect to see the new one here self.assertEqual([first_image], self.cloud.list_images()) def _call_create_image(self, name, container=None, **kwargs): imagefile = tempfile.NamedTemporaryFile(delete=False) imagefile.write(b'\0') imagefile.close() self.cloud.create_image( name, imagefile.name, container=container, wait=True, is_public=False, **kwargs) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'glance_client') def test_create_image_put_v1(self, glance_mock, mock_api_version): mock_api_version.return_value = '1' glance_mock.images.list.return_value = [] self.assertEqual([], self.cloud.list_images()) fake_image = fakes.FakeImage('42', '42 name', 'success') glance_mock.images.create.return_value = fake_image glance_mock.images.list.return_value = [fake_image] self._call_create_image('42 name') args = {'name': '42 name', 'container_format': 'bare', 'disk_format': 'qcow2', 'properties': {'owner_specified.shade.md5': mock.ANY, 'owner_specified.shade.sha256': mock.ANY, 'is_public': False}} fake_image_dict = meta.obj_to_dict(fake_image) glance_mock.images.create.assert_called_with(**args) glance_mock.images.update.assert_called_with( data=mock.ANY, image=fake_image_dict) self.assertEqual([fake_image_dict], self.cloud.list_images()) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'glance_client') def test_create_image_put_v2(self, glance_mock, mock_api_version): mock_api_version.return_value = '2' self.cloud.image_api_use_tasks = False glance_mock.images.list.return_value = [] self.assertEqual([], self.cloud.list_images()) fake_image = fakes.FakeImage('42', '42 name', 'success') glance_mock.images.create.return_value = 
fake_image
        glance_mock.images.list.return_value = [fake_image]
        self._call_create_image('42 name', min_disk=0, min_ram=0)
        args = {'name': '42 name',
                'container_format': 'bare', 'disk_format': 'qcow2',
                'owner_specified.shade.md5': mock.ANY,
                'owner_specified.shade.sha256': mock.ANY,
                'visibility': 'private',
                'min_disk': 0, 'min_ram': 0}
        glance_mock.images.create.assert_called_with(**args)
        glance_mock.images.upload.assert_called_with(
            image_data=mock.ANY, image_id=fake_image.id)
        fake_image_dict = meta.obj_to_dict(fake_image)
        self.assertEqual([fake_image_dict], self.cloud.list_images())

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, '_get_file_hashes')
    @mock.patch.object(shade.OpenStackCloud, 'glance_client')
    @mock.patch.object(shade.OpenStackCloud, 'swift_client')
    @mock.patch.object(shade.OpenStackCloud, 'swift_service')
    def test_create_image_task(self,
                               swift_service_mock,
                               swift_mock,
                               glance_mock,
                               get_file_hashes,
                               mock_api_version):
        mock_api_version.return_value = '2'
        self.cloud.image_api_use_tasks = True

        class Container(object):
            name = 'image_upload_v2_test_container'

        fake_container = Container()
        swift_mock.get_capabilities.return_value = {
            'swift': {
                'max_file_size': 1000
            }
        }
        swift_mock.put_container.return_value = fake_container
        swift_mock.head_object.return_value = {}
        glance_mock.images.list.return_value = []
        self.assertEqual([], self.cloud.list_images())

        fake_md5 = "fake-md5"
        fake_sha256 = "fake-sha256"
        get_file_hashes.return_value = (fake_md5, fake_sha256)

        FakeImage = warlock.model_factory(shell.get_image_schema())
        fake_image = FakeImage(
            id='a35e8afc-cae9-4e38-8441-2cd465f79f7b',
            name='name-99', status='active', visibility='private')
        glance_mock.images.list.return_value = [fake_image]

        FakeTask = warlock.model_factory(_TASK_SCHEMA)
        args = {
            'id': '21FBD9A7-85EC-4E07-9D58-72F1ACF7CB1F',
            'status': 'success',
            'type': 'import',
            'result': {
                'image_id': 'a35e8afc-cae9-4e38-8441-2cd465f79f7b',
            },
        }
        fake_task = FakeTask(**args)
        glance_mock.tasks.get.return_value = fake_task
        self._call_create_image(
            name='name-99', container='image_upload_v2_test_container')
        args = {'header': ['x-object-meta-x-shade-md5:fake-md5',
                           'x-object-meta-x-shade-sha256:fake-sha256'],
                'segment_size': 1000}
        swift_service_mock.upload.assert_called_with(
            container='image_upload_v2_test_container',
            objects=mock.ANY,
            options=args)
        glance_mock.tasks.create.assert_called_with(type='import', input={
            'import_from': 'image_upload_v2_test_container/name-99',
            'image_properties': {'name': 'name-99'}})
        args = {'owner_specified.shade.md5': fake_md5,
                'owner_specified.shade.sha256': fake_sha256,
                'image_id': 'a35e8afc-cae9-4e38-8441-2cd465f79f7b'}
        glance_mock.images.update.assert_called_with(**args)
        fake_image_dict = meta.obj_to_dict(fake_image)
        self.assertEqual([fake_image_dict], self.cloud.list_images())

    @mock.patch.object(shade.OpenStackCloud, 'glance_client')
    def test_cache_no_cloud_name(self, glance_mock):

        class FakeImage(object):
            status = 'active'
            name = 'None Test Image'

            def __init__(self, id):
                self.id = id

        fi = FakeImage(id=1)
        glance_mock.images.list.return_value = [fi]
        self.cloud.name = None
        self.assertEqual(
            meta.obj_list_to_dict([fi]),
            self.cloud.list_images())
        # Now test that the list was cached
        fi2 = FakeImage(id=2)
        glance_mock.images.list.return_value = [fi, fi2]
        self.assertEqual(
            meta.obj_list_to_dict([fi]),
            self.cloud.list_images())
        # Invalidation too
        self.cloud.list_images.invalidate(self.cloud)
        self.assertEqual(
            meta.obj_list_to_dict([fi, fi2]),
            self.cloud.list_images())


class TestBogusAuth(base.TestCase):

    CLOUD_CONFIG = {
        'clouds': {
            '_test_cloud_': {
                'auth': {
                    'auth_url': 'http://198.51.100.1:35357/v2.0',
                    'username': '_test_user_',
                    'password': '_test_pass_',
                    'project_name': '_test_project_',
                },
                'region_name': '_test_region_',
            },
            '_bogus_test_': {
                'auth_type': 'bogus',
                'auth': {
                    'auth_url': 'http://198.51.100.1:35357/v2.0',
                    'username': '_test_user_',
                    'password': '_test_pass_',
                    'project_name': '_test_project_',
                },
                'region_name': '_test_region_',
            },
        },
    }

    def setUp(self):
        super(TestBogusAuth, self).setUp()

    def test_get_auth_bogus(self):
        with testtools.ExpectedException(exc.OpenStackCloudException):
            shade.openstack_cloud(
                cloud='_bogus_test_', config=self.config)


shade-1.7.0/shade/tests/unit/base.py

# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time

import fixtures
import os_client_config as occ
import tempfile
import yaml

import shade.openstackcloud
from shade.tests import base


class TestCase(base.TestCase):

    """Test case base class for all unit tests."""

    CLOUD_CONFIG = {
        'clouds': {
            '_test_cloud_': {
                'auth': {
                    'auth_url': 'http://198.51.100.1:35357/v2.0',
                    'username': '_test_user_',
                    'password': '_test_pass_',
                    'project_name': '_test_project_',
                },
                'region_name': '_test_region_',
            },
        },
    }

    def setUp(self):
        """Run before each test method to initialize test environment."""
        super(TestCase, self).setUp()

        # Sleeps are for real testing, but unit tests shouldn't need them
        realsleep = time.sleep

        def _nosleep(seconds):
            return realsleep(seconds * 0.0001)

        self.sleep_fixture = self.useFixture(fixtures.MonkeyPatch(
            'time.sleep', _nosleep))

        # Isolate os-client-config from test environment
        config = tempfile.NamedTemporaryFile(delete=False)
        config.write(bytes(yaml.dump(self.CLOUD_CONFIG).encode('utf-8')))
        config.close()
        vendor = tempfile.NamedTemporaryFile(delete=False)
        vendor.write(b'{}')
        vendor.close()

        self.config = occ.OpenStackConfig(
            config_files=[config.name],
            vendor_files=[vendor.name])
        self.cloud_config = self.config.get_one_cloud(cloud='_test_cloud_')
        self.cloud = shade.OpenStackCloud(
            cloud_config=self.cloud_config,
            log_inner_exceptions=True)


shade-1.7.0/shade/tests/unit/test_volume.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import cinderclient.exceptions as cinder_exc
import mock
import testtools

import shade
from shade.tests.unit import base


class TestVolume(base.TestCase):

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_attach_volume(self, mock_nova):
        server = dict(id='server001')
        volume = dict(id='volume001', status='available', attachments=[])
        rvol = dict(id='volume001', status='attached', attachments=[
            {'server_id': server['id'], 'device': 'device001'}
        ])
        mock_nova.volumes.create_server_volume.return_value = rvol

        ret = self.cloud.attach_volume(server, volume, wait=False)
        self.assertEqual(rvol, ret)
        mock_nova.volumes.create_server_volume.assert_called_once_with(
            volume_id=volume['id'], server_id=server['id'], device=None
        )

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_attach_volume_exception(self, mock_nova):
        server = dict(id='server001')
        volume = dict(id='volume001', status='available', attachments=[])
        mock_nova.volumes.create_server_volume.side_effect = Exception()

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Error attaching volume %s to server %s" % (
                volume['id'], server['id'])
        ):
            self.cloud.attach_volume(server, volume, wait=False)

    @mock.patch.object(shade.OpenStackCloud, 'get_volume')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_attach_volume_wait(self, mock_nova, mock_get):
        server = dict(id='server001')
        volume = dict(id='volume001', status='available', attachments=[])
        attached_volume = dict(
            id=volume['id'], status='attached',
            attachments=[{'server_id': server['id'], 'device': 'device001'}]
        )
        mock_get.side_effect = iter([volume, attached_volume])

        # defaults to wait=True
        ret = self.cloud.attach_volume(server, volume)

        mock_nova.volumes.create_server_volume.assert_called_once_with(
            volume_id=volume['id'], server_id=server['id'], device=None
        )
        self.assertEqual(2, mock_get.call_count)
        self.assertEqual(attached_volume, ret)

    @mock.patch.object(shade.OpenStackCloud, 'get_volume')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_attach_volume_wait_error(self, mock_nova, mock_get):
        server = dict(id='server001')
        volume = dict(id='volume001', status='available', attachments=[])
        errored_volume = dict(id=volume['id'], status='error', attachments=[])
        mock_get.side_effect = iter([volume, errored_volume])

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Error in attaching volume %s" % errored_volume['id']
        ):
            self.cloud.attach_volume(server, volume)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_attach_volume_not_available(self, mock_nova):
        server = dict(id='server001')
        volume = dict(id='volume001', status='error', attachments=[])

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Volume %s is not available. Status is '%s'" % (
                volume['id'], volume['status'])
        ):
            self.cloud.attach_volume(server, volume)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_attach_volume_already_attached(self, mock_nova):
        device_id = 'device001'
        server = dict(id='server001')
        volume = dict(id='volume001', attachments=[
            {'server_id': 'server001', 'device': device_id}
        ])

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Volume %s already attached to server %s on device %s" % (
                volume['id'], server['id'], device_id)
        ):
            self.cloud.attach_volume(server, volume)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_detach_volume(self, mock_nova):
        server = dict(id='server001')
        volume = dict(id='volume001', attachments=[
            {'server_id': 'server001', 'device': 'device001'}
        ])
        self.cloud.detach_volume(server, volume, wait=False)
        mock_nova.volumes.delete_server_volume.assert_called_once_with(
            attachment_id=volume['id'], server_id=server['id']
        )

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_detach_volume_exception(self, mock_nova):
        server = dict(id='server001')
        volume = dict(id='volume001', attachments=[
            {'server_id': 'server001', 'device': 'device001'}
        ])
        mock_nova.volumes.delete_server_volume.side_effect = Exception()

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Error detaching volume %s from server %s" % (
                volume['id'], server['id'])
        ):
            self.cloud.detach_volume(server, volume, wait=False)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_detach_volume_not_attached(self, mock_nova):
        server = dict(id='server001')
        volume = dict(id='volume001', attachments=[
            {'server_id': 'server999', 'device': 'device001'}
        ])

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Volume %s is not attached to server %s" % (
                volume['id'], server['id'])
        ):
            self.cloud.detach_volume(server, volume, wait=False)

    @mock.patch.object(shade.OpenStackCloud, 'get_volume')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_detach_volume_wait(self, mock_nova, mock_get):
        server = dict(id='server001')
        volume = dict(id='volume001', status='attached', attachments=[
            {'server_id': 'server001', 'device': 'device001'}
        ])
        avail_volume = dict(id=volume['id'], status='available',
                            attachments=[])
        mock_get.side_effect = iter([volume, avail_volume])

        self.cloud.detach_volume(server, volume)

        mock_nova.volumes.delete_server_volume.assert_called_once_with(
            attachment_id=volume['id'], server_id=server['id']
        )
        self.assertEqual(2, mock_get.call_count)

    @mock.patch.object(shade.OpenStackCloud, 'get_volume')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_detach_volume_wait_error(self, mock_nova, mock_get):
        server = dict(id='server001')
        volume = dict(id='volume001', status='attached', attachments=[
            {'server_id': 'server001', 'device': 'device001'}
        ])
        errored_volume = dict(id=volume['id'], status='error', attachments=[])
        mock_get.side_effect = iter([volume, errored_volume])

        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Error in detaching volume %s" % errored_volume['id']
        ):
            self.cloud.detach_volume(server, volume)

    @mock.patch.object(shade.OpenStackCloud, 'get_volume')
    @mock.patch.object(shade.OpenStackCloud, 'cinder_client')
    def test_delete_volume_deletes(self, mock_cinder, mock_get):
        volume = dict(id='volume001', status='attached')
        mock_get.side_effect = iter([volume, None])

        self.assertTrue(self.cloud.delete_volume(volume['id']))

    @mock.patch.object(shade.OpenStackCloud, 'get_volume')
    @mock.patch.object(shade.OpenStackCloud, 'cinder_client')
    def test_delete_volume_gone_away(self, mock_cinder, mock_get):
        volume = dict(id='volume001', status='attached')
        mock_get.side_effect = iter([volume])
        mock_cinder.volumes.delete.side_effect = cinder_exc.NotFound('N/A')

        self.assertFalse(self.cloud.delete_volume(volume['id']))


shade-1.7.0/shade/tests/unit/test_domains.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
import testtools

import shade
from shade import meta
from shade.tests.unit import base
from shade.tests import fakes


domain_obj = fakes.FakeDomain(
    id='1',
    name='a-domain',
    description='A wonderful keystone domain',
    enabled=True,
)


class TestDomains(base.TestCase):

    def setUp(self):
        super(TestDomains, self).setUp()
        self.cloud = shade.operator_cloud(validate=False)

    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_list_domains(self, mock_keystone):
        self.cloud.list_domains()
        self.assertTrue(mock_keystone.domains.list.called)

    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_get_domain(self, mock_keystone):
        mock_keystone.domains.get.return_value = domain_obj
        domain = self.cloud.get_domain(domain_id='1234')
        self.assertFalse(mock_keystone.domains.list.called)
        self.assertTrue(mock_keystone.domains.get.called)
        self.assertEqual(domain['name'], 'a-domain')

    @mock.patch.object(shade._utils, 'normalize_domains')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_create_domain(self, mock_keystone, mock_normalize):
        mock_keystone.domains.create.return_value = domain_obj
        self.cloud.create_domain(domain_obj.name, domain_obj.description)
        mock_keystone.domains.create.assert_called_once_with(
            name=domain_obj.name, description=domain_obj.description,
            enabled=True)
        mock_normalize.assert_called_once_with([meta.obj_to_dict(domain_obj)])

    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_create_domain_exception(self, mock_keystone):
        mock_keystone.domains.create.side_effect = Exception()
        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Failed to create domain domain_name"
        ):
            self.cloud.create_domain('domain_name')

    @mock.patch.object(shade.OperatorCloud, 'update_domain')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_delete_domain(self, mock_keystone, mock_update):
        mock_update.return_value = dict(id='update_domain_id')
        self.cloud.delete_domain('domain_id')
        mock_update.assert_called_once_with('domain_id', enabled=False)
        mock_keystone.domains.delete.assert_called_once_with(
            domain='update_domain_id')

    @mock.patch.object(shade.OperatorCloud, 'update_domain')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_delete_domain_exception(self, mock_keystone, mock_update):
        mock_keystone.domains.delete.side_effect = Exception()
        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Failed to delete domain domain_id"
        ):
            self.cloud.delete_domain('domain_id')

    @mock.patch.object(shade._utils, 'normalize_domains')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_update_domain(self, mock_keystone, mock_normalize):
        mock_keystone.domains.update.return_value = domain_obj
        self.cloud.update_domain('domain_id', name='new name',
                                 description='new description',
                                 enabled=False)
        mock_keystone.domains.update.assert_called_once_with(
            domain='domain_id', name='new name',
            description='new description', enabled=False)
        mock_normalize.assert_called_once_with(
            [meta.obj_to_dict(domain_obj)])

    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_update_domain_exception(self, mock_keystone):
        mock_keystone.domains.update.side_effect = Exception()
        with testtools.ExpectedException(
            shade.OpenStackCloudException,
            "Error in updating domain domain_id"
        ):
            self.cloud.delete_domain('domain_id')


shade-1.7.0/shade/tests/unit/test_delete_server.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_delete_server
----------------------------------

Tests for the `delete_server` command.
"""

import mock
from novaclient import exceptions as nova_exc
import os_client_config

from shade import OpenStackCloud
from shade import exc as shade_exc
from shade.tests import fakes
from shade.tests.unit import base


class TestDeleteServer(base.TestCase):

    novaclient_exceptions = (nova_exc.BadRequest,
                             nova_exc.Unauthorized,
                             nova_exc.Forbidden,
                             nova_exc.MethodNotAllowed,
                             nova_exc.Conflict,
                             nova_exc.OverLimit,
                             nova_exc.RateLimit,
                             nova_exc.HTTPNotImplemented)

    def setUp(self):
        super(TestDeleteServer, self).setUp()
        config = os_client_config.OpenStackConfig()
        self.cloud = OpenStackCloud(
            cloud_config=config.get_one_cloud(validate=False))

    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server(self, nova_mock):
        """
        Test that novaclient server delete is called when wait=False
        """
        server = fakes.FakeServer('1234', 'daffy', 'ACTIVE')
        nova_mock.servers.list.return_value = [server]
        self.assertTrue(self.cloud.delete_server('daffy', wait=False))
        nova_mock.servers.delete.assert_called_with(server=server.id)

    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server_already_gone(self, nova_mock):
        """
        Test that we return immediately when server is already gone
        """
        nova_mock.servers.list.return_value = []
        self.assertFalse(self.cloud.delete_server('tweety', wait=False))
        self.assertFalse(nova_mock.servers.delete.called)

    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server_already_gone_wait(self, nova_mock):
        self.assertFalse(self.cloud.delete_server('speedy', wait=True))
        self.assertFalse(nova_mock.servers.delete.called)

    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server_wait_for_notfound(self, nova_mock):
        """
        Test that delete_server waits for NotFound from novaclient
        """
        server = fakes.FakeServer('9999', 'wily', 'ACTIVE')
        nova_mock.servers.list.return_value = [server]

        def _delete_wily(*args, **kwargs):
            self.assertIn('server', kwargs)
            self.assertEqual('9999', kwargs['server'])
            nova_mock.servers.list.return_value = []

        def _raise_notfound(*args, **kwargs):
            self.assertIn('server', kwargs)
            self.assertEqual('9999', kwargs['server'])
            raise nova_exc.NotFound(code='404')

        nova_mock.servers.get.side_effect = _raise_notfound
        nova_mock.servers.delete.side_effect = _delete_wily

        self.assertTrue(self.cloud.delete_server('wily', wait=True))
        nova_mock.servers.delete.assert_called_with(server=server.id)

    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server_fails(self, nova_mock):
        """
        Test that delete_server wraps novaclient exceptions
        """
        nova_mock.servers.list.return_value = [
            fakes.FakeServer('1212', 'speedy', 'ACTIVE')]
        for fail in self.novaclient_exceptions:

            def _raise_fail(server):
                raise fail(code=fail.http_status)

            nova_mock.servers.delete.side_effect = _raise_fail
            exc = self.assertRaises(shade_exc.OpenStackCloudException,
                                    self.cloud.delete_server, 'speedy',
                                    wait=False)
            # Note that message is deprecated from Exception, but not in
            # the novaclient exceptions.
            self.assertIn(fail.message, str(exc))

    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server_get_fails(self, nova_mock):
        """
        Test that delete_server wraps novaclient exceptions on wait fails
        """
        nova_mock.servers.list.return_value = [
            fakes.FakeServer('2000', 'yosemite', 'ACTIVE')]
        for fail in self.novaclient_exceptions:

            def _raise_fail(server):
                raise fail(code=fail.http_status)

            nova_mock.servers.get.side_effect = _raise_fail
            exc = self.assertRaises(shade_exc.OpenStackCloudException,
                                    self.cloud.delete_server, 'yosemite',
                                    wait=True)
            # Note that message is deprecated from Exception, but not in
            # the novaclient exceptions.
            self.assertIn(fail.message, str(exc))

    @mock.patch('shade.OpenStackCloud.get_volume')
    @mock.patch('shade.OpenStackCloud.nova_client')
    def test_delete_server_no_cinder(self, nova_mock, cinder_mock):
        """
        Test that novaclient server delete is called when wait=False
        """
        server = fakes.FakeServer('1234', 'porky', 'ACTIVE')
        nova_mock.servers.list.return_value = [server]
        with mock.patch('shade.OpenStackCloud.has_service',
                        return_value=False):
            self.assertTrue(self.cloud.delete_server('porky', wait=False))
            nova_mock.servers.delete.assert_called_with(server=server.id)
            self.assertFalse(cinder_mock.called)


shade-1.7.0/shade/tests/unit/test_shade.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import munch

import glanceclient
from heatclient import client as heat_client
from neutronclient.common import exceptions as n_exc
import testtools

from os_client_config import cloud_config
import shade
from shade import _utils
from shade import exc
from shade.tests import fakes
from shade.tests.unit import base


RANGE_DATA = [
    dict(id=1, key1=1, key2=5),
    dict(id=2, key1=1, key2=20),
    dict(id=3, key1=2, key2=10),
    dict(id=4, key1=2, key2=30),
    dict(id=5, key1=3, key2=40),
    dict(id=6, key1=3, key2=40),
]


class TestShade(base.TestCase):

    def test_openstack_cloud(self):
        self.assertIsInstance(self.cloud, shade.OpenStackCloud)

    @mock.patch.object(shade.OpenStackCloud, 'search_images')
    def test_get_images(self, mock_search):
        image1 = dict(id='123', name='mickey')
        mock_search.return_value = [image1]
        r = self.cloud.get_image('mickey')
        self.assertIsNotNone(r)
        self.assertDictEqual(image1, r)

    @mock.patch.object(shade.OpenStackCloud, 'search_images')
    def test_get_image_not_found(self, mock_search):
        mock_search.return_value = []
        r = self.cloud.get_image('doesNotExist')
        self.assertIsNone(r)

    @mock.patch.object(shade.OpenStackCloud, 'search_servers')
    def test_get_server(self, mock_search):
        server1 = dict(id='123', name='mickey')
        mock_search.return_value = [server1]
        r = self.cloud.get_server('mickey')
        self.assertIsNotNone(r)
        self.assertDictEqual(server1, r)

    @mock.patch.object(shade.OpenStackCloud, 'search_servers')
    def test_get_server_not_found(self, mock_search):
        mock_search.return_value = []
        r = self.cloud.get_server('doesNotExist')
        self.assertIsNone(r)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_servers_exception(self, mock_client):
        mock_client.servers.list.side_effect = Exception()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.list_servers)

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_legacy_client')
    def test_glance_args(self, get_legacy_client_mock, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = 'http://example.com/v2'
        get_session_mock.return_value = session_mock
        self.cloud.glance_client
        get_legacy_client_mock.assert_called_once_with(
            service_key='image',
            client_class=glanceclient.Client,
            interface_key=None,
            pass_version_arg=True,
        )

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_legacy_client')
    def test_heat_args(self, get_legacy_client_mock, get_session_mock):
        session_mock = mock.Mock()
        get_session_mock.return_value = session_mock
        self.cloud.heat_client
        get_legacy_client_mock.assert_called_once_with(
            service_key='orchestration',
            client_class=heat_client.Client,
            interface_key=None,
            pass_version_arg=True,
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_networks(self, mock_neutron):
        net1 = {'id': '1', 'name': 'net1'}
        net2 = {'id': '2', 'name': 'net2'}
        mock_neutron.list_networks.return_value = {
            'networks': [net1, net2]
        }
        nets = self.cloud.list_networks()
        mock_neutron.list_networks.assert_called_once_with()
        self.assertEqual([net1, net2], nets)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_networks_filtered(self, mock_neutron):
        self.cloud.list_networks(filters={'name': 'test'})
        mock_neutron.list_networks.assert_called_once_with(name='test')

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_networks_exception(self, mock_neutron):
        mock_neutron.list_networks.side_effect = Exception()
        with testtools.ExpectedException(
            exc.OpenStackCloudException,
            "Error fetching network list"
        ):
            self.cloud.list_networks()

    @mock.patch.object(shade.OpenStackCloud, 'search_subnets')
    def test_get_subnet(self, mock_search):
        subnet = dict(id='123', name='mickey')
        mock_search.return_value = [subnet]
        r = self.cloud.get_subnet('mickey')
        self.assertIsNotNone(r)
        self.assertDictEqual(subnet, r)

    @mock.patch.object(shade.OpenStackCloud, 'search_routers')
    def test_get_router(self, mock_search):
        router1 = dict(id='123', name='mickey')
        mock_search.return_value = [router1]
        r = self.cloud.get_router('mickey')
        self.assertIsNotNone(r)
        self.assertDictEqual(router1, r)

    @mock.patch.object(shade.OpenStackCloud, 'search_routers')
    def test_get_router_not_found(self, mock_search):
        mock_search.return_value = []
        r = self.cloud.get_router('goofy')
        self.assertIsNone(r)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_router(self, mock_client):
        self.cloud.create_router(name='goofy', admin_state_up=True)
        self.assertTrue(mock_client.create_router.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_router_with_enable_snat_True(self, mock_client):
        """Do not send enable_snat when same as neutron default."""
        self.cloud.create_router(name='goofy', admin_state_up=True,
                                 enable_snat=True)
        mock_client.create_router.assert_called_once_with(
            body=dict(
                router=dict(
                    name='goofy',
                    admin_state_up=True,
                )
            )
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_router_with_enable_snat_False(self, mock_client):
        """Send enable_snat when it is False."""
        self.cloud.create_router(name='goofy', admin_state_up=True,
                                 enable_snat=False)
        mock_client.create_router.assert_called_once_with(
            body=dict(
                router=dict(
                    name='goofy',
                    admin_state_up=True,
                    external_gateway_info=dict(
                        enable_snat=False
                    )
                )
            )
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_add_router_interface(self, mock_client):
        self.cloud.add_router_interface({'id': '123'}, subnet_id='abc')
        mock_client.add_interface_router.assert_called_once_with(
            router='123', body={'subnet_id': 'abc'}
        )

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_remove_router_interface(self, mock_client):
        self.cloud.remove_router_interface({'id': '123'}, subnet_id='abc')
        mock_client.remove_interface_router.assert_called_once_with(
            router='123', body={'subnet_id': 'abc'}
        )

    @mock.patch.object(shade.OpenStackCloud, 'get_router')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_update_router(self, mock_client, mock_get):
        router1 = dict(id='123', name='mickey')
        mock_get.return_value = router1
        self.cloud.update_router('123', name='goofy')
        self.assertTrue(mock_client.update_router.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_routers')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_router(self, mock_client, mock_search):
        router1 = dict(id='123', name='mickey')
        mock_search.return_value = [router1]
        self.cloud.delete_router('mickey')
        self.assertTrue(mock_client.delete_router.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_routers')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_router_not_found(self, mock_client, mock_search):
        mock_search.return_value = []
        r = self.cloud.delete_router('goofy')
        self.assertFalse(r)
        self.assertFalse(mock_client.delete_router.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_router_multiple_found(self, mock_client):
        router1 = dict(id='123', name='mickey')
        router2 = dict(id='456', name='mickey')
        mock_client.list_routers.return_value = dict(routers=[router1,
                                                              router2])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.delete_router,
                          'mickey')
        self.assertFalse(mock_client.delete_router.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_router_multiple_using_id(self, mock_client):
        router1 = dict(id='123', name='mickey')
        router2 = dict(id='456', name='mickey')
        mock_client.list_routers.return_value = dict(routers=[router1,
                                                              router2])
        self.cloud.delete_router('123')
        self.assertTrue(mock_client.delete_router.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_ports')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_router_interfaces_no_gw(self, mock_client, mock_search):
        """
        If a router does not have external_gateway_info, do not fail.
        """
        external_port = {'id': 'external_port_id',
                         'fixed_ips': [
                             ('external_subnet_id', 'ip_address'),
                         ]}
        port_list = [external_port]
        router = {
            'id': 'router_id',
        }
        mock_search.return_value = port_list
        ret = self.cloud.list_router_interfaces(router,
                                                interface_type='external')
        mock_search.assert_called_once_with(
            filters={'device_id': router['id']}
        )
        self.assertEqual([], ret)

    @mock.patch.object(shade.OpenStackCloud, 'search_ports')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_router_interfaces_all(self, mock_client, mock_search):
        internal_port = {'id': 'internal_port_id',
                         'fixed_ips': [
                             ('internal_subnet_id', 'ip_address'),
                         ]}
        external_port = {'id': 'external_port_id',
                         'fixed_ips': [
                             ('external_subnet_id', 'ip_address'),
                         ]}
        port_list = [internal_port, external_port]
        router = {
            'id': 'router_id',
            'external_gateway_info': {
                'external_fixed_ips': [('external_subnet_id', 'ip_address')]
            }
        }
        mock_search.return_value = port_list
        ret = self.cloud.list_router_interfaces(router)
        mock_search.assert_called_once_with(
            filters={'device_id': router['id']}
        )
        self.assertEqual(port_list, ret)

    @mock.patch.object(shade.OpenStackCloud, 'search_ports')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_router_interfaces_internal(self, mock_client, mock_search):
        internal_port = {'id': 'internal_port_id',
                         'fixed_ips': [
                             ('internal_subnet_id', 'ip_address'),
                         ]}
        external_port = {'id': 'external_port_id',
                         'fixed_ips': [
                             ('external_subnet_id', 'ip_address'),
                         ]}
        port_list = [internal_port, external_port]
        router = {
            'id': 'router_id',
            'external_gateway_info': {
                'external_fixed_ips': [('external_subnet_id', 'ip_address')]
            }
        }
        mock_search.return_value = port_list
        ret = self.cloud.list_router_interfaces(router,
                                                interface_type='internal')
        mock_search.assert_called_once_with(
            filters={'device_id': router['id']}
        )
        self.assertEqual([internal_port], ret)

    @mock.patch.object(shade.OpenStackCloud, 'search_ports')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_list_router_interfaces_external(self, mock_client, mock_search):
        internal_port = {'id': 'internal_port_id',
                         'fixed_ips': [
                             ('internal_subnet_id', 'ip_address'),
                         ]}
        external_port = {'id': 'external_port_id',
                         'fixed_ips': [
                             ('external_subnet_id', 'ip_address'),
                         ]}
        port_list = [internal_port, external_port]
        router = {
            'id': 'router_id',
            'external_gateway_info': {
                'external_fixed_ips': [('external_subnet_id', 'ip_address')]
            }
        }
        mock_search.return_value = port_list
        ret = self.cloud.list_router_interfaces(router,
                                                interface_type='external')
        mock_search.assert_called_once_with(
            filters={'device_id': router['id']}
        )
        self.assertEqual([external_port], ret)

    @mock.patch.object(shade.OpenStackCloud, 'search_networks')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_subnet(self, mock_client, mock_search):
        net1 = dict(id='123', name='donald')
        mock_search.return_value = [net1]
        pool = [{'start': '192.168.199.2', 'end': '192.168.199.254'}]
        dns = ['8.8.8.8']
        routes = [{"destination": "0.0.0.0/0", "nexthop": "123.456.78.9"}]
        self.cloud.create_subnet('donald', '192.168.199.0/24',
                                 allocation_pools=pool,
                                 dns_nameservers=dns,
                                 host_routes=routes)
        self.assertTrue(mock_client.create_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_networks')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_subnet_without_gateway_ip(self, mock_client, mock_search):
        net1 = dict(id='123', name='donald')
        mock_search.return_value = [net1]
        pool = [{'start': '192.168.200.2', 'end': '192.168.200.254'}]
        dns = ['8.8.8.8']
        self.cloud.create_subnet('kooky', '192.168.200.0/24',
                                 allocation_pools=pool,
                                 dns_nameservers=dns,
                                 disable_gateway_ip=True)
        self.assertTrue(mock_client.create_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_networks')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_subnet_with_gateway_ip(self, mock_client, mock_search):
        net1 = dict(id='123', name='donald')
        mock_search.return_value = [net1]
        pool = [{'start': '192.168.200.8', 'end': '192.168.200.254'}]
        dns = ['8.8.8.8']
        gateway = '192.168.200.2'
        self.cloud.create_subnet('kooky', '192.168.200.0/24',
                                 allocation_pools=pool,
                                 dns_nameservers=dns,
                                 gateway_ip=gateway)
        self.assertTrue(mock_client.create_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_networks')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_subnet_conflict_gw_ops(self, mock_client, mock_search):
        net1 = dict(id='123', name='donald')
        mock_search.return_value = [net1]
        gateway = '192.168.200.3'
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_subnet,
                          'kooky', '192.168.200.0/24',
                          gateway_ip=gateway, disable_gateway_ip=True)

    @mock.patch.object(shade.OpenStackCloud, 'list_networks')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_subnet_bad_network(self, mock_client, mock_list):
        net1 = dict(id='123', name='donald')
        mock_list.return_value = [net1]
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_subnet,
                          'duck', '192.168.199.0/24')
        self.assertFalse(mock_client.create_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_networks')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_create_subnet_non_unique_network(self, mock_client, mock_search):
        net1 = dict(id='123', name='donald')
        net2 = dict(id='456', name='donald')
        mock_search.return_value = [net1, net2]
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_subnet,
                          'donald', '192.168.199.0/24')
        self.assertFalse(mock_client.create_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_subnets')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_subnet(self, mock_client, mock_search):
        subnet1 = dict(id='123', name='mickey')
        mock_search.return_value = [subnet1]
        self.cloud.delete_subnet('mickey')
        self.assertTrue(mock_client.delete_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'search_subnets')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_subnet_not_found(self, mock_client, mock_search):
        mock_search.return_value = []
        r = self.cloud.delete_subnet('goofy')
        self.assertFalse(r)
        self.assertFalse(mock_client.delete_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_subnet_multiple_found(self, mock_client):
        subnet1 = dict(id='123', name='mickey')
        subnet2 = dict(id='456', name='mickey')
        mock_client.list_subnets.return_value = dict(
            subnets=[subnet1, subnet2])
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.delete_subnet, 'mickey')
        self.assertFalse(mock_client.delete_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_delete_subnet_multiple_using_id(self, mock_client):
        subnet1 = dict(id='123', name='mickey')
        subnet2 = dict(id='456', name='mickey')
        mock_client.list_subnets.return_value = dict(
            subnets=[subnet1, subnet2])
        self.cloud.delete_subnet('123')
        self.assertTrue(mock_client.delete_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'get_subnet')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_update_subnet(self, mock_client, mock_get):
        subnet1 = dict(id='123', name='mickey')
        mock_get.return_value = subnet1
        self.cloud.update_subnet('123', subnet_name='goofy')
        self.assertTrue(mock_client.update_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'get_subnet')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_update_subnet_gateway_ip(self, mock_client, mock_get):
        subnet1 = dict(id='456', name='kooky')
        mock_get.return_value = subnet1
        gateway = '192.168.200.3'
        self.cloud.update_subnet('456', gateway_ip=gateway)
        self.assertTrue(mock_client.update_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'get_subnet')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_update_subnet_disable_gateway_ip(self, mock_client, mock_get):
        subnet1 = dict(id='456', name='kooky')
        mock_get.return_value = subnet1
        self.cloud.update_subnet('456', disable_gateway_ip=True)
        self.assertTrue(mock_client.update_subnet.called)

    @mock.patch.object(shade.OpenStackCloud, 'get_subnet')
    @mock.patch.object(shade.OpenStackCloud, 'neutron_client')
    def test_update_subnet_conflict_gw_ops(self, mock_client, mock_get):
        subnet1 = dict(id='456', name='kooky')
        mock_get.return_value = subnet1
        gateway = '192.168.200.3'
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.update_subnet,
                          '456', gateway_ip=gateway, disable_gateway_ip=True)

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_get_flavor_by_ram(self, mock_nova_client, mock_compute):
        vanilla = fakes.FakeFlavor('1', 'vanilla ice cream', 100)
        chocolate = fakes.FakeFlavor('2', 'chocolate ice cream', 200)
        mock_nova_client.flavors.list.return_value = [vanilla, chocolate]
        mock_response = mock.Mock()
        mock_response.json.return_value = dict(extra_specs=[])
        mock_compute.get.return_value = mock_response
        flavor = self.cloud.get_flavor_by_ram(ram=150)
        self.assertEqual(chocolate.id, flavor['id'])

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_get_flavor_by_ram_and_include(
            self, mock_nova_client, mock_compute):
        vanilla = fakes.FakeFlavor('1', 'vanilla ice cream', 100)
        chocolate = fakes.FakeFlavor('2', 'chocolate ice cream', 200)
        strawberry = fakes.FakeFlavor('3', 'strawberry ice cream', 250)
        mock_response = mock.Mock()
        mock_response.json.return_value = dict(extra_specs=[])
        mock_compute.get.return_value = mock_response
        mock_nova_client.flavors.list.return_value = [
            vanilla, chocolate, strawberry]
        flavor = self.cloud.get_flavor_by_ram(ram=150, include='strawberry')
        self.assertEqual(strawberry.id, flavor['id'])

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_get_flavor_by_ram_not_found(self, mock_nova_client):
        mock_nova_client.flavors.list.return_value = []
        self.assertRaises(shade.OpenStackCloudException,
                          self.cloud.get_flavor_by_ram, ram=100)

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_get_flavor_string_and_int(
            self, mock_nova_client, mock_compute):
        vanilla = fakes.FakeFlavor('1', 'vanilla ice cream', 100)
        mock_nova_client.flavors.list.return_value = [vanilla]
        mock_response = mock.Mock()
        mock_response.json.return_value = dict(extra_specs=[])
        mock_compute.get.return_value = mock_response
        flavor1 = self.cloud.get_flavor('1')
        self.assertEqual(vanilla.id, flavor1['id'])
        flavor2 = self.cloud.get_flavor(1)
        self.assertEqual(vanilla.id, flavor2['id'])

    def test__neutron_exceptions_resource_not_found(self):
        with mock.patch.object(
                shade._tasks, 'NetworkList',
                side_effect=n_exc.NotFound()):
            self.assertRaises(exc.OpenStackCloudResourceNotFound,
                              self.cloud.list_networks)

    def test__neutron_exceptions_url_not_found(self):
        with mock.patch.object(
                shade._tasks, 'NetworkList',
                side_effect=n_exc.NeutronClientException(status_code=404)):
            self.assertRaises(exc.OpenStackCloudURINotFound,
                              self.cloud.list_networks)

    def test__neutron_exceptions_neutron_client_generic(self):
        with mock.patch.object(
                shade._tasks, 'NetworkList',
                side_effect=n_exc.NeutronClientException()):
            self.assertRaises(exc.OpenStackCloudException,
                              self.cloud.list_networks)

    def test__neutron_exceptions_generic(self):
        with mock.patch.object(
                shade._tasks, 'NetworkList',
                side_effect=Exception()):
            self.assertRaises(exc.OpenStackCloudException,
                              self.cloud.list_networks)

    @mock.patch.object(shade._tasks.ServerList, 'main')
    @mock.patch('shade.meta.add_server_interfaces')
    def test_list_servers(self, mock_add_srv_int, mock_serverlist):
        '''This test verifies that calling list_servers results in a call
        to the ServerList task.'''
        server_obj = munch.Munch({'name': 'testserver',
                                  'id': '1',
                                  'flavor': {},
                                  'addresses': {},
                                  'accessIPv4': '',
                                  'accessIPv6': '',
                                  'image': ''})
        mock_serverlist.return_value = [server_obj]
        mock_add_srv_int.side_effect = [server_obj]

        r = self.cloud.list_servers()

        self.assertEqual(1, len(r))
        self.assertEqual(1, mock_add_srv_int.call_count)
        self.assertEqual('testserver', r[0]['name'])

    @mock.patch.object(shade._tasks.ServerList, 'main')
    @mock.patch('shade.meta.get_hostvars_from_server')
    def test_list_servers_detailed(self,
                                   mock_get_hostvars_from_server,
                                   mock_serverlist):
        '''This test verifies that when list_servers is called with
        `detailed=True` it calls `get_hostvars_from_server` for each
        server in the list.'''
        mock_serverlist.return_value = [
            fakes.FakeServer('server1', '', 'ACTIVE'),
            fakes.FakeServer('server2', '', 'ACTIVE'),
        ]
        mock_get_hostvars_from_server.side_effect = [
            {'name': 'server1', 'id': '1'},
            {'name': 'server2', 'id': '2'},
        ]

        r = self.cloud.list_servers(detailed=True)

        self.assertEqual(2, len(r))
        self.assertEqual(len(r), mock_get_hostvars_from_server.call_count)
        self.assertEqual('server1', r[0]['name'])
        self.assertEqual('server2', r[1]['name'])

    def test_iterate_timeout_bad_wait(self):
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "Wait value must be an int or float value."):
            for count in _utils._iterate_timeout(
                    1, "test_iterate_timeout_bad_wait", wait="timeishard"):
                pass

    @mock.patch('time.sleep')
    def test_iterate_timeout_str_wait(self, mock_sleep):
        iter = _utils._iterate_timeout(
            10, "test_iterate_timeout_str_wait", wait="1.6")
        next(iter)
        next(iter)
        mock_sleep.assert_called_with(1.6)

    @mock.patch('time.sleep')
    def test_iterate_timeout_int_wait(self, mock_sleep):
        iter = _utils._iterate_timeout(
            10, "test_iterate_timeout_int_wait", wait=1)
        next(iter)
        next(iter)
        mock_sleep.assert_called_with(1.0)

    @mock.patch('time.sleep')
    def test_iterate_timeout_timeout(self, mock_sleep):
        message = "timeout test"
        with testtools.ExpectedException(
                exc.OpenStackCloudTimeout,
                message):
            for count in _utils._iterate_timeout(0.1, message, wait=1):
                pass
        mock_sleep.assert_called_with(1.0)
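    # The `_iterate_timeout` tests above exercise a small generator helper.
    # The following is a minimal, self-contained sketch of how such a helper
    # can work; the names `iterate_timeout` and `CloudTimeout` are
    # illustrative only (shade's real `shade._utils._iterate_timeout` raises
    # its own OpenStackCloudException / OpenStackCloudTimeout types).

```python
import time


class CloudTimeout(Exception):
    """Illustrative stand-in for shade's timeout exception."""


def iterate_timeout(timeout, message, wait=2):
    # Coerce wait so callers may pass "1.6" as well as 1.6, mirroring the
    # str/int/float cases covered by the tests above.
    try:
        wait = float(wait)
    except ValueError:
        raise CloudTimeout("Wait value must be an int or float value.")

    count = 0
    start = time.time()
    while time.time() - start <= timeout:
        count += 1
        yield count           # hand control back to the caller's loop body
        time.sleep(wait)      # pause before the next attempt
    raise CloudTimeout(message)
```

    # Because the body is a generator, the bad-wait error only surfaces on
    # the first next() call, which is why test_iterate_timeout_bad_wait
    # drives the helper with a for loop instead of calling it directly.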
    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    def test__nova_extensions(self, mock_compute):
        body = {
            'extensions': [
                {
                    "updated": "2014-12-03T00:00:00Z",
                    "name": "Multinic",
                    "links": [],
                    "namespace": "http://openstack.org/compute/ext/fake_xml",
                    "alias": "NMN",
                    "description": "Multiple network support."
                },
                {
                    "updated": "2014-12-03T00:00:00Z",
                    "name": "DiskConfig",
                    "links": [],
                    "namespace": "http://openstack.org/compute/ext/fake_xml",
                    "alias": "OS-DCF",
                    "description": "Disk Management Extension."
                },
            ]
        }
        mock_response = mock.Mock()
        mock_response.json.return_value = body
        mock_compute.get.return_value = mock_response
        extensions = self.cloud._nova_extensions()
        mock_compute.get.assert_called_once_with('/extensions')
        self.assertEqual(set(['NMN', 'OS-DCF']), extensions)

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    def test__nova_extensions_fails(self, mock_compute):
        mock_compute.get.side_effect = Exception()
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "Error fetching extension list for nova"
        ):
            self.cloud._nova_extensions()

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    def test__has_nova_extension(self, mock_compute):
        body = {
            'extensions': [
                {
                    "updated": "2014-12-03T00:00:00Z",
                    "name": "Multinic",
                    "links": [],
                    "namespace": "http://openstack.org/compute/ext/fake_xml",
                    "alias": "NMN",
                    "description": "Multiple network support."
                },
                {
                    "updated": "2014-12-03T00:00:00Z",
                    "name": "DiskConfig",
                    "links": [],
                    "namespace": "http://openstack.org/compute/ext/fake_xml",
                    "alias": "OS-DCF",
                    "description": "Disk Management Extension."
                },
            ]
        }
        mock_response = mock.Mock()
        mock_response.json.return_value = body
        mock_compute.get.return_value = mock_response
        self.assertTrue(self.cloud._has_nova_extension('NMN'))
        self.assertFalse(self.cloud._has_nova_extension('invalid'))

    def test_range_search(self):
        filters = {"key1": "min", "key2": "20"}
        retval = self.cloud.range_search(RANGE_DATA, filters)
        self.assertIsInstance(retval, list)
        self.assertEqual(1, len(retval))
        self.assertEqual([RANGE_DATA[1]], retval)

    def test_range_search_2(self):
        filters = {"key1": "<=2", "key2": ">10"}
        retval = self.cloud.range_search(RANGE_DATA, filters)
        self.assertIsInstance(retval, list)
        self.assertEqual(2, len(retval))
        self.assertEqual([RANGE_DATA[1], RANGE_DATA[3]], retval)

    def test_range_search_3(self):
        filters = {"key1": "2", "key2": "min"}
        retval = self.cloud.range_search(RANGE_DATA, filters)
        self.assertIsInstance(retval, list)
        self.assertEqual(0, len(retval))

    def test_range_search_4(self):
        filters = {"key1": "max", "key2": "min"}
        retval = self.cloud.range_search(RANGE_DATA, filters)
        self.assertIsInstance(retval, list)
        self.assertEqual(0, len(retval))

    def test_range_search_5(self):
        filters = {"key1": "min", "key2": "min"}
        retval = self.cloud.range_search(RANGE_DATA, filters)
        self.assertIsInstance(retval, list)
        self.assertEqual(1, len(retval))
        self.assertEqual([RANGE_DATA[0]], retval)


# File: shade-1.7.0/shade/tests/unit/test_stack.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from heatclient.common import template_utils

import shade
from shade import meta
from shade.tests import fakes
from shade.tests.unit import base


class TestStack(base.TestCase):

    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_list_stacks(self, mock_heat):
        fake_stacks = [
            fakes.FakeStack('001', 'stack1'),
            fakes.FakeStack('002', 'stack2'),
        ]
        mock_heat.stacks.list.return_value = fake_stacks
        stacks = self.cloud.list_stacks()
        mock_heat.stacks.list.assert_called_once_with()
        self.assertEqual(meta.obj_list_to_dict(fake_stacks), stacks)

    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_list_stacks_exception(self, mock_heat):
        mock_heat.stacks.list.side_effect = Exception()
        with testtools.ExpectedException(
                shade.OpenStackCloudException,
                "Error fetching stack list"
        ):
            self.cloud.list_stacks()

    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_search_stacks(self, mock_heat):
        fake_stacks = [
            fakes.FakeStack('001', 'stack1'),
            fakes.FakeStack('002', 'stack2'),
        ]
        mock_heat.stacks.list.return_value = fake_stacks
        stacks = self.cloud.search_stacks()
        mock_heat.stacks.list.assert_called_once_with()
        self.assertEqual(meta.obj_list_to_dict(fake_stacks), stacks)

    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_search_stacks_filters(self, mock_heat):
        fake_stacks = [
            fakes.FakeStack('001', 'stack1', status='GOOD'),
            fakes.FakeStack('002', 'stack2', status='BAD'),
        ]
        mock_heat.stacks.list.return_value = fake_stacks
        filters = {'stack_status': 'GOOD'}
        stacks = self.cloud.search_stacks(filters=filters)
        mock_heat.stacks.list.assert_called_once_with()
        self.assertEqual(meta.obj_list_to_dict(fake_stacks[:1]), stacks)

    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_search_stacks_exception(self, mock_heat):
        mock_heat.stacks.list.side_effect = Exception()
        with testtools.ExpectedException(
                shade.OpenStackCloudException,
                "Error fetching stack list"
        ):
            self.cloud.search_stacks()

    @mock.patch.object(shade.OpenStackCloud, 'get_stack')
    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_delete_stack(self, mock_heat, mock_get):
        stack = {'id': 'stack_id', 'name': 'stack_name'}
        mock_get.return_value = stack
        self.assertTrue(self.cloud.delete_stack('stack_name'))
        mock_get.assert_called_once_with('stack_name')
        mock_heat.stacks.delete.assert_called_once_with(stack['id'])

    @mock.patch.object(shade.OpenStackCloud, 'get_stack')
    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_delete_stack_not_found(self, mock_heat, mock_get):
        mock_get.return_value = None
        self.assertFalse(self.cloud.delete_stack('stack_name'))
        mock_get.assert_called_once_with('stack_name')
        self.assertFalse(mock_heat.stacks.delete.called)

    @mock.patch.object(shade.OpenStackCloud, 'get_stack')
    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_delete_stack_exception(self, mock_heat, mock_get):
        stack = {'id': 'stack_id', 'name': 'stack_name'}
        mock_get.return_value = stack
        mock_heat.stacks.delete.side_effect = Exception()
        with testtools.ExpectedException(
                shade.OpenStackCloudException,
                "Failed to delete stack %s" % stack['id']
        ):
            self.cloud.delete_stack('stack_name')

    @mock.patch.object(template_utils, 'get_template_contents')
    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_create_stack(self, mock_heat, mock_template):
        mock_template.return_value = ({}, {})
        self.cloud.create_stack('stack_name')
        self.assertTrue(mock_template.called)
        mock_heat.stacks.create.assert_called_once_with(
            stack_name='stack_name',
            disable_rollback=False,
            environment={},
            parameters={},
            template={},
            files={}
        )

    @mock.patch.object(template_utils, 'get_template_contents')
    @mock.patch.object(shade.OpenStackCloud, 'get_stack')
    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_create_stack_wait(self, mock_heat, mock_get, mock_template):
        stack = {'id': 'stack_id', 'name': 'stack_name'}
        mock_template.return_value = ({}, {})
        mock_get.side_effect = iter([None, stack])
        ret = self.cloud.create_stack('stack_name', wait=True)
        self.assertTrue(mock_template.called)
        mock_heat.stacks.create.assert_called_once_with(
            stack_name='stack_name',
            disable_rollback=False,
            environment={},
            parameters={},
            template={},
            files={}
        )
        self.assertEqual(2, mock_get.call_count)
        self.assertEqual(stack, ret)

    @mock.patch.object(shade.OpenStackCloud, 'heat_client')
    def test_get_stack(self, mock_heat):
        stack = fakes.FakeStack('azerty', 'stack')
        mock_heat.stacks.list.return_value = [stack]
        res = self.cloud.get_stack('stack')
        self.assertIsNotNone(res)
        self.assertEqual(stack.stack_name, res['stack_name'])
        self.assertEqual(stack.stack_name, res['name'])
        self.assertEqual(stack.stack_status, res['stack_status'])


# File: shade-1.7.0/shade/tests/unit/test_project.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import munch
import os_client_config as occ
import testtools

import shade
from shade.tests.unit import base


class TestProject(base.TestCase):

    def setUp(self):
        super(TestProject, self).setUp()
        self.cloud = shade.operator_cloud(validate=False)

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_create_project_v2(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '2'
        name = 'project_name'
        description = 'Project description'
        self.cloud.create_project(name=name, description=description)
        mock_keystone.tenants.create.assert_called_once_with(
            project_name=name, description=description, enabled=True,
            tenant_name=name
        )

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_create_project_v3(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        name = 'project_name'
        description = 'Project description'
        domain_id = '123'
        self.cloud.create_project(name=name, description=description,
                                  domain_id=domain_id)
        mock_keystone.projects.create.assert_called_once_with(
            project_name=name, description=description, enabled=True,
            name=name, domain=domain_id
        )

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_create_project_v3_no_domain(self, mock_keystone,
                                         mock_api_version):
        mock_api_version.return_value = '3'
        with testtools.ExpectedException(
                shade.OpenStackCloudException,
                "User creation requires an explicit domain_id argument."
        ):
            self.cloud.create_project(name='foo', description='bar')

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'update_project')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_delete_project_v2(self, mock_keystone, mock_update,
                               mock_api_version):
        mock_api_version.return_value = '2'
        mock_update.return_value = dict(id='123')
        self.cloud.delete_project('123')
        mock_update.assert_called_once_with('123', enabled=False)
        mock_keystone.tenants.delete.assert_called_once_with(tenant='123')

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'update_project')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_delete_project_v3(self, mock_keystone, mock_update,
                               mock_api_version):
        mock_api_version.return_value = '3'
        mock_update.return_value = dict(id='123')
        self.cloud.delete_project('123')
        mock_update.assert_called_once_with('123', enabled=False)
        mock_keystone.projects.delete.assert_called_once_with(project='123')

    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    def test_update_project_not_found(self, mock_get_project):
        mock_get_project.return_value = None
        with testtools.ExpectedException(
                shade.OpenStackCloudException,
                "Project ABC not found."
        ):
            self.cloud.update_project('ABC')

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_update_project_v2(self, mock_keystone, mock_get_project,
                               mock_api_version):
        mock_api_version.return_value = '2'
        mock_get_project.return_value = munch.Munch(dict(id='123'))
        self.cloud.update_project('123', description='new', enabled=False)
        mock_keystone.tenants.update.assert_called_once_with(
            description='new', enabled=False, tenant_id='123')

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    @mock.patch.object(shade.OpenStackCloud, 'keystone_client')
    def test_update_project_v3(self, mock_keystone, mock_get_project,
                               mock_api_version):
        mock_api_version.return_value = '3'
        mock_get_project.return_value = munch.Munch(dict(id='123'))
        self.cloud.update_project('123', description='new', enabled=False)
        mock_keystone.projects.update.assert_called_once_with(
            description='new', enabled=False, project='123')


# File: shade-1.7.0/shade/tests/unit/test_domain_params.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os_client_config as occ
import munch

import shade
from shade import exc
from shade.tests.unit import base


class TestDomainParams(base.TestCase):

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    def test_identity_params_v3(self, mock_get_project, mock_api_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_api_version.return_value = '3'
        ret = self.cloud._get_identity_params(domain_id='5678', project='bar')
        self.assertIn('default_project', ret)
        self.assertEqual(ret['default_project'], 1234)
        self.assertIn('domain', ret)
        self.assertEqual(ret['domain'], '5678')

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    def test_identity_params_v3_no_domain(
            self, mock_get_project, mock_api_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_api_version.return_value = '3'
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud._get_identity_params,
            domain_id=None, project='bar')

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    def test_identity_params_v2(self, mock_get_project, mock_api_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_api_version.return_value = '2'
        ret = self.cloud._get_identity_params(domain_id='foo', project='bar')
        self.assertIn('tenant_id', ret)
        self.assertEqual(ret['tenant_id'], 1234)
        self.assertNotIn('domain', ret)

    @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @mock.patch.object(shade.OpenStackCloud, 'get_project')
    def test_identity_params_v2_no_domain(self, mock_get_project,
                                          mock_api_version):
        mock_get_project.return_value = munch.Munch(id=1234)
        mock_api_version.return_value = '2'
        ret = self.cloud._get_identity_params(domain_id=None, project='bar')
        api_calls = [mock.call('identity'), mock.call('identity')]
        mock_api_version.assert_has_calls(api_calls)
        self.assertIn('tenant_id', ret)
        self.assertEqual(ret['tenant_id'], 1234)
        self.assertNotIn('domain', ret)


# File: shade-1.7.0/shade/tests/unit/test_operator_noauth.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

import ironicclient
from os_client_config import cloud_config

import shade
from shade.tests import base


class TestShadeOperatorNoAuth(base.TestCase):

    def setUp(self):
        """Setup Noauth OperatorCloud tests.

        Setup the test to utilize no authentication and an endpoint
        URL in the auth data. This permits testing of the basic
        mechanism that enables Ironic noauth mode to be utilized with
        Shade.
        """
        super(TestShadeOperatorNoAuth, self).setUp()
        self.cloud_noauth = shade.operator_cloud(
            auth_type='admin_token',
            auth=dict(endpoint="http://localhost:6385"),
            validate=False,
        )

    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    @mock.patch.object(ironicclient.client, 'Client')
    def test_ironic_noauth_selection_using_a_task(
            self, mock_client, get_session_mock):
        """Test noauth selection for Ironic in OperatorCloud.

        Utilize a task to trigger the client connection attempt and
        evaluate if get_session_endpoint was called while the client was
        still called. We want session_endpoint to be called because we're
        storing the endpoint in a noauth token Session object now.
        """
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = None
        session_mock.get_token.return_value = 'yankee'
        get_session_mock.return_value = session_mock
        self.cloud_noauth.patch_machine('name', {})
        self.assertTrue(get_session_mock.called)
        self.assertTrue(mock_client.called)


# File: shade-1.7.0/shade/tests/unit/test_floating_ip_common.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_floating_ip_common
----------------------------------

Tests floating IP resource methods for Neutron and Nova-network.
"""

from mock import patch

import os_client_config

from shade import meta
from shade import OpenStackCloud
from shade.tests.fakes import FakeServer
from shade.tests.unit import base


class TestFloatingIP(base.TestCase):

    def setUp(self):
        super(TestFloatingIP, self).setUp()
        config = os_client_config.OpenStackConfig()
        self.client = OpenStackCloud(
            cloud_config=config.get_one_cloud(validate=False))

    @patch.object(OpenStackCloud, 'get_floating_ip')
    @patch.object(OpenStackCloud, '_attach_ip_to_server')
    @patch.object(OpenStackCloud, 'available_floating_ip')
    def test_add_auto_ip(
            self, mock_available_floating_ip, mock_attach_ip_to_server,
            mock_get_floating_ip):
        server = FakeServer(
            id='server-id', name='test-server', status="ACTIVE", addresses={}
        )
        server_dict = meta.obj_to_dict(server)
        floating_ip_dict = {
            "id": "this-is-a-floating-ip-id",
            "fixed_ip_address": None,
            "floating_ip_address": "203.0.113.29",
            "network": "this-is-a-net-or-pool-id",
            "attached": False,
            "status": "ACTIVE"
        }

        mock_available_floating_ip.return_value = floating_ip_dict

        self.client.add_auto_ip(server=server_dict)

        mock_attach_ip_to_server.assert_called_with(
            timeout=60, wait=False, server=server_dict,
            floating_ip=floating_ip_dict, skip_attach=False)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, '_add_ip_from_pool')
    def test_add_ips_to_server_pool(
            self, mock_add_ip_from_pool, mock_nova_client):
        server = FakeServer(
            id='romeo', name='test-server', status="ACTIVE", addresses={}
        )
        server_dict = meta.obj_to_dict(server)
        pool = 'nova'

        mock_nova_client.servers.get.return_value = server

        self.client.add_ips_to_server(server_dict, ip_pool=pool)

        mock_add_ip_from_pool.assert_called_with(
            server_dict, pool, reuse=True, wait=False, timeout=60,
            fixed_address=None)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'add_ip_list')
    def test_add_ips_to_server_ip_list(
            self, mock_add_ip_list, mock_nova_client):
        server = FakeServer(
            id='server-id', name='test-server', status="ACTIVE", addresses={}
        )
        server_dict = meta.obj_to_dict(server)
        ips = ['203.0.113.29', '172.24.4.229']

        mock_nova_client.servers.get.return_value = server

        self.client.add_ips_to_server(server_dict, ips=ips)

        mock_add_ip_list.assert_called_with(
            server_dict, ips, wait=False, timeout=60, fixed_address=None)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, '_add_auto_ip')
    def test_add_ips_to_server_auto_ip(
            self, mock_add_auto_ip, mock_nova_client):
        server = FakeServer(
            id='server-id', name='test-server', status="ACTIVE", addresses={}
        )
        server_dict = meta.obj_to_dict(server)

        mock_nova_client.servers.get.return_value = server

        self.client.add_ips_to_server(server_dict)

        mock_add_auto_ip.assert_called_with(
            server_dict, wait=False, timeout=60, reuse=True)


# File: shade-1.7.0/shade/tests/unit/test_flavors.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock

import shade
from shade.tests import fakes
from shade.tests.unit import base


class TestFlavors(base.TestCase):

    def setUp(self):
        super(TestFlavors, self).setUp()
        self.op_cloud = shade.operator_cloud(validate=False)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_flavor(self, mock_nova):
        self.op_cloud.create_flavor(
            'vanilla', 12345, 4, 100
        )
        mock_nova.flavors.create.assert_called_once_with(
            name='vanilla', ram=12345, vcpus=4, disk=100,
            flavorid='auto', ephemeral=0, swap=0, rxtx_factor=1.0,
            is_public=True
        )

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_flavor(self, mock_nova, mock_compute):
        mock_response = mock.Mock()
        mock_response.json.return_value = dict(extra_specs=[])
        mock_compute.get.return_value = mock_response
        mock_nova.flavors.list.return_value = [
            fakes.FakeFlavor('123', 'lemon', 100)
        ]
        self.assertTrue(self.op_cloud.delete_flavor('lemon'))
        mock_nova.flavors.delete.assert_called_once_with(flavor='123')

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_flavor_not_found(self, mock_nova):
        mock_nova.flavors.list.return_value = []
        self.assertFalse(self.op_cloud.delete_flavor('invalid'))
        self.assertFalse(mock_nova.flavors.delete.called)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_flavor_exception(self, mock_nova):
        mock_nova.flavors.list.return_value = [
            fakes.FakeFlavor('123', 'lemon', 100)
        ]
        mock_nova.flavors.delete.side_effect = Exception()
        self.assertRaises(shade.OpenStackCloudException,
                          self.op_cloud.delete_flavor, '')

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_flavors(self, mock_nova):
        self.op_cloud.list_flavors()
        mock_nova.flavors.list.assert_called_once_with(is_public=None)

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    def test_set_flavor_specs(self, mock_compute):
        extra_specs = dict(key1='value1')
        self.op_cloud.set_flavor_specs(1, extra_specs)
        mock_compute.post.assert_called_once_with(
            '/flavors/{id}/os-extra_specs'.format(id=1),
            json=dict(extra_specs=extra_specs))

    @mock.patch.object(shade.OpenStackCloud, '_compute_client')
    def test_unset_flavor_specs(self, mock_compute):
        keys = ['key1', 'key2']
        self.op_cloud.unset_flavor_specs(1, keys)
        api_spec = '/flavors/{id}/os-extra_specs/{key}'
        self.assertEqual(
            mock_compute.delete.call_args_list[0],
            mock.call(api_spec.format(id=1, key='key1')))
        self.assertEqual(
            mock_compute.delete.call_args_list[1],
            mock.call(api_spec.format(id=1, key='key2')))

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_add_flavor_access(self, mock_nova):
        self.op_cloud.add_flavor_access('flavor_id', 'tenant_id')
        mock_nova.flavor_access.add_tenant_access.assert_called_once_with(
            flavor='flavor_id', tenant='tenant_id'
        )

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    def test_remove_flavor_access(self, mock_nova):
        self.op_cloud.remove_flavor_access('flavor_id', 'tenant_id')
        mock_nova.flavor_access.remove_tenant_access.assert_called_once_with(
            flavor='flavor_id', tenant='tenant_id'
        )

shade-1.7.0/shade/tests/unit/test_image.py

# Copyright 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tempfile
import uuid

import mock
import six

import shade
from shade import exc
from shade.tests.unit import base


class TestImage(base.TestCase):

    def setUp(self):
        super(TestImage, self).setUp()
        self.image_id = str(uuid.uuid4())
        self.fake_search_return = [{
            u'image_state': u'available',
            u'container_format': u'bare',
            u'min_ram': 0,
            u'ramdisk_id': None,
            u'updated_at': u'2016-02-10T05:05:02Z',
            u'file': '/v2/images/' + self.image_id + '/file',
            u'size': 3402170368,
            u'image_type': u'snapshot',
            u'disk_format': u'qcow2',
            u'id': self.image_id,
            u'schema': u'/v2/schemas/image',
            u'status': u'active',
            u'tags': [],
            u'visibility': u'private',
            u'locations': [{
                u'url': u'http://127.0.0.1/images/' + self.image_id,
                u'metadata': {}}],
            u'min_disk': 40,
            u'virtual_size': None,
            u'name': u'fake_image',
            u'checksum': u'ee36e35a297980dee1b514de9803ec6d',
            u'created_at': u'2016-02-10T05:03:11Z',
            u'protected': False}]
        self.output = six.BytesIO()
        self.output.write(uuid.uuid4().bytes)
        self.output.seek(0)

    def test_download_image_no_output(self):
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.download_image, 'fake_image')

    def test_download_image_two_outputs(self):
        fake_fd = six.BytesIO()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.download_image, 'fake_image',
                          output_path='fake_path', output_file=fake_fd)

    @mock.patch.object(shade.OpenStackCloud, 'search_images',
                       return_value=[])
    def test_download_image_no_images_found(self, mock_search):
        self.assertRaises(exc.OpenStackCloudResourceNotFound,
                          self.cloud.download_image, 'fake_image',
                          output_path='fake_path')

    @mock.patch.object(shade.OpenStackCloud, 'glance_client')
    @mock.patch.object(shade.OpenStackCloud, 'search_images')
    def test_download_image_with_fd(self, mock_search, mock_glance):
        output_file = six.BytesIO()
        mock_glance.images.data.return_value = self.output
        mock_search.return_value = self.fake_search_return
        self.cloud.download_image('fake_image', output_file=output_file)
        mock_glance.images.data.assert_called_once_with(self.image_id)
        output_file.seek(0)
        self.output.seek(0)
        self.assertEqual(output_file.read(), self.output.read())

    @mock.patch.object(shade.OpenStackCloud, 'glance_client')
    @mock.patch.object(shade.OpenStackCloud, 'search_images')
    def test_download_image_with_path(self, mock_search, mock_glance):
        output_file = tempfile.NamedTemporaryFile()
        mock_glance.images.data.return_value = self.output
        mock_search.return_value = self.fake_search_return
        self.cloud.download_image('fake_image',
                                  output_path=output_file.name)
        mock_glance.images.data.assert_called_once_with(self.image_id)
        output_file.seek(0)
        self.output.seek(0)
        self.assertEqual(output_file.read(), self.output.read())

shade-1.7.0/shade/tests/unit/test_floating_ip_nova.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" test_floating_ip_nova ---------------------------------- Tests Floating IP resource methods for nova-network """ from mock import patch from novaclient import exceptions as n_exc import os_client_config from shade import _utils from shade import meta from shade import OpenStackCloud from shade.tests import fakes from shade.tests.unit import base def has_service_side_effect(s): if s == 'network': return False return True class TestFloatingIP(base.TestCase): mock_floating_ip_list_rep = [ { 'fixed_ip': None, 'id': 1, 'instance_id': None, 'ip': '203.0.113.1', 'pool': 'nova' }, { 'fixed_ip': None, 'id': 2, 'instance_id': None, 'ip': '203.0.113.2', 'pool': 'nova' }, { 'fixed_ip': '192.0.2.3', 'id': 29, 'instance_id': 'myself', 'ip': '198.51.100.29', 'pool': 'black_hole' } ] mock_floating_ip_pools = [ {'id': 'pool1_id', 'name': 'nova'}, {'id': 'pool2_id', 'name': 'pool2'}] def assertAreInstances(self, elements, elem_type): for e in elements: self.assertIsInstance(e, elem_type) def setUp(self): super(TestFloatingIP, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) self.floating_ips = [ fakes.FakeFloatingIP(**ip) for ip in self.mock_floating_ip_list_rep ] self.fake_server = meta.obj_to_dict( fakes.FakeServer( 'server-id', '', 'ACTIVE', addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': '192.0.2.129', u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42'}]})) self.floating_ip = _utils.normalize_nova_floating_ips( meta.obj_list_to_dict(self.floating_ips))[0] @patch.object(OpenStackCloud, 'nova_client') @patch.object(OpenStackCloud, 'has_service') def test_list_floating_ips(self, mock_has_service, mock_nova_client): mock_has_service.side_effect = has_service_side_effect mock_nova_client.floating_ips.list.return_value = self.floating_ips floating_ips = self.client.list_floating_ips() mock_nova_client.floating_ips.list.assert_called_with() 
        self.assertIsInstance(floating_ips, list)
        self.assertEqual(3, len(floating_ips))
        self.assertAreInstances(floating_ips, dict)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_search_floating_ips(self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.list.return_value = self.floating_ips

        floating_ips = self.client.search_floating_ips(
            filters={'attached': False})

        mock_nova_client.floating_ips.list.assert_called_with()
        self.assertIsInstance(floating_ips, list)
        self.assertEqual(2, len(floating_ips))
        self.assertAreInstances(floating_ips, dict)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_get_floating_ip(self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.list.return_value = self.floating_ips

        floating_ip = self.client.get_floating_ip(id='29')

        mock_nova_client.floating_ips.list.assert_called_with()
        self.assertIsInstance(floating_ip, dict)
        self.assertEqual('198.51.100.29', floating_ip['floating_ip_address'])

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_get_floating_ip_not_found(
            self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.list.return_value = self.floating_ips

        floating_ip = self.client.get_floating_ip(id='666')

        self.assertIsNone(floating_ip)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_create_floating_ip(self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.create.return_value =\
            fakes.FakeFloatingIP(**self.mock_floating_ip_list_rep[1])

        self.client.create_floating_ip(network='nova')

        mock_nova_client.floating_ips.create.assert_called_with(pool='nova')
    @patch.object(OpenStackCloud, '_nova_list_floating_ips')
    @patch.object(OpenStackCloud, 'has_service')
    def test_available_floating_ip_existing(
            self, mock_has_service, mock__nova_list_floating_ips):
        mock_has_service.side_effect = has_service_side_effect
        mock__nova_list_floating_ips.return_value = \
            self.mock_floating_ip_list_rep[:1]

        ip = self.client.available_floating_ip(network='nova')

        self.assertEqual(self.mock_floating_ip_list_rep[0]['ip'],
                         ip['floating_ip_address'])

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, '_nova_list_floating_ips')
    @patch.object(OpenStackCloud, 'has_service')
    def test_available_floating_ip_new(
            self, mock_has_service, mock__nova_list_floating_ips,
            mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock__nova_list_floating_ips.return_value = []
        mock_nova_client.floating_ips.create.return_value = \
            fakes.FakeFloatingIP(**self.mock_floating_ip_list_rep[0])

        ip = self.client.available_floating_ip(network='nova')

        self.assertEqual(self.mock_floating_ip_list_rep[0]['ip'],
                         ip['floating_ip_address'])

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_delete_floating_ip_existing(
            self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.delete.return_value = None

        ret = self.client.delete_floating_ip(
            floating_ip_id='a-wild-id-appears')

        mock_nova_client.floating_ips.delete.assert_called_with(
            floating_ip='a-wild-id-appears')
        self.assertTrue(ret)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'get_floating_ip')
    def test_delete_floating_ip_not_found(
            self, mock_get_floating_ip, mock_nova_client):
        mock_get_floating_ip.return_value = None
        mock_nova_client.floating_ips.delete.side_effect = n_exc.NotFound(
            code=404)

        ret = self.client.delete_floating_ip(
            floating_ip_id='a-wild-id-appears')

        self.assertFalse(ret)

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_attach_ip_to_server(self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.list.return_value = self.floating_ips

        self.client._attach_ip_to_server(
            server=self.fake_server, floating_ip=self.floating_ip,
            fixed_address='192.0.2.129')

        mock_nova_client.servers.add_floating_ip.assert_called_with(
            server='server-id', address='203.0.113.1',
            fixed_address='192.0.2.129')

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_detach_ip_from_server(self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.list.return_value = [
            fakes.FakeFloatingIP(**ip)
            for ip in self.mock_floating_ip_list_rep
        ]

        self.client.detach_ip_from_server(
            server_id='server-id', floating_ip_id=1)

        mock_nova_client.servers.remove_floating_ip.assert_called_with(
            server='server-id', address='203.0.113.1')

    @patch.object(OpenStackCloud, 'nova_client')
    @patch.object(OpenStackCloud, 'has_service')
    def test_add_ip_from_pool(self, mock_has_service, mock_nova_client):
        mock_has_service.side_effect = has_service_side_effect
        mock_nova_client.floating_ips.list.return_value = self.floating_ips

        server = self.client._add_ip_from_pool(
            server=self.fake_server,
            network='nova',
            fixed_address='192.0.2.129')

        self.assertEqual(server, self.fake_server)

shade-1.7.0/shade/tests/unit/test_task_manager.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from shade import task_manager
from shade.tests.unit import base


class TestException(Exception):
    pass


class TestTask(task_manager.Task):
    def main(self, client):
        raise TestException("This is a test exception")


class TestTaskGenerator(task_manager.Task):
    def main(self, client):
        yield 1


class TestTaskInt(task_manager.Task):
    def main(self, client):
        return int(1)


class TestTaskFloat(task_manager.Task):
    def main(self, client):
        return float(2.0)


class TestTaskStr(task_manager.Task):
    def main(self, client):
        return "test"


class TestTaskBool(task_manager.Task):
    def main(self, client):
        return True


class TestTaskSet(task_manager.Task):
    def main(self, client):
        return set([1, 2])


class TestTaskManager(base.TestCase):

    def setUp(self):
        super(TestTaskManager, self).setUp()
        self.manager = task_manager.TaskManager(name='test', client=self)

    def test_wait_re_raise(self):
        """Test that exceptions thrown in a Task are re-raised correctly.

        This test is aimed at six.reraise(), called in Task::wait().
        Specifically, we test whether we get the same behaviour with all
        the configured interpreters (e.g. py27, py34, pypy, ...)
""" self.assertRaises(TestException, self.manager.submitTask, TestTask()) def test_dont_munchify_int(self): ret = self.manager.submitTask(TestTaskInt()) self.assertIsInstance(ret, int) def test_dont_munchify_float(self): ret = self.manager.submitTask(TestTaskFloat()) self.assertIsInstance(ret, float) def test_dont_munchify_str(self): ret = self.manager.submitTask(TestTaskStr()) self.assertIsInstance(ret, str) def test_dont_munchify_bool(self): ret = self.manager.submitTask(TestTaskBool()) self.assertIsInstance(ret, bool) def test_dont_munchify_set(self): ret = self.manager.submitTask(TestTaskSet()) self.assertIsInstance(ret, set) shade-1.7.0/shade/tests/unit/__init__.py0000664000567000056710000000000012677256557021276 0ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/unit/test_rebuild_server.py0000664000567000056710000001435712677256557023636 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_rebuild_server ---------------------------------- Tests for the `rebuild_server` command. 
""" from mock import patch, Mock import os_client_config from shade import _utils from shade import meta from shade import OpenStackCloud from shade.exc import (OpenStackCloudException, OpenStackCloudTimeout) from shade.tests import base, fakes class TestRebuildServer(base.TestCase): def setUp(self): super(TestRebuildServer, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) def test_rebuild_server_rebuild_exception(self): """ Test that an exception in the novaclient rebuild raises an exception in rebuild_server. """ with patch("shade.OpenStackCloud"): config = { "servers.rebuild.side_effect": Exception("exception"), } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudException, self.client.rebuild_server, "a", "b") def test_rebuild_server_server_error(self): """ Test that a server error while waiting for the server to rebuild raises an exception in rebuild_server. """ rebuild_server = fakes.FakeServer('1234', '', 'REBUILD') error_server = fakes.FakeServer('1234', '', 'ERROR') with patch("shade.OpenStackCloud"): config = { "servers.rebuild.return_value": rebuild_server, "servers.get.return_value": error_server, } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudException, self.client.rebuild_server, "a", "b", wait=True) def test_rebuild_server_timeout(self): """ Test that a timeout while waiting for the server to rebuild raises an exception in rebuild_server. 
""" rebuild_server = fakes.FakeServer('1234', '', 'REBUILD') with patch("shade.OpenStackCloud"): config = { "servers.rebuild.return_value": rebuild_server, "servers.get.return_value": rebuild_server, } OpenStackCloud.nova_client = Mock(**config) self.assertRaises( OpenStackCloudTimeout, self.client.rebuild_server, "a", "b", wait=True, timeout=0.001) def test_rebuild_server_no_wait(self): """ Test that rebuild_server with no wait and no exception in the novaclient rebuild call returns the server instance. """ with patch("shade.OpenStackCloud"): rebuild_server = fakes.FakeServer('1234', '', 'REBUILD') config = { "servers.rebuild.return_value": rebuild_server } OpenStackCloud.nova_client = Mock(**config) self.assertEqual(meta.obj_to_dict(rebuild_server), self.client.rebuild_server("a", "b")) def test_rebuild_server_with_admin_pass_no_wait(self): """ Test that a server with an admin_pass passed returns the password """ with patch("shade.OpenStackCloud"): rebuild_server = fakes.FakeServer('1234', '', 'REBUILD', adminPass='ooBootheiX0edoh') config = { "servers.rebuild.return_value": rebuild_server, } OpenStackCloud.nova_client = Mock(**config) self.assertEqual( meta.obj_to_dict(rebuild_server), self.client.rebuild_server('a', 'b', admin_pass='ooBootheiX0edoh')) def test_rebuild_server_with_admin_pass_wait(self): """ Test that a server with an admin_pass passed returns the password """ with patch("shade.OpenStackCloud"): rebuild_server = fakes.FakeServer('1234', '', 'REBUILD', adminPass='ooBootheiX0edoh') active_server = fakes.FakeServer('1234', '', 'ACTIVE') ret_active_server = fakes.FakeServer('1234', '', 'ACTIVE', adminPass='ooBootheiX0edoh') config = { "servers.rebuild.return_value": rebuild_server, "servers.get.return_value": active_server, } OpenStackCloud.nova_client = Mock(**config) self.client.name = 'cloud-name' self.assertEqual( _utils.normalize_server( meta.obj_to_dict(ret_active_server), cloud_name='cloud-name', region_name=''), 
self.client.rebuild_server("a", "b", wait=True, admin_pass='ooBootheiX0edoh')) def test_rebuild_server_wait(self): """ Test that rebuild_server with a wait returns the server instance when its status changes to "ACTIVE". """ with patch("shade.OpenStackCloud"): rebuild_server = fakes.FakeServer('1234', '', 'REBUILD') active_server = fakes.FakeServer('1234', '', 'ACTIVE') config = { "servers.rebuild.return_value": rebuild_server, "servers.get.return_value": active_server } OpenStackCloud.nova_client = Mock(**config) self.client.name = 'cloud-name' self.assertEqual( _utils.normalize_server( meta.obj_to_dict(active_server), cloud_name='cloud-name', region_name=''), self.client.rebuild_server("a", "b", wait=True)) shade-1.7.0/shade/tests/unit/test_delete_volume_snapshot.py0000664000567000056710000000674012677256557025367 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_delete_volume_snapshot ---------------------------------- Tests for the `delete_volume_snapshot` command. 
""" from mock import patch import os_client_config from shade import OpenStackCloud from shade.tests import base, fakes from shade.exc import (OpenStackCloudException, OpenStackCloudTimeout) class TestDeleteVolumeSnapshot(base.TestCase): def setUp(self): super(TestDeleteVolumeSnapshot, self).setUp() config = os_client_config.OpenStackConfig() self.client = OpenStackCloud( cloud_config=config.get_one_cloud(validate=False)) @patch.object(OpenStackCloud, 'cinder_client') def test_delete_volume_snapshot(self, mock_cinder): """ Test that delete_volume_snapshot without a wait returns True instance when the volume snapshot deletes. """ fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') mock_cinder.volume_snapshots.list.return_value = [fake_snapshot] self.assertEqual( True, self.client.delete_volume_snapshot(name_or_id='1234', wait=False) ) mock_cinder.volume_snapshots.list.assert_called_with(detailed=True, search_opts=None) @patch.object(OpenStackCloud, 'cinder_client') def test_delete_volume_snapshot_with_error(self, mock_cinder): """ Test that a exception while deleting a volume snapshot will cause an OpenStackCloudException. """ fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') mock_cinder.volume_snapshots.delete.side_effect = Exception( "exception") mock_cinder.volume_snapshots.list.return_value = [fake_snapshot] self.assertRaises( OpenStackCloudException, self.client.delete_volume_snapshot, name_or_id='1234', wait=True, timeout=1) mock_cinder.volume_snapshots.delete.assert_called_with( snapshot='1234') @patch.object(OpenStackCloud, 'cinder_client') def test_delete_volume_snapshot_with_timeout(self, mock_cinder): """ Test that a timeout while waiting for the volume snapshot to delete raises an exception in delete_volume_snapshot. 
""" fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') mock_cinder.volume_snapshots.list.return_value = [fake_snapshot] self.assertRaises( OpenStackCloudTimeout, self.client.delete_volume_snapshot, name_or_id='1234', wait=True, timeout=1) mock_cinder.volume_snapshots.list.assert_called_with(detailed=True, search_opts=None) shade-1.7.0/shade/tests/unit/test_keypair.py0000664000567000056710000000475712677256557022271 0ustar jenkinsjenkins00000000000000# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import shade

from mock import patch
from novaclient import exceptions as nova_exc

from shade import exc
from shade import meta
from shade.tests import fakes
from shade.tests.unit import base


class TestKeypair(base.TestCase):

    @patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_keypair(self, mock_nova):
        keyname = 'my_keyname'
        pub_key = 'ssh-rsa BLAH'
        key = fakes.FakeKeypair('keyid', keyname, pub_key)
        mock_nova.keypairs.create.return_value = key

        new_key = self.cloud.create_keypair(keyname, pub_key)

        mock_nova.keypairs.create.assert_called_once_with(
            name=keyname, public_key=pub_key
        )
        self.assertEqual(meta.obj_to_dict(key), new_key)

    @patch.object(shade.OpenStackCloud, 'nova_client')
    def test_create_keypair_exception(self, mock_nova):
        mock_nova.keypairs.create.side_effect = Exception()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.create_keypair, '', '')

    @patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_keypair(self, mock_nova):
        self.assertTrue(self.cloud.delete_keypair('mykey'))
        mock_nova.keypairs.delete.assert_called_once_with(
            key='mykey'
        )

    @patch.object(shade.OpenStackCloud, 'nova_client')
    def test_delete_keypair_not_found(self, mock_nova):
        mock_nova.keypairs.delete.side_effect = nova_exc.NotFound('')
        self.assertFalse(self.cloud.delete_keypair('invalid'))

    @patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_keypairs(self, mock_nova):
        self.cloud.list_keypairs()
        mock_nova.keypairs.list.assert_called_once_with()

    @patch.object(shade.OpenStackCloud, 'nova_client')
    def test_list_keypairs_exception(self, mock_nova):
        mock_nova.keypairs.list.side_effect = Exception()
        self.assertRaises(exc.OpenStackCloudException,
                          self.cloud.list_keypairs)

shade-1.7.0/shade/tests/unit/test_image_snapshot.py

# Copyright 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import uuid

import mock

import shade
from shade import exc
from shade.tests.unit import base


class TestImageSnapshot(base.TestCase):

    def setUp(self):
        super(TestImageSnapshot, self).setUp()
        self.image_id = str(uuid.uuid4())

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    @mock.patch.object(shade.OpenStackCloud, 'get_image')
    def test_create_image_snapshot_wait_until_active_never_active(
            self, mock_get, mock_nova):
        mock_nova.servers.create_image.return_value = {
            'status': 'queued',
            'id': self.image_id,
        }
        mock_get.return_value = {'status': 'saving', 'id': self.image_id}
        self.assertRaises(exc.OpenStackCloudTimeout,
                          self.cloud.create_image_snapshot,
                          'test-snapshot', 'fake-server',
                          wait=True, timeout=2)

    @mock.patch.object(shade.OpenStackCloud, 'nova_client')
    @mock.patch.object(shade.OpenStackCloud, 'get_image')
    def test_create_image_snapshot_wait_active(self, mock_get, mock_nova):
        mock_nova.servers.create_image.return_value = {
            'status': 'queued',
            'id': self.image_id,
        }
        mock_get.return_value = {'status': 'active', 'id': self.image_id}
        image = self.cloud.create_image_snapshot(
            'test-snapshot', 'fake-server', wait=True, timeout=2)
        self.assertEqual(image['id'], self.image_id)

shade-1.7.0/shade/tests/unit/test_inventory.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in
# compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

import os_client_config
from os_client_config import exceptions as occ_exc

from shade import _utils
from shade import exc
from shade import inventory
from shade import meta
from shade.tests import fakes
from shade.tests.unit import base


class TestInventory(base.TestCase):

    def setUp(self):
        super(TestInventory, self).setUp()

    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test__init(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

        inv = inventory.OpenStackInventory()

        mock_config.assert_called_once_with(
            config_files=os_client_config.config.CONFIG_FILES
        )
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        self.assertTrue(mock_config.return_value.get_all_clouds.called)

    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test__init_one_cloud(self, mock_cloud, mock_config):
        mock_config.return_value.get_one_cloud.return_value = [{}]

        inv = inventory.OpenStackInventory(cloud='supercloud')

        mock_config.assert_called_once_with(
            config_files=os_client_config.config.CONFIG_FILES
        )
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        self.assertFalse(mock_config.return_value.get_all_clouds.called)
        mock_config.return_value.get_one_cloud.assert_called_once_with(
            'supercloud')

    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test__raise_exception_on_no_cloud(self, mock_cloud, mock_config):
        """
        Test that when
        os-client-config can't find a named cloud, a shade
        exception is emitted.
        """
        mock_config.return_value.get_one_cloud.side_effect = (
            occ_exc.OpenStackConfigException()
        )

        self.assertRaises(exc.OpenStackCloudException,
                          inventory.OpenStackInventory,
                          cloud='supercloud')
        mock_config.return_value.get_one_cloud.assert_called_once_with(
            'supercloud')

    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_list_hosts(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = dict(id='server_id', name='server_name')
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        inv.clouds[0].list_servers.return_value = [server]
        inv.clouds[0].get_openstack_vars.return_value = server

        ret = inv.list_hosts()

        inv.clouds[0].list_servers.assert_called_once_with(detailed=True)
        self.assertFalse(inv.clouds[0].get_openstack_vars.called)
        self.assertEqual([server], ret)

    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_list_hosts_no_detail(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = _utils.normalize_server(
            meta.obj_to_dict(fakes.FakeServer(
                '1234', 'test', 'ACTIVE', addresses={})),
            region_name='', cloud_name='')
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        inv.clouds[0].list_servers.return_value = [server]

        inv.list_hosts(expand=False)

        inv.clouds[0].list_servers.assert_called_once_with(detailed=False)
        self.assertFalse(inv.clouds[0].get_openstack_vars.called)

    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_search_hosts(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

        inv = inventory.OpenStackInventory()

        server = dict(id='server_id', name='server_name')
        self.assertIsInstance(inv.clouds,
list) self.assertEqual(1, len(inv.clouds)) inv.clouds[0].list_servers.return_value = [server] inv.clouds[0].get_openstack_vars.return_value = server ret = inv.search_hosts('server_id') self.assertEqual([server], ret) @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test_get_host(self, mock_cloud, mock_config): mock_config.return_value.get_all_clouds.return_value = [{}] inv = inventory.OpenStackInventory() server = dict(id='server_id', name='server_name') self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) inv.clouds[0].list_servers.return_value = [server] inv.clouds[0].get_openstack_vars.return_value = server ret = inv.get_host('server_id') self.assertEqual(server, ret) @mock.patch("shade.inventory.OpenStackInventory.search_hosts") def test_get_host_no_detail(self, mock_search): inv = inventory.OpenStackInventory() inv.get_host('server_id', expand=False) mock_search.assert_called_once_with('server_id', None, expand=False) shade-1.7.0/shade/tests/unit/test_role_assignment.py0000664000567000056710000012227712677256557024014 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from mock import patch
import os_client_config as occ
from shade import OperatorCloud, operator_cloud
from shade.exc import OpenStackCloudException, OpenStackCloudTimeout
from shade.meta import obj_to_dict
from shade.tests import base, fakes
import testtools


class TestRoleAssignment(base.TestCase):

    def setUp(self):
        super(TestRoleAssignment, self).setUp()
        self.cloud = operator_cloud(validate=False)
        self.fake_role = obj_to_dict(fakes.FakeRole('12345', 'test'))
        self.fake_user = obj_to_dict(
            fakes.FakeUser('12345',
                           'test@nobody.org',
                           'test',
                           domain_id='test-domain'))
        self.fake_group = obj_to_dict(
            fakes.FakeGroup('12345',
                            'test',
                            'test group',
                            domain_id='test-domain'))
        self.fake_project = obj_to_dict(
            fakes.FakeProject('12345', domain_id='test-domain'))
        self.fake_domain = obj_to_dict(
            fakes.FakeDomain('test-domain',
                             'test',
                             'test domain',
                             enabled=True))
        self.user_project_assignment = obj_to_dict({
            'role': {'id': self.fake_role['id']},
            'scope': {'project': {'id': self.fake_project['id']}},
            'user': {'id': self.fake_user['id']}
        })
        self.group_project_assignment = obj_to_dict({
            'role': {'id': self.fake_role['id']},
            'scope': {'project': {'id': self.fake_project['id']}},
            'group': {'id': self.fake_group['id']}
        })
        self.user_domain_assignment = obj_to_dict({
            'role': {'id': self.fake_role['id']},
            'scope': {'domain': {'id': self.fake_domain['id']}},
            'user': {'id': self.fake_user['id']}
        })
        self.group_domain_assignment = obj_to_dict({
            'role': {'id': self.fake_role['id']},
            'scope': {'domain': {'id': self.fake_domain['id']}},
            'group': {'id': self.fake_group['id']}
        })

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_v2(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.roles.roles_for_user.return_value = []
        mock_keystone.roles.add_user_role.return_value = self.fake_role
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_project_v2(self, mock_keystone,
                                        mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.return_value = []
        mock_keystone.roles.add_user_role.return_value = self.fake_role
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['id'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['id'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_project_v2_exists(self, mock_keystone,
                                               mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.return_value = [self.fake_role]
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_project(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.role_assignments.list.return_value = []
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_project_exists(self, mock_keystone,
                                            mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.role_assignments.list.return_value = \
            [self.user_project_assignment]
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['id'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_group_project(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.role_assignments.list.return_value = []
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_group_project_exists(self, mock_keystone,
                                             mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.role_assignments.list.return_value = \
            [self.group_project_assignment]
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_domain(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = []
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['name']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_domain_exists(self, mock_keystone,
                                           mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = \
            [self.user_domain_assignment]
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['id']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_group_domain(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = []
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['id']))
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_group_domain_exists(self, mock_keystone,
                                            mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = \
            [self.group_domain_assignment]
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['id']))
        self.assertFalse(self.cloud.grant_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_v2(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.roles.roles_for_user.return_value = [self.fake_role]
        mock_keystone.roles.remove_user_role.return_value = self.fake_role
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_project_v2(self, mock_keystone,
                                         mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.return_value = []
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['id'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['id'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_project_v2_exists(self, mock_keystone,
                                                mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.return_value = [self.fake_role]
        mock_keystone.roles.remove_user_role.return_value = self.fake_role
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_project(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.role_assignments.list.return_value = []
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_project_exists(self, mock_keystone,
                                             mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.role_assignments.list.return_value = \
            [self.user_project_assignment]
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['id'],
            user=self.fake_user['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_group_project(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.role_assignments.list.return_value = []
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            project=self.fake_project['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_group_project_exists(self, mock_keystone,
                                              mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.role_assignments.list.return_value = \
            [self.group_project_assignment]
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            project=self.fake_project['id']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            project=self.fake_project['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_domain(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = []
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['name']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_domain_exists(self, mock_keystone,
                                            mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = \
            [self.user_domain_assignment]
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            domain=self.fake_domain['id']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['id'],
            domain=self.fake_domain['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_group_domain(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = []
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['name']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['id']))
        self.assertFalse(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_group_domain_exists(self, mock_keystone,
                                             mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.groups.list.return_value = [self.fake_group]
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = \
            [self.group_domain_assignment]
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['name']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['name'],
            domain=self.fake_domain['id']))
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            group=self.fake_group['id'],
            domain=self.fake_domain['id']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_no_role(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = []
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Role {0} not found'.format(self.fake_role['name'])
        ):
            self.cloud.grant_role(self.fake_role['name'],
                                  group=self.fake_group['name'],
                                  domain=self.fake_domain['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_no_role(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = []
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Role {0} not found'.format(self.fake_role['name'])
        ):
            self.cloud.revoke_role(self.fake_role['name'],
                                   group=self.fake_group['name'],
                                   domain=self.fake_domain['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_no_user_or_group_specified(self, mock_keystone,
                                              mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Must specify either a user or a group'
        ):
            self.cloud.grant_role(self.fake_role['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_no_user_or_group_specified(self, mock_keystone,
                                               mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Must specify either a user or a group'
        ):
            self.cloud.revoke_role(self.fake_role['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_no_user_or_group(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = []
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Must specify either a user or a group'
        ):
            self.cloud.grant_role(self.fake_role['name'],
                                  user=self.fake_user['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_no_user_or_group(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = []
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Must specify either a user or a group'
        ):
            self.cloud.revoke_role(self.fake_role['name'],
                                   user=self.fake_user['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_both_user_and_group(self, mock_keystone, mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.groups.list.return_value = [self.fake_group]
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Specify either a group or a user, not both'
        ):
            self.cloud.grant_role(self.fake_role['name'],
                                  user=self.fake_user['name'],
                                  group=self.fake_group['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_both_user_and_group(self, mock_keystone,
                                        mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.groups.list.return_value = [self.fake_group]
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Specify either a group or a user, not both'
        ):
            self.cloud.revoke_role(self.fake_role['name'],
                                   user=self.fake_user['name'],
                                   group=self.fake_group['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_both_project_and_domain(self, mock_keystone,
                                           mock_api_version):
        mock_api_version.return_value = '3'
        fake_user2 = fakes.FakeUser('12345',
                                    'test@nobody.org',
                                    'test',
                                    domain_id='default')
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user, fake_user2]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.domains.get.return_value = self.fake_domain
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id'],
            domain=self.fake_domain['name']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_both_project_and_domain(self, mock_keystone,
                                            mock_api_version):
        mock_api_version.return_value = '3'
        fake_user2 = fakes.FakeUser('12345',
                                    'test@nobody.org',
                                    'test',
                                    domain_id='default')
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user, fake_user2]
        mock_keystone.projects.list.return_value = [self.fake_project]
        mock_keystone.domains.get.return_value = self.fake_domain
        mock_keystone.role_assignments.list.return_value = \
            [self.user_project_assignment]
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id'],
            domain=self.fake_domain['name']))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_no_project_or_domain(self, mock_keystone,
                                        mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.projects.list.return_value = []
        mock_keystone.domains.get.return_value = None
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Must specify either a domain or project'
        ):
            self.cloud.grant_role(self.fake_role['name'],
                                  user=self.fake_user['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_no_project_or_domain(self, mock_keystone,
                                         mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.projects.list.return_value = []
        mock_keystone.domains.get.return_value = None
        mock_keystone.role_assignments.list.return_value = \
            [self.user_project_assignment]
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Must specify either a domain or project'
        ):
            self.cloud.revoke_role(self.fake_role['name'],
                                   user=self.fake_user['name'])

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_bad_domain_exception(self, mock_keystone,
                                        mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.side_effect = Exception('test')
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Failed to get domain baddomain \(Inner Exception: test\)'
        ):
            self.cloud.grant_role(self.fake_role['name'],
                                  user=self.fake_user['name'],
                                  domain='baddomain')

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_bad_domain_exception(self, mock_keystone,
                                         mock_api_version):
        mock_api_version.return_value = '3'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.domains.get.side_effect = Exception('test')
        with testtools.ExpectedException(
            OpenStackCloudException,
            'Failed to get domain baddomain \(Inner Exception: test\)'
        ):
            self.cloud.revoke_role(self.fake_role['name'],
                                   user=self.fake_user['name'],
                                   domain='baddomain')

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_project_v2_wait(self, mock_keystone,
                                             mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.side_effect = [
            [], [], [self.fake_role]]
        mock_keystone.roles.add_user_role.return_value = self.fake_role
        self.assertTrue(self.cloud.grant_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id'],
            wait=True))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_grant_role_user_project_v2_wait_exception(self, mock_keystone,
                                                       mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.side_effect = [
            [], [], [self.fake_role]]
        mock_keystone.roles.add_user_role.return_value = self.fake_role
        with testtools.ExpectedException(
            OpenStackCloudTimeout,
            'Timeout waiting for role to be granted'
        ):
            self.assertTrue(self.cloud.grant_role(
                self.fake_role['name'],
                user=self.fake_user['name'],
                project=self.fake_project['id'],
                wait=True, timeout=1))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_project_v2_wait(self, mock_keystone,
                                              mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.side_effect = [
            [self.fake_role], [self.fake_role], []]
        mock_keystone.roles.remove_user_role.return_value = self.fake_role
        self.assertTrue(self.cloud.revoke_role(
            self.fake_role['name'],
            user=self.fake_user['name'],
            project=self.fake_project['id'],
            wait=True))

    @patch.object(occ.cloud_config.CloudConfig, 'get_api_version')
    @patch.object(OperatorCloud, 'keystone_client')
    def test_revoke_role_user_project_v2_wait_exception(self, mock_keystone,
                                                        mock_api_version):
        mock_api_version.return_value = '2.0'
        mock_keystone.roles.list.return_value = [self.fake_role]
        mock_keystone.tenants.list.return_value = [self.fake_project]
        mock_keystone.users.list.return_value = [self.fake_user]
        mock_keystone.roles.roles_for_user.side_effect = [
            [self.fake_role], [self.fake_role], []]
        mock_keystone.roles.remove_user_role.return_value = self.fake_role
        with testtools.ExpectedException(
            OpenStackCloudTimeout,
            'Timeout waiting for role to be revoked'
        ):
            self.assertTrue(self.cloud.revoke_role(
                self.fake_role['name'],
                user=self.fake_user['name'],
                project=self.fake_project['id'],
                wait=True, timeout=1))

shade-1.7.0/shade/tests/unit/test__utils.py
# -*- coding: utf-8 -*-

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools from shade import _utils from shade import exc from shade.tests.unit import base RANGE_DATA = [ dict(id=1, key1=1, key2=5), dict(id=2, key1=1, key2=20), dict(id=3, key1=2, key2=10), dict(id=4, key1=2, key2=30), dict(id=5, key1=3, key2=40), dict(id=6, key1=3, key2=40), ] class TestUtils(base.TestCase): def test__filter_list_name_or_id(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto') data = [el1, el2] ret = _utils._filter_list(data, 'donald', None) self.assertEqual([el1], ret) def test__filter_list_filter(self): el1 = dict(id=100, name='donald', other='duck') el2 = dict(id=200, name='donald', other='trump') data = [el1, el2] ret = _utils._filter_list(data, 'donald', {'other': 'duck'}) self.assertEqual([el1], ret) def test__filter_list_dict1(self): el1 = dict(id=100, name='donald', last='duck', other=dict(category='duck')) el2 = dict(id=200, name='donald', last='trump', other=dict(category='human')) el3 = dict(id=300, name='donald', last='ronald mac', other=dict(category='clown')) data = [el1, el2, el3] ret = _utils._filter_list( data, 'donald', {'other': {'category': 'clown'}}) self.assertEqual([el3], ret) def test__filter_list_dict2(self): el1 = dict(id=100, name='donald', last='duck', other=dict(category='duck', financial=dict(status='poor'))) el2 = dict(id=200, name='donald', last='trump', other=dict(category='human', financial=dict(status='rich'))) el3 = dict(id=300, name='donald', last='ronald mac', other=dict(category='clown', financial=dict(status='rich'))) data = [el1, el2, el3] ret = _utils._filter_list( data, 'donald', {'other': { 'financial': {'status': 'rich'} }}) self.assertEqual([el2, el3], ret) def test_normalize_nova_secgroups(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group', rules=[ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) expected = dict( id='abc123', name='nova_secgroup', 
description='A Nova security group', security_group_rules=[ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123') ] ) retval = _utils.normalize_nova_secgroups([nova_secgroup])[0] self.assertEqual(expected, retval) def test_normalize_nova_secgroups_negone_port(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group with -1 ports', rules=[ dict(id='123', from_port=-1, to_port=-1, ip_protocol='icmp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) retval = _utils.normalize_nova_secgroups([nova_secgroup])[0] self.assertIsNone(retval['security_group_rules'][0]['port_range_min']) self.assertIsNone(retval['security_group_rules'][0]['port_range_max']) def test_normalize_nova_secgroup_rules(self): nova_rules = [ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] expected = [ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123') ] retval = _utils.normalize_nova_secgroup_rules(nova_rules) self.assertEqual(expected, retval) def test_normalize_volumes_v1(self): vol = dict( display_name='test', display_description='description', bootable=u'false', # unicode type multiattach='true', # str type ) expected = dict( name=vol['display_name'], display_name=vol['display_name'], description=vol['display_description'], display_description=vol['display_description'], bootable=False, multiattach=True, ) retval = _utils.normalize_volumes([vol]) self.assertEqual([expected], retval) def test_normalize_volumes_v2(self): vol = dict( display_name='test', display_description='description', bootable=False, multiattach=True, ) expected = dict( name=vol['display_name'], display_name=vol['display_name'], description=vol['display_description'], 
display_description=vol['display_description'], bootable=False, multiattach=True, ) retval = _utils.normalize_volumes([vol]) self.assertEqual([expected], retval) def test_safe_dict_min_ints(self): """Test integer comparison""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_strs(self): """Test integer as strings comparison""" data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_None(self): """Test None values""" data = [{'f1': 3}, {'f1': None}, {'f1': 1}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_key_missing(self): """Test missing key for an entry still works""" data = [{'f1': 3}, {'x': 2}, {'f1': 1}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_key_not_found(self): """Test key not found in any elements returns None""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_min('doesnotexist', data) self.assertIsNone(retval) def test_safe_dict_min_not_int(self): """Test non-integer key value raises OSCE""" data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}] with testtools.ExpectedException( exc.OpenStackCloudException, "Search for minimum value failed. 
" "Value for f1 is not an integer: aaa" ): _utils.safe_dict_min('f1', data) def test_safe_dict_max_ints(self): """Test integer comparison""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_strs(self): """Test integer as strings comparison""" data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_None(self): """Test None values""" data = [{'f1': 3}, {'f1': None}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_key_missing(self): """Test missing key for an entry still works""" data = [{'f1': 3}, {'x': 2}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_key_not_found(self): """Test key not found in any elements returns None""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_max('doesnotexist', data) self.assertIsNone(retval) def test_safe_dict_max_not_int(self): """Test non-integer key value raises OSCE""" data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}] with testtools.ExpectedException( exc.OpenStackCloudException, "Search for maximum value failed. 
" "Value for f1 is not an integer: aaa" ): _utils.safe_dict_max('f1', data) def test_parse_range_None(self): self.assertIsNone(_utils.parse_range(None)) def test_parse_range_invalid(self): self.assertIsNone(_utils.parse_range("1024") self.assertIsInstance(retval, tuple) self.assertEqual(">", retval[0]) self.assertEqual(1024, retval[1]) def test_parse_range_le(self): retval = _utils.parse_range("<=1024") self.assertIsInstance(retval, tuple) self.assertEqual("<=", retval[0]) self.assertEqual(1024, retval[1]) def test_parse_range_ge(self): retval = _utils.parse_range(">=1024") self.assertIsInstance(retval, tuple) self.assertEqual(">=", retval[0]) self.assertEqual(1024, retval[1]) def test_range_filter_min(self): retval = _utils.range_filter(RANGE_DATA, "key1", "min") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[:2], retval) def test_range_filter_max(self): retval = _utils.range_filter(RANGE_DATA, "key1", "max") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[-2:], retval) def test_range_filter_range(self): retval = _utils.range_filter(RANGE_DATA, "key1", "<3") self.assertIsInstance(retval, list) self.assertEqual(4, len(retval)) self.assertEqual(RANGE_DATA[:4], retval) def test_range_filter_exact(self): retval = _utils.range_filter(RANGE_DATA, "key1", "2") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[2:4], retval) def test_range_filter_invalid_int(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Invalid range value: <1A0" ): _utils.range_filter(RANGE_DATA, "key1", "<1A0") def test_range_filter_invalid_op(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Invalid range value: <>100" ): _utils.range_filter(RANGE_DATA, "key1", "<>100") shade-1.7.0/shade/tests/unit/test_identity_roles.py0000664000567000056710000001634312677256557023654 0ustar jenkinsjenkins00000000000000# Licensed 
under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import testtools import os_client_config as occ import shade from shade import meta from shade import _utils from shade.tests.unit import base from shade.tests import fakes RAW_ROLE_ASSIGNMENTS = [ { "links": {"assignment": "http://example"}, "role": {"id": "123456"}, "scope": {"domain": {"id": "161718"}}, "user": {"id": "313233"} }, { "links": {"assignment": "http://example"}, "group": {"id": "101112"}, "role": {"id": "123456"}, "scope": {"project": {"id": "456789"}} } ] class TestIdentityRoles(base.TestCase): def setUp(self): super(TestIdentityRoles, self).setUp() self.cloud = shade.operator_cloud(validate=False) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_roles(self, mock_keystone): self.cloud.list_roles() self.assertTrue(mock_keystone.roles.list.called) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_get_role(self, mock_keystone): role_obj = fakes.FakeRole(id='1234', name='fake_role') mock_keystone.roles.list.return_value = [role_obj] role = self.cloud.get_role('fake_role') self.assertTrue(mock_keystone.roles.list.called) self.assertIsNotNone(role) self.assertEqual('1234', role['id']) self.assertEqual('fake_role', role['name']) @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_create_role(self, mock_keystone): role_name = 'tootsie_roll' role_obj = fakes.FakeRole(id='1234', name=role_name) mock_keystone.roles.create.return_value = role_obj role = 
self.cloud.create_role(role_name) mock_keystone.roles.create.assert_called_once_with( name=role_name ) self.assertIsNotNone(role) self.assertEqual(role_name, role['name']) @mock.patch.object(shade.OperatorCloud, 'get_role') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_delete_role(self, mock_keystone, mock_get): role_obj = fakes.FakeRole(id='1234', name='aaa') mock_get.return_value = meta.obj_to_dict(role_obj) self.assertTrue(self.cloud.delete_role('1234')) self.assertTrue(mock_keystone.roles.delete.called) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments(self, mock_keystone, mock_api_version): mock_api_version.return_value = '3' mock_keystone.role_assignments.list.return_value = RAW_ROLE_ASSIGNMENTS ret = self.cloud.list_role_assignments() mock_keystone.role_assignments.list.assert_called_once_with() normalized_assignments = _utils.normalize_role_assignments( RAW_ROLE_ASSIGNMENTS ) self.assertEqual(normalized_assignments, ret) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments_filters(self, mock_keystone, mock_api_version): mock_api_version.return_value = '3' params = dict(user='123', domain='456', effective=True) self.cloud.list_role_assignments(filters=params) mock_keystone.role_assignments.list.assert_called_once_with(**params) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments_exception(self, mock_keystone, mock_api_version): mock_api_version.return_value = '3' mock_keystone.role_assignments.list.side_effect = Exception() with testtools.ExpectedException( shade.OpenStackCloudException, "Failed to list role assignments" ): self.cloud.list_role_assignments() @mock.patch.object(occ.cloud_config.CloudConfig, 
'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments_keystone_v2(self, mock_keystone, mock_api_version): fake_role = fakes.FakeRole(id='1234', name='fake_role') mock_api_version.return_value = '2.0' mock_keystone.roles.roles_for_user.return_value = [fake_role] ret = self.cloud.list_role_assignments(filters={'user': '2222', 'project': '3333'}) self.assertEqual(ret, [{'id': fake_role.id, 'project': '3333', 'user': '2222'}]) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments_keystone_v2_with_role(self, mock_keystone, mock_api_version): fake_role1 = fakes.FakeRole(id='1234', name='fake_role') fake_role2 = fakes.FakeRole(id='4321', name='fake_role') mock_api_version.return_value = '2.0' mock_keystone.roles.roles_for_user.return_value = [fake_role1, fake_role2] ret = self.cloud.list_role_assignments(filters={'role': fake_role1.id, 'user': '2222', 'project': '3333'}) self.assertEqual(ret, [{'id': fake_role1.id, 'project': '3333', 'user': '2222'}]) @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments_exception_v2(self, mock_keystone, mock_api_version): mock_api_version.return_value = '2.0' with testtools.ExpectedException( shade.OpenStackCloudException, "Must provide project and user for keystone v2" ): self.cloud.list_role_assignments() @mock.patch.object(occ.cloud_config.CloudConfig, 'get_api_version') @mock.patch.object(shade.OpenStackCloud, 'keystone_client') def test_list_role_assignments_exception_v2_no_project(self, mock_keystone, mock_api_version): mock_api_version.return_value = '2.0' with testtools.ExpectedException( shade.OpenStackCloudException, "Must provide project and user for keystone v2" ): self.cloud.list_role_assignments(filters={'user': '12345'}) 
shade-1.7.0/shade/tests/base.py0000664000567000056710000000357212677256557017513 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Copyright 2010-2011 OpenStack Foundation # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import fixtures import testtools _TRUE_VALUES = ('true', '1', 'yes') class TestCase(testtools.TestCase): """Test case base class for all tests.""" def setUp(self): """Run before each test method to initialize test environment.""" super(TestCase, self).setUp() test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0) try: test_timeout = int(test_timeout) except ValueError: # If timeout value is invalid do not set a timeout. 
test_timeout = 0 if test_timeout > 0: self.useFixture(fixtures.Timeout(test_timeout, gentle=True)) self.useFixture(fixtures.NestedTempfile()) self.useFixture(fixtures.TempHomeDir()) if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES: stdout = self.useFixture(fixtures.StringStream('stdout')).stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout)) if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES: stderr = self.useFixture(fixtures.StringStream('stderr')).stream self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr)) self.log_fixture = self.useFixture(fixtures.FakeLogger()) shade-1.7.0/shade/tests/__init__.py0000664000567000056710000000000012677256557020317 0ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/functional/0000775000567000056710000000000012677257023020347 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/functional/test_groups.py0000664000567000056710000000762612677256557023325 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_groups ---------------------------------- Functional tests for `shade` keystone group resource. 
""" import shade from shade.tests.functional import base class TestGroup(base.BaseFunctionalTestCase): def setUp(self): super(TestGroup, self).setUp() i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support groups') self.group_prefix = self.getUniqueString('group') self.addCleanup(self._cleanup_groups) def _cleanup_groups(self): exception_list = list() for group in self.operator_cloud.list_groups(): if group['name'].startswith(self.group_prefix): try: self.operator_cloud.delete_group(group['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise shade.OpenStackCloudException('\n'.join(exception_list)) def test_create_group(self): group_name = self.group_prefix + '_create' group = self.operator_cloud.create_group(group_name, 'test group') for key in ('id', 'name', 'description', 'domain_id'): self.assertIn(key, group) self.assertEqual(group_name, group['name']) self.assertEqual('test group', group['description']) def test_delete_group(self): group_name = self.group_prefix + '_delete' group = self.operator_cloud.create_group(group_name, 'test group') self.assertIsNotNone(group) self.assertTrue(self.operator_cloud.delete_group(group_name)) results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(0, len(results)) def test_delete_group_not_exists(self): self.assertFalse(self.operator_cloud.delete_group('xInvalidGroupx')) def test_search_groups(self): group_name = self.group_prefix + '_search' # Shouldn't find any group with this name yet results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(0, len(results)) # Now create a new group group = self.operator_cloud.create_group(group_name, 'test group') self.assertEqual(group_name, group['name']) # Now we should find only the new group results = 
self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(1, len(results)) self.assertEqual(group_name, results[0]['name']) def test_update_group(self): group_name = self.group_prefix + '_update' group_desc = 'test group' group = self.operator_cloud.create_group(group_name, group_desc) self.assertEqual(group_name, group['name']) self.assertEqual(group_desc, group['description']) updated_group_name = group_name + '_xyz' updated_group_desc = group_desc + ' updated' updated_group = self.operator_cloud.update_group( group_name, name=updated_group_name, description=updated_group_desc) self.assertEqual(updated_group_name, updated_group['name']) self.assertEqual(updated_group_desc, updated_group['description']) shade-1.7.0/shade/tests/functional/test_object.py0000664000567000056710000000551412677256557023246 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_object ---------------------------------- Functional tests for `shade` object methods. 
""" import tempfile from testtools import content from shade.tests.functional import base class TestObject(base.BaseFunctionalTestCase): def setUp(self): super(TestObject, self).setUp() if not self.demo_cloud.has_service('object-store'): self.skipTest('Object service not supported by cloud') def test_create_object(self): '''Test uploading small and large files.''' container_name = self.getUniqueString('container') self.addDetail('container', content.text_content(container_name)) self.addCleanup(self.demo_cloud.delete_container, container_name) self.demo_cloud.create_container(container_name) self.assertEqual(container_name, self.demo_cloud.list_containers()[0]['name']) sizes = ( (64 * 1024, 1), # 64K, one segment (50 * 1024 ** 2, 5) # 50MB, 5 segments ) for size, nseg in sizes: segment_size = round(size / nseg) with tempfile.NamedTemporaryFile() as sparse_file: sparse_file.seek(size) sparse_file.write("\0") sparse_file.flush() name = 'test-%d' % size self.demo_cloud.create_object( container_name, name, sparse_file.name, segment_size=segment_size) self.assertFalse(self.demo_cloud.is_object_stale( container_name, name, sparse_file.name ) ) self.assertIsNotNone( self.demo_cloud.get_object_metadata(container_name, name)) self.assertIsNotNone( self.demo_cloud.get_object(container_name, name)) self.assertEqual( name, self.demo_cloud.list_objects(container_name)[0]['name']) self.demo_cloud.delete_object(container_name, name) self.assertEqual([], self.demo_cloud.list_objects(container_name)) self.assertEqual(container_name, self.demo_cloud.list_containers()[0]['name']) self.demo_cloud.delete_container(container_name) shade-1.7.0/shade/tests/functional/test_identity.py0000664000567000056710000002432412677256557023631 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_identity ---------------------------------- Functional tests for `shade` identity methods. """ import random import string from shade import operator_cloud from shade import OpenStackCloudException from shade.tests.functional import base class TestIdentity(base.BaseFunctionalTestCase): def setUp(self): super(TestIdentity, self).setUp() self.cloud = operator_cloud(cloud='devstack-admin') self.role_prefix = 'test_role' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.user_prefix = self.getUniqueString('user') self.group_prefix = self.getUniqueString('group') self.addCleanup(self._cleanup_users) self.identity_version = \ self.cloud.cloud_config.get_api_version('identity') if self.identity_version not in ('2', '2.0'): self.addCleanup(self._cleanup_groups) self.addCleanup(self._cleanup_roles) def _cleanup_groups(self): exception_list = list() for group in self.cloud.list_groups(): if group['name'].startswith(self.group_prefix): try: self.cloud.delete_group(group['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_users(self): exception_list = list() for user in self.cloud.list_users(): if user['name'].startswith(self.user_prefix): try: self.cloud.delete_user(user['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_roles(self): exception_list = list() for role 
in self.cloud.list_roles(): if role['name'].startswith(self.role_prefix): try: self.cloud.delete_role(role['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _create_user(self, **kwargs): domain_id = None if self.identity_version not in ('2', '2.0'): domain = self.cloud.get_domain('default') domain_id = domain['id'] return self.cloud.create_user(domain_id=domain_id, **kwargs) def test_list_roles(self): roles = self.cloud.list_roles() self.assertIsNotNone(roles) self.assertNotEqual([], roles) def test_get_role(self): role = self.cloud.get_role('admin') self.assertIsNotNone(role) self.assertIn('id', role) self.assertIn('name', role) self.assertEqual('admin', role['name']) def test_search_roles(self): roles = self.cloud.search_roles(filters={'name': 'admin'}) self.assertIsNotNone(roles) self.assertEqual(1, len(roles)) self.assertEqual('admin', roles[0]['name']) def test_create_role(self): role_name = self.role_prefix + '_create_role' role = self.cloud.create_role(role_name) self.assertIsNotNone(role) self.assertIn('id', role) self.assertIn('name', role) self.assertEqual(role_name, role['name']) def test_delete_role(self): role_name = self.role_prefix + '_delete_role' role = self.cloud.create_role(role_name) self.assertIsNotNone(role) self.assertTrue(self.cloud.delete_role(role_name)) # TODO(Shrews): Once we can support assigning roles within shade, we # need to make this test a little more specific, and add more for testing # filtering functionality. 
def test_list_role_assignments(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support role assignments") assignments = self.cloud.list_role_assignments() self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) > 0) def test_list_role_assignments_v2(self): user = self.cloud.get_user('demo') project = self.cloud.get_project('demo') assignments = self.cloud.list_role_assignments( filters={'user': user['id'], 'project': project['id']}) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) > 0) def test_grant_revoke_role_user_project(self): user_name = self.user_prefix + '_user_project' user_email = 'nobody@nowhere.com' role_name = self.role_prefix + '_grant_user_project' role = self.cloud.create_role(role_name) user = self._create_user(name=user_name, email=user_email, default_project='demo') self.assertTrue(self.cloud.grant_role( role_name, user=user['id'], project='demo', wait=True)) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'project': self.cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 1) self.assertTrue(self.cloud.revoke_role( role_name, user=user['id'], project='demo', wait=True)) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'project': self.cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 0) def test_grant_revoke_role_group_project(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support group") role_name = self.role_prefix + '_grant_group_project' role = self.cloud.create_role(role_name) group_name = self.group_prefix + '_group_project' group = self.cloud.create_group(name=group_name, description='test group', domain='default') self.assertTrue(self.cloud.grant_role( role_name, group=group['id'], project='demo')) assignments = 
self.cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'project': self.cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 1) self.assertTrue(self.cloud.revoke_role( role_name, group=group['id'], project='demo')) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'project': self.cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 0) def test_grant_revoke_role_user_domain(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support domain") role_name = self.role_prefix + '_grant_user_domain' role = self.cloud.create_role(role_name) user_name = self.user_prefix + '_user_domain' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email, default_project='demo') self.assertTrue(self.cloud.grant_role( role_name, user=user['id'], domain='default')) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'domain': self.cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 1) self.assertTrue(self.cloud.revoke_role( role_name, user=user['id'], domain='default')) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'domain': self.cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 0) def test_grant_revoke_role_group_domain(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support domain or group") role_name = self.role_prefix + '_grant_group_domain' role = self.cloud.create_role(role_name) group_name = self.group_prefix + '_group_domain' group = self.cloud.create_group(name=group_name, description='test group', domain='default') self.assertTrue(self.cloud.grant_role( role_name, group=group['id'], 
domain='default')) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'domain': self.cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 1) self.assertTrue(self.cloud.revoke_role( role_name, group=group['id'], domain='default')) assignments = self.cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'domain': self.cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertTrue(len(assignments) == 0) shade-1.7.0/shade/tests/functional/test_services.py0000664000567000056710000001227512677256557023625 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_services ---------------------------------- Functional tests for `shade` service resource. 
""" import string import random from shade.exc import OpenStackCloudException from shade.exc import OpenStackCloudUnavailableFeature from shade.tests.functional import base class TestServices(base.BaseFunctionalTestCase): service_attributes = ['id', 'name', 'type', 'description'] def setUp(self): super(TestServices, self).setUp() # Generate a random name for services in this test self.new_service_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_services) def _cleanup_services(self): exception_list = list() for s in self.operator_cloud.list_services(): if s['name'] is not None and \ s['name'].startswith(self.new_service_name): try: self.operator_cloud.delete_service(name_or_id=s['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_service(self): service = self.operator_cloud.create_service( name=self.new_service_name + '_create', type='test_type', description='this is a test description') self.assertIsNotNone(service.get('id')) def test_update_service(self): if self.operator_cloud.cloud_config.get_api_version( 'identity').startswith('2'): # NOTE(SamYaple): Update service only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.operator_cloud.update_service, 'service_id', name='new name') else: service = self.operator_cloud.create_service( name=self.new_service_name + '_create', type='test_type', description='this is a test description', enabled=True) new_service = self.operator_cloud.update_service( service.id, name=self.new_service_name + '_update', description='this is an updated description', enabled=False ) self.assertEqual(new_service.name, self.new_service_name + '_update') self.assertEqual(new_service.description, 'this is an updated 
description') self.assertFalse(new_service.enabled) self.assertEqual(service.id, new_service.id) def test_list_services(self): service = self.operator_cloud.create_service( name=self.new_service_name + '_list', type='test_type') observed_services = self.operator_cloud.list_services() self.assertIsInstance(observed_services, list) found = False for s in observed_services: # Test all attributes are returned if s['id'] == service['id']: self.assertEqual(self.new_service_name + '_list', s.get('name')) self.assertEqual('test_type', s.get('type')) found = True self.assertTrue(found, msg='new service not found in service list!') def test_delete_service_by_name(self): # Test delete by name service = self.operator_cloud.create_service( name=self.new_service_name + '_delete_by_name', type='test_type') self.operator_cloud.delete_service(name_or_id=service['name']) observed_services = self.operator_cloud.list_services() found = False for s in observed_services: if s['id'] == service['id']: found = True break self.failUnlessEqual(False, found, message='service was not deleted!') def test_delete_service_by_id(self): # Test delete by id service = self.operator_cloud.create_service( name=self.new_service_name + '_delete_by_id', type='test_type') self.operator_cloud.delete_service(name_or_id=service['id']) observed_services = self.operator_cloud.list_services() found = False for s in observed_services: if s['id'] == service['id']: found = True self.failUnlessEqual(False, found, message='service was not deleted!') shade-1.7.0/shade/tests/functional/test_network.py0000664000567000056710000000662312677256557023473 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_network ---------------------------------- Functional tests for `shade` network methods. """ from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestNetwork(base.BaseFunctionalTestCase): def setUp(self): super(TestNetwork, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') self.network_name = self.getUniqueString('network') self.addCleanup(self._cleanup_networks) def _cleanup_networks(self): exception_list = list() for network in self.operator_cloud.list_networks(): if network['name'].startswith(self.network_name): try: self.operator_cloud.delete_network(network['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_network_basic(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertFalse(net1['shared']) self.assertFalse(net1['router:external']) self.assertTrue(net1['admin_state_up']) def test_create_network_advanced(self): net1 = self.operator_cloud.create_network( name=self.network_name, shared=True, external=True, admin_state_up=False, ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertTrue(net1['router:external']) self.assertTrue(net1['shared']) self.assertFalse(net1['admin_state_up']) def test_create_network_provider_flat(self): net1 = self.operator_cloud.create_network( name=self.network_name, 
shared=True, provider={ 'physical_network': 'private', 'network_type': 'flat', } ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertEqual('flat', net1['provider:network_type']) self.assertEqual('private', net1['provider:physical_network']) self.assertIsNone(net1['provider:segmentation_id']) def test_list_networks_filtered(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIsNotNone(net1) net2 = self.operator_cloud.create_network( name=self.network_name + 'other') self.assertIsNotNone(net2) match = self.operator_cloud.list_networks( filters=dict(name=self.network_name)) self.assertEqual(1, len(match)) self.assertEqual(net1['name'], match[0]['name']) shade-1.7.0/shade/tests/functional/test_port.py0000664000567000056710000001121512677256557022757 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_port ---------------------------------- Functional tests for `shade` port resource. 
""" import string import random from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestPort(base.BaseFunctionalTestCase): def setUp(self): super(TestPort, self).setUp() # Skip Neutron tests if neutron is not present if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') # Generate a unique port name to allow concurrent tests self.new_port_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_ports) def _cleanup_ports(self): exception_list = list() for p in self.operator_cloud.list_ports(): if p['name'].startswith(self.new_port_name): try: self.operator_cloud.delete_port(name_or_id=p['id']) except Exception as e: # We were unable to delete this port, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_port(self): port_name = self.new_port_name + '_create' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertTrue('id' in port) self.assertEqual(port.get('name'), port_name) def test_get_port(self): port_name = self.new_port_name + '_get' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertTrue('id' in port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) # extra_dhcp_opts is added later by Neutron... 
if 'extra_dhcp_opts' in updated_port and 'extra_dhcp_opts' not in port: del updated_port['extra_dhcp_opts'] self.assertEqual(port, updated_port) def test_update_port(self): port_name = self.new_port_name + '_update' new_port_name = port_name + '_new' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) port = self.operator_cloud.update_port( name_or_id=port_name, name=new_port_name) self.assertIsInstance(port, dict) self.assertEqual(port.get('name'), new_port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertEqual(port.get('name'), new_port_name) self.assertEqual(port, updated_port) def test_delete_port(self): port_name = self.new_port_name + '_delete' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertTrue('id' in port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertIsNotNone(updated_port) self.operator_cloud.delete_port(name_or_id=port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertIsNone(updated_port) shade-1.7.0/shade/tests/functional/test_users.py0000664000567000056710000001371312677256557023141 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ test_users ---------------------------------- Functional tests for `shade` user methods. """ from shade import operator_cloud from shade import OpenStackCloudException from shade.tests.functional import base class TestUsers(base.BaseFunctionalTestCase): def setUp(self): super(TestUsers, self).setUp() self.user_prefix = self.getUniqueString('user') self.addCleanup(self._cleanup_users) def _cleanup_users(self): exception_list = list() for user in self.operator_cloud.list_users(): if user['name'].startswith(self.user_prefix): try: self.operator_cloud.delete_user(user['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _create_user(self, **kwargs): domain_id = None i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver not in ('2', '2.0'): domain = self.operator_cloud.get_domain('default') domain_id = domain['id'] return self.operator_cloud.create_user(domain_id=domain_id, **kwargs) def test_list_users(self): users = self.operator_cloud.list_users() self.assertIsNotNone(users) self.assertNotEqual([], users) def test_get_user(self): user = self.operator_cloud.get_user('admin') self.assertIsNotNone(user) self.assertIn('id', user) self.assertIn('name', user) self.assertEqual('admin', user['name']) def test_search_users(self): users = self.operator_cloud.search_users(filters={'enabled': True}) self.assertIsNotNone(users) def test_create_user(self): user_name = self.user_prefix + '_create' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertEqual(user_name, user['name']) self.assertEqual(user_email, user['email']) self.assertTrue(user['enabled']) def test_delete_user(self): user_name = self.user_prefix + '_delete' user_email = 'nobody@nowhere.com' user = 
self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertTrue(self.operator_cloud.delete_user(user['id'])) def test_delete_user_not_found(self): self.assertFalse(self.operator_cloud.delete_user('does_not_exist')) def test_update_user(self): user_name = self.user_prefix + '_updatev3' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertTrue(user['enabled']) # Pass some keystone v3 params. This should work no matter which # version of keystone we are testing against. new_user = self.operator_cloud.update_user( user['id'], name=user_name + '2', email='somebody@nowhere.com', enabled=False, password='secret', description='') self.assertIsNotNone(new_user) self.assertEqual(user['id'], new_user['id']) self.assertEqual(user_name + '2', new_user['name']) self.assertEqual('somebody@nowhere.com', new_user['email']) self.assertFalse(new_user['enabled']) def test_update_user_password(self): user_name = self.user_prefix + '_password' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email, password='old_secret') self.assertIsNotNone(user) self.assertTrue(user['enabled']) # This should work for both v2 and v3 new_user = self.operator_cloud.update_user( user['id'], password='new_secret') self.assertIsNotNone(new_user) self.assertEqual(user['id'], new_user['id']) self.assertEqual(user_name, new_user['name']) self.assertEqual(user_email, new_user['email']) self.assertTrue(new_user['enabled']) self.assertIsNotNone(operator_cloud( username=user_name, password='new_secret', auth_url=self.operator_cloud.auth['auth_url']).keystone_client) def test_users_and_groups(self): i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support groups') group_name = self.getUniqueString('group') self.addCleanup(self.operator_cloud.delete_group, group_name) # Create a group 
group = self.operator_cloud.create_group(group_name, 'test group') self.assertIsNotNone(group) # Create a user user_name = self.user_prefix + '_ug' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) # Add the user to the group self.operator_cloud.add_user_to_group(user_name, group_name) self.assertTrue( self.operator_cloud.is_user_in_group(user_name, group_name)) # Remove them from the group self.operator_cloud.remove_user_from_group(user_name, group_name) self.assertFalse( self.operator_cloud.is_user_in_group(user_name, group_name)) shade-1.7.0/shade/tests/functional/test_endpoints.py0000664000567000056710000001417712677256557024010 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_endpoint ---------------------------------- Functional tests for `shade` endpoint resource. 
""" import string import random from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestEndpoints(base.BaseFunctionalTestCase): endpoint_attributes = ['id', 'region', 'publicurl', 'internalurl', 'service_id', 'adminurl'] def setUp(self): super(TestEndpoints, self).setUp() # Generate a random name for services and regions in this test self.new_item_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_services) self.addCleanup(self._cleanup_endpoints) def _cleanup_endpoints(self): exception_list = list() for e in self.operator_cloud.list_endpoints(): if e.get('region') is not None and \ e['region'].startswith(self.new_item_name): try: self.operator_cloud.delete_endpoint(id=e['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_services(self): exception_list = list() for s in self.operator_cloud.list_services(): if s['name'] is not None and \ s['name'].startswith(self.new_item_name): try: self.operator_cloud.delete_service(name_or_id=s['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_endpoint(self): service_name = self.new_item_name + '_create' service = self.operator_cloud.create_service( name=service_name, type='test_type', description='this is a test description') endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', internal_url='http://internal.test/', admin_url='http://admin.url/', region=service_name) self.assertNotEqual([], endpoints) 
self.assertIsNotNone(endpoints[0].get('id')) # Test None parameters endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', region=service_name) self.assertNotEqual([], endpoints) self.assertIsNotNone(endpoints[0].get('id')) def test_list_endpoints(self): service_name = self.new_item_name + '_list' service = self.operator_cloud.create_service( name=service_name, type='test_type', description='this is a test description') endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', internal_url='http://internal.test/', region=service_name) observed_endpoints = self.operator_cloud.list_endpoints() found = False for e in observed_endpoints: # Test all attributes are returned for endpoint in endpoints: if e['id'] == endpoint['id']: found = True self.assertEqual(service['id'], e['service_id']) if 'interface' in e: if e['interface'] == 'internal': self.assertEqual('http://internal.test/', e['url']) elif e['interface'] == 'public': self.assertEqual('http://public.test/', e['url']) else: self.assertEqual('http://public.test/', e['publicurl']) self.assertEqual('http://internal.test/', e['internalurl']) self.assertEqual(service_name, e['region']) self.assertTrue(found, msg='new endpoint not found in endpoints list!') def test_delete_endpoint(self): service_name = self.new_item_name + '_delete' service = self.operator_cloud.create_service( name=service_name, type='test_type', description='this is a test description') endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', internal_url='http://internal.test/', region=service_name) self.assertNotEqual([], endpoints) for endpoint in endpoints: self.operator_cloud.delete_endpoint(endpoint['id']) observed_endpoints = self.operator_cloud.list_endpoints() found = False for e in observed_endpoints: for endpoint in endpoints: if e['id'] == endpoint['id']: found =
True break self.failUnlessEqual( False, found, message='new endpoint was not deleted!') shade-1.7.0/shade/tests/functional/test_router.py0000664000567000056710000002717712677256557023331 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_router ---------------------------------- Functional tests for `shade` router methods. """ import ipaddress from shade.exc import OpenStackCloudException from shade.tests.functional import base EXPECTED_TOPLEVEL_FIELDS = ( 'id', 'name', 'admin_state_up', 'external_gateway_info', 'tenant_id', 'routes', 'status' ) EXPECTED_GW_INFO_FIELDS = ('network_id', 'enable_snat', 'external_fixed_ips') class TestRouter(base.BaseFunctionalTestCase): def setUp(self): super(TestRouter, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') self.router_prefix = self.getUniqueString('router') self.network_prefix = self.getUniqueString('network') self.subnet_prefix = self.getUniqueString('subnet') # NOTE(Shrews): Order matters! 
self.addCleanup(self._cleanup_networks) self.addCleanup(self._cleanup_subnets) self.addCleanup(self._cleanup_routers) def _cleanup_routers(self): exception_list = list() for router in self.operator_cloud.list_routers(): if router['name'].startswith(self.router_prefix): try: self.operator_cloud.delete_router(router['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_networks(self): exception_list = list() for network in self.operator_cloud.list_networks(): if network['name'].startswith(self.network_prefix): try: self.operator_cloud.delete_network(network['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_subnets(self): exception_list = list() for subnet in self.operator_cloud.list_subnets(): if subnet['name'].startswith(self.subnet_prefix): try: self.operator_cloud.delete_subnet(subnet['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_router_basic(self): net1_name = self.network_prefix + '_net1' net1 = self.operator_cloud.create_network( name=net1_name, external=True) router_name = self.router_prefix + '_create_basic' router = self.operator_cloud.create_router( name=router_name, admin_state_up=True, ext_gateway_net_id=net1['id'], ) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, router) ext_gw_info = router['external_gateway_info'] for field in EXPECTED_GW_INFO_FIELDS: self.assertIn(field, ext_gw_info) self.assertEqual(router_name, router['name']) self.assertEqual('ACTIVE', router['status']) self.assertEqual(net1['id'], ext_gw_info['network_id']) self.assertTrue(ext_gw_info['enable_snat']) def _create_and_verify_advanced_router(self, external_cidr, external_gateway_ip=None): # NOTE(Shrews): The arguments are needed because these 
tests # will run in parallel and we want to make sure that each test # is using different resources to prevent race conditions. net1_name = self.network_prefix + '_net1' sub1_name = self.subnet_prefix + '_sub1' net1 = self.operator_cloud.create_network( name=net1_name, external=True) sub1 = self.operator_cloud.create_subnet( net1['id'], external_cidr, subnet_name=sub1_name, gateway_ip=external_gateway_ip ) # NOTE: ipaddress requires text (not bytes) input, so build a unicode # string that works under both python2 and python3 ip_net = ipaddress.IPv4Network(u'%s' % external_cidr) last_ip = str(list(ip_net.hosts())[-1]) router_name = self.router_prefix + '_create_advanced' router = self.operator_cloud.create_router( name=router_name, admin_state_up=False, ext_gateway_net_id=net1['id'], enable_snat=False, ext_fixed_ips=[ {'subnet_id': sub1['id'], 'ip_address': last_ip} ] ) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, router) ext_gw_info = router['external_gateway_info'] for field in EXPECTED_GW_INFO_FIELDS: self.assertIn(field, ext_gw_info) self.assertEqual(router_name, router['name']) self.assertEqual('ACTIVE', router['status']) self.assertFalse(router['admin_state_up']) self.assertEqual(1, len(ext_gw_info['external_fixed_ips'])) self.assertEqual( sub1['id'], ext_gw_info['external_fixed_ips'][0]['subnet_id'] ) self.assertEqual( last_ip, ext_gw_info['external_fixed_ips'][0]['ip_address'] ) return router def test_create_router_advanced(self): self._create_and_verify_advanced_router(external_cidr='10.2.2.0/24') def test_add_remove_router_interface(self): router = self._create_and_verify_advanced_router( external_cidr='10.3.3.0/24') net_name = self.network_prefix + '_intnet1' sub_name = self.subnet_prefix + '_intsub1' net = self.operator_cloud.create_network(name=net_name) sub = self.operator_cloud.create_subnet( net['id'], '10.4.4.0/24', subnet_name=sub_name, gateway_ip='10.4.4.1' ) iface = self.operator_cloud.add_router_interface( router, subnet_id=sub['id']) self.assertIsNone( self.operator_cloud.remove_router_interface( router, subnet_id=sub['id']) ) # Test return values
*after* the interface is detached so the # resources we've created can be cleaned up if these asserts fail. self.assertIsNotNone(iface) for key in ('id', 'subnet_id', 'port_id', 'tenant_id'): self.assertIn(key, iface) self.assertEqual(router['id'], iface['id']) self.assertEqual(sub['id'], iface['subnet_id']) def test_list_router_interfaces(self): router = self._create_and_verify_advanced_router( external_cidr='10.5.5.0/24') net_name = self.network_prefix + '_intnet1' sub_name = self.subnet_prefix + '_intsub1' net = self.operator_cloud.create_network(name=net_name) sub = self.operator_cloud.create_subnet( net['id'], '10.6.6.0/24', subnet_name=sub_name, gateway_ip='10.6.6.1' ) iface = self.operator_cloud.add_router_interface( router, subnet_id=sub['id']) all_ifaces = self.operator_cloud.list_router_interfaces(router) int_ifaces = self.operator_cloud.list_router_interfaces( router, interface_type='internal') ext_ifaces = self.operator_cloud.list_router_interfaces( router, interface_type='external') self.assertIsNone( self.operator_cloud.remove_router_interface( router, subnet_id=sub['id']) ) # Test return values *after* the interface is detached so the # resources we've created can be cleaned up if these asserts fail. 
self.assertIsNotNone(iface) self.assertEqual(2, len(all_ifaces)) self.assertEqual(1, len(int_ifaces)) self.assertEqual(1, len(ext_ifaces)) ext_fixed_ips = router['external_gateway_info']['external_fixed_ips'] self.assertEqual(ext_fixed_ips[0]['subnet_id'], ext_ifaces[0]['fixed_ips'][0]['subnet_id']) self.assertEqual(sub['id'], int_ifaces[0]['fixed_ips'][0]['subnet_id']) def test_update_router_name(self): router = self._create_and_verify_advanced_router( external_cidr='10.7.7.0/24') new_name = self.router_prefix + '_update_name' updated = self.operator_cloud.update_router( router['id'], name=new_name) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # Name is the only change we expect self.assertEqual(new_name, updated['name']) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['admin_state_up'], updated['admin_state_up']) self.assertEqual(router['external_gateway_info'], updated['external_gateway_info']) def test_update_router_admin_state(self): router = self._create_and_verify_advanced_router( external_cidr='10.8.8.0/24') updated = self.operator_cloud.update_router( router['id'], admin_state_up=True) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # admin_state_up is the only change we expect self.assertTrue(updated['admin_state_up']) self.assertNotEqual(router['admin_state_up'], updated['admin_state_up']) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['name'], updated['name']) self.assertEqual(router['external_gateway_info'], updated['external_gateway_info']) def test_update_router_ext_gw_info(self): router = self._create_and_verify_advanced_router( external_cidr='10.9.9.0/24') # create a new subnet existing_net_id = router['external_gateway_info']['network_id'] sub_name = self.subnet_prefix + '_update' sub = self.operator_cloud.create_subnet( 
existing_net_id, '10.10.10.0/24', subnet_name=sub_name, gateway_ip='10.10.10.1' ) updated = self.operator_cloud.update_router( router['id'], ext_gateway_net_id=existing_net_id, ext_fixed_ips=[ {'subnet_id': sub['id'], 'ip_address': '10.10.10.77'} ] ) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # external_gateway_info is the only change we expect ext_gw_info = updated['external_gateway_info'] self.assertEqual(1, len(ext_gw_info['external_fixed_ips'])) self.assertEqual( sub['id'], ext_gw_info['external_fixed_ips'][0]['subnet_id'] ) self.assertEqual( '10.10.10.77', ext_gw_info['external_fixed_ips'][0]['ip_address'] ) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['name'], updated['name']) self.assertEqual(router['admin_state_up'], updated['admin_state_up']) shade-1.7.0/shade/tests/functional/test_floating_ip_pool.py0000664000567000056710000000340412677256557025320 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip_pool ---------------------------------- Functional tests for floating IP pool resource (managed by nova) """ from shade.tests.functional import base # When using nova-network, floating IP pools are created with nova-manage # command. # When using Neutron, floating IP pools in Nova are mapped from external # network names. 
This works only if the floating-ip-pools nova extension is # available. # For instance, for current implementation of hpcloud that's not true: # nova floating-ip-pool-list returns 404. class TestFloatingIPPool(base.BaseFunctionalTestCase): def setUp(self): super(TestFloatingIPPool, self).setUp() if not self.demo_cloud._has_nova_extension('os-floating-ip-pools'): # Skip this test if the floating-ip-pools extension is not # available on the testing cloud self.skipTest( 'Floating IP pools extension is not available') def test_list_floating_ip_pools(self): pools = self.demo_cloud.list_floating_ip_pools() if not pools: self.assertFalse('no floating-ip pool available') for pool in pools: self.assertTrue('name' in pool) shade-1.7.0/shade/tests/functional/test_domain.py0000664000567000056710000000572312677256557023245 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_domain ---------------------------------- Functional tests for `shade` keystone domain resource.
""" import shade from shade.tests.functional import base class TestDomain(base.BaseFunctionalTestCase): def setUp(self): super(TestDomain, self).setUp() i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support domains') self.domain_prefix = self.getUniqueString('domain') self.addCleanup(self._cleanup_domains) def _cleanup_domains(self): exception_list = list() for domain in self.operator_cloud.list_domains(): if domain['name'].startswith(self.domain_prefix): try: self.operator_cloud.delete_domain(domain['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise shade.OpenStackCloudException('\n'.join(exception_list)) def test_search_domains(self): domain_name = self.domain_prefix + '_search' # Shouldn't find any domain with this name yet results = self.operator_cloud.search_domains( filters=dict(name=domain_name)) self.assertEqual(0, len(results)) # Now create a new domain domain = self.operator_cloud.create_domain(domain_name) self.assertEqual(domain_name, domain['name']) # Now we should find only the new domain results = self.operator_cloud.search_domains( filters=dict(name=domain_name)) self.assertEqual(1, len(results)) self.assertEqual(domain_name, results[0]['name']) def test_update_domain(self): domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) updated = self.operator_cloud.update_domain( domain['id'], name='updated name', description='updated description', enabled=False) self.assertEqual('updated name', updated['name']) self.assertEqual('updated description', updated['description']) self.assertFalse(updated['enabled']) 
shade-1.7.0/shade/tests/functional/hooks/post_test_hook.sh

#!/bin/sh
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

export SHADE_DIR="$BASE/new/shade"

cd $SHADE_DIR
sudo chown -R jenkins:stack $SHADE_DIR

CLOUDS_YAML=/etc/openstack/clouds.yaml

if [ ! -e ${CLOUDS_YAML} ]
then
    # stable/liberty had clouds.yaml in the home/base directory
    sudo mkdir -p /etc/openstack
    sudo cp $BASE/new/.config/openstack/clouds.yaml ${CLOUDS_YAML}
    sudo chown -R jenkins:stack /etc/openstack
fi

# Devstack runs both keystone v2 and v3. An environment variable is set
# within the shade keystone v2 job that tells us which version we should
# test against.
if [ ${SHADE_USE_KEYSTONE_V2:-0} -eq 1 ]
then
    sudo sed -ie "s/identity_api_version: '3'/identity_api_version: '2.0'/g" $CLOUDS_YAML
    sudo sed -ie '/^.*domain_id.*$/d' $CLOUDS_YAML
fi

echo "Running shade functional test suite"
set +e
sudo -E -H -u jenkins tox -efunctional
EXIT_CODE=$?
sudo testr last --subunit > $WORKSPACE/tempest.subunit
set -e

exit $EXIT_CODE

shade-1.7.0/shade/tests/functional/base.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os_client_config as occ

import shade
from shade.tests import base


class BaseFunctionalTestCase(base.TestCase):
    def setUp(self):
        super(BaseFunctionalTestCase, self).setUp()

        self.config = occ.OpenStackConfig()
        demo_config = self.config.get_one_cloud(cloud='devstack')
        self.demo_cloud = shade.OpenStackCloud(
            cloud_config=demo_config,
            log_inner_exceptions=True)
        operator_config = self.config.get_one_cloud(cloud='devstack-admin')
        self.operator_cloud = shade.OperatorCloud(
            cloud_config=operator_config,
            log_inner_exceptions=True)

shade-1.7.0/shade/tests/functional/test_volume.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_volume
----------------------------------

Functional tests for `shade` block storage methods.
"""

from testtools import content

from shade.tests.functional import base


class TestVolume(base.BaseFunctionalTestCase):

    def setUp(self):
        super(TestVolume, self).setUp()
        if not self.demo_cloud.has_service('volume'):
            self.skipTest('volume service not supported by cloud')

    def test_volumes(self):
        '''Test volume and snapshot functionality'''
        volume_name = self.getUniqueString()
        snapshot_name = self.getUniqueString()
        self.addDetail('volume', content.text_content(volume_name))
        self.addCleanup(self.cleanup, volume_name, snapshot_name)
        volume = self.demo_cloud.create_volume(
            display_name=volume_name, size=1)
        snapshot = self.demo_cloud.create_volume_snapshot(
            volume['id'],
            display_name=snapshot_name
        )

        volume_ids = [v['id'] for v in self.demo_cloud.list_volumes()]
        self.assertIn(volume['id'], volume_ids)

        snapshot_list = self.demo_cloud.list_volume_snapshots()
        snapshot_ids = [s['id'] for s in snapshot_list]
        self.assertIn(snapshot['id'], snapshot_ids)

        ret_snapshot = self.demo_cloud.get_volume_snapshot_by_id(
            snapshot['id'])
        self.assertEqual(snapshot['id'], ret_snapshot['id'])

        self.demo_cloud.delete_volume_snapshot(snapshot_name, wait=True)
        self.demo_cloud.delete_volume(volume_name, wait=True)

    def cleanup(self, volume_name, snapshot_name):
        volume = self.demo_cloud.get_volume(volume_name)
        snapshot = self.demo_cloud.get_volume_snapshot(snapshot_name)

        # Need to delete snapshots before volumes
        if snapshot:
            self.demo_cloud.delete_volume_snapshot(snapshot_name)
        if volume:
            self.demo_cloud.delete_volume(volume_name)

shade-1.7.0/shade/tests/functional/test_floating_ip.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_floating_ip
----------------------------------

Functional tests for floating IP resource.
"""

import pprint

from novaclient import exceptions as nova_exc
from testtools import content

from shade import _utils
from shade import meta
from shade.exc import OpenStackCloudException
from shade.tests.functional import base
from shade.tests.functional.util import pick_flavor, pick_image


class TestFloatingIP(base.BaseFunctionalTestCase):
    timeout = 60

    def setUp(self):
        super(TestFloatingIP, self).setUp()

        self.nova = self.demo_cloud.nova_client
        if self.demo_cloud.has_service('network'):
            self.neutron = self.demo_cloud.neutron_client
        self.flavor = pick_flavor(self.nova.flavors.list())
        if self.flavor is None:
            self.assertFalse('no sensible flavor available')
        self.image = pick_image(self.nova.images.list())
        if self.image is None:
            self.assertFalse('no sensible image available')

        # Generate a random name for these tests
        self.new_item_name = self.getUniqueString()

        self.addCleanup(self._cleanup_network)
        self.addCleanup(self._cleanup_servers)

    def _cleanup_network(self):
        exception_list = list()

        # Delete stale networks as well as networks created for this test
        if self.demo_cloud.has_service('network'):
            # Delete routers
            for r in self.demo_cloud.list_routers():
                try:
                    if r['name'].startswith(self.new_item_name):
                        # ToDo: update_router currently won't allow removing
                        # external_gateway_info
                        router = {
                            'external_gateway_info': None
                        }
                        self.neutron.update_router(
                            router=r['id'], body={'router': router})
                        # ToDo: Shade currently doesn't have methods for this
                        for s in self.demo_cloud.list_subnets():
                            if s['name'].startswith(self.new_item_name):
                                try:
                                    self.neutron.remove_interface_router(
                                        router=r['id'],
                                        body={'subnet_id': s['id']})
                                except Exception:
                                    pass
                        self.demo_cloud.delete_router(name_or_id=r['id'])
                except Exception as e:
                    exception_list.append(str(e))
                    continue

            # Delete subnets
            for s in self.demo_cloud.list_subnets():
                if s['name'].startswith(self.new_item_name):
                    try:
                        self.demo_cloud.delete_subnet(name_or_id=s['id'])
                    except Exception as e:
                        exception_list.append(str(e))
                        continue

            # Delete networks
            for n in self.demo_cloud.list_networks():
                if n['name'].startswith(self.new_item_name):
                    try:
                        self.demo_cloud.delete_network(name_or_id=n['id'])
                    except Exception as e:
                        exception_list.append(str(e))
                        continue

        if exception_list:
            # Raise an error: we must make users aware that something went
            # wrong
            raise OpenStackCloudException('\n'.join(exception_list))

    def _cleanup_servers(self):
        exception_list = list()

        # Delete stale servers as well as server created for this test
        for i in self.nova.servers.list():
            if i.name.startswith(self.new_item_name):
                self.nova.servers.delete(i)
                for _ in _utils._iterate_timeout(
                        self.timeout, "Timeout deleting servers"):
                    try:
                        self.nova.servers.get(server=i)
                    except nova_exc.NotFound:
                        break
                    except Exception as e:
                        exception_list.append(str(e))
                        continue

        if exception_list:
            # Raise an error: we must make users aware that something went
            # wrong
            raise OpenStackCloudException('\n'.join(exception_list))

    def _cleanup_ips(self, server):
        exception_list = list()

        fixed_ip = meta.get_server_private_ip(server)

        for ip in self.demo_cloud.list_floating_ips():
            if (ip.get('fixed_ip', None) == fixed_ip
                    or ip.get('fixed_ip_address', None) == fixed_ip):
                try:
                    self.demo_cloud.delete_floating_ip(ip['id'])
                except Exception as e:
                    exception_list.append(str(e))
                    continue

        if exception_list:
            # Raise an error: we must make users aware that something went
            # wrong
            raise OpenStackCloudException('\n'.join(exception_list))

    def _setup_networks(self):
        if self.demo_cloud.has_service('network'):
            # Create a network
            self.test_net = self.demo_cloud.create_network(
                name=self.new_item_name + '_net')
            # Create a subnet on it
            self.test_subnet = self.demo_cloud.create_subnet(
                subnet_name=self.new_item_name + '_subnet',
                network_name_or_id=self.test_net['id'],
                cidr='10.24.4.0/24',
                enable_dhcp=True
            )
            # Create a router
            self.test_router = self.demo_cloud.create_router(
                name=self.new_item_name + '_router')
            # Attach the router to an external network
            ext_nets = self.demo_cloud.search_networks(
                filters={'router:external': True})
            self.demo_cloud.update_router(
                name_or_id=self.test_router['id'],
                ext_gateway_net_id=ext_nets[0]['id'])
            # Attach the router to the internal subnet
            self.neutron.add_interface_router(
                router=self.test_router['id'],
                body={'subnet_id': self.test_subnet['id']})

            # Select the network for creating new servers
            self.nic = {'net-id': self.test_net['id']}
            self.addDetail(
                'networks-neutron',
                content.text_content(pprint.pformat(
                    self.demo_cloud.list_networks())))
        else:
            # ToDo: remove once we have list/get methods for nova networks
            nets = self.demo_cloud.nova_client.networks.list()
            self.addDetail(
                'networks-nova',
                content.text_content(pprint.pformat(
                    nets)))
            self.nic = {'net-id': nets[0].id}

    def test_private_ip(self):
        self._setup_networks()

        new_server = self.demo_cloud.get_openstack_vars(
            self.demo_cloud.create_server(
                wait=True, name=self.new_item_name + '_server',
                image=self.image, flavor=self.flavor, nics=[self.nic]))

        self.addDetail(
            'server', content.text_content(pprint.pformat(new_server)))
        self.assertNotEqual(new_server['private_v4'], '')

    def test_add_auto_ip(self):
        self._setup_networks()

        new_server = self.demo_cloud.create_server(
            wait=True, name=self.new_item_name + '_server',
            image=self.image, flavor=self.flavor, nics=[self.nic])

        # ToDo: remove the following iteration when create_server waits for
        # the IP to be attached
        ip = None
        for _ in _utils._iterate_timeout(
                self.timeout,
                "Timeout waiting for IP address to be attached"):
            ip = meta.get_server_external_ipv4(self.demo_cloud, new_server)
            if ip is not None:
                break
            new_server = self.demo_cloud.get_server(new_server.id)

        self.addCleanup(self._cleanup_ips, new_server)

    def test_detach_ip_from_server(self):
        self._setup_networks()

        new_server = self.demo_cloud.create_server(
            wait=True, name=self.new_item_name + '_server',
            image=self.image, flavor=self.flavor, nics=[self.nic])

        # ToDo: remove the following iteration when create_server waits for
        # the IP to be attached
        ip = None
        for _ in _utils._iterate_timeout(
                self.timeout,
                "Timeout waiting for IP address to be attached"):
            ip = meta.get_server_external_ipv4(self.demo_cloud, new_server)
            if ip is not None:
                break
            new_server = self.demo_cloud.get_server(new_server.id)

        self.addCleanup(self._cleanup_ips, new_server)

        f_ip = self.demo_cloud.get_floating_ip(
            id=None, filters={'floating_ip_address': ip})
        self.demo_cloud.detach_ip_from_server(
            server_id=new_server.id, floating_ip_id=f_ip['id'])

shade-1.7.0/shade/tests/functional/test_range_search.py

# Copyright (c) 2016 IBM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
from shade import exc
from shade.tests.functional import base


class TestRangeSearch(base.BaseFunctionalTestCase):

    def test_range_search_bad_range(self):
        flavors = self.demo_cloud.list_flavors()
        self.assertRaises(
            exc.OpenStackCloudException,
            self.demo_cloud.range_search, flavors, {"ram": "<1a0"})

    def test_range_search_exact(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": "4096"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.medium", result[0]['name'])

    def test_range_search_min(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": "MIN"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.tiny", result[0]['name'])

    def test_range_search_max(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": "MAX"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.xlarge", result[0]['name'])

    def test_range_search_lt(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": "<4096"})
        self.assertIsInstance(result, list)
        self.assertEqual(2, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.tiny", flavor_names)
        self.assertIn("m1.small", flavor_names)

    def test_range_search_gt(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": ">4096"})
        self.assertIsInstance(result, list)
        self.assertEqual(2, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.large", flavor_names)
        self.assertIn("m1.xlarge", flavor_names)

    def test_range_search_le(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": "<=4096"})
        self.assertIsInstance(result, list)
        self.assertEqual(3, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.tiny", flavor_names)
        self.assertIn("m1.small", flavor_names)
        self.assertIn("m1.medium", flavor_names)

    def test_range_search_ge(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(flavors, {"ram": ">=4096"})
        self.assertIsInstance(result, list)
        self.assertEqual(3, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.medium", flavor_names)
        self.assertIn("m1.large", flavor_names)
        self.assertIn("m1.xlarge", flavor_names)

    def test_range_search_multi_1(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(
            flavors, {"ram": "MIN", "vcpus": "MIN"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        self.assertEqual("m1.tiny", result[0]['name'])

    def test_range_search_multi_2(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(
            flavors, {"ram": "<8192", "vcpus": "MIN"})
        self.assertIsInstance(result, list)
        self.assertEqual(2, len(result))
        flavor_names = [r['name'] for r in result]
        # All of these should have 1 vcpu
        self.assertIn("m1.tiny", flavor_names)
        self.assertIn("m1.small", flavor_names)

    def test_range_search_multi_3(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(
            flavors, {"ram": ">=4096", "vcpus": "<6"})
        self.assertIsInstance(result, list)
        self.assertEqual(2, len(result))
        flavor_names = [r['name'] for r in result]
        self.assertIn("m1.medium", flavor_names)
        self.assertIn("m1.large", flavor_names)

    def test_range_search_multi_4(self):
        flavors = self.demo_cloud.list_flavors()
        result = self.demo_cloud.range_search(
            flavors, {"ram": ">=4096", "vcpus": "MAX"})
        self.assertIsInstance(result, list)
        self.assertEqual(1, len(result))
        # This is the only result that should have max vcpu
        self.assertEqual("m1.xlarge", result[0]['name'])

shade-1.7.0/shade/tests/functional/util.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
util
--------------------------------

Util methods for functional tests
"""

import operator


def pick_flavor(flavors):
    """Given a flavor list pick the smallest one."""
    # Enable running functional tests against rax - which requires
    # performance flavors be used for boot from volume
    for flavor in sorted(
            flavors,
            key=operator.attrgetter('ram')):
        if 'performance' in flavor.name:
            return flavor
    for flavor in sorted(
            flavors,
            key=operator.attrgetter('ram')):
        return flavor


def pick_image(images):
    for image in images:
        if image.name.startswith('cirros') and image.name.endswith('-uec'):
            return image
    for image in images:
        if image.name.lower().startswith('ubuntu'):
            return image
    for image in images:
        if image.name.lower().startswith('centos'):
            return image

shade-1.7.0/shade/tests/functional/test_compute.py

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" test_compute ---------------------------------- Functional tests for `shade` compute methods. """ from shade.tests.functional import base from shade.tests.functional.util import pick_flavor, pick_image class TestCompute(base.BaseFunctionalTestCase): def setUp(self): super(TestCompute, self).setUp() self.flavor = pick_flavor(self.demo_cloud.list_flavors()) if self.flavor is None: self.assertFalse('no sensible flavor available') self.image = pick_image(self.demo_cloud.list_images()) if self.image is None: self.assertFalse('no sensible image available') self.server_name = self.getUniqueString() def _cleanup_servers_and_volumes(self, server_name): """Delete the named server and any attached volumes. Adding separate cleanup calls for servers and volumes can be tricky since they need to be done in the proper order. And sometimes deleting a server can start the process of deleting a volume if it is booted from that volume. This encapsulates that logic. """ server = self.demo_cloud.get_server(server_name) if not server: return volumes = self.demo_cloud.get_volumes(server) self.demo_cloud.delete_server(server.name, wait=True) for volume in volumes: if volume.status != 'deleting': self.demo_cloud.delete_volume(volume.id, wait=True) def test_create_and_delete_server(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.demo_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.demo_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.demo_cloud.get_server(self.server_name)) def test_create_and_delete_server_with_admin_pass(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.demo_cloud.create_server( 
name=self.server_name, image=self.image, flavor=self.flavor, admin_pass='sheiqu9loegahSh', wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertEqual(server['adminPass'], 'sheiqu9loegahSh') self.assertTrue( self.demo_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.demo_cloud.get_server(self.server_name)) def test_get_image_id(self): self.assertEqual( self.image.id, self.demo_cloud.get_image_id(self.image.id)) self.assertEqual( self.image.id, self.demo_cloud.get_image_id(self.image.name)) def test_get_image_name(self): self.assertEqual( self.image.name, self.demo_cloud.get_image_name(self.image.id)) self.assertEqual( self.image.name, self.demo_cloud.get_image_name(self.image.name)) def _assert_volume_attach(self, server, volume_id=None): self.assertEqual(self.server_name, server['name']) self.assertEqual('', server['image']) self.assertEqual(self.flavor.id, server['flavor']['id']) volumes = self.demo_cloud.get_volumes(server) self.assertEqual(1, len(volumes)) volume = volumes[0] if volume_id: self.assertEqual(volume_id, volume['id']) else: volume_id = volume['id'] self.assertEqual(1, len(volume['attachments']), 1) self.assertEqual(server['id'], volume['attachments'][0]['server_id']) return volume_id def test_create_boot_from_volume_image(self): if not self.demo_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.demo_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, boot_from_volume=True, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server) volume = self.demo_cloud.get_volume(volume_id) self.assertIsNotNone(volume) self.assertEqual(volume['name'], volume['display_name']) self.assertEqual(True, volume['bootable']) self.assertEqual(server['id'], 
volume['attachments'][0]['server_id']) self.assertTrue(self.demo_cloud.delete_server(server.id, wait=True)) self.assertTrue(self.demo_cloud.delete_volume(volume.id, wait=True)) self.assertIsNone(self.demo_cloud.get_server(server.id)) self.assertIsNone(self.demo_cloud.get_volume(volume.id)) def test_create_terminate_volume_image(self): if not self.demo_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.demo_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, boot_from_volume=True, terminate_volume=True, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server) self.assertTrue( self.demo_cloud.delete_server(self.server_name, wait=True)) volume = self.demo_cloud.get_volume(volume_id) # We can either get None (if the volume delete was quick), or a volume # that is in the process of being deleted. if volume: self.assertEquals('deleting', volume.status) self.assertIsNone(self.demo_cloud.get_server(self.server_name)) def test_create_boot_from_volume_preexisting(self): if not self.demo_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) volume = self.demo_cloud.create_volume( size=1, name=self.server_name, image=self.image, wait=True) server = self.demo_cloud.create_server( name=self.server_name, image=None, flavor=self.flavor, boot_volume=volume, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server, volume_id=volume['id']) self.assertTrue( self.demo_cloud.delete_server(self.server_name, wait=True)) self.addCleanup(self.demo_cloud.delete_volume, volume_id) volume = self.demo_cloud.get_volume(volume_id) self.assertIsNotNone(volume) self.assertEqual(volume['name'], volume['display_name']) self.assertEqual(True, volume['bootable']) self.assertEqual([], volume['attachments']) 
self.assertTrue(self.demo_cloud.delete_volume(volume_id)) self.assertIsNone(self.demo_cloud.get_server(self.server_name)) self.assertIsNone(self.demo_cloud.get_volume(volume_id)) def test_create_boot_from_volume_preexisting_terminate(self): if not self.demo_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) volume = self.demo_cloud.create_volume( size=1, name=self.server_name, image=self.image, wait=True) server = self.demo_cloud.create_server( name=self.server_name, image=None, flavor=self.flavor, boot_volume=volume, terminate_volume=True, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server, volume_id=volume['id']) self.assertTrue( self.demo_cloud.delete_server(self.server_name, wait=True)) volume = self.demo_cloud.get_volume(volume_id) # We can either get None (if the volume delete was quick), or a volume # that is in the process of being deleted. if volume: self.assertEquals('deleting', volume.status) self.assertIsNone(self.demo_cloud.get_server(self.server_name)) def test_create_image_snapshot_wait_active(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.demo_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, admin_pass='sheiqu9loegahSh', wait=True) image = self.demo_cloud.create_image_snapshot('test-snapshot', server, wait=True) self.addCleanup(self.demo_cloud.delete_image, image['id']) self.assertEqual('active', image['status']) shade-1.7.0/shade/tests/functional/test_image.py0000664000567000056710000000467612677256557023072 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_compute ---------------------------------- Functional tests for `shade` image methods. """ import filecmp import os import tempfile from shade.tests.functional import base from shade.tests.functional.util import pick_image class TestImage(base.BaseFunctionalTestCase): def setUp(self): super(TestImage, self).setUp() self.image = pick_image(self.demo_cloud.nova_client.images.list()) def test_create_image(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write('\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: self.demo_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) finally: self.demo_cloud.delete_image(image_name, wait=True) def test_download_image(self): test_image = tempfile.NamedTemporaryFile(delete=False) self.addCleanup(os.remove, test_image.name) test_image.write('\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') self.demo_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.addCleanup(self.demo_cloud.delete_image, image_name, wait=True) output = os.path.join(tempfile.gettempdir(), self.getUniqueString()) self.demo_cloud.download_image(image_name, output) self.addCleanup(os.remove, output) self.assertTrue(filecmp.cmp(test_image.name, output), "Downloaded contents don't match created image") 
shade-1.7.0/shade/tests/functional/__init__.py0000664000567000056710000000000012677256557022461 0ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/functional/test_inventory.py0000664000567000056710000000741112677256557024033 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_inventory ---------------------------------- Functional tests for `shade` inventory methods. """ from shade import inventory from shade.tests.functional import base from shade.tests.functional.util import pick_flavor, pick_image class TestInventory(base.BaseFunctionalTestCase): def setUp(self): super(TestInventory, self).setUp() # This needs to use an admin account, otherwise a public IP # is not allocated from devstack. 
        self.inventory = inventory.OpenStackInventory()
        self.server_name = 'test_inventory_server'
        self.nova = self.operator_cloud.nova_client
        self.flavor = pick_flavor(self.nova.flavors.list())
        if self.flavor is None:
            self.assertTrue(False, 'no sensible flavor available')
        self.image = pick_image(self.nova.images.list())
        if self.image is None:
            self.assertTrue(False, 'no sensible image available')
        self.addCleanup(self._cleanup_servers)
        self.operator_cloud.create_server(
            name=self.server_name, image=self.image, flavor=self.flavor,
            wait=True, auto_ip=True)

    def _cleanup_servers(self):
        for i in self.nova.servers.list():
            if i.name.startswith(self.server_name):
                self.nova.servers.delete(i)

    def _test_host_content(self, host):
        self.assertEquals(host['image']['id'], self.image.id)
        self.assertNotIn('links', host['image'])
        self.assertEquals(host['flavor']['id'], self.flavor.id)
        self.assertNotIn('links', host['flavor'])
        self.assertNotIn('links', host)
        self.assertIsInstance(host['volumes'], list)
        self.assertIsInstance(host['metadata'], dict)
        self.assertIn('interface_ip', host)

    def _test_expanded_host_content(self, host):
        self.assertEquals(host['image']['name'], self.image.name)
        self.assertEquals(host['flavor']['name'], self.flavor.name)

    def test_get_host(self):
        host = self.inventory.get_host(self.server_name)
        self.assertIsNotNone(host)
        self.assertEquals(host['name'], self.server_name)
        self._test_host_content(host)
        self._test_expanded_host_content(host)
        host_found = False
        for host in self.inventory.list_hosts():
            if host['name'] == self.server_name:
                host_found = True
                self._test_host_content(host)
        self.assertTrue(host_found)

    def test_get_host_no_detail(self):
        host = self.inventory.get_host(self.server_name, expand=False)
        self.assertIsNotNone(host)
        self.assertEquals(host['name'], self.server_name)

        self.assertEquals(host['image']['id'], self.image.id)
        self.assertNotIn('links', host['image'])
        self.assertNotIn('name', host['name'])
        self.assertEquals(host['flavor']['id'], self.flavor.id)
        self.assertNotIn('links', host['flavor'])
        self.assertNotIn('name', host['flavor'])

        host_found = False
        for host in self.inventory.list_hosts(expand=False):
            if host['name'] == self.server_name:
                host_found = True
                self._test_host_content(host)
        self.assertTrue(host_found)

shade-1.7.0/shade/tests/functional/test_flavor.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

"""
test_flavor
----------------------------------

Functional tests for `shade` flavor resource.
"""

from shade.exc import OpenStackCloudException
from shade.tests.functional import base


class TestFlavor(base.BaseFunctionalTestCase):

    def setUp(self):
        super(TestFlavor, self).setUp()

        # Generate a random name for flavors in this test
        self.new_item_name = self.getUniqueString('flavor')

        self.addCleanup(self._cleanup_flavors)

    def _cleanup_flavors(self):
        exception_list = list()
        for f in self.operator_cloud.list_flavors():
            if f['name'].startswith(self.new_item_name):
                try:
                    self.operator_cloud.delete_flavor(f['id'])
                except Exception as e:
                    # We were unable to delete a flavor, let's try with next
                    exception_list.append(str(e))
                    continue
        if exception_list:
            # Raise an error: we must make users aware that something went
            # wrong
            raise OpenStackCloudException('\n'.join(exception_list))

    def test_create_flavor(self):
        flavor_name = self.new_item_name + '_create'
        flavor_kwargs = dict(
            name=flavor_name, ram=1024, vcpus=2, disk=10, ephemeral=5,
            swap=100, rxtx_factor=1.5, is_public=True
        )
        flavor = self.operator_cloud.create_flavor(**flavor_kwargs)
        self.assertIsNotNone(flavor['id'])
        # When properly normalized, we should always get an extra_specs
        # and expect empty dict on create.
self.assertIn('extra_specs', flavor) self.assertEqual({}, flavor['extra_specs']) # We should also always have ephemeral and public attributes self.assertIn('ephemeral', flavor) self.assertIn('OS-FLV-EXT-DATA:ephemeral', flavor) self.assertEqual(5, flavor['ephemeral']) self.assertIn('is_public', flavor) self.assertIn('os-flavor-access:is_public', flavor) self.assertEqual(True, flavor['is_public']) for key in flavor_kwargs.keys(): self.assertIn(key, flavor) for key, value in flavor_kwargs.items(): self.assertEqual(value, flavor[key]) def test_list_flavors(self): pub_flavor_name = self.new_item_name + '_public' priv_flavor_name = self.new_item_name + '_private' public_kwargs = dict( name=pub_flavor_name, ram=1024, vcpus=2, disk=10, is_public=True ) private_kwargs = dict( name=priv_flavor_name, ram=1024, vcpus=2, disk=10, is_public=False ) # Create a public and private flavor. We expect both to be listed # for an operator. self.operator_cloud.create_flavor(**public_kwargs) self.operator_cloud.create_flavor(**private_kwargs) flavors = self.operator_cloud.list_flavors() # Flavor list will include the standard devstack flavors. We just want # to make sure both of the flavors we just created are present. 
found = [] for f in flavors: # extra_specs should be added within list_flavors() self.assertIn('extra_specs', f) if f['name'] in (pub_flavor_name, priv_flavor_name): found.append(f) self.assertEqual(2, len(found)) def test_flavor_access(self): priv_flavor_name = self.new_item_name + '_private' private_kwargs = dict( name=priv_flavor_name, ram=1024, vcpus=2, disk=10, is_public=False ) new_flavor = self.operator_cloud.create_flavor(**private_kwargs) # Validate the 'demo' user cannot see the new flavor flavors = self.demo_cloud.search_flavors(priv_flavor_name) self.assertEqual(0, len(flavors)) # We need the tenant ID for the 'demo' user project = self.operator_cloud.get_project('demo') self.assertIsNotNone(project) # Now give 'demo' access self.operator_cloud.add_flavor_access(new_flavor['id'], project['id']) # Now see if the 'demo' user has access to it flavors = self.demo_cloud.search_flavors(priv_flavor_name) self.assertEqual(1, len(flavors)) self.assertEqual(priv_flavor_name, flavors[0]['name']) # Now revoke the access and make sure we can't find it self.operator_cloud.remove_flavor_access(new_flavor['id'], project['id']) flavors = self.demo_cloud.search_flavors(priv_flavor_name) self.assertEqual(0, len(flavors)) def test_set_unset_flavor_specs(self): """ Test setting and unsetting flavor extra specs """ flavor_name = self.new_item_name + '_spec_test' kwargs = dict( name=flavor_name, ram=1024, vcpus=2, disk=10 ) new_flavor = self.operator_cloud.create_flavor(**kwargs) # Expect no extra_specs self.assertEqual({}, new_flavor['extra_specs']) # Now set them extra_specs = {'foo': 'aaa', 'bar': 'bbb'} self.operator_cloud.set_flavor_specs(new_flavor['id'], extra_specs) mod_flavor = self.operator_cloud.get_flavor(new_flavor['id']) # Verify extra_specs were set self.assertIn('extra_specs', mod_flavor) self.assertEqual(extra_specs, mod_flavor['extra_specs']) # Unset the 'foo' value self.operator_cloud.unset_flavor_specs(mod_flavor['id'], ['foo']) mod_flavor = 
self.operator_cloud.get_flavor(new_flavor['id']) # Verify 'foo' is unset and 'bar' is still set self.assertEqual({'bar': 'bbb'}, mod_flavor['extra_specs']) shade-1.7.0/shade/tests/ansible/0000775000567000056710000000000012677257023017622 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/hooks/0000775000567000056710000000000012677257023020745 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/hooks/post_test_hook.sh0000775000567000056710000000201712677256557024363 0ustar jenkinsjenkins00000000000000#!/bin/sh # -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. export SHADE_DIR="$BASE/new/shade" cd $SHADE_DIR sudo chown -R jenkins:stack $SHADE_DIR echo "Running shade Ansible test suite" if [ ${SHADE_ANSIBLE_DEV:-0} -eq 1 ] then # Use the upstream development version of Ansible set +e sudo -E -H -u jenkins tox -eansible -- -d EXIT_CODE=$? set -e else # Use the release version of Ansible set +e sudo -E -H -u jenkins tox -eansible EXIT_CODE=$? 
set -e fi exit $EXIT_CODE shade-1.7.0/shade/tests/ansible/run.yml0000664000567000056710000000123312677256557021163 0ustar jenkinsjenkins00000000000000--- - hosts: localhost connection: local gather_facts: true roles: - { role: auth, tags: auth } - { role: client_config, tags: client_config } - { role: image, tags: image } - { role: keypair, tags: keypair } - { role: network, tags: network } - { role: nova_flavor, tags: nova_flavor } - { role: object, tags: object } - { role: port, tags: port } - { role: router, tags: router } - { role: security_group, tags: security_group } - { role: server, tags: server } - { role: subnet, tags: subnet } - { role: user, tags: user } - { role: user_group, tags: user_group } - { role: volume, tags: volume } shade-1.7.0/shade/tests/ansible/README.txt0000664000567000056710000000211112677256557021326 0ustar jenkinsjenkins00000000000000This directory contains a testing infrastructure for the Ansible OpenStack modules. You will need a clouds.yaml file in order to run the tests. You must provide a value for the `cloud` variable for each run (using the -e option) as a default is not currently provided. If you want to run these tests against devstack, it is easiest to use the tox target. This assumes you have a devstack-admin cloud defined in your clouds.yaml file that points to devstack. Some examples of using tox: tox -e ansible tox -e ansible keypair security_group If you want to run these tests directly, or against different clouds, then you'll need to use the ansible-playbook command that comes with the Ansible distribution and feed it the run.yml playbook. 
Some examples: # Run all module tests against a provider ansible-playbook run.yml -e "cloud=hp" # Run only the keypair and security_group tests ansible-playbook run.yml -e "cloud=hp" --tags "keypair,security_group" # Run all tests except security_group ansible-playbook run.yml -e "cloud=hp" --skip-tags "security_group" shade-1.7.0/shade/tests/ansible/roles/0000775000567000056710000000000012677257023020746 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/user_group/0000775000567000056710000000000012677257023023140 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/user_group/tasks/0000775000567000056710000000000012677257023024265 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/user_group/tasks/main.yml0000664000567000056710000000116512677256557025752 0ustar jenkinsjenkins00000000000000--- - name: Create user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: ansible.user@nowhere.net domain: default default_project: demo register: user - name: Assign user to nonadmins group os_user_group: cloud: "{{ cloud }}" state: present user: ansible_user group: nonadmins - name: Remove user from nonadmins group os_user_group: cloud: "{{ cloud }}" state: absent user: ansible_user group: nonadmins - name: Delete user os_user: cloud: "{{ cloud }}" state: absent name: ansible_user shade-1.7.0/shade/tests/ansible/roles/port/0000775000567000056710000000000012677257023021732 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/port/tasks/0000775000567000056710000000000012677257023023057 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/port/tasks/main.yml0000664000567000056710000000317512677256557024547 0ustar jenkinsjenkins00000000000000--- - name: Create network os_network: cloud: "{{ cloud }}" state: present name: "{{ network_name }}" external: True - name: Create subnet os_subnet: cloud: "{{ cloud }}" state: present name: "{{ 
subnet_name }}" network_name: "{{ network_name }}" cidr: 10.5.5.0/24 - name: Create port (no security group) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" no_security_groups: True fixed_ips: - ip_address: 10.5.5.69 register: port - debug: var=port - name: Delete port (no security group) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Create security group os_security_group: cloud: "{{ cloud }}" state: present name: "{{ secgroup_name }}" description: Test group - name: Create port (with security group) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" fixed_ips: - ip_address: 10.5.5.69 security_groups: - "{{ secgroup_name }}" register: port - debug: var=port - name: Delete port (with security group) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Delete security group os_security_group: cloud: "{{ cloud }}" state: absent name: "{{ secgroup_name }}" - name: Delete subnet os_subnet: cloud: "{{ cloud }}" state: absent name: "{{ subnet_name }}" - name: Delete network os_network: cloud: "{{ cloud }}" state: absent name: "{{ network_name }}" shade-1.7.0/shade/tests/ansible/roles/port/vars/0000775000567000056710000000000012677257023022705 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/port/vars/main.yml0000664000567000056710000000020112677256557024360 0ustar jenkinsjenkins00000000000000network_name: ansible_port_network subnet_name: ansible_port_subnet port_name: ansible_port secgroup_name: ansible_port_secgroup shade-1.7.0/shade/tests/ansible/roles/auth/0000775000567000056710000000000012677257023021707 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/auth/tasks/0000775000567000056710000000000012677257023023034 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/auth/tasks/main.yml0000664000567000056710000000014512677256557024516 0ustar 
jenkinsjenkins00000000000000--- - name: Authenticate to the cloud os_auth: cloud={{ cloud }} - debug: var=service_catalog shade-1.7.0/shade/tests/ansible/roles/image/0000775000567000056710000000000012677257023022030 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/image/tasks/0000775000567000056710000000000012677257023023155 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/image/tasks/main.yml0000664000567000056710000000215112677256557024636 0ustar jenkinsjenkins00000000000000--- - name: Create a test image file shell: mktemp register: tmp_file - name: Fill test image file to 1MB shell: truncate -s 1048576 {{ tmp_file.stdout }} - name: Create raw image (defaults) os_image: cloud: "{{ cloud }}" state: present name: "{{ image_name }}" filename: "{{ tmp_file.stdout }}" disk_format: raw register: image - debug: var=image - name: Delete raw image (defaults) os_image: cloud: "{{ cloud }}" state: absent name: "{{ image_name }}" - name: Create raw image (complex) os_image: cloud: "{{ cloud }}" state: present name: "{{ image_name }}" filename: "{{ tmp_file.stdout }}" disk_format: raw is_public: True min_disk: 10 min_ram: 1024 kernel: cirros-vmlinuz ramdisk: cirros-initrd properties: cpu_arch: x86_64 distro: ubuntu register: image - debug: var=image - name: Delete raw image (complex) os_image: cloud: "{{ cloud }}" state: absent name: "{{ image_name }}" - name: Delete test image file file: name: "{{ tmp_file.stdout }}" state: absent shade-1.7.0/shade/tests/ansible/roles/image/vars/0000775000567000056710000000000012677257023023003 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/image/vars/main.yml0000664000567000056710000000003212677256557024460 0ustar jenkinsjenkins00000000000000image_name: ansible_image shade-1.7.0/shade/tests/ansible/roles/network/0000775000567000056710000000000012677257023022437 5ustar 
jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/network/tasks/0000775000567000056710000000000012677257023023564 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/network/tasks/main.yml0000664000567000056710000000046612677256557025254 0ustar jenkinsjenkins00000000000000--- - name: Create network os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: present shared: "{{ network_shared }}" external: "{{ network_external }}" - name: Delete network os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: absent shade-1.7.0/shade/tests/ansible/roles/network/vars/0000775000567000056710000000000012677257023023412 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/network/vars/main.yml0000664000567000056710000000011212677256557025066 0ustar jenkinsjenkins00000000000000network_name: shade_network network_shared: false network_external: false shade-1.7.0/shade/tests/ansible/roles/client_config/0000775000567000056710000000000012677257023023551 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/client_config/tasks/0000775000567000056710000000000012677257023024676 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/client_config/tasks/main.yml0000664000567000056710000000023312677256557026356 0ustar jenkinsjenkins00000000000000--- - name: List all profiles os_client_config: register: list # WARNING: This will output sensitive authentication information!!!! 
- debug: var=list shade-1.7.0/shade/tests/ansible/roles/keypair/0000775000567000056710000000000012677257023022412 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/keypair/tasks/0000775000567000056710000000000012677257023023537 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/keypair/tasks/main.yml0000664000567000056710000000240612677256557025223 0ustar jenkinsjenkins00000000000000--- - name: Create keypair (non-existing) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present - name: Delete keypair (non-existing) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Generate test key file user: name: "{{ ansible_env.USER }}" generate_ssh_key: yes ssh_key_file: .ssh/shade_id_rsa - name: Create keypair (file) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present public_key_file: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa.pub" - name: Delete keypair (file) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Create keypair (key) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present public_key: "{{ lookup('file', '~/.ssh/shade_id_rsa.pub') }}" - name: Delete keypair (key) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Delete test key pub file file: name: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa.pub" state: absent - name: Delete test key pvt file file: name: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa" state: absent shade-1.7.0/shade/tests/ansible/roles/keypair/vars/0000775000567000056710000000000012677257023023365 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/keypair/vars/main.yml0000664000567000056710000000003412677256557025044 0ustar jenkinsjenkins00000000000000keypair_name: shade_keypair shade-1.7.0/shade/tests/ansible/roles/user/0000775000567000056710000000000012677257023021724 5ustar 
jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/user/tasks/0000775000567000056710000000000012677257023023051 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/user/tasks/main.yml0000664000567000056710000000107012677256557024531 0ustar jenkinsjenkins00000000000000--- - name: Create user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: ansible.user@nowhere.net domain: default default_project: demo register: user - debug: var=user - name: Update user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: updated.ansible.user@nowhere.net register: updateduser - debug: var=updateduser - name: Delete user os_user: cloud: "{{ cloud }}" state: absent name: ansible_user shade-1.7.0/shade/tests/ansible/roles/server/0000775000567000056710000000000012677257023022254 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/server/tasks/0000775000567000056710000000000012677257023023401 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/server/tasks/main.yml0000664000567000056710000000267712677256562025073 0ustar jenkinsjenkins00000000000000--- - name: Create server with meta as CSV os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" auto_floating_ip: false meta: "key1=value1,key2=value2" wait: true register: server - debug: var=server - name: Delete server with meta as CSV os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server with meta as dict os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" auto_floating_ip: false network: "{{ server_network }}" meta: key1: value1 key2: value2 wait: true register: server - debug: var=server - name: Delete server with meta as dict os_server: cloud: "{{ cloud }}" state: absent name: "{{ 
server_name }}" wait: true - name: Create server (FIP from pool/network) os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" floating_ip_pools: - public wait: true register: server - debug: var=server - name: Delete server (FIP from pool/network) os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true shade-1.7.0/shade/tests/ansible/roles/server/vars/0000775000567000056710000000000012677257023023227 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/server/vars/main.yaml0000664000567000056710000000010412677256557025045 0ustar jenkinsjenkins00000000000000server_network: private server_name: ansible_server flavor: m1.tiny shade-1.7.0/shade/tests/ansible/roles/router/0000775000567000056710000000000012677257023022266 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/router/tasks/0000775000567000056710000000000012677257023023413 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/router/tasks/main.yml0000664000567000056710000000307112677256557025076 0ustar jenkinsjenkins00000000000000--- - name: Create external network os_network: cloud: "{{ cloud }}" state: present name: "{{ external_network_name }}" external: true - name: Create internal network os_network: cloud: "{{ cloud }}" state: present name: "{{ network_name }}" external: false - name: Create subnet1 os_subnet: cloud: "{{ cloud }}" state: present network_name: "{{ external_network_name }}" name: shade_subnet1 cidr: 10.6.6.0/24 - name: Create subnet2 os_subnet: cloud: "{{ cloud }}" state: present network_name: "{{ network_name }}" name: shade_subnet2 cidr: 10.7.7.0/24 - name: Create router os_router: cloud: "{{ cloud }}" state: present name: "{{ router_name }}" network: "{{ external_network_name }}" - name: Update router os_router: cloud: "{{ cloud }}" state: present name: "{{ router_name }}" network: "{{ 
external_network_name }}" interfaces: - shade_subnet2 - name: Delete router os_router: cloud: "{{ cloud }}" state: absent name: "{{ router_name }}" - name: Delete subnet1 os_subnet: cloud: "{{ cloud }}" state: absent name: shade_subnet1 - name: Delete subnet2 os_subnet: cloud: "{{ cloud }}" state: absent name: shade_subnet2 - name: Delete internal network os_network: cloud: "{{ cloud }}" state: absent name: "{{ network_name }}" - name: Delete external network os_network: cloud: "{{ cloud }}" state: absent name: "{{ external_network_name }}" shade-1.7.0/shade/tests/ansible/roles/router/vars/0000775000567000056710000000000012677257023023241 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/router/vars/main.yml0000664000567000056710000000011012677256557024713 0ustar jenkinsjenkins00000000000000external_network_name: ansible_external_net router_name: ansible_router shade-1.7.0/shade/tests/ansible/roles/object/0000775000567000056710000000000012677257023022214 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/object/tasks/0000775000567000056710000000000012677257023023341 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/object/tasks/main.yml0000664000567000056710000000136512677256557025030 0ustar jenkinsjenkins00000000000000--- - name: Create a test object file shell: mktemp register: tmp_file - name: Create container os_object: cloud: "{{ cloud }}" state: present container: ansible_container container_access: private - name: Put object os_object: cloud: "{{ cloud }}" state: present name: ansible_object filename: "{{ tmp_file.stdout }}" container: ansible_container - name: Delete object os_object: cloud: "{{ cloud }}" state: absent name: ansible_object container: ansible_container - name: Delete container os_object: cloud: "{{ cloud }}" state: absent container: ansible_container - name: Delete test object file file: name: "{{ tmp_file.stdout }}" state: absent 
shade-1.7.0/shade/tests/ansible/roles/security_group/0000775000567000056710000000000012677257023024031 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/security_group/tasks/0000775000567000056710000000000012677257023025156 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/security_group/tasks/main.yml0000664000567000056710000000570612677256557026650 0ustar jenkinsjenkins00000000000000--- - name: Create security group os_security_group: cloud: "{{ cloud }}" name: "{{ secgroup_name }}" state: present description: Created from Ansible playbook - name: Create empty ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: icmp remote_ip_prefix: 0.0.0.0/0 - name: Create -1 ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: icmp port_range_min: -1 port_range_max: -1 remote_ip_prefix: 0.0.0.0/0 - name: Create empty TCP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp remote_ip_prefix: 0.0.0.0/0 - name: Create empty UDP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: udp remote_ip_prefix: 0.0.0.0/0 - name: Create HTTP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp port_range_min: 80 port_range_max: 80 remote_ip_prefix: 0.0.0.0/0 - name: Create egress rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp port_range_min: 30000 port_range_max: 30001 remote_ip_prefix: 0.0.0.0/0 direction: egress - name: Delete empty ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: icmp remote_ip_prefix: 0.0.0.0/0 - name: Delete -1 ICMP rule os_security_group_rule: cloud: "{{ cloud }}" 
security_group: "{{ secgroup_name }}" state: absent protocol: icmp port_range_min: -1 port_range_max: -1 remote_ip_prefix: 0.0.0.0/0 - name: Delete empty TCP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp remote_ip_prefix: 0.0.0.0/0 - name: Delete empty UDP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: udp remote_ip_prefix: 0.0.0.0/0 - name: Delete HTTP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp port_range_min: 80 port_range_max: 80 remote_ip_prefix: 0.0.0.0/0 - name: Delete egress rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp port_range_min: 30000 port_range_max: 30001 remote_ip_prefix: 0.0.0.0/0 direction: egress - name: Delete security group os_security_group: cloud: "{{ cloud }}" name: "{{ secgroup_name }}" state: absent shade-1.7.0/shade/tests/ansible/roles/security_group/vars/0000775000567000056710000000000012677257023025004 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/security_group/vars/main.yml0000664000567000056710000000003612677256557026465 0ustar jenkinsjenkins00000000000000secgroup_name: shade_secgroup shade-1.7.0/shade/tests/ansible/roles/subnet/0000775000567000056710000000000012677257023022246 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/subnet/tasks/0000775000567000056710000000000012677257023023373 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/subnet/tasks/main.yml0000664000567000056710000000176012677256557025061 0ustar jenkinsjenkins00000000000000--- - name: Create network {{ network_name }} os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: present - name: Create subnet {{ subnet_name }} on network {{ network_name }} os_subnet: cloud: "{{ cloud }}" network_name: "{{ 
network_name }}" name: "{{ subnet_name }}" state: present enable_dhcp: false dns_nameservers: - 8.8.8.7 - 8.8.8.8 cidr: 192.168.0.0/24 gateway_ip: 192.168.0.1 allocation_pool_start: 192.168.0.2 allocation_pool_end: 192.168.0.254 - name: Update subnet os_subnet: cloud: "{{ cloud }}" network_name: "{{ network_name }}" name: "{{ subnet_name }}" state: present dns_nameservers: - 8.8.8.7 cidr: 192.168.0.0/24 - name: Delete subnet {{ subnet_name }} os_subnet: cloud: "{{ cloud }}" name: "{{ subnet_name }}" state: absent - name: Delete network {{ network_name }} os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: absent shade-1.7.0/shade/tests/ansible/roles/subnet/vars/0000775000567000056710000000000012677257023023221 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/subnet/vars/main.yml0000664000567000056710000000003212677256557024676 0ustar jenkinsjenkins00000000000000subnet_name: shade_subnet shade-1.7.0/shade/tests/ansible/roles/volume/0000775000567000056710000000000012677257023022255 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/volume/tasks/0000775000567000056710000000000012677257023023402 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/volume/tasks/main.yml0000664000567000056710000000047712677256557025074 0ustar jenkinsjenkins00000000000000--- - name: Create volume os_volume: cloud: "{{ cloud }}" state: present size: 1 display_name: ansible_volume display_description: Test volume register: vol - debug: var=vol - name: Delete volume os_volume: cloud: "{{ cloud }}" state: absent display_name: ansible_volume shade-1.7.0/shade/tests/ansible/roles/nova_flavor/0000775000567000056710000000000012677257023023262 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/nova_flavor/tasks/0000775000567000056710000000000012677257023024407 5ustar 
jenkinsjenkins00000000000000shade-1.7.0/shade/tests/ansible/roles/nova_flavor/tasks/main.yml0000664000567000056710000000204012677256557026065 0ustar jenkinsjenkins00000000000000--- - name: Create public flavor os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_public_flavor is_public: True ram: 1024 vcpus: 1 disk: 10 ephemeral: 10 swap: 1 flavorid: 12345 - name: Delete public flavor os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_public_flavor - name: Create private flavor os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_private_flavor is_public: False ram: 1024 vcpus: 1 disk: 10 ephemeral: 10 swap: 1 flavorid: 12345 - name: Delete private flavor os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_private_flavor - name: Create flavor (defaults) os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_defaults_flavor ram: 1024 vcpus: 1 disk: 10 - name: Delete flavor (defaults) os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_defaults_flavor shade-1.7.0/shade/tests/fakes.py0000664000567000056710000001544612677256562017671 0ustar jenkinsjenkins00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ fakes ---------------------------------- Fakes used for testing """ class FakeEndpoint(object): def __init__(self, id, service_id, region, publicurl, internalurl=None, adminurl=None): self.id = id self.service_id = service_id self.region = region self.publicurl = publicurl self.internalurl = internalurl self.adminurl = adminurl class FakeEndpointv3(object): def __init__(self, id, service_id, region, url, interface=None): self.id = id self.service_id = service_id self.region = region self.url = url self.interface = interface class FakeFlavor(object): def __init__(self, id, name, ram, extra_specs=None): self.id = id self.name = name self.ram = ram # Leave it unset if we don't pass it in to test that normalize_ works # but we also have to be able to pass one in to deal with mocks if extra_specs: self.extra_specs = extra_specs def get_keys(self): return {} class FakeFloatingIP(object): def __init__(self, id, pool, ip, fixed_ip, instance_id): self.id = id self.pool = pool self.ip = ip self.fixed_ip = fixed_ip self.instance_id = instance_id class FakeFloatingIPPool(object): def __init__(self, id, name): self.id = id self.name = name class FakeImage(object): def __init__(self, id, name, status): self.id = id self.name = name self.status = status class FakeProject(object): def __init__(self, id, domain_id=None): self.id = id self.domain_id = domain_id or 'default' class FakeServer(object): def __init__( self, id, name, status, addresses=None, accessIPv4='', accessIPv6='', private_v4='', private_v6='', public_v4='', public_v6='', flavor=None, image=None, adminPass=None, metadata=None): self.id = id self.name = name self.status = status if not addresses: self.addresses = {} else: self.addresses = addresses if not flavor: flavor = {} self.flavor = flavor if not image: image = {} self.image = image self.accessIPv4 = accessIPv4 self.accessIPv6 = accessIPv6 self.private_v4 
= private_v4 self.public_v4 = public_v4 self.private_v6 = private_v6 self.public_v6 = public_v6 self.adminPass = adminPass self.metadata = metadata class FakeService(object): def __init__(self, id, name, type, service_type, description='', enabled=True): self.id = id self.name = name self.type = type self.service_type = service_type self.description = description self.enabled = enabled class FakeUser(object): def __init__(self, id, email, name, domain_id=None): self.id = id self.email = email self.name = name if domain_id is not None: self.domain_id = domain_id class FakeVolume(object): def __init__( self, id, status, name, attachments=[], size=75): self.id = id self.status = status self.name = name self.attachments = attachments self.size = size self.snapshot_id = 'id:snapshot' self.description = 'description' self.volume_type = 'type:volume' self.availability_zone = 'az1' self.created_at = '1900-01-01 12:34:56' self.source_volid = '12345' self.metadata = {} class FakeVolumeSnapshot(object): def __init__( self, id, status, name, description, size=75): self.id = id self.status = status self.name = name self.description = description self.size = size self.created_at = '1900-01-01 12:34:56' self.volume_id = '12345' self.metadata = {} class FakeMachine(object): def __init__(self, id, name=None, driver=None, driver_info=None, chassis_uuid=None, instance_info=None, instance_uuid=None, properties=None): self.id = id self.name = name self.driver = driver self.driver_info = driver_info self.chassis_uuid = chassis_uuid self.instance_info = instance_info self.instance_uuid = instance_uuid self.properties = properties class FakeMachinePort(object): def __init__(self, id, address, node_id): self.id = id self.address = address self.node_id = node_id class FakeSecgroup(object): def __init__(self, id, name, description='', rules=None): self.id = id self.name = name self.description = description self.rules = rules class FakeNovaSecgroupRule(object): def __init__(self, id, 
from_port=None, to_port=None, ip_protocol=None, cidr=None, parent_group_id=None): self.id = id self.from_port = from_port self.to_port = to_port self.ip_protocol = ip_protocol if cidr: self.ip_range = {'cidr': cidr} self.parent_group_id = parent_group_id class FakeKeypair(object): def __init__(self, id, name, public_key): self.id = id self.name = name self.public_key = public_key class FakeDomain(object): def __init__(self, id, name, description, enabled): self.id = id self.name = name self.description = description self.enabled = enabled class FakeRole(object): def __init__(self, id, name): self.id = id self.name = name class FakeGroup(object): def __init__(self, id, name, description, domain_id=None): self.id = id self.name = name self.description = description self.domain_id = domain_id or 'default' class FakeHypervisor(object): def __init__(self, id, hostname): self.id = id self.hypervisor_hostname = hostname class FakeStack(object): def __init__(self, id, name, description=None, status='CREATE_COMPLETE'): self.id = id self.name = name self.stack_name = name self.stack_description = description self.stack_status = status shade-1.7.0/shade/exc.py0000664000567000056710000000416212677256557016212 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
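As a sketch of how the fakes above stand in for real client objects in tests, the snippet below redeclares a trimmed copy of FakeFlavor (so it runs standalone, outside the shade test suite) and shows the deliberate "leave extra_specs unset" behavior that the normalization code is exercised against:

```python
class FakeFlavor(object):
    # Trimmed copy of the FakeFlavor test fake above.
    def __init__(self, id, name, ram, extra_specs=None):
        self.id = id
        self.name = name
        self.ram = ram
        # Only set the attribute when a value is passed in, so tests can
        # verify that normalization copes with a missing extra_specs.
        if extra_specs:
            self.extra_specs = extra_specs

    def get_keys(self):
        return {}


flavor = FakeFlavor(id='1', name='m1.tiny', ram=512)
# extra_specs was not passed, so the attribute does not exist at all
assert not hasattr(flavor, 'extra_specs')
```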
import sys

from shade import _log

log = _log.setup_logging(__name__)


class OpenStackCloudException(Exception):

    log_inner_exceptions = False

    def __init__(self, message, extra_data=None):
        args = [message]
        if extra_data:
            args.append(extra_data)
        super(OpenStackCloudException, self).__init__(*args)
        self.extra_data = extra_data
        self.inner_exception = sys.exc_info()
        self.orig_message = message

    def log_error(self, logger=log):
        if self.inner_exception and self.inner_exception[1]:
            logger.error(self.orig_message, exc_info=self.inner_exception)

    def __str__(self):
        message = Exception.__str__(self)
        if self.extra_data is not None:
            message = "%s (Extra: %s)" % (message, self.extra_data)
        if (self.inner_exception and self.inner_exception[1]
                and not self.orig_message.endswith(
                    str(self.inner_exception[1]))):
            message = "%s (Inner Exception: %s)" % (
                message, str(self.inner_exception[1]))
        if self.log_inner_exceptions:
            self.log_error()
        return message


class OpenStackCloudTimeout(OpenStackCloudException):
    pass


class OpenStackCloudUnavailableExtension(OpenStackCloudException):
    pass


class OpenStackCloudUnavailableFeature(OpenStackCloudException):
    pass


class OpenStackCloudResourceNotFound(OpenStackCloudException):
    pass


class OpenStackCloudURINotFound(OpenStackCloudException):
    pass

shade-1.7.0/shade/meta.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
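A standalone sketch of how the exception class in exc.py composes its message from extra_data (simplified: plain Exception base, no logging; SketchCloudException is a hypothetical stand-in name, not part of shade):

```python
import sys


class SketchCloudException(Exception):
    # Simplified stand-in for shade's OpenStackCloudException.
    def __init__(self, message, extra_data=None):
        args = [message]
        if extra_data:
            args.append(extra_data)
        super(SketchCloudException, self).__init__(*args)
        self.extra_data = extra_data
        self.inner_exception = sys.exc_info()
        self.orig_message = message

    def __str__(self):
        # Base message, then append the extra data exactly as above.
        message = Exception.__str__(self)
        if self.extra_data is not None:
            message = "%s (Extra: %s)" % (message, self.extra_data)
        return message


try:
    raise SketchCloudException('boom', extra_data={'id': 42})
except SketchCloudException as err:
    text = str(err)
    extra = err.extra_data
# text ends with "(Extra: {'id': 42})"
```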
import munch
import ipaddress
import six

from shade import exc
from shade import _log

NON_CALLABLES = (six.string_types, bool, dict, int, float, list, type(None))

log = _log.setup_logging(__name__)


def find_nova_addresses(addresses, ext_tag=None, key_name=None, version=4):
    ret = []
    for (k, v) in iter(addresses.items()):
        if key_name is not None and k != key_name:
            # key_name is specified and it doesn't match the current network.
            # Continue with the next one
            continue
        for interface_spec in v:
            if ext_tag is not None:
                if 'OS-EXT-IPS:type' not in interface_spec:
                    # ext_tag is specified, but this interface has no tag
                    # We could actually return right away as this means that
                    # this cloud doesn't support OS-EXT-IPS. Nevertheless,
                    # it would be better to perform an explicit check. e.g.:
                    # cloud._has_nova_extension('OS-EXT-IPS')
                    # But this needs cloud to be passed to this function.
                    continue
                elif interface_spec['OS-EXT-IPS:type'] != ext_tag:
                    # Type doesn't match, continue with next one
                    continue
            if interface_spec['version'] == version:
                ret.append(interface_spec['addr'])
    return ret


def get_server_ip(server, **kwargs):
    addrs = find_nova_addresses(server['addresses'], **kwargs)
    if not addrs:
        return None
    return addrs[0]


def get_server_private_ip(server, cloud=None):
    """Find the private IP address

    If Neutron is available, search for a port on a network where
    `router:external` is False and `shared` is False. This combination
    indicates a private network with private IP addresses. This port should
    have the private IP.

    If Neutron is not available, or something goes wrong communicating with
    it, as a fallback, try the list of addresses associated with the server
    dict, looking for an IP type tagged as 'fixed' in the network named
    'private'.

    Last resort, ignore the IP type and just look for an IP on the
    'private' network (e.g., Rackspace).
    """
    if cloud and not cloud.use_internal_network():
        return None

    # Short circuit the ports/networks search below with a heavily cached
    # and possibly pre-configured network name
    if cloud:
        int_nets = cloud.get_internal_networks()
        for int_net in int_nets:
            int_ip = get_server_ip(server, key_name=int_net['name'])
            if int_ip is not None:
                return int_ip

    ip = get_server_ip(server, ext_tag='fixed', key_name='private')
    if ip:
        return ip

    # Last resort, and Rackspace
    return get_server_ip(server, key_name='private')


def get_server_external_ipv4(cloud, server):
    """Find an externally routable IP for the server.

    There are 5 different scenarios we have to account for:

    * Cloud has externally routable IP from neutron but neutron APIs don't
      work (only info available is in nova server record) (rackspace)
    * Cloud has externally routable IP from neutron (runabove, ovh)
    * Cloud has externally routable IP from neutron AND supports optional
      private tenant networks (vexxhost, unitedstack)
    * Cloud only has private tenant network provided by neutron and requires
      floating-ip for external routing (dreamhost, hp)
    * Cloud only has private tenant network provided by nova-network and
      requires floating-ip for external routing (auro)

    :param cloud: the cloud we're working with
    :param server: the server dict from which we want to get an IPv4 address
    :return: a string containing the IPv4 address or None
    """
    if not cloud.use_external_network():
        return None

    if server['accessIPv4']:
        return server['accessIPv4']

    # Short circuit the ports/networks search below with a heavily cached
    # and possibly pre-configured network name
    ext_nets = cloud.get_external_networks()
    for ext_net in ext_nets:
        ext_ip = get_server_ip(server, key_name=ext_net['name'])
        if ext_ip is not None:
            return ext_ip

    # Try to get a floating IP address
    # Much as I might find floating IPs annoying, if it has one, that's
    # almost certainly the one that wants to be used
    ext_ip = get_server_ip(server, ext_tag='floating')
    if ext_ip is not None:
        return ext_ip

    # The cloud doesn't support Neutron or Neutron can't be contacted. The
    # server might have fixed addresses that are reachable from outside the
    # cloud (e.g. Rax) or have plain ol' floating IPs

    # Try to get an address from a network named 'public'
    ext_ip = get_server_ip(server, key_name='public')
    if ext_ip is not None:
        return ext_ip

    # Nothing else works, try to find a globally routable IP address
    for interfaces in server['addresses'].values():
        for interface in interfaces:
            try:
                ip = ipaddress.ip_address(interface['addr'])
            except Exception:
                # Skip any error, we're looking for a working ip - if the
                # cloud returns garbage, it wouldn't be the first weird thing
                # but it still doesn't meet the requirement of "be a working
                # ip address"
                continue
            if ip.version == 4 and not ip.is_private:
                return str(ip)

    return None


def get_server_external_ipv6(server):
    """Get an IPv6 address reachable from outside the cloud.

    This function assumes that if a server has an IPv6 address, that address
    is reachable from outside the cloud.

    :param server: the server from which we want to get an IPv6 address
    :return: a string containing the IPv6 address or None
    """
    if server['accessIPv6']:
        return server['accessIPv6']
    addresses = find_nova_addresses(addresses=server['addresses'], version=6)
    if addresses:
        return addresses[0]
    return None


def get_groups_from_server(cloud, server, server_vars):
    groups = []

    region = cloud.region_name
    cloud_name = cloud.name

    # Create a group for the cloud
    groups.append(cloud_name)

    # Create a group on region
    groups.append(region)

    # And one by cloud_region
    groups.append("%s_%s" % (cloud_name, region))

    # Check if group metadata key in servers' metadata
    group = server['metadata'].get('group')
    if group:
        groups.append(group)

    for extra_group in server['metadata'].get('groups', '').split(','):
        if extra_group:
            groups.append(extra_group)

    groups.append('instance-%s' % server['id'])

    for key in ('flavor', 'image'):
        if 'name' in server_vars[key]:
            groups.append('%s-%s' % (key, server_vars[key]['name']))

    for key, value in iter(server['metadata'].items()):
        groups.append('meta-%s_%s' % (key, value))

    az = server_vars.get('az', None)
    if az:
        # Make groups for az, region_az and cloud_region_az
        groups.append(az)
        groups.append('%s_%s' % (region, az))
        groups.append('%s_%s_%s' % (cloud.name, region, az))
    return groups


def expand_server_vars(cloud, server):
    """Backwards compatibility function."""
    return add_server_interfaces(cloud, server)


def add_server_interfaces(cloud, server):
    """Add network interface information to server.

    Query the cloud as necessary to add information to the server record
    about the network information needed to interface with the server.

    Ensures that public_v4, public_v6, private_v4, private_v6, interface_ip,
    accessIPv4 and accessIPv6 are always set.
    """
    # First, add an IP address. Set it to '' rather than None if it does
    # not exist to remain consistent with the pre-existing missing values
    server['public_v4'] = get_server_external_ipv4(cloud, server) or ''
    server['public_v6'] = get_server_external_ipv6(server) or ''
    server['private_v4'] = get_server_private_ip(server, cloud) or ''

    interface_ip = None
    if cloud.private and server['private_v4']:
        interface_ip = server['private_v4']
    else:
        if (server['public_v6'] and cloud._local_ipv6
                and not cloud.force_ipv4):
            interface_ip = server['public_v6']
        else:
            interface_ip = server['public_v4']
    if interface_ip:
        server['interface_ip'] = interface_ip

    # Some clouds do not set these, but they're a regular part of the Nova
    # server record. Since we know them, go ahead and set them. In the case
    # where they were set previously, we use the values, so this will not
    # break clouds that provide the information
    if cloud.private and server['private_v4']:
        server['accessIPv4'] = server['private_v4']
    else:
        server['accessIPv4'] = server['public_v4']
    server['accessIPv6'] = server['public_v6']

    return server


def expand_server_security_groups(cloud, server):
    try:
        groups = cloud.list_server_security_groups(server)
    except exc.OpenStackCloudException:
        groups = []
    server['security_groups'] = groups


def get_hostvars_from_server(cloud, server, mounts=None):
    """Expand additional server information useful for ansible inventory.

    Variables in this function may make additional cloud queries to flesh
    out possibly interesting info, making it more expensive to call than
    expand_server_vars if caching is not set up. If caching is set up, the
    extra cost should be minimal.
    """
    server_vars = add_server_interfaces(cloud, server)

    flavor_id = server['flavor']['id']
    flavor_name = cloud.get_flavor_name(flavor_id)
    if flavor_name:
        server_vars['flavor']['name'] = flavor_name

    expand_server_security_groups(cloud, server)

    # OpenStack can return image as a string when you've booted from volume
    if str(server['image']) == server['image']:
        image_id = server['image']
        server_vars['image'] = dict(id=image_id)
    else:
        image_id = server['image'].get('id', None)
        if image_id:
            image_name = cloud.get_image_name(image_id)
            if image_name:
                server_vars['image']['name'] = image_name

    volumes = []
    if cloud.has_service('volume'):
        try:
            for volume in cloud.get_volumes(server):
                # Make things easier to consume elsewhere
                volume['device'] = volume['attachments'][0]['device']
                volumes.append(volume)
        except exc.OpenStackCloudException:
            pass
    server_vars['volumes'] = volumes
    if mounts:
        for mount in mounts:
            for vol in server_vars['volumes']:
                if vol['display_name'] == mount['display_name']:
                    if 'mount' in mount:
                        vol['mount'] = mount['mount']

    return server_vars


def _add_request_id(obj, request_id):
    if request_id is not None:
        obj['x_openstack_request_ids'] = [request_id]
    return obj


def obj_to_dict(obj, request_id=None):
    """Turn an object with attributes into a dict suitable for serializing.

    Some of the things that are returned in OpenStack are objects with
    attributes. That's awesome - except when you want to expose them as JSON
    structures. We use this as the basis of get_hostvars_from_server above
    so that we can just have a plain dict of all of the values that exist in
    the nova metadata for a server.
    """
    if obj is None:
        return None
    elif isinstance(obj, munch.Munch) or hasattr(obj, 'mock_add_spec'):
        # If we obj_to_dict twice, don't fail, just return the munch
        # Also, don't try to modify Mock objects - that way lies madness
        return obj
    elif hasattr(obj, 'schema') and hasattr(obj, 'validate'):
        # It's a warlock
        return _add_request_id(warlock_to_dict(obj), request_id)
    elif isinstance(obj, dict):
        # The new request-id tracking spec:
        # https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/log-request-id-mappings.html
        # adds a request-ids attribute to returned objects. It does this even
        # with dicts, which now become dict subclasses. So we want to convert
        # the dict we get, but we also want it to fall through to object
        # attribute processing so that we can also get the request_ids
        # data into our resulting object.
        instance = munch.Munch(obj)
    else:
        instance = munch.Munch()

    for key in dir(obj):
        value = getattr(obj, key)
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            instance[key] = value
    return _add_request_id(instance, request_id)


def obj_list_to_dict(obj_list, request_id=None):
    """Enumerate through lists of objects and return lists of dictionaries.

    Some of the objects returned in OpenStack are actually lists of objects,
    and in order to expose the data structures as JSON, we need to facilitate
    the conversion to lists of dictionaries.
    """
    new_list = []
    for obj in obj_list:
        new_list.append(obj_to_dict(obj, request_id=request_id))
    return new_list


def warlock_to_dict(obj):
    # glanceclient v2 uses warlock to construct its objects. Warlock does
    # deep black magic to attribute look up to support validation things that
    # means we cannot use normal obj_to_dict
    obj_dict = munch.Munch()
    for (key, value) in obj.items():
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            obj_dict[key] = value
    return obj_dict

shade-1.7.0/shade/__init__.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
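A standalone sketch of the obj_to_dict logic from meta.py, using a plain dict and the built-in str type in place of munch.Munch and six.string_types so it runs without shade installed (sketch_obj_to_dict is a hypothetical name for illustration):

```python
# Simplified version of meta.NON_CALLABLES (str instead of six.string_types).
NON_CALLABLES = (str, bool, dict, int, float, list, type(None))


def sketch_obj_to_dict(obj):
    # Copy public, non-callable attributes into a plain dict; shade's real
    # obj_to_dict builds a munch.Munch and handles Mock/warlock/dict inputs.
    instance = {}
    for key in dir(obj):
        value = getattr(obj, key)
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            instance[key] = value
    return instance


class Server(object):
    def __init__(self):
        self.id = 'abc'
        self.name = 'web1'
        self._secret = 'hidden'  # private, should be skipped

    def reboot(self):  # callable, should be skipped
        pass


d = sketch_obj_to_dict(Server())
# d == {'id': 'abc', 'name': 'web1'}
```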
import logging
import warnings

import keystoneauth1.exceptions
import os_client_config
import pbr.version
import requestsexceptions

from shade.exc import *  # noqa
from shade.openstackcloud import OpenStackCloud
from shade.operatorcloud import OperatorCloud
from shade import _log

__version__ = pbr.version.VersionInfo('shade').version_string()

if requestsexceptions.SubjectAltNameWarning:
    warnings.filterwarnings(
        'ignore', category=requestsexceptions.SubjectAltNameWarning)


def simple_logging(debug=False, http_debug=False):
    if http_debug:
        debug = True
    if debug:
        log_level = logging.DEBUG
    else:
        log_level = logging.INFO
    if http_debug:
        # Enable HTTP level tracing
        log = _log.setup_logging('keystoneauth')
        log.addHandler(logging.StreamHandler())
        log.setLevel(log_level)
    log = _log.setup_logging('shade')
    log.addHandler(logging.StreamHandler())
    log.setLevel(log_level)
    # Suppress warning about keystoneauth loggers
    log = _log.setup_logging('keystoneauth.identity.base')
    log = _log.setup_logging('keystoneauth.identity.generic.base')


def openstack_clouds(config=None, debug=False):
    if not config:
        config = os_client_config.OpenStackConfig()
    try:
        return [
            OpenStackCloud(
                cloud=f.name, debug=debug,
                cloud_config=f,
                **f.config)
            for f in config.get_all_clouds()
        ]
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))


def openstack_cloud(config=None, **kwargs):
    if not config:
        config = os_client_config.OpenStackConfig()
    try:
        cloud_config = config.get_one_cloud(**kwargs)
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))
    return OpenStackCloud(cloud_config=cloud_config)


def operator_cloud(config=None, **kwargs):
    if 'interface' not in kwargs:
        kwargs['interface'] = 'admin'
    if not config:
        config = os_client_config.OpenStackConfig()
    try:
        cloud_config = config.get_one_cloud(**kwargs)
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))
    return OperatorCloud(cloud_config=cloud_config)

shade-1.7.0/shade/_utils.py

# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import contextlib
import inspect
import munch
import netifaces
import re
import six
import time

from decorator import decorator
from neutronclient.common import exceptions as neutron_exc

from shade import _log
from shade import exc
from shade import meta

log = _log.setup_logging(__name__)

_decorated_methods = []


def _iterate_timeout(timeout, message, wait=2):
    """Iterate and raise an exception on timeout.

    This is a generator that will continually yield and sleep for
    wait seconds, and if the timeout is reached, will raise an exception
    with ``message``.
    """
    try:
        wait = float(wait)
    except ValueError:
        raise exc.OpenStackCloudException(
            "Wait value must be an int or float value. {wait} given"
            " instead".format(wait=wait))

    start = time.time()
    count = 0
    while (timeout is None) or (time.time() < start + timeout):
        count += 1
        yield count
        log.debug('Waiting {wait} seconds'.format(wait=wait))
        time.sleep(wait)
    raise exc.OpenStackCloudTimeout(message)


def _filter_list(data, name_or_id, filters):
    """Filter a list by name/ID and arbitrary meta data.
:param list data: The list of dictionary data to filter. It is expected that each dictionary contains an 'id' and 'name' key if a value for name_or_id is given. :param string name_or_id: The name or ID of the entity being filtered. :param dict filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } """ if name_or_id: identifier_matches = [] for e in data: e_id = str(e.get('id', None)) e_name = e.get('name', None) if str(name_or_id) in (e_id, e_name): identifier_matches.append(e) data = identifier_matches if not filters: return data def _dict_filter(f, d): if not d: return False for key in f.keys(): if isinstance(f[key], dict): if not _dict_filter(f[key], d.get(key, None)): return False elif d.get(key, None) != f[key]: return False return True filtered = [] for e in data: filtered.append(e) for key in filters.keys(): if isinstance(filters[key], dict): if not _dict_filter(filters[key], e.get(key, None)): filtered.pop() break elif e.get(key, None) != filters[key]: filtered.pop() break return filtered def _get_entity(func, name_or_id, filters): """Return a single entity from the list returned by a given method. :param callable func: A function that takes `name_or_id` and `filters` as parameters and returns a list of entities to filter. :param string name_or_id: The name or ID of the entity being filtered or a dict :param dict filters: A dictionary of meta data to use for further filtering. """ # Sometimes in the control flow of shade, we already have an object # fetched. Rather than then needing to pull the name or id out of that # object, pass it in here and rely on caching to prevent us from making # an additional call, it's simple enough to test to see if we got an # object and just short-circuit return it. 
if hasattr(name_or_id, 'id'): return name_or_id entities = func(name_or_id, filters) if not entities: return None if len(entities) > 1: raise exc.OpenStackCloudException( "Multiple matches found for %s" % name_or_id) return entities[0] def normalize_servers(servers, cloud_name, region_name): # Here instead of _utils because we need access to region and cloud # name from the cloud object ret = [] for server in servers: ret.append(normalize_server(server, cloud_name, region_name)) return ret def normalize_server(server, cloud_name, region_name): server.pop('links', None) server['flavor'].pop('links', None) # OpenStack can return image as a string when you've booted # from volume if str(server['image']) != server['image']: server['image'].pop('links', None) server['region'] = region_name server['cloud'] = cloud_name az = server.get('OS-EXT-AZ:availability_zone', None) if az: server['az'] = az # Ensure volumes is always in the server dict, even if empty server['volumes'] = [] return server def normalize_keystone_services(services): """Normalize the structure of keystone services In keystone v2, there is a field called "service_type". In v3, it's "type". Just make the returned dict have both. :param list services: A list of keystone service dicts :returns: A list of normalized dicts. """ ret = [] for service in services: service_type = service.get('type', service.get('service_type')) new_service = { 'id': service['id'], 'name': service['name'], 'description': service.get('description', None), 'type': service_type, 'service_type': service_type, 'enabled': service['enabled'] } ret.append(new_service) return meta.obj_list_to_dict(ret) def normalize_nova_secgroups(groups): """Normalize the structure of nova security groups This makes security group dicts, as returned from nova, look like the security group dicts as returned from neutron. This does not make them look exactly the same, but it's pretty close. :param list groups: A list of security group dicts. 
:returns: A list of normalized dicts. """ ret = [{'id': g['id'], 'name': g['name'], 'description': g['description'], 'security_group_rules': normalize_nova_secgroup_rules(g['rules']) } for g in groups] return meta.obj_list_to_dict(ret) def normalize_nova_secgroup_rules(rules): """Normalize the structure of nova security group rules Note that nova uses -1 for non-specific port values, but neutron represents these with None. :param list rules: A list of security group rule dicts. :returns: A list of normalized dicts. """ ret = [{'id': r['id'], 'direction': 'ingress', 'ethertype': 'IPv4', 'port_range_min': None if r['from_port'] == -1 else r['from_port'], 'port_range_max': None if r['to_port'] == -1 else r['to_port'], 'protocol': r['ip_protocol'], 'remote_ip_prefix': r['ip_range'].get('cidr', None), 'security_group_id': r['parent_group_id'] } for r in rules] return meta.obj_list_to_dict(ret) def normalize_nova_floating_ips(ips): """Normalize the structure of Neutron floating IPs Unfortunately, not all the Neutron floating_ip attributes are available with Nova and not all Nova floating_ip attributes are available with Neutron. This function extract attributes that are common to Nova and Neutron floating IP resource. If the whole structure is needed inside shade, shade provides private methods that returns "original" objects (e.g. _nova_allocate_floating_ip) :param list ips: A list of Nova floating IPs. :returns: A list of normalized dicts with the following attributes:: [ { "id": "this-is-a-floating-ip-id", "fixed_ip_address": "192.0.2.10", "floating_ip_address": "198.51.100.10", "network": "this-is-a-net-or-pool-id", "attached": True, "status": "ACTIVE" }, ... 
] """ ret = [dict( id=ip['id'], fixed_ip_address=ip.get('fixed_ip'), floating_ip_address=ip['ip'], network=ip['pool'], attached=(ip.get('instance_id') is not None and ip.get('instance_id') != ''), status='ACTIVE' # In neutrons terms, Nova floating IPs are always # ACTIVE ) for ip in ips] return meta.obj_list_to_dict(ret) def normalize_neutron_floating_ips(ips): """Normalize the structure of Neutron floating IPs Unfortunately, not all the Neutron floating_ip attributes are available with Nova and not all Nova floating_ip attributes are available with Neutron. This function extract attributes that are common to Nova and Neutron floating IP resource. If the whole structure is needed inside shade, shade provides private methods that returns "original" objects (e.g. _neutron_allocate_floating_ip) :param list ips: A list of Neutron floating IPs. :returns: A list of normalized dicts with the following attributes:: [ { "id": "this-is-a-floating-ip-id", "fixed_ip_address": "192.0.2.10", "floating_ip_address": "198.51.100.10", "network": "this-is-a-net-or-pool-id", "attached": True, "status": "ACTIVE" }, ... ] """ ret = [dict( id=ip['id'], fixed_ip_address=ip.get('fixed_ip_address'), floating_ip_address=ip['floating_ip_address'], network=ip['floating_network_id'], attached=(ip.get('port_id') is not None and ip.get('port_id') != ''), status=ip['status'] ) for ip in ips] return meta.obj_list_to_dict(ret) def localhost_supports_ipv6(): """Determine whether the local host supports IPv6 We look for a default route that supports the IPv6 address family, and assume that if it is present, this host has globally routable IPv6 connectivity. 
""" return netifaces.AF_INET6 in netifaces.gateways()['default'] def normalize_users(users): ret = [ dict( id=user.get('id'), email=user.get('email'), name=user.get('name'), username=user.get('username'), default_project_id=user.get('default_project_id', user.get('tenantId')), domain_id=user.get('domain_id'), enabled=user.get('enabled'), ) for user in users ] return meta.obj_list_to_dict(ret) def normalize_volumes(volumes): ret = [] for vol in volumes: new_vol = vol.copy() name = vol.get('name', vol.get('display_name')) description = vol.get('description', vol.get('display_description')) new_vol['name'] = name new_vol['display_name'] = name new_vol['description'] = description new_vol['display_description'] = description # For some reason, cinder v1 uses strings for bools for these fields. # Cinder v2 uses booleans. for field in ('bootable', 'multiattach'): if field in new_vol and isinstance(new_vol[field], six.string_types): if new_vol[field] is not None: if new_vol[field].lower() == 'true': new_vol[field] = True elif new_vol[field].lower() == 'false': new_vol[field] = False ret.append(new_vol) return meta.obj_list_to_dict(ret) def normalize_domains(domains): ret = [ dict( id=domain.get('id'), name=domain.get('name'), description=domain.get('description'), enabled=domain.get('enabled'), ) for domain in domains ] return meta.obj_list_to_dict(ret) def normalize_groups(domains): """Normalize Identity groups.""" ret = [ dict( id=domain.get('id'), name=domain.get('name'), description=domain.get('description'), domain_id=domain.get('domain_id'), ) for domain in domains ] return meta.obj_list_to_dict(ret) def normalize_role_assignments(assignments): """Put role_assignments into a form that works with search/get interface. Role assignments have the structure:: [ { "role": { "id": "--role-id--" }, "scope": { "domain": { "id": "--domain-id--" } }, "user": { "id": "--user-id--" } }, ] Which is hard to work with in the rest of our interface. 
Map this to be:: [ { "id": "--role-id--", "domain": "--domain-id--", "user": "--user-id--", } ] Scope can be "domain" or "project" and "user" can also be "group". :param list assignments: A list of dictionaries of role assignments. :returns: A list of flattened/normalized role assignment dicts. """ new_assignments = [] for assignment in assignments: new_val = munch.Munch({'id': assignment['role']['id']}) for scope in ('project', 'domain'): if scope in assignment['scope']: new_val[scope] = assignment['scope'][scope]['id'] for assignee in ('user', 'group'): if assignee in assignment: new_val[assignee] = assignment[assignee]['id'] new_assignments.append(new_val) return new_assignments def normalize_roles(roles): """Normalize Identity roles.""" ret = [ dict( id=role.get('id'), name=role.get('name'), ) for role in roles ] return meta.obj_list_to_dict(ret) def normalize_stacks(stacks): """ Normalize Stack Object """ for stack in stacks: stack['name'] = stack['stack_name'] return stacks def normalize_flavors(flavors): """ Normalize a list of flavor objects """ for flavor in flavors: flavor.pop('links', None) flavor.pop('NAME_ATTR', None) flavor.pop('HUMAN_ID', None) flavor.pop('human_id', None) if 'extra_specs' not in flavor: flavor['extra_specs'] = {} ephemeral = flavor.pop('OS-FLV-EXT-DATA:ephemeral', 0) is_public = flavor.pop('os-flavor-access:is_public', True) # Make sure both the extension version and a sane version are present flavor['OS-FLV-EXT-DATA:ephemeral'] = ephemeral flavor['ephemeral'] = ephemeral flavor['os-flavor-access:is_public'] = is_public flavor['is_public'] = is_public return flavors def valid_kwargs(*valid_args): # This decorator checks if argument passed as **kwargs to a function are # present in valid_args. # # Typically, valid_kwargs is used when we want to distinguish between # None and omitted arguments and we still want to validate the argument # list. 
# # Example usage: # # @valid_kwargs('opt_arg1', 'opt_arg2') # def my_func(self, mandatory_arg1, mandatory_arg2, **kwargs): # ... # @decorator def func_wrapper(func, *args, **kwargs): argspec = inspect.getargspec(func) for k in kwargs: if k not in argspec.args[1:] and k not in valid_args: raise TypeError( "{f}() got an unexpected keyword argument " "'{arg}'".format(f=inspect.stack()[1][3], arg=k)) return func(*args, **kwargs) return func_wrapper def cache_on_arguments(*cache_on_args, **cache_on_kwargs): def _inner_cache_on_arguments(func): def _cache_decorator(obj, *args, **kwargs): the_method = obj._cache.cache_on_arguments( *cache_on_args, **cache_on_kwargs)( func.__get__(obj, type(obj))) return the_method(*args, **kwargs) def invalidate(obj, *args, **kwargs): return obj._cache.cache_on_arguments()(func).invalidate( *args, **kwargs) _cache_decorator.invalidate = invalidate _cache_decorator.func = func _decorated_methods.append(func.__name__) return _cache_decorator return _inner_cache_on_arguments @contextlib.contextmanager def neutron_exceptions(error_message): try: yield except neutron_exc.NotFound as e: raise exc.OpenStackCloudResourceNotFound( "{msg}: {exc}".format(msg=error_message, exc=str(e))) except neutron_exc.NeutronClientException as e: if e.status_code == 404: raise exc.OpenStackCloudURINotFound( "{msg}: {exc}".format(msg=error_message, exc=str(e))) else: raise exc.OpenStackCloudException( "{msg}: {exc}".format(msg=error_message, exc=str(e))) except Exception as e: raise exc.OpenStackCloudException( "{msg}: {exc}".format(msg=error_message, exc=str(e))) @contextlib.contextmanager def shade_exceptions(error_message=None): """Context manager for dealing with shade exceptions. :param string error_message: String to use for the exception message content on non-OpenStackCloudExceptions. Useful for avoiding wrapping shade OpenStackCloudException exceptions within themselves. 
    Code called from within the context may throw such exceptions
    without having to catch and reraise them.

    Non-OpenStackCloudException exceptions thrown within the context
    will be wrapped and the exception message will be appended to the
    given error message.
    """
    try:
        yield
    except exc.OpenStackCloudException:
        raise
    except Exception as e:
        if error_message is None:
            error_message = str(e)
        raise exc.OpenStackCloudException(error_message)


def safe_dict_min(key, data):
    """Safely find the minimum for a given key in a list of dict objects.

    This will find the minimum integer value for a specific dictionary key
    across a list of dictionaries. The values for the given key MUST be
    integers, or string representations of an integer.

    The dictionary key does not have to be present in all (or any)
    of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the minimum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
              the minimum value for the field otherwise.
    """
    min_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for minimum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (min_value is None) or (val < min_value):
                min_value = val
    return min_value


def safe_dict_max(key, data):
    """Safely find the maximum for a given key in a list of dict objects.

    This will find the maximum integer value for a specific dictionary key
    across a list of dictionaries. The values for the given key MUST be
    integers, or string representations of an integer.

    The dictionary key does not have to be present in all (or any)
    of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the maximum value.
    :param list data: List of dicts to use for the data set.
    :returns: None if the field was not found in any elements, or
              the maximum value for the field otherwise.
    """
    max_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for maximum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (max_value is None) or (val > max_value):
                max_value = val
    return max_value


def parse_range(value):
    """Parse a numerical range string.

    Break down a range expression into its operator and numerical parts.
    This expression must be a string. Valid values must be an integer
    string, optionally preceded by one of the following operators::

        - "<"  : Less than
        - ">"  : Greater than
        - "<=" : Less than or equal to
        - ">=" : Greater than or equal to

    Some examples of valid values and function return values::

        - "1024"  : returns (None, 1024)
        - "<5"    : returns ("<", 5)
        - ">=100" : returns (">=", 100)

    :param string value: The range expression to be parsed.

    :returns: A tuple with the operator string (or None if no operator
              was given) and the integer value. None is returned if
              parsing failed.
    """
    if value is None:
        return None

    range_exp = re.match(r'(<|>|<=|>=){0,1}(\d+)$', value)
    if range_exp is None:
        return None

    op = range_exp.group(1)
    num = int(range_exp.group(2))
    return (op, num)


def range_filter(data, key, range_exp):
    """Filter a list by a single range expression.

    :param list data: List of dictionaries to be searched.
    :param string key: Key name to search within the data set.
    :param string range_exp: The expression describing the range of values.

    :returns: A list subset of the original data set.
    :raises: OpenStackCloudException on invalid range expressions.
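The range grammar above can be exercised with a standalone re-implementation of `parse_range` (a sketch mirroring the function above, not the shade module itself):

```python
import re


def parse_range(value):
    # Split "<=100" into ('<=', 100); a bare "1024" becomes (None, 1024).
    # The regex backtracks so "<=" wins over "<" when both could match.
    if value is None:
        return None
    m = re.match(r'(<|>|<=|>=)?(\d+)$', value)
    if m is None:
        return None
    op = m.group(1)
    return (op, int(m.group(2)))


print(parse_range('>=100'))  # ('>=', 100)
```

Anything that is not an optional operator followed only by digits (for example `"5<"` or `"> 5"`) fails to parse and yields None, which is what lets `range_filter` reject bad expressions.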
""" filtered = [] range_exp = str(range_exp).upper() if range_exp == "MIN": key_min = safe_dict_min(key, data) if key_min is None: return [] for d in data: if int(d[key]) == key_min: filtered.append(d) return filtered elif range_exp == "MAX": key_max = safe_dict_max(key, data) if key_max is None: return [] for d in data: if int(d[key]) == key_max: filtered.append(d) return filtered # Not looking for a min or max, so a range or exact value must # have been supplied. val_range = parse_range(range_exp) # If parsing the range fails, it must be a bad value. if val_range is None: raise exc.OpenStackCloudException( "Invalid range value: {value}".format(value=range_exp)) op = val_range[0] if op: # Range matching for d in data: d_val = int(d[key]) if op == '<': if d_val < val_range[1]: filtered.append(d) elif op == '>': if d_val > val_range[1]: filtered.append(d) elif op == '<=': if d_val <= val_range[1]: filtered.append(d) elif op == '>=': if d_val >= val_range[1]: filtered.append(d) return filtered else: # Exact number match for d in data: if int(d[key]) == val_range[1]: filtered.append(d) return filtered shade-1.7.0/shade/operatorcloud.py0000664000567000056710000021070212677256557020314 0ustar jenkinsjenkins00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import jsonpatch from ironicclient import client as ironic_client from ironicclient import exceptions as ironic_exceptions from shade.exc import * # noqa from shade import openstackcloud from shade import _tasks from shade import _utils class OperatorCloud(openstackcloud.OpenStackCloud): """Represent a privileged/operator connection to an OpenStack Cloud. `OperatorCloud` is the entry point for all admin operations, regardless of which OpenStack service those operations are for. See the :class:`OpenStackCloud` class for a description of most options. """ def __init__(self, *args, **kwargs): super(OperatorCloud, self).__init__(*args, **kwargs) self._ironic_client = None # Set the ironic API microversion to a known-good # supported/tested with the contents of shade. # # Note(TheJulia): Defaulted to version 1.6 as the ironic # state machine changes which will increment the version # and break an automatic transition of an enrolled node # to an available state. Locking the version is intended # to utilize the original transition until shade supports # calling for node inspection to allow the transition to # take place automatically. 
    ironic_api_microversion = '1.6'

    @property
    def ironic_client(self):
        if self._ironic_client is None:
            self._ironic_client = self._get_client(
                'baremetal', ironic_client.Client,
                os_ironic_api_version=self.ironic_api_microversion)
        return self._ironic_client

    def list_nics(self):
        with _utils.shade_exceptions("Error fetching machine port list"):
            return self.manager.submitTask(_tasks.MachinePortList())

    def list_nics_for_machine(self, uuid):
        with _utils.shade_exceptions(
                "Error fetching port list for node {node_id}".format(
                    node_id=uuid)):
            return self.manager.submitTask(
                _tasks.MachineNodePortList(node_id=uuid))

    def get_nic_by_mac(self, mac):
        try:
            return self.manager.submitTask(
                _tasks.MachineNodePortGet(port_id=mac))
        except ironic_exceptions.ClientException:
            return None

    def list_machines(self):
        return self.manager.submitTask(_tasks.MachineNodeList())

    def get_machine(self, name_or_id):
        """Get Machine by name or uuid

        Search for the baremetal host using the supplied value, which may
        be either a name or a UUID.

        :param name_or_id: A node name or UUID that will be looked up.

        :returns: Dictionary representing the node found or None if no
                  nodes are found.
        """
        try:
            return self.manager.submitTask(
                _tasks.MachineNodeGet(node_id=name_or_id))
        except ironic_exceptions.ClientException:
            return None

    def get_machine_by_mac(self, mac):
        """Get machine by port MAC address

        :param mac: Port MAC address to query in order to return a node.

        :returns: Dictionary representing the node found or None
                  if the node is not found.
        """
        try:
            port = self.manager.submitTask(
                _tasks.MachinePortGetByAddress(address=mac))
            return self.manager.submitTask(
                _tasks.MachineNodeGet(node_id=port.node_uuid))
        except ironic_exceptions.ClientException:
            return None

    def inspect_machine(self, name_or_id, wait=False, timeout=3600):
        """Inspect a Baremetal machine

        Engages the Ironic node inspection behavior in order to collect
        metadata about the baremetal machine.

        :param name_or_id: String representing machine name or UUID value
                           in order to identify the machine.
        :param wait: Boolean value controlling if the method is to wait
                     for the desired state to be reached or a failure to
                     occur.
        :param timeout: Integer value, defaulting to 3600 seconds, for the
                        wait state to reach completion.

        :returns: Dictionary representing the current state of the machine
                  upon exit of the method.
        """
        return_to_available = False

        machine = self.get_machine(name_or_id)
        if not machine:
            raise OpenStackCloudException(
                "Machine inspection failed to find: %s." % name_or_id)

        # NOTE(TheJulia): If in available state, we can do this, however
        # we need to move the host back to available once inspection
        # has completed.
        if "available" in machine['provision_state']:
            return_to_available = True
            # NOTE(TheJulia): Changing an available machine to manageable
            # state, and due to state transitions we need to wait until
            # that transition has completed.
            self.node_set_provision_state(machine['uuid'], 'manage',
                                          wait=True, timeout=timeout)

        elif ("manage" not in machine['provision_state'] and
                "inspect failed" not in machine['provision_state']):
            raise OpenStackCloudException(
                "Machine must be in 'manage' or 'available' state to "
                "engage inspection: Machine: %s State: %s"
                % (machine['uuid'], machine['provision_state']))
        with _utils.shade_exceptions("Error inspecting machine"):
            machine = self.node_set_provision_state(machine['uuid'],
                                                    'inspect')
            if wait:
                for count in _utils._iterate_timeout(
                        timeout,
                        "Timeout waiting for node transition to "
                        "target state of 'inspect'"):
                    machine = self.get_machine(name_or_id)

                    if "inspect failed" in machine['provision_state']:
                        raise OpenStackCloudException(
                            "Inspection of node %s failed, last error: %s"
                            % (machine['uuid'], machine['last_error']))

                    if "manageable" in machine['provision_state']:
                        break

        if return_to_available:
            machine = self.node_set_provision_state(
                machine['uuid'], 'provide', wait=wait, timeout=timeout)

        return machine

    def register_machine(self, nics, wait=False, timeout=3600,
                         lock_timeout=600, **kwargs):
        """Register Baremetal with Ironic

        Allows for the registration of Baremetal nodes with Ironic
        and population of pertinent node information or configuration
        to be passed to the Ironic API for the node.

        This method also creates ports for a list of MAC addresses passed
        in to be utilized for boot and potentially network configuration.

        If a failure is detected creating the network ports, any ports
        created are deleted, and the node is removed from Ironic.

        :param list nics: An array of MAC addresses that represent the
                          network interfaces for the node to be created.

                          Example::

                              [
                                  {'mac': 'aa:bb:cc:dd:ee:01'},
                                  {'mac': 'aa:bb:cc:dd:ee:02'}
                              ]

        :param wait: Boolean value, defaulting to false, to wait for the
                     node to reach the available state where the node can
                     be provisioned. It must be noted, when set to false,
                     the method will still wait for locks to clear before
                     sending the next required command.
        :param timeout: Integer value, defaulting to 3600 seconds, for the
                        wait state to reach completion.
        :param lock_timeout: Integer value, defaulting to 600 seconds, for
                             locks to clear.
        :param kwargs: Key value pairs to be passed to the Ironic API,
                       including uuid, name, chassis_uuid, driver_info,
                       parameters.

        :raises: OpenStackCloudException on operation error.

        :returns: Returns a dictionary representing the new
                  baremetal node.
""" with _utils.shade_exceptions("Error registering machine with Ironic"): machine = self.manager.submitTask(_tasks.MachineCreate(**kwargs)) created_nics = [] try: for row in nics: nic = self.manager.submitTask( _tasks.MachinePortCreate(address=row['mac'], node_uuid=machine['uuid'])) created_nics.append(nic.uuid) except Exception as e: self.log.debug("ironic NIC registration failed", exc_info=True) # TODO(mordred) Handle failures here try: for uuid in created_nics: try: self.manager.submitTask( _tasks.MachinePortDelete( port_id=uuid)) except: pass finally: self.manager.submitTask( _tasks.MachineDelete(node_id=machine['uuid'])) raise OpenStackCloudException( "Error registering NICs with the baremetal service: %s" % str(e)) with _utils.shade_exceptions( "Error transitioning node to available state"): if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for node transition to " "available state"): machine = self.get_machine(machine['uuid']) # Note(TheJulia): Per the Ironic state code, a node # that fails returns to enroll state, which means a failed # node cannot be determined at this point in time. if machine['provision_state'] in ['enroll']: self.node_set_provision_state( machine['uuid'], 'manage') elif machine['provision_state'] in ['manageable']: self.node_set_provision_state( machine['uuid'], 'provide') elif machine['last_error'] is not None: raise OpenStackCloudException( "Machine encountered a failure: %s" % machine['last_error']) # Note(TheJulia): Earlier versions of Ironic default to # None and later versions default to available up until # the introduction of enroll state. # Note(TheJulia): The node will transition through # cleaning if it is enabled, and we will wait for # completion. 
                elif machine['provision_state'] in ['available', None]:
                    break
        else:
            if machine['provision_state'] in ['enroll']:
                self.node_set_provision_state(machine['uuid'], 'manage')
                # Note(TheJulia): We need to wait for the lock to clear
                # before we attempt to set the machine into provide state
                # which allows for the transition to available.
                for count in _utils._iterate_timeout(
                        lock_timeout,
                        "Timeout waiting for reservation to clear "
                        "before setting provide state"):
                    machine = self.get_machine(machine['uuid'])
                    if (machine['reservation'] is None
                            and machine['provision_state'] != 'enroll'):
                        self.node_set_provision_state(
                            machine['uuid'], 'provide')
                        machine = self.get_machine(machine['uuid'])
                        break
                    elif machine['provision_state'] in [
                            'cleaning',
                            'available']:
                        break
                    elif machine['last_error'] is not None:
                        raise OpenStackCloudException(
                            "Machine encountered a failure: %s"
                            % machine['last_error'])
        return machine

    def unregister_machine(self, nics, uuid, wait=False, timeout=600):
        """Unregister Baremetal from Ironic

        Removes entries for Network Interfaces and baremetal nodes
        from an Ironic API.

        :param list nics: An array of strings that consist of MAC addresses
                          to be removed.
        :param string uuid: The UUID of the node to be deleted.
        :param wait: Boolean value, defaults to false, controlling whether
                     to block on the final step of unregistering the
                     machine.
        :param timeout: Integer value, representing seconds with a default
                        value of 600, which controls the maximum amount of
                        time to block on the method's completion.

        :raises: OpenStackCloudException on operation failure.
""" machine = self.get_machine(uuid) invalid_states = ['active', 'cleaning', 'clean wait', 'clean failed'] if machine['provision_state'] in invalid_states: raise OpenStackCloudException( "Error unregistering node '%s' due to current provision " "state '%s'" % (uuid, machine['provision_state'])) for nic in nics: with _utils.shade_exceptions( "Error removing NIC {nic} from baremetal API for node " "{uuid}".format(nic=nic, uuid=uuid)): port = self.manager.submitTask( _tasks.MachinePortGetByAddress(address=nic['mac'])) self.manager.submitTask( _tasks.MachinePortDelete(port_id=port.uuid)) with _utils.shade_exceptions( "Error unregistering machine {node_id} from the baremetal " "API".format(node_id=uuid)): self.manager.submitTask( _tasks.MachineDelete(node_id=uuid)) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for machine to be deleted"): if not self.get_machine(uuid): break def patch_machine(self, name_or_id, patch): """Patch Machine Information This method allows for an interface to manipulate node entries within Ironic. Specifically, it is a pass-through for the ironicclient.nodes.update interface which allows the Ironic Node properties to be updated. :param node_id: The server object to attach to. :param patch: The JSON Patch document is a list of dictonary objects that comply with RFC 6902 which can be found at https://tools.ietf.org/html/rfc6902. Example patch construction:: patch=[] patch.append({ 'op': 'remove', 'path': '/instance_info' }) patch.append({ 'op': 'replace', 'path': '/name', 'value': 'newname' }) patch.append({ 'op': 'add', 'path': '/driver_info/username', 'value': 'administrator' }) :raises: OpenStackCloudException on operation error. :returns: Dictonary representing the newly updated node. 
""" with _utils.shade_exceptions( "Error updating machine via patch operation on node " "{node}".format(node=name_or_id) ): return self.manager.submitTask( _tasks.MachinePatch(node_id=name_or_id, patch=patch, http_method='PATCH')) def update_machine(self, name_or_id, chassis_uuid=None, driver=None, driver_info=None, name=None, instance_info=None, instance_uuid=None, properties=None): """Update a machine with new configuration information A user-friendly method to perform updates of a machine, in whole or part. :param string name_or_id: A machine name or UUID to be updated. :param string chassis_uuid: Assign a chassis UUID to the machine. NOTE: As of the Kilo release, this value cannot be changed once set. If a user attempts to change this value, then the Ironic API, as of Kilo, will reject the request. :param string driver: The driver name for controlling the machine. :param dict driver_info: The dictonary defining the configuration that the driver will utilize to control the machine. Permutations of this are dependent upon the specific driver utilized. :param string name: A human relatable name to represent the machine. :param dict instance_info: A dictonary of configuration information that conveys to the driver how the host is to be configured when deployed. be deployed to the machine. :param string instance_uuid: A UUID value representing the instance that the deployed machine represents. :param dict properties: A dictonary defining the properties of a machine. :raises: OpenStackCloudException on operation error. :returns: Dictonary containing a machine sub-dictonary consisting of the updated data returned from the API update operation, and a list named changes which contains all of the API paths that received updates. """ machine = self.get_machine(name_or_id) if not machine: raise OpenStackCloudException( "Machine update failed to find Machine: %s. 
" % name_or_id) machine_config = {} new_config = {} try: if chassis_uuid: machine_config['chassis_uuid'] = machine['chassis_uuid'] new_config['chassis_uuid'] = chassis_uuid if driver: machine_config['driver'] = machine['driver'] new_config['driver'] = driver if driver_info: machine_config['driver_info'] = machine['driver_info'] new_config['driver_info'] = driver_info if name: machine_config['name'] = machine['name'] new_config['name'] = name if instance_info: machine_config['instance_info'] = machine['instance_info'] new_config['instance_info'] = instance_info if instance_uuid: machine_config['instance_uuid'] = machine['instance_uuid'] new_config['instance_uuid'] = instance_uuid if properties: machine_config['properties'] = machine['properties'] new_config['properties'] = properties except KeyError as e: self.log.debug( "Unexpected machine response missing key %s [%s]" % ( e.args[0], name_or_id)) raise OpenStackCloudException( "Machine update failed - machine [%s] missing key %s. " "Potential API issue." % (name_or_id, e.args[0])) try: patch = jsonpatch.JsonPatch.from_diff(machine_config, new_config) except Exception as e: raise OpenStackCloudException( "Machine update failed - Error generating JSON patch object " "for submission to the API. Machine: %s Error: %s" % (name_or_id, str(e))) with _utils.shade_exceptions( "Machine update failed - patch operation failed on Machine " "{node}".format(node=name_or_id) ): if not patch: return dict( node=machine, changes=None ) else: machine = self.patch_machine(machine['uuid'], list(patch)) change_list = [] for change in list(patch): change_list.append(change['path']) return dict( node=machine, changes=change_list ) def validate_node(self, uuid): with _utils.shade_exceptions(): ifaces = self.manager.submitTask( _tasks.MachineNodeValidate(node_uuid=uuid)) if not ifaces.deploy or not ifaces.power: raise OpenStackCloudException( "ironic node %s failed to validate. 
" "(deploy: %s, power: %s)" % (ifaces.deploy, ifaces.power)) def node_set_provision_state(self, name_or_id, state, configdrive=None, wait=False, timeout=3600): """Set Node Provision State Enables a user to provision a Machine and optionally define a config drive to be utilized. :param string name_or_id: The Name or UUID value representing the baremetal node. :param string state: The desired provision state for the baremetal node. :param string configdrive: An optional URL or file or path representing the configdrive. In the case of a directory, the client API will create a properly formatted configuration drive file and post the file contents to the API for deployment. :param boolean wait: A boolean value, defaulted to false, to control if the method will wait for the desire end state to be reached before returning. :param integer timeout: Integer value, defaulting to 3600 seconds, representing the amount of time to wait for the desire end state to be reached. :raises: OpenStackCloudException on operation error. :returns: Dictonary representing the current state of the machine upon exit of the method. """ with _utils.shade_exceptions( "Baremetal machine node failed change provision state to " "{state}".format(state=state) ): machine = self.manager.submitTask( _tasks.MachineSetProvision(node_uuid=name_or_id, state=state, configdrive=configdrive)) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for node transition to " "target state of '%s'" % state): machine = self.get_machine(name_or_id) # NOTE(TheJulia): This performs matching if the requested # end state matches the state the node has reached. if state in machine['provision_state']: break # NOTE(TheJulia): This performs matching for cases where # the reqeusted state action ends in available state. 
if ("available" in machine['provision_state'] and state in ["provide", "deleted"]): break else: machine = self.get_machine(name_or_id) return machine def set_machine_maintenance_state( self, name_or_id, state=True, reason=None): """Set Baremetal Machine Maintenance State Sets Baremetal maintenance state and maintenance reason. :param string name_or_id: The Name or UUID value representing the baremetal node. :param boolean state: The desired state of the node. True being in maintenance where as False means the machine is not in maintenance mode. This value defaults to True if not explicitly set. :param string reason: An optional freeform string that is supplied to the baremetal API to allow for notation as to why the node is in maintenance state. :raises: OpenStackCloudException on operation error. :returns: None """ with _utils.shade_exceptions( "Error setting machine maintenance state to {state} on node " "{node}".format(state=state, node=name_or_id) ): if state: result = self.manager.submitTask( _tasks.MachineSetMaintenance(node_id=name_or_id, state='true', maint_reason=reason)) else: result = self.manager.submitTask( _tasks.MachineSetMaintenance(node_id=name_or_id, state='false')) if result is not None: raise OpenStackCloudException( "Failed setting machine maintenance state to %s " "on node %s. Received: %s" % ( state, name_or_id, result)) return None def remove_machine_from_maintenance(self, name_or_id): """Remove Baremetal Machine from Maintenance State Similarly to set_machine_maintenance_state, this method removes a machine from maintenance state. It must be noted that this method simpily calls set_machine_maintenace_state for the name_or_id requested and sets the state to False. :param string name_or_id: The Name or UUID value representing the baremetal node. :raises: OpenStackCloudException on operation error. 
        :returns: None
        """
        self.set_machine_maintenance_state(name_or_id, False)

    def _set_machine_power_state(self, name_or_id, state):
        """Set machine power state to on or off

        This private method allows a user to turn power on or off to a
        node via the Baremetal API.

        :params string name_or_id: A string representing the baremetal
                                   node to have its power state changed.
        :params string state: A value of "on", "off", or "reboot" that is
                              passed to the baremetal API to be asserted to
                              the machine. In the case of the "reboot"
                              state, Ironic will return the host to the
                              "on" state.

        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        with _utils.shade_exceptions(
            "Error setting machine power state to {state} on node "
            "{node}".format(state=state, node=name_or_id)
        ):
            power = self.manager.submitTask(
                _tasks.MachineSetPower(node_id=name_or_id,
                                       state=state))
            if power is not None:
                raise OpenStackCloudException(
                    "Failed setting machine power state %s on node %s. "
                    "Received: %s" % (state, name_or_id, power))
            return None

    def set_machine_power_on(self, name_or_id):
        """Activate baremetal machine power

        This is a method that sets the node power state to "on".

        :params string name_or_id: A string representing the baremetal
                                   node to have power turned to an "on"
                                   state.

        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        self._set_machine_power_state(name_or_id, 'on')

    def set_machine_power_off(self, name_or_id):
        """De-activate baremetal machine power

        This is a method that sets the node power state to "off".

        :params string name_or_id: A string representing the baremetal
                                   node to have power turned to an "off"
                                   state.

        :raises: OpenStackCloudException on operation error.

        :returns: None
        """
        self._set_machine_power_state(name_or_id, 'off')

    def set_machine_power_reboot(self, name_or_id):
        """Reboot baremetal machine power

        This is a method that sets the node power state to "reboot", which
        in essence changes the machine power state to "off", and that back
        to "on".
:params string name_or_id: A string representing the baremetal node to have power turned to an "off" state. :raises: OpenStackCloudException on operation error. :returns: None """ self._set_machine_power_state(name_or_id, 'reboot') def activate_node(self, uuid, configdrive=None, wait=False, timeout=1200): self.node_set_provision_state( uuid, 'active', configdrive, wait=wait, timeout=timeout) def deactivate_node(self, uuid, wait=False, timeout=1200): self.node_set_provision_state( uuid, 'deleted', wait=wait, timeout=timeout) def set_node_instance_info(self, uuid, patch): with _utils.shade_exceptions(): return self.manager.submitTask( _tasks.MachineNodeUpdate(node_id=uuid, patch=patch)) def purge_node_instance_info(self, uuid): patch = [] patch.append({'op': 'remove', 'path': '/instance_info'}) with _utils.shade_exceptions(): return self.manager.submitTask( _tasks.MachineNodeUpdate(node_id=uuid, patch=patch)) @_utils.valid_kwargs('type', 'service_type', 'description') def create_service(self, name, enabled=True, **kwargs): """Create a service. :param name: Service name. :param type: Service type. (type or service_type required.) :param service_type: Service type. (type or service_type required.) :param description: Service description (optional). :param enabled: Whether the service is enabled (v3 only) :returns: a dict containing the services description, i.e. the following attributes:: - id: - name: - type: - service_type: - description: :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. 
""" type_ = kwargs.pop('type', None) service_type = kwargs.pop('service_type', None) if self.cloud_config.get_api_version('identity').startswith('2'): kwargs['service_type'] = type_ or service_type else: kwargs['type'] = type_ or service_type kwargs['enabled'] = enabled with _utils.shade_exceptions( "Failed to create service {name}".format(name=name) ): service = self.manager.submitTask( _tasks.ServiceCreate(name=name, **kwargs) ) return _utils.normalize_keystone_services([service])[0] @_utils.valid_kwargs('name', 'enabled', 'type', 'service_type', 'description') def update_service(self, name_or_id, **kwargs): # NOTE(SamYaple): Service updates are only available on v3 api if self.cloud_config.get_api_version('identity').startswith('2'): raise OpenStackCloudUnavailableFeature( 'Unavailable Feature: Service update requires Identity v3' ) # NOTE(SamYaple): Keystone v3 only accepts 'type' but shade accepts # both 'type' and 'service_type' with a preference # towards 'type' type_ = kwargs.pop('type', None) service_type = kwargs.pop('service_type', None) if type_ or service_type: kwargs['type'] = type_ or service_type with _utils.shade_exceptions( "Error in updating service {service}".format(service=name_or_id) ): service = self.manager.submitTask( _tasks.ServiceUpdate(service=name_or_id, **kwargs) ) return _utils.normalize_keystone_services([service])[0] def list_services(self): """List all Keystone services. :returns: a list of dict containing the services description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ with _utils.shade_exceptions(): services = self.manager.submitTask(_tasks.ServiceList()) return _utils.normalize_keystone_services(services) def search_services(self, name_or_id=None, filters=None): """Search Keystone services. :param name_or_id: Name or id of the desired service. :param filters: a dict containing additional filters to use. e.g. {'type': 'network'}. 
:returns: a list of dict containing the services description. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ services = self.list_services() return _utils._filter_list(services, name_or_id, filters) def get_service(self, name_or_id, filters=None): """Get exactly one Keystone service. :param name_or_id: Name or id of the desired service. :param filters: a dict containing additional filters to use. e.g. {'type': 'network'} :returns: a dict containing the services description, i.e. the following attributes:: - id: - name: - type: - description: :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call or if multiple matches are found. """ return _utils._get_entity(self.search_services, name_or_id, filters) def delete_service(self, name_or_id): """Delete a Keystone service. :param name_or_id: Service name or id. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ service = self.get_service(name_or_id=name_or_id) if service is None: self.log.debug("Service %s not found for deleting" % name_or_id) return False if self.cloud_config.get_api_version('identity').startswith('2'): service_kwargs = {'id': service['id']} else: service_kwargs = {'service': service['id']} with _utils.shade_exceptions("Failed to delete service {id}".format( id=service['id'])): self.manager.submitTask(_tasks.ServiceDelete(**service_kwargs)) return True @_utils.valid_kwargs('public_url', 'internal_url', 'admin_url') def create_endpoint(self, service_name_or_id, url=None, interface=None, region=None, enabled=True, **kwargs): """Create a Keystone endpoint. :param service_name_or_id: Service name or id for this endpoint. :param url: URL of the endpoint :param interface: Interface type of the endpoint :param public_url: Endpoint public URL. :param internal_url: Endpoint internal URL. :param admin_url: Endpoint admin URL. 
:param region: Endpoint region. :param enabled: Whether the endpoint is enabled NOTE: Both v2 (public_url, internal_url, admin_url) and v3 (url, interface) calling semantics are supported. But you can only use one of them at a time. :returns: a list of dicts containing the endpoint description. :raises: OpenStackCloudException if the service cannot be found or if something goes wrong during the openstack API call. """ public_url = kwargs.pop('public_url', None) internal_url = kwargs.pop('internal_url', None) admin_url = kwargs.pop('admin_url', None) if (url or interface) and (public_url or internal_url or admin_url): raise OpenStackCloudException( "create_endpoint takes either url and interface OR" " public_url, internal_url, admin_url") service = self.get_service(name_or_id=service_name_or_id) if service is None: raise OpenStackCloudException("service {service} not found".format( service=service_name_or_id)) endpoints = [] endpoint_args = [] if url: urlkwargs = {} if self.cloud_config.get_api_version('identity').startswith('2'): if interface != 'public': raise OpenStackCloudException( "Error adding endpoint for service {service}." " On a v2 cloud the url/interface API may only be" " used for public url. 
Try using the public_url," " internal_url, admin_url parameters instead of" " url and interface".format( service=service_name_or_id)) urlkwargs['{}url'.format(interface)] = url else: urlkwargs['url'] = url urlkwargs['interface'] = interface endpoint_args.append(urlkwargs) else: expected_endpoints = {'public': public_url, 'internal': internal_url, 'admin': admin_url} if self.cloud_config.get_api_version('identity').startswith('2'): urlkwargs = {} for interface, url in expected_endpoints.items(): if url: urlkwargs['{}url'.format(interface)] = url endpoint_args.append(urlkwargs) else: for interface, url in expected_endpoints.items(): if url: urlkwargs = {} urlkwargs['url'] = url urlkwargs['interface'] = interface endpoint_args.append(urlkwargs) if self.cloud_config.get_api_version('identity').startswith('2'): kwargs['service_id'] = service['id'] # Keystone v2 requires 'region' arg even if it is None kwargs['region'] = region else: kwargs['service'] = service['id'] kwargs['enabled'] = enabled if region is not None: kwargs['region'] = region with _utils.shade_exceptions( "Failed to create endpoint for service" " {service}".format(service=service['name']) ): for args in endpoint_args: # NOTE(SamYaple): Add shared kwargs to endpoint args args.update(kwargs) endpoint = self.manager.submitTask( _tasks.EndpointCreate(**args) ) endpoints.append(endpoint) return endpoints def list_endpoints(self): """List Keystone endpoints. :returns: a list of dict containing the endpoint description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ # ToDo: support v3 api (dguerri) with _utils.shade_exceptions("Failed to list endpoints"): endpoints = self.manager.submitTask(_tasks.EndpointList()) return endpoints def search_endpoints(self, id=None, filters=None): """List Keystone endpoints. :param id: endpoint id. :param filters: a dict containing additional filters to use. e.g. 
{'region': 'region-a.geo-1'} :returns: a list of dict containing the endpoint description. Each dict contains the following attributes:: - id: - region: - public_url: - internal_url: (optional) - admin_url: (optional) :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ endpoints = self.list_endpoints() return _utils._filter_list(endpoints, id, filters) def get_endpoint(self, id, filters=None): """Get exactly one Keystone endpoint. :param id: endpoint id. :param filters: a dict containing additional filters to use. e.g. {'region': 'region-a.geo-1'} :returns: a dict containing the endpoint description. i.e. a dict containing the following attributes:: - id: - region: - public_url: - internal_url: (optional) - admin_url: (optional) """ return _utils._get_entity(self.search_endpoints, id, filters) def delete_endpoint(self, id): """Delete a Keystone endpoint. :param id: Id of the endpoint to delete. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ # ToDo: support v3 api (dguerri) endpoint = self.get_endpoint(id=id) if endpoint is None: self.log.debug("Endpoint %s not found for deleting" % id) return False if self.cloud_config.get_api_version('identity').startswith('2'): endpoint_kwargs = {'id': endpoint['id']} else: endpoint_kwargs = {'endpoint': endpoint['id']} with _utils.shade_exceptions("Failed to delete endpoint {id}".format( id=id)): self.manager.submitTask(_tasks.EndpointDelete(**endpoint_kwargs)) return True def create_domain( self, name, description=None, enabled=True): """Create a Keystone domain. :param name: The name of the domain. :param description: A description of the domain. :param enabled: Is the domain enabled or not (default True). 
        :returns: a dict containing the domain description

        :raise OpenStackCloudException: if the domain cannot be created
        """
        with _utils.shade_exceptions("Failed to create domain {name}".format(
                name=name)):
            domain = self.manager.submitTask(_tasks.DomainCreate(
                name=name,
                description=description,
                enabled=enabled))
        return _utils.normalize_domains([domain])[0]

    def update_domain(
            self, domain_id, name=None, description=None, enabled=None):
        """Update a Keystone domain.

        :param domain_id: ID of the domain to update.
        :param name: The new name of the domain.
        :param description: The new description of the domain.
        :param enabled: Enable or disable the domain.

        :returns: a dict containing the updated domain description.

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the openstack API call.
        """
        with _utils.shade_exceptions(
                "Error in updating domain {domain}".format(domain=domain_id)):
            domain = self.manager.submitTask(_tasks.DomainUpdate(
                domain=domain_id, name=name, description=description,
                enabled=enabled))
        return _utils.normalize_domains([domain])[0]

    def delete_domain(self, domain_id):
        """Delete a Keystone domain.

        :param domain_id: ID of the domain to delete.

        :returns: None

        :raises: ``OpenStackCloudException`` if something goes wrong during
            the openstack API call.
        """
        with _utils.shade_exceptions("Failed to delete domain {id}".format(
                id=domain_id)):
            # Deleting a domain is expensive, so disabling it first increases
            # the chances of success
            domain = self.update_domain(domain_id, enabled=False)
            self.manager.submitTask(_tasks.DomainDelete(
                domain=domain['id']))

    def list_domains(self):
        """List Keystone domains.

        :returns: a list of dicts containing the domain description.

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        with _utils.shade_exceptions("Failed to list domains"):
            domains = self.manager.submitTask(_tasks.DomainList())
        return _utils.normalize_domains(domains)

    def search_domains(self, filters=None):
        """Search Keystone domains.

        :param dict filters: A dict containing additional filters to use.
            Keys to search on are id, name, enabled and description.

        :returns: a list of dicts containing the domain description. Each dict
            contains the following attributes::

                - id:
                - name:
                - description:

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
""" with _utils.shade_exceptions("Failed to list domains"): domains = self.manager.submitTask( _tasks.DomainList(**filters)) return _utils.normalize_domains(domains) def get_domain(self, domain_id): """Get exactly one Keystone domain. :param domain_id: domain id. :returns: a dict containing the domain description, or None if not found. Each dict contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ with _utils.shade_exceptions( "Failed to get domain " "{domain_id}".format(domain_id=domain_id) ): domain = self.manager.submitTask( _tasks.DomainGet(domain=domain_id)) return _utils.normalize_domains([domain])[0] @_utils.cache_on_arguments() def list_groups(self): """List Keystone Groups. :returns: A list of dicts containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ with _utils.shade_exceptions("Failed to list groups"): groups = self.manager.submitTask(_tasks.GroupList()) return _utils.normalize_groups(groups) def search_groups(self, name_or_id=None, filters=None): """Search Keystone groups. :param name: Group name or id. :param filters: A dict containing additional filters to use. :returns: A list of dict containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ groups = self.list_groups() return _utils._filter_list(groups, name_or_id, filters) def get_group(self, name_or_id, filters=None): """Get exactly one Keystone group. :param id: Group name or id. :param filters: A dict containing additional filters to use. :returns: A dict containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ return _utils._get_entity(self.search_groups, name_or_id, filters) def create_group(self, name, description, domain=None): """Create a group. 
:param string name: Group name. :param string description: Group description. :param string domain: Domain name or ID for the group. :returns: A dict containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ with _utils.shade_exceptions( "Error creating group {group}".format(group=name) ): domain_id = None if domain: dom = self.get_domain(domain) if not dom: raise OpenStackCloudException( "Creating group {group} failed: Invalid domain " "{domain}".format(group=name, domain=domain) ) domain_id = dom['id'] group = self.manager.submitTask(_tasks.GroupCreate( name=name, description=description, domain=domain_id) ) self.list_groups.invalidate(self) return _utils.normalize_groups([group])[0] def update_group(self, name_or_id, name=None, description=None): """Update an existing group :param string name: New group name. :param string description: New group description. :returns: A dict containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ self.list_groups.invalidate(self) group = self.get_group(name_or_id) if group is None: raise OpenStackCloudException( "Group {0} not found for updating".format(name_or_id) ) with _utils.shade_exceptions( "Unable to update group {name}".format(name=name_or_id) ): group = self.manager.submitTask(_tasks.GroupUpdate( group=group['id'], name=name, description=description)) self.list_groups.invalidate(self) return _utils.normalize_groups([group])[0] def delete_group(self, name_or_id): """Delete a group :param name_or_id: ID or name of the group to delete. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" group = self.get_group(name_or_id) if group is None: self.log.debug( "Group {0} not found for deleting".format(name_or_id)) return False with _utils.shade_exceptions( "Unable to delete group {name}".format(name=name_or_id) ): self.manager.submitTask(_tasks.GroupDelete(group=group['id'])) self.list_groups.invalidate(self) return True def list_roles(self): """List Keystone roles. :returns: a list of dicts containing the role description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ with _utils.shade_exceptions(): roles = self.manager.submitTask(_tasks.RoleList()) return roles def search_roles(self, name_or_id=None, filters=None): """Seach Keystone roles. :param string name: role name or id. :param dict filters: a dict containing additional filters to use. :returns: a list of dict containing the role description. Each dict contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ roles = self.list_roles() return _utils._filter_list(roles, name_or_id, filters) def get_role(self, name_or_id, filters=None): """Get exactly one Keystone role. :param id: role name or id. :param filters: a dict containing additional filters to use. :returns: a single dict containing the role description. Each dict contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" return _utils._get_entity(self.search_roles, name_or_id, filters) def _keystone_v2_role_assignments(self, user, project=None, role=None, **kwargs): with _utils.shade_exceptions("Failed to list role assignments"): roles = self.manager.submitTask( _tasks.RolesForUser(user=user, tenant=project) ) ret = [] for tmprole in roles: if role is not None and role != tmprole.id: continue ret.append({ 'role': { 'id': tmprole.id }, 'scope': { 'project': { 'id': project, } }, 'user': { 'id': user, } }) return ret def list_role_assignments(self, filters=None): """List Keystone role assignments :param dict filters: Dict of filter conditions. Acceptable keys are:: - 'user' (string) - User ID to be used as query filter. - 'group' (string) - Group ID to be used as query filter. - 'project' (string) - Project ID to be used as query filter. - 'domain' (string) - Domain ID to be used as query filter. - 'role' (string) - Role ID to be used as query filter. - 'os_inherit_extension_inherited_to' (string) - Return inherited role assignments for either 'projects' or 'domains' - 'effective' (boolean) - Return effective role assignments. - 'include_subtree' (boolean) - Include subtree 'user' and 'group' are mutually exclusive, as are 'domain' and 'project'. NOTE: For keystone v2, only user, project, and role are used. Project and user are both required in filters. :returns: a list of dicts containing the role assignment description. Contains the following attributes:: - id: - user|group: - project|domain: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" if not filters: filters = {} if self.cloud_config.get_api_version('identity').startswith('2'): if filters.get('project') is None or filters.get('user') is None: raise OpenStackCloudException( "Must provide project and user for keystone v2" ) assignments = self._keystone_v2_role_assignments(**filters) else: with _utils.shade_exceptions("Failed to list role assignments"): assignments = self.manager.submitTask( _tasks.RoleAssignmentList(**filters) ) return _utils.normalize_role_assignments(assignments) def create_flavor(self, name, ram, vcpus, disk, flavorid="auto", ephemeral=0, swap=0, rxtx_factor=1.0, is_public=True): """Create a new flavor. :param name: Descriptive name of the flavor :param ram: Memory in MB for the flavor :param vcpus: Number of VCPUs for the flavor :param disk: Size of local disk in GB :param flavorid: ID for the flavor (optional) :param ephemeral: Ephemeral space size in GB :param swap: Swap space in MB :param rxtx_factor: RX/TX factor :param is_public: Make flavor accessible to the public :returns: A dict describing the new flavor. :raises: OpenStackCloudException on operation error. """ with _utils.shade_exceptions("Failed to create flavor {name}".format( name=name)): flavor = self.manager.submitTask( _tasks.FlavorCreate(name=name, ram=ram, vcpus=vcpus, disk=disk, flavorid=flavorid, ephemeral=ephemeral, swap=swap, rxtx_factor=rxtx_factor, is_public=is_public) ) return _utils.normalize_flavors([flavor])[0] def delete_flavor(self, name_or_id): """Delete a flavor :param name_or_id: ID or name of the flavor to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. 
""" flavor = self.get_flavor(name_or_id) if flavor is None: self.log.debug( "Flavor {0} not found for deleting".format(name_or_id)) return False with _utils.shade_exceptions("Unable to delete flavor {name}".format( name=name_or_id)): self.manager.submitTask(_tasks.FlavorDelete(flavor=flavor['id'])) return True def set_flavor_specs(self, flavor_id, extra_specs): """Add extra specs to a flavor :param string flavor_id: ID of the flavor to update. :param dict extra_specs: Dictionary of key-value pairs. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudResourceNotFound if flavor ID is not found. """ try: self.manager.submitTask( _tasks.FlavorSetExtraSpecs( id=flavor_id, json=dict(extra_specs=extra_specs))) except Exception as e: raise OpenStackCloudException( "Unable to set flavor specs: {0}".format(str(e)) ) def unset_flavor_specs(self, flavor_id, keys): """Delete extra specs from a flavor :param string flavor_id: ID of the flavor to update. :param list keys: List of spec keys to delete. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudResourceNotFound if flavor ID is not found. """ for key in keys: try: self.manager.submitTask( _tasks.FlavorUnsetExtraSpecs(id=flavor_id, key=key)) except Exception as e: raise OpenStackCloudException( "Unable to delete flavor spec {0}: {0}".format( key, str(e))) def _mod_flavor_access(self, action, flavor_id, project_id): """Common method for adding and removing flavor access """ with _utils.shade_exceptions("Error trying to {action} access from " "flavor ID {flavor}".format( action=action, flavor=flavor_id)): if action == 'add': self.manager.submitTask( _tasks.FlavorAddAccess(flavor=flavor_id, tenant=project_id) ) elif action == 'remove': self.manager.submitTask( _tasks.FlavorRemoveAccess(flavor=flavor_id, tenant=project_id) ) def add_flavor_access(self, flavor_id, project_id): """Grant access to a private flavor for a project/tenant. 
:param string flavor_id: ID of the private flavor. :param string project_id: ID of the project/tenant. :raises: OpenStackCloudException on operation error. """ self._mod_flavor_access('add', flavor_id, project_id) def remove_flavor_access(self, flavor_id, project_id): """Revoke access from a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. :param string project_id: ID of the project/tenant. :raises: OpenStackCloudException on operation error. """ self._mod_flavor_access('remove', flavor_id, project_id) def create_role(self, name): """Create a Keystone role. :param string name: The name of the role. :returns: a dict containing the role description :raise OpenStackCloudException: if the role cannot be created """ with _utils.shade_exceptions(): role = self.manager.submitTask( _tasks.RoleCreate(name=name) ) return role def delete_role(self, name_or_id): """Delete a Keystone role. :param string id: Name or id of the role to delete. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. 
""" role = self.get_role(name_or_id) if role is None: self.log.debug( "Role {0} not found for deleting".format(name_or_id)) return False with _utils.shade_exceptions("Unable to delete role {name}".format( name=name_or_id)): self.manager.submitTask(_tasks.RoleDelete(role=role['id'])) return True def _get_grant_revoke_params(self, role, user=None, group=None, project=None, domain=None): role = self.get_role(role) if role is None: return {} data = {'role': role.id} # domain and group not available in keystone v2.0 keystone_version = self.cloud_config.get_api_version('identity') is_keystone_v2 = keystone_version.startswith('2') filters = {} if not is_keystone_v2 and domain: filters['domain_id'] = data['domain'] = \ self.get_domain(domain)['id'] if user: data['user'] = self.get_user(user, filters=filters) if project: # drop domain in favor of project data.pop('domain', None) data['project'] = self.get_project(project, filters=filters) if not is_keystone_v2 and group: data['group'] = self.get_group(group, filters=filters) return data def grant_role(self, name_or_id, user=None, group=None, project=None, domain=None, wait=False, timeout=60): """Grant a role to a user. :param string name_or_id: The name or id of the role. :param string user: The name or id of the user. :param string group: The name or id of the group. (v3) :param string project: The name or id of the project. :param string domain: The id of the domain. (v3) :param bool wait: Wait for role to be granted :param int timeout: Timeout to wait for role to be granted NOTE: for wait and timeout, sometimes granting roles is not instantaneous for granting roles. 
NOTE: project is required for keystone v2 :returns: True if the role is assigned, otherwise False :raise OpenStackCloudException: if the role cannot be granted """ data = self._get_grant_revoke_params(name_or_id, user, group, project, domain) filters = data.copy() if not data: raise OpenStackCloudException( 'Role {0} not found.'.format(name_or_id)) if data.get('user') is not None and data.get('group') is not None: raise OpenStackCloudException( 'Specify either a group or a user, not both') if data.get('user') is None and data.get('group') is None: raise OpenStackCloudException( 'Must specify either a user or a group') if self.cloud_config.get_api_version('identity').startswith('2') and \ data.get('project') is None: raise OpenStackCloudException( 'Must specify project for keystone v2') if self.list_role_assignments(filters=filters): self.log.debug('Assignment already exists') return False with _utils.shade_exceptions( "Error granting access to role: {0}".format( data)): if self.cloud_config.get_api_version('identity').startswith('2'): data['tenant'] = data.pop('project') self.manager.submitTask(_tasks.RoleAddUser(**data)) else: if data.get('project') is None and data.get('domain') is None: raise OpenStackCloudException( 'Must specify either a domain or project') self.manager.submitTask(_tasks.RoleGrantUser(**data)) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for role to be granted"): if self.list_role_assignments(filters=filters): break return True def revoke_role(self, name_or_id, user=None, group=None, project=None, domain=None, wait=False, timeout=60): """Revoke a role from a user. :param string name_or_id: The name or id of the role. :param string user: The name or id of the user. :param string group: The name or id of the group. (v3) :param string project: The name or id of the project. :param string domain: The id of the domain. 
        (v3)
        :param bool wait: Wait for role to be revoked
        :param int timeout: Timeout to wait for role to be revoked

        NOTE: wait and timeout are available because revoking a role is
        sometimes not instantaneous.

        NOTE: project is required for keystone v2

        :returns: True if the role is revoked, otherwise False

        :raise OpenStackCloudException: if the role cannot be removed
        """
        data = self._get_grant_revoke_params(name_or_id, user, group,
                                             project, domain)
        filters = data.copy()

        if not data:
            raise OpenStackCloudException(
                'Role {0} not found.'.format(name_or_id))

        if data.get('user') is not None and data.get('group') is not None:
            raise OpenStackCloudException(
                'Specify either a group or a user, not both')

        if data.get('user') is None and data.get('group') is None:
            raise OpenStackCloudException(
                'Must specify either a user or a group')

        if self.cloud_config.get_api_version('identity').startswith('2') and \
                data.get('project') is None:
            raise OpenStackCloudException(
                'Must specify project for keystone v2')

        if not self.list_role_assignments(filters=filters):
            self.log.debug('Assignment does not exist')
            return False

        with _utils.shade_exceptions(
                "Error revoking access to role: {0}".format(data)):
            if self.cloud_config.get_api_version('identity').startswith('2'):
                data['tenant'] = data.pop('project')
                self.manager.submitTask(_tasks.RoleRemoveUser(**data))
            else:
                if data.get('project') is None \
                        and data.get('domain') is None:
                    raise OpenStackCloudException(
                        'Must specify either a domain or project')
                self.manager.submitTask(_tasks.RoleRevokeUser(**data))
        if wait:
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for role to be revoked"):
                if not self.list_role_assignments(filters=filters):
                    break
        return True

    def list_hypervisors(self):
        """List all hypervisors

        :returns: A list of hypervisor dicts.
""" with _utils.shade_exceptions("Error fetching hypervisor list"): return self.manager.submitTask(_tasks.HypervisorList()) shade-1.7.0/shade/cmd/0000775000567000056710000000000012677257023015606 5ustar jenkinsjenkins00000000000000shade-1.7.0/shade/cmd/inventory.py0000775000567000056710000000467012677256557020242 0ustar jenkinsjenkins00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import json import sys import yaml import shade import shade.inventory def output_format_dict(data, use_yaml): if use_yaml: return yaml.safe_dump(data, default_flow_style=False) else: return json.dumps(data, sort_keys=True, indent=2) def parse_args(): parser = argparse.ArgumentParser(description='OpenStack Inventory Module') parser.add_argument('--refresh', action='store_true', help='Refresh cached information') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--list', action='store_true', help='List active servers') group.add_argument('--host', help='List details about the specific host') parser.add_argument('--private', action='store_true', default=False, help='Use private IPs for interface_ip') parser.add_argument('--cloud', default=None, help='Return data for one cloud only') parser.add_argument('--yaml', action='store_true', default=False, help='Output data in nicely readable yaml') parser.add_argument('--debug', action='store_true', default=False, help='Enable debug output') 
return parser.parse_args() def main(): args = parse_args() try: shade.simple_logging(debug=args.debug) inventory = shade.inventory.OpenStackInventory( refresh=args.refresh, private=args.private, cloud=args.cloud) if args.list: output = inventory.list_hosts() elif args.host: output = inventory.get_host(args.host) print(output_format_dict(output, args.yaml)) except shade.OpenStackCloudException as e: sys.stderr.write(e.message + '\n') sys.exit(1) sys.exit(0) if __name__ == '__main__': main() shade-1.7.0/shade/cmd/__init__.py0000664000567000056710000000000012677256557017720 0ustar jenkinsjenkins00000000000000shade-1.7.0/shade/task_manager.py0000664000567000056710000001334612677256557020073 0ustar jenkinsjenkins00000000000000# Copyright (C) 2011-2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. import abc import sys import threading import time import types import keystoneauth1.exceptions import simplejson import six from shade import _log from shade import meta @six.add_metaclass(abc.ABCMeta) class Task(object): """Represent a task to be performed on an OpenStack Cloud. Some consumers need to inject things like rate-limiting or auditing around each external REST interaction. Task provides an interface to encapsulate each such interaction. Also, although shade itself operates normally in a single-threaded direct action manner, consuming programs may provide a multi-threaded TaskManager themselves. 
For that reason, Task uses threading events to ensure appropriate wait conditions. These should be a no-op in single-threaded applications. A consumer is expected to overload the main method. :param dict kw: Any args that are expected to be passed to something in the main payload at execution time. """ def __init__(self, **kw): self._exception = None self._traceback = None self._result = None self._response = None self._finished = threading.Event() self.args = kw self.requests = False self._request_id = None @abc.abstractmethod def main(self, client): """ Override this method with the actual workload to be performed """ def done(self, result): if self.requests: self._response, self._result = result else: self._result = result self._finished.set() def exception(self, e, tb): self._exception = e self._traceback = tb self._finished.set() def wait(self, raw=False): self._finished.wait() if self._exception: six.reraise(type(self._exception), self._exception, self._traceback) if raw: # Do NOT convert the result. return self._result # NOTE(Shrews): Since the client API might decide to subclass one # of these result types, we use isinstance() here instead of type(). 
if (isinstance(self._result, list) or isinstance(self._result, types.GeneratorType)): return meta.obj_list_to_dict( self._result, request_id=self._request_id) elif (not isinstance(self._result, bool) and not isinstance(self._result, int) and not isinstance(self._result, float) and not isinstance(self._result, str) and not isinstance(self._result, set) and not isinstance(self._result, tuple) and not isinstance(self._result, types.GeneratorType)): return meta.obj_to_dict(self._result, request_id=self._request_id) else: return self._result def run(self, client): self._client = client try: # Retry one time if we get a retriable connection failure try: self.done(self.main(client)) except keystoneauth1.exceptions.RetriableConnectionFailure: client.log.debug( "Connection failure for {name}, retrying".format( name=type(self).__name__)) self.done(self.main(client)) except Exception: raise except Exception as e: self.exception(e, sys.exc_info()[2]) class RequestTask(Task): # It's totally legit for calls to not return things result_key = None # keystoneauth1 throws keystoneauth1.exceptions.http.HttpError on !200 def done(self, result): self._response = result try: result_json = self._response.json() except (simplejson.scanner.JSONDecodeError, ValueError) as e: result_json = self._response.text self._client.log.debug( 'Could not decode json in response: {e}'.format(e=str(e))) self._client.log.debug(result_json) if self.result_key: self._result = result_json[self.result_key] else: self._result = result_json self._request_id = self._response.headers.get('x-openstack-request-id') self._finished.set() class TaskManager(object): log = _log.setup_logging("shade.TaskManager") def __init__(self, client, name): self.name = name self._client = client def stop(self): """ This is a direct action passthrough TaskManager """ pass def run(self): """ This is a direct action passthrough TaskManager """ pass def submitTask(self, task, raw=False): """Submit and execute the given task. 
:param task: The task to execute. :param bool raw: If True, return the raw result as received from the underlying client call. """ self.log.debug( "Manager %s running task %s" % (self.name, type(task).__name__)) start = time.time() task.run(self._client) end = time.time() self.log.debug( "Manager %s ran task %s in %ss" % ( self.name, type(task).__name__, (end - start))) return task.wait(raw) shade-1.7.0/doc/0000775000567000056710000000000012677257023014524 5ustar jenkinsjenkins00000000000000shade-1.7.0/doc/source/0000775000567000056710000000000012677257023016024 5ustar jenkinsjenkins00000000000000shade-1.7.0/doc/source/index.rst0000664000567000056710000000101612677256557017676 0ustar jenkinsjenkins00000000000000.. shade documentation master file, created by sphinx-quickstart on Tue Jul 9 22:26:36 2013. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to shade's documentation! ================================= Contents: .. toctree:: :maxdepth: 2 installation usage contributing coding future releasenotes .. include:: ../../README.rst Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` shade-1.7.0/doc/source/releasenotes.rst0000664000567000056710000000007612677256557021265 0ustar jenkinsjenkins00000000000000============= Release Notes ============= .. release-notes:: shade-1.7.0/doc/source/future.rst0000664000567000056710000001570512677256557020113 0ustar jenkinsjenkins00000000000000************************ Future Design Discussion ************************ This document discusses a new approach to the Shade library and how we might wish for it to operate in a future, not-yet-developed version. It presents a more object oriented approach, and design decisions that we have learned and decided on while working on the current version. Object Design ============= Shade is a library for managing resources, not for operating APIs. 
As such, it is the resource in question that is the primary object, not the
service that may or may not provide that resource, however warmly we may feel
toward any one of those services.

Every resource at minimum has CRUD functions. Additionally, every resource
action should have both a "do this task blocking" variant and a "request that
the cloud start this action and give me a way to check its status" variant.

The creation and deletion of Resources will be handled by a ResourceManager
that is attached to the Cloud

::

    class Cloud:
        ResourceManager server
        servers = server
        ResourceManager floating_ip
        floating_ips = floating_ip
        ResourceManager image
        images = image
        ResourceManager role
        roles = role
        ResourceManager volume
        volumes = volume

getting, listing and searching
------------------------------

In addition to creating a resource, there are different ways of getting your
hands on a resource: a `get`, a `list` and a `search`.

`list` has the simplest semantics - it takes no parameters and simply returns
a list of all of the resources that exist.

`search` takes a set of parameters to match against and returns a list of
resources that match the parameters given. If no resources match, it returns
an empty list.

`get` takes the same set of parameters that `search` takes, but will only
ever return a single matching resource or None. If multiple resources are
matched, an exception will be raised.

::

    class ResourceManager:
        def get -> Resource
        def list -> List
        def search -> List
        def create -> Resource

Cloud and ResourceManager interface
===================================

All ResourceManagers should accept a cache object passed in to their
constructor and should additionally pass that cache object to all Resource
constructors. The top-level cloud should create the cache object, then pass
it to each of the ResourceManagers when it creates them.

Client connection objects should exist and be managed at the Cloud level.
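To make the `get`/`list`/`search` contract described above concrete, here is a minimal, runnable sketch in plain Python. The `ResourceManager` class and its dict-based resources here are illustrative stand-ins for the design being discussed, not shade's actual classes:

```python
class ResourceManager:
    """Toy manager demonstrating the get/list/search semantics above."""

    def __init__(self, resources):
        self._resources = resources

    def list(self):
        # list takes no parameters and returns every resource that exists.
        return list(self._resources)

    def search(self, **params):
        # search returns all resources matching the given parameters,
        # or an empty list when nothing matches.
        return [r for r in self._resources
                if all(r.get(k) == v for k, v in params.items())]

    def get(self, **params):
        # get returns exactly one match, None when nothing matches,
        # and raises when the parameters are ambiguous.
        matches = self.search(**params)
        if not matches:
            return None
        if len(matches) > 1:
            raise RuntimeError('Multiple matches found')
        return matches[0]


servers = ResourceManager([
    {'id': '1', 'name': 'web'},
    {'id': '2', 'name': 'db'},
    {'id': '3', 'name': 'db'},
])
print(len(servers.list()))             # 3
print(servers.get(name='web')['id'])   # 1
print(servers.search(name='missing'))  # []
```

Note that `get(name='db')` raises here, since two resources share that name - the caller is forced to disambiguate rather than silently receiving an arbitrary match.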
A backreference to the OpenStack cloud should be passed to every resource
manager so that each ResourceManager can get hold of the client connections
it needs. For instance, an Image ResourceManager would potentially need
access to both the glance_client and the swift_client.

::

    class ResourceManager
        def __init__(self, cache, cloud)

    class ServerManager(ResourceManager)

    class OpenStackCloud
        def __init__(self):
            self.cache = dogpile.cache()
            self.server = ServerManager(self.cache, self)
            self.servers = self.server

Any resources that have an association action - such as servers and
floating_ips - should carry reciprocal methods on each resource, with
absolutely no difference in behavior.

::

    class Server(Resource):
        def connect_floating_ip:

    class FloatingIp(Resource):
        def connect_server:

Resource objects should have all of the accessor methods you'd expect, as
well as any other interesting rollup methods or actions. For instance, since
a keystone User can be enabled or disabled, one should expect that there
would be an enable() and a disable() method, and that those methods will
immediately operate the necessary REST APIs. However, if you need to make 80
changes to a Resource, 80 REST calls may or may not be silly, so there should
also be a generic update() method which can be used to request the minimal
number of REST calls needed to update the attributes to the requested values.

Resource objects should all have a to_dict method which will return a plain,
flat dictionary of their attributes.

::

    class Resource:
        def update(**new_values) -> Resource
        def delete -> None, throws on error

Readiness
---------

`create`, `get`, and `attach` can return resources that are not yet ready.
Each method should take a `wait` and a `timeout` parameter, which will cause
the request for the resource to block until it is ready. However, the user
may want to poll themselves. Each resource should have an `is_ready` method
which will return True when the resource is ready.
The `wait` method can then actually be implemented in the base Resource class
as an iterate-timeout loop around calls to `is_ready`. Every Resource should
also have an `is_failed` and an `is_deleted` method.

Optional Behavior
-----------------

Not all clouds expose all features. For instance, some clouds do not have
floating ips. Additionally, some clouds may have the feature but the user
account does not, which is effectively the same thing. This should be
handled in several ways:

If the user explicitly requests a resource that they do not have access to,
an error should be raised. For instance, if a user tries to create a
floating ip on a cloud that does not expose that feature to them, shade
should throw a "Your cloud does not let you do that" error.

If the resource concept can be serviced by multiple possible services, shade
should transparently try all of them. The discovery method should use the
dogpile.cache mechanism so that it can be avoided on subsequent tries. For
instance, if the user says "please upload this image", shade should figure
out which sequence of actions needs to be performed and should get the job
done.

If the resource isn't present on some clouds, but the overall concept the
resource represents is, a different resource should present the concept. For
instance, while some clouds do not have floating ips, if what the user wants
is "a server with an IP" - then the fact that one needs to request a
floating ip on some clouds is a detail, and the right thing is for that to
be a quality of a server, managed by the server resource. A floating ip
resource should really only be directly manipulated by the user if they are
doing something very floating-ip specific, such as moving a floating ip from
one server to another.
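The readiness loop described in the Readiness section above - a base-class `wait` implemented as an iterate-timeout loop around `is_ready` - can be sketched as follows. `ResourceTimeout`, `FakeServer`, and the polling intervals are illustrative assumptions for this sketch, not shade's real API:

```python
import time


class ResourceTimeout(Exception):
    """Raised when a resource does not become ready in time."""


class Resource:
    def is_ready(self):
        # Subclasses implement the actual readiness check, e.g. by
        # refreshing the resource and inspecting its status field.
        raise NotImplementedError

    def wait(self, timeout=60.0, interval=0.5):
        # Iterate-timeout loop: poll is_ready() until it returns True
        # or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if self.is_ready():
                return self
            time.sleep(interval)
        raise ResourceTimeout(
            'Resource not ready after %s seconds' % timeout)


class FakeServer(Resource):
    # Becomes ready after a fixed number of polls, standing in for a
    # cloud resource reaching an ACTIVE state.
    def __init__(self, polls_until_ready):
        self._polls = polls_until_ready

    def is_ready(self):
        self._polls -= 1
        return self._polls <= 0


server = FakeServer(polls_until_ready=3).wait(timeout=5, interval=0.01)
```

Keeping the loop in the base class means each concrete resource only has to supply `is_ready` (and, per the text above, `is_failed` and `is_deleted`), while users who prefer to poll themselves can call `is_ready` directly.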
In short, it should be considered a MASSIVE bug in shade if a shade user
ever has to write "if cloud.has_capability("X") do_thing else
do_other_thing" in their own code - since that construct conveys some
resource that shade should really be able to model.

Functional Interface
====================

shade should also provide a functional mapping to the object interface that
does not expose the object interface at all. For instance, for a resource
type `server`, one could expect the following.

::

    class OpenStackCloud:

        def create_server
            return self.server.create().to_dict()

        def get_server
            return self.server.get().to_dict()

        def update_server
            return self.server.get().update().to_dict()

shade-1.7.0/doc/source/conf.py

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))

extensions = [
    'sphinx.ext.autodoc',
    'oslosphinx',
    'reno.sphinxext'
]

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'shade'
copyright = u'2014 Hewlett-Packard Development Company, L.P.'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'Monty Taylor', 'manual'),
]

shade-1.7.0/doc/source/coding.rst

********************************
Shade Developer Coding Standards
********************************

In the beginning, there were no guidelines. And it was good. But that didn't
last long. As more and more people added more and more code, we realized
that we needed a set of coding standards to make sure that the shade API at
least *attempted* to display some form of consistency.

Thus, these coding standards/guidelines were developed. Note that not all of
shade adheres to these standards just yet. Some older code has not been
updated because we need to maintain backward compatibility. Some of it just
hasn't been changed yet. But be clear, all new code *must* adhere to these
guidelines.

Below are the patterns that we expect shade developers to follow.

API Methods
===========

- When an API call acts on a resource that has both a unique ID and a name,
  that API call should accept either identifier with a name_or_id parameter.

- All resources should adhere to the get/list/search interface that controls
  retrieval of those resources. E.g., `get_image()`, `list_images()`,
  `search_images()`.

- Resources should have `create_RESOURCE()`, `delete_RESOURCE()`,
  `update_RESOURCE()` API methods (as it makes sense).

- For those methods that should behave differently for omitted or
  None-valued parameters, use the `_utils.valid_kwargs` decorator. Notably:
  all Neutron `update_*` functions.

- Deleting a resource should return True if the delete succeeded, or False
  if the resource was not found.

Exceptions
==========

All underlying client exceptions must be captured and converted to an
`OpenStackCloudException` or one of its derivatives.

Client Calls
============

All underlying client calls (novaclient, swiftclient, etc.) must be wrapped
by a Task object.
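A simplified, self-contained sketch of that "wrap client calls in a Task" rule follows. shade's real Task class (see the task_manager code earlier in this package) adds result normalization, retry on connection failure, and thread-event synchronization; the `FakeNova` client here is a stand-in, not a real novaclient:

```python
class Task:
    """Minimal Task: defer a client call so a manager can run it."""

    def __init__(self, **kwargs):
        self.args = kwargs
        self._result = None
        self._exception = None

    def main(self, client):
        raise NotImplementedError

    def run(self, client):
        # Capture either the result or the exception, so the caller
        # decides when (and in which thread) to observe it.
        try:
            self._result = self.main(client)
        except Exception as e:
            self._exception = e

    def wait(self):
        if self._exception is not None:
            raise self._exception
        return self._result


class ServerList(Task):
    def main(self, client):
        # The actual client call is confined to main(), so a manager
        # can serialize, rate-limit, or log every call in one place.
        return client.servers.list()


class FakeNova:
    class servers:
        @staticmethod
        def list():
            return ['server-a', 'server-b']


task = ServerList()
task.run(FakeNova())
print(task.wait())  # ['server-a', 'server-b']
```

The payoff of this indirection is that exceptions raised inside `main()` surface from `wait()` at the call site, and a single manager object becomes the one choke point through which every client call flows.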
Returned Resources
==================

Complex objects returned to the caller must be a dict type. The methods
`obj_to_dict()` or `obj_list_to_dict()` should be used for this.

As of this writing, those two methods return Bunch objects, which help to
maintain backward compatibility with a time when shade returned raw objects.
Bunch allows the returned resource to act as *both* an object and a dict.
Use of Bunch objects will eventually be deprecated in favor of just pure
dicts, so do not depend on the Bunch object functionality. Expect a pure
dict type.

Nova vs. Neutron
================

- Recognize that not all cloud providers support Neutron, so never assume it
  will be present. If a task can be handled by either Neutron or Nova, code
  it to be handled by either.

- For methods that accept either a Nova pool or Neutron network, the
  parameter should just refer to the network, but its documentation should
  explain about the pool. See the `create_floating_ip()` and
  `available_floating_ip()` methods.

Tests
=====

- New API methods *must* have unit tests!

- Functional tests should be added, when possible.

- In functional tests, always use unique names (for resources that have this
  attribute) and use them for clean up (see next point).

- In functional tests, always define cleanup functions to delete data added
  by your test, should something go wrong. Data removal should be wrapped in
  a try/except block, and the cleanup should try to delete as many entries
  added by the test as possible.

shade-1.7.0/doc/source/usage.rst

=====
Usage
=====

To use shade in a project::

    import shade

.. note::
    API methods that return a description of an OpenStack resource (e.g.,
    server instance, image, volume, etc.) do so using a dictionary of values
    (e.g., ``server['id']``, ``image['name']``). This is the standard, and
    **recommended**, way to access these resource values.
    For backward compatibility, resource values can be accessed using object
    attribute access (e.g., ``server.id``, ``image.name``). Shade uses the
    `Munch library `_ to provide this behavior. This is **NOT** the
    recommended way to access resource values. We keep this behavior for
    developer convenience in the 1.x series of shade releases. This will
    likely not be the case in future, major releases of shade.

.. autoclass:: shade.OpenStackCloud
   :members:

.. autoclass:: shade.OperatorCloud
   :members:

shade-1.7.0/doc/source/contributing.rst

.. include:: ../../CONTRIBUTING.rst

shade-1.7.0/doc/source/installation.rst

============
Installation
============

At the command line::

    $ pip install shade

Or, if you have virtualenv wrapper installed::

    $ mkvirtualenv shade
    $ pip install shade

shade-1.7.0/HACKING.rst

shade Style Commandments
===============================================

Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/

shade-1.7.0/CONTRIBUTING.rst

.. _contributing:

=====================
Contributing to shade
=====================

If you're interested in contributing to the shade project, the following
will help get you started.

Contributor License Agreement
-----------------------------

.. index::
   single: license; agreement

In order to contribute to the shade project, you need to have signed
OpenStack's contributor's agreement.

.. seealso::

   * http://wiki.openstack.org/HowToContribute
   * http://wiki.openstack.org/CLA

Project Hosting Details
-----------------------

Project Documentation
    http://docs.openstack.org/infra/shade/

Bug tracker
    http://storyboard.openstack.org

Mailing list (prefix subjects with ``[shade]`` for faster responses)
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Code Hosting
    https://git.openstack.org/cgit/openstack-infra/shade

Code Review
    https://review.openstack.org/#/q/status:open+project:openstack-infra/shade,n,z

Please read `GerritWorkflow`_ before sending your first patch for review.

.. _GerritWorkflow: https://wiki.openstack.org/wiki/GerritWorkflow

shade-1.7.0/.testr.conf

[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./shade/tests/unit} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

shade-1.7.0/tox.ini

[tox]
minversion = 1.6
envlist = py34,py27,pypy,pep8
skipsdist = True

[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
    LANG=en_US.UTF-8
    LANGUAGE=en_US:en
    LC_ALL=C
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:functional]
setenv =
    OS_TEST_PATH = ./shade/tests/functional
passenv = OS_*
commands = python setup.py testr --slowest --testr-args='--concurrency=1 {posargs}'

[testenv:pep8]
commands = flake8

[testenv:venv]
commands = {posargs}

[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'

[testenv:ansible]
# Need to pass some env vars for the Ansible playbooks
passenv = HOME USER
commands =
    {toxinidir}/extras/run-ansible-tests.sh -e {envdir} {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

[flake8]
# Infra does not follow hacking, nor the broken E12* things
ignore = E123,E125,E129,H
show-source = True
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build

shade-1.7.0/ChangeLog

CHANGES
=======

1.7.0
-----

* Cache ports like servers
* Workaround multiple private network ports
* Reset network caches after network create/delete
* Fix test_list_servers unit test
* Fix test_get_server_ip unit test
* Remove duplicate FakeServer class from unit tests
* Mutex protect internal/external network detection
* Support provider networks in public network detection
* Re-allow list of networks for FIP assignment
* Support InsecureRequestWarning == None
* Add release notes for new create_image_snapshot() args
* Split waiting for images into its own method

1.6.2
-----

* Add wait support to create_image_snapshot()
* Also add server interfaces for server get
* Import os module as it is referenced in line 2097
* Fix grant_role docstring

1.6.1
-----

* Add default value to wait parameter

1.6.0
-----

* Use OpenStackCloudException when _delete_server(
* Always do network interface introspection
* Fix race condition in deleting volumes
* Use direct requests for flavor extra_specs set/unset
* Fix search_projects docstring
* Fix search_users docstring
* Deal with is_public and ephemeral in normalize_flavors
* Create clouds in Functional Test base class
* Run extra specs through TaskManager and use requests
* Bug fix: Make set/unset of flavor specs work again
* Refactor unit tests to construct cloud in base
* Add constructor param to turn on inner logging
* Log inner_exception in test runs
* Add environment_files to stack_create
* Add normalize stack function for heat stack_list
* Add wait_for_server API call
* Update create_endpoint()
* Test v3 params on v2.0
endpoint; Add v3 unit * Add update_service() * Use network in neutron_available_floating_ips * Allow passing project_id to create_network 1.5.1 ----- * In the service lock, reset the service, not the lock * Bug fix: Do not fail on routers with no ext gw 1.5.0 ----- * Mock glance v1 image with object not dict * Use warlock in the glance v2 tests * Fixes for latest cinder and neutron clients * Add debug message about file hash calculation * Pass username/password to SwiftService * Also reset swift service object at upload time * Invalidate volume cache when waiting for attach * Use isinstance() for result type checking * Add test for os_server Ansible module * Fix create_server() with a named network * os_router playbook cleanup * Fix heat create_stack and delete_stack * Catch failures with particular clouds * Allow testing against Ansible dev branch * Recognize subclasses of list types * Add ability to pass just filename to create_image * Add support for provider network options * Remove mock testing of os-client-config for swift * Add a method to download an image from glance * Add test option to use Ansible source repo * Add enabled flag to keystone service data * Clarify Munch object usage in documentation * Add docs tox target * create_service() should normalize return value * Prepare functional test subunit stream for collection * Use release version of Ansible for testing * Modify test workaround for extra_dhcp_opts * Fix for stable/liberty job * granting and revoking privs to users and groups * Add release note for FIP timeout fix * include keystonev2 role assignments * Add release note for new get_object() API call * Pass timeout through to floating ip creation * Fix normalize_role_assignments() return value * Remove a done todo list item * add the ability to get an object back from swift * allow for updating passwords in keystone v2 * Support neutron subnets without gateway IPs * Save the adminPass if returned on server create * Fix unit tests that validate 
client call arguments * Allow inventory filtering by cloud name * Add range search functionality 1.4.0 ----- * correct rpmlint errors * Add tests for stack search API * Fix filtering in search_stacks() * Bug fix: Cinder v2 returns bools now * Normalize server objects * Make server variable expansion optional * Have inventory use os-client-config extra_config * Fix unittest stack status * Fix shade tests with OCC 1.13.0 * No Mutable Defaults * Add option to enable HTTP tracing * Add support for querying role assignments * Add inventory unit tests * Fix server deletes when cinder isn't available * Pedantic spelling correction * Bug fix: create_stack() fails when waiting * Stack API improvements * Bug fix: delete_object() returns True/False * Add wait support for ironic node [de]activate * Improve test coverage: container/object list API * Make a new swift client prior to each image upload * Improve test coverage: volume attach/detach API * Bug fix: Allow name update for domains * Improve test coverage: network delete API * Bug fix: Fix pass thru filtering in list_networks * Consider 'in-use' a non-pending volume for caching * Improve test coverage: private extension API * Improve test coverage: hypervisor list * Use reno for release notes * Improve test coverage: list_router_interfaces API * Change the client imports to stop shadowing * Use non-versioned cinderclient constructor * Improve test coverage: server secgroup API * Improve test coverage: container API 1.3.0 ----- * Improve test coverage: project API * Improve test coverage: user API * Provide a better comment for the object short-circuit * Remove cinderclient version pin * Add functional tests for boot from volume * Enable running tests against RAX and IBM * Don't double-print exception subjects * Accept objects in name_or_id parameter * Normalize volume objects * Fix argument sequences for boot from volume * Make delete_server() return True/False * Adjust conditions when enable_snat is specified * Only log 
errors in exceptions on demand * Fix resource leak in test_compute * Clean up compute functional tests * Stop using nova client in test_compute * Retry API calls if they get a Retryable failure * Fix call to shade_exceptions in update_project * Add test for os_volume Ansible module 1.2.0 ----- * Fix for min_disk/min_ram in create_image API * Add test for os_image Ansible module * Fix warnings.filterwarnings call * boot-from-volume and network params for server create * Do not send 'router:external' unless it is set * Add test for os_port Ansible module * Allow specifying cloud name to ansible tests * Fix a 60 second unit test * Make sure timeouts are floats * Remove default values from innner method * Bump os-client-config requirement * Add test for os_user_group Ansible module * Add user group assignment API * Add test for os_user Ansible module * Add test for os_nova_flavor Ansible module * Stop using uuid in functional tests * Make functional object tests actually run * Add Ansible object role * Fix for create_object * Four minor fixes that make debugging better * Add new context manager for shade exceptions, final * Add ability to selectively run ansible tests * Add Ansible testing infrastructure * Add new context manager for shade exceptions, cont. 
again * Pull server list cache setting via API * Plumb fixed_address through add_ips_to_server * Let os-client-config handle session creation * Remove designate support * Remove test reference to api_versions * Update dated project methods * Fix incorrect variable name * Add CRUD methods for keystone groups * Bump ironicclient depend * Make sure cache expiration time is an int * Add new context manager for shade exceptions, cont * Use the requestsexceptions library * Don't warn on configured insecure certs * Normalize domain data * Normalization methods should return Munch * Fix keystone domain searching * Add new context manager for shade exceptions * teach shade how to list_hypervisors * Update ansible router playbook * Stop calling obj_to_dict everwhere * Always return a munch from Tasks * Make raw-requests calls behave like client calls * Minor logging improvements * Remove another extraneous get for create_server * Don't wrap wrapped exception in create_server * Skip an extra unneeded server get * Don't wrap wrapped exceptions in operatorcloud.py * Add docs for create_server * Update README to not reference client passthrough * Move ironic client attribute to correct class * Move _neutron_exceptions context manager to _utils * Fix misspelling of ironic state name * Timeout too aggressive for inspection tests * Split out OpenStackCloud and OperatorCloud classes * Adds volume snapshot functionality to shade * Fix the return values of create and delete volume * Remove removal of jenkins clouds.yaml * Consume /etc/openstack/clouds.yaml * Add logic to support baremetal inspection 1.0.0 ----- * node_set_provision_state wait/timeout support * Add warning suppression for keystoneauth loggers * Suppress Rackspace SAN warnings again * return additional detail about servers * expand security groups in get_hostvars_from_server * add list_server_security_groups method * Add swift object and container list functionality * Translate task name in log message always * Add 
debug logging to iterate timeout * Change the fallback on server wait to 2 seconds * Add entry for James Blair to .mailmap * handle routers without an external gateway in list_router_interfaces * Fix projects list/search/get interface * Remove unused parameter from create_stack * Move valid_kwargs decorator to _utils * Add heat support * Abstract out the name of the name key * Add heatclient support * Use OCC to create clouds in inventory * novaclient 2.32.0 does not work against rackspace * Support private address override in inventory * Normalize user information * Set cache information from clouds.yaml * Make designate record methods private for now * Rely on devstack for clouds.yaml * Rename identity_domain to domain * Rename designate domains to zones * Replace Bunch with compatible fork Munch * Make a few IP methods private 0.16.0 ------ * Push filtering down into neutron * Make floating IP func tests less racey * Make router func tests less racey * Create neutron floating ips with server info * Undecorate cache decorated methods on null cache * Tweak create_server to use list_servers cache * Add API method to list router interfaces * Handle list_servers caching more directly * Split the nova server active check out * Pass wait to add_ips_to_server * Fix floating ip removal on delete server * Document filters for get methods * Add some more docstrings * Remove shared=False from get_internal_network * Make attach_instance return updated volume object * Tell git to ignore .eggs directory * Align users with list/search/get interface * Add script to document deleting private networks * Add create/delete for keystone roles * Accept and emit union of keystone v2/v3 service * Use keystone v3 service type argument * Add get/list/search methods for identity roles * Add methods to update internal router interfaces * Add get_server_by_id optmization * Add option to floating ip creation to not reuse * Provide option to delete floating IP with server * Update 
python-troveclient requirement * Add a private method for nodepool server vars * Update required ironicclient version * Split get_hostvars_from_server into two * Invalidate image cache everytime we make a change * Use the ipaddress library for ip calculations * Optimize network finding * Fix create_image_snapshot 0.15.0 ------ * Return IPv6 address for interface_ip on request * Plumb wait and timout down to add_auto_ip * Pass parameters correctly for image snapshots * Fix mis-named has_service entry * Provide shortcut around has_service * Provide short-circuit for finding server networks * Update fake to match latest OCC * Dont throw exception on missing service * Add functional test for private_v4 * Attempt to use glanceclient strip_version * Fix baremetal port deletion 0.14.0 ------ * Add router ansible test and update network role * Trap exceptions in helper functions * Add more info to some exceptions * Allow more complex router updates * Allow more complex router creation * Allow creating externally accessible networks * Handle glance v1 and v2 difference with is_public * Get defaults for image type from occ * Use the get_auth function from occ * Add a NullHandler to all of our loggers * Remove many redundant debug logs * Make inner_exception a private member * Just do the error logging in the base exception * Store the inner exception when creating an OSCException * Start using keystoneauth for keystone sessions * Move keystone to common identity client interface * Bump the default API version for python-ironicclient * Avoid 2.27.0 of novaclient * unregister_machine blocking logic * Fix exception lists in functional tests * Migrate neutron to the common client interface * Remove last vestige of glanceclient being different * Pass timeout to session, not constructors * Delete floating ip by ID instead of name 0.13.0 ------ * Move glanceclient to new common interface * Addition of shade unregister_machine timeout * Initial support for ironic enroll state * Add 
flavor access API * Make client constructor calls consistent * Change functional testing to use clouds.yaml * Add a developer coding standards doc 0.12.0 ------ * Add flavor functional tests * Bug fix for obj_to_dict() * Add log message for when IP addresses fail * Add methods to set and unset flavor extra specs * Listing flavors should pull all flavors * Be consistent with accessing server dict * Throw an exception on a server without an IP * Be smarter finding private IP * Clarify future changes in docs * Remove meta.get_server_public_ip() function * Document create_object * Remove unused server functions * Fix two typos and one readablity on shade documentation * Pass socket timeout to swiftclient * Process config options via os-client-config * Update ansible subnet test * Fix test_object.py test class name * Fix for swift servers older than 1.11.0 * Use disable_vendor_agent flags in create_image * Use os-client-config SSL arg processing * Correctly pass the server ID to add_ip_from_pool * Add initial designate read-only operations * Always use a fixed address when attaching a floating IP to a server * Catch leaky exceptions from create_image() 0.11.0 ------ * Add flavor admin support * Fix debug logging lines * Account for Error 396 on Rackspace * Fix small error in README.rst * Allow use of admin tokens in keystone * Fix identity domain methods * Update ansible module playbooks * Rework how we get domains * Fix "Bad floatingip request" when multiple fixed IPs are present * Add Ansible module test for subnet * Add Ansible module test for networks * Add a testing framework for the Ansible modules * Support project/tenant and domain vs. 
None
* Add CRUD methods for Keystone domains

0.10.0
------

* Raise exception for nova egress secgroup rule
* Modify secgroup rule processing
* Make sure we are returning floating IPs in current domain
* Correctly name the functional TestImage class

0.9.0
-----

* Locking ironic API microversion
* Add Neutron/Nova Floating IP tests
* Adding SSL arguments to glance client
* Remove list_keypair_dicts method
* Do not use environment for Swift unit tests
* Add Neutron/Nova Floating IP attach/detach
* Fix available_floating_ip when using Nova network
* Skip Swift functional tests if needed
* Fix AttributeError in keystone functional tests
* Update keypair APIs to latest standards
* Add Neutron/Nova Floating IP delete (i.e. deallocate from project)
* Add Neutron/Nova Floating IP create (i.e. allocate to project)
* Convert ironicclient node.update() call to Task
* Convert ironicclient node.get() call to Task
* Move TestShadeOperator in a separate file
* Fix intermittent error in unit tests
* Pin cinderclient
* Add comment explaining why finding an IP is hard
* Add IPv6 to the server information too
* Use accessIPv4 and accessIPv6 if they're there
* Add Neutron/Nova Floating IP list/search/get

0.8.2
-----

* Catch all exceptions around port for ip finding
* Centralize exception management for Neutron

0.8.1
-----

* Fix MD5 headers regression
* Ensure that service values are strings
* Pass token and endpoint to swift os_options
* Convert ironicclient node.validate() call to Task
* Convert ironicclient node.list() call to Task
* Return True/False for delete methods

0.8.0
-----

* Add delete method for security group rules
* Add get_server_external_ipv6() to meta
* Refactor find_nova_addresses()
* Replace get_server_public_ip() with get_server_external_ipv4()
* Add get_server_external_ipv4() to meta
* Add more parameters to update_port()
* Improve documentation for create_port()
* Correct get_machine_by_mac and add test
* Add create method for secgroup rule
* Coalesce port values in secgroup rules
* Move _utils unit testing to separate file

0.7.0
-----

* Add secgroup update API
* Add very initial support for passing in occ object
* Don't emit volume tracebacks in inventory debug
* Return new secgroup object
* Port ironic client port.get_by_address() to a Task
* Port ironic client port.get() to a Task
* Add inventory command to shade
* Extract logging config into a helper function
* Add create method for security groups
* Add delete method for security groups
* Switch to SwiftService for segmented uploads
* Add support to get a SwiftService object
* Add port resource methods
* Split security group list operations
* Add keystone endpoint resource methods
* Add Keystone service resource methods
* Rely on defaults being present
* Consume os_client_config defaults as base defaults
* Remove hacking select line
* Add design for an object interface
* Port ironic client node.list_ports() to a Task
* Port ironic client port.list() to a Task
* Split list filtering into _utils

0.6.5
-----

* Cast nova server object to dict after refetch
* Split iterate_timeout into _utils
* Cleanup OperatorCloud doc errors/warnings
* Update pbr version pins

0.6.4
-----

* Set metadata headers on object create

0.6.3
-----

* Always refresh glanceclient for tokens validity
* Don't cache keystone tokens as KSC does it for us
* Make sure glance image list actually runs in Tasks

0.6.2
-----

* Make caching work when cloud name is None
* Handle novaclient exception in delete_server wait
* Support PUT in Image v2 API
* Make ironic use the API version system
* Catch client exceptions during list ops
* Replace ci.o.o links with docs.o.o/infra
* Pass OS_ variables through to functional tests
* Improve error message on auth_plugin failure
* Handle novaclient exceptions during delete_server
* Add floating IP pool resource methods
* Don't error on missing certs

0.6.1
-----

* Stop leaking server objects
* Use fakes instead of mocks for data objects
* Update images API for get/list/search interface
* Rewrite extension checking methods
* Update server API for get/list/search interface
* Fix delete_server when wait=True
* Return Bunch objects instead of plain dicts

0.6.0
-----

* Switch tasks vs put on a boolean config flag
* Enhance the OperatorCloud constructor
* Convert node_set_provision_state to task
* Update recent Ironic exceptions
* Enhance error message in update_machine
* Rename get_endpoint() to get_session_endpoint()
* Make warlock filtering match dict filtering
* Fix exception re-raise during task execution for py34
* Add more tests for server metadata processing
* Add thread sync points to Task
* Add early service fail and active check method
* Add a method for getting an endpoint
* Raise a shade exception on broken volumes
* Split exceptions into their own file
* Add minor OperatorCloud documentation
* Allow for int or string ID comparisons
* Change ironic maintenance method to align with power method
* Add Ironic machine power state pass-through
* Update secgroup API for new get/list/search API
* Fix functional tests to run against live clouds
* Add functional tests for create_image
* Do not cache unsteady state images
* Add tests and invalidation for glance v2 upload
* Allow complex filtering with embedded dicts
* Call super in OpenStackCloudException
* Add Ironic maintenance state pass-through
* Add update_machine method
* Replace e.message with str(e)
* Update flavor API for new get/list/search API
* Add a docstring to the Task class
* Remove REST links from inventory metadata
* Have get_image_name return an image_name
* Fix get_hostvars_from_server for volume API update
* Add test for create_image with glance v1
* Explicitly request cloud name in test_caching
* Add test for caching in list_images
* Test flavor cache and add invalidation
* Fix major update_user issues
* create_user should return the user created
* Test that deleting user invalidates user cache
* Use new getters in update_subnet and update_router
* Update volume API for new getters and dict retval
* Search methods for networks, subnets and routers
* Update unregister_machine to use tasks
* Invalidate user cache on user create
* Update register_machine to use tasks
* Add test of OperatorCloud auth_type=None
* Allow name or ID for update_router()
* Allow name or ID for update_subnet()
* Add test for user_cache
* MonkeyPatch time.sleep in unit tests to avoid wait
* Add patch_machine method and operator unit test substrate
* Wrap ironicclient methods that leak objects
* Basic test for meta method obj_list_to_dict
* Change Ironic node lookups to support names
* Add meta method obj_list_to_dict
* Add test for invalidation after delete
* Deprecate use of cache in list_volumes
* Invalidate volume list cache when creating
* Make cache key generator ignore cache argument
* Add get_subnet() method
* Add API method update_subnet()
* Add API method delete_subnet()
* Add API method create_subnet()
* Unsteady state in volume list should prevent cache
* Test volume list caching
* Allow passing config into shade.openstack_cloud
* Refactor caching to allow per-method invalidate
* Add tests for caching
* Rename auth_plugin to auth_type
* Update os-client-config min version
* Fix volume operations
* Fix exception in update_router()
* Add API auto-generation based on docstrings

0.5.0
-----

* Fix docs nit - make it clear the arg is a string
* Poll on the actual image showing up
* Add delete_image call
* Skip passing in timeout to glance if it's not set
* Add some unit test for create_server
* Migrate API calls to task management
* Fix naming inconsistencies in rebuild_server tests
* Add task management framework
* Namespace caching per cloud
* Allow for passing cache class in as a parameter
* Add 'rebuild' to shade
* Let router update to specify external gw net ID
* Create swift container if it does not exist
* Fix a use of in where it should be equality
* Disable warnings about old Rackspace certificates
* Pass socket timeout to all of the Client objects
* Add methods for logical router management
* Add api-level timeout parameter
* Custom exception needs str representation

0.4.0
-----

* Add basic unit test for shade.openstack_cloud
* Small fixes found working on ansible modules
* Disable dogpile.cache if cache_interval is None
* Add support for keystone projects
* Fix up and document input parameters
* Handle image name for boot from volume
* Clean up race condition in functional tests
* Add initial compute functional tests to Shade
* Add cover to .gitignore
* Add ironic node deployment support
* Align cert, key, cacert and verify with requests
* Add methods to create and delete networks
* Add behavior to enable ironic noauth mode
* Reorder envlist to avoid the rm -fr .testrepository when running tox -epy34

0.3.0
-----

* Make image processing work for v2
* Utilize dogpile.cache for caching
* Add support for volume attach/detach
* Do not allow to pass *-cache on init
* Import from v2 instead of v1_1
* Add unit test for meta.get_groups_from_server
* Add unit tests for meta module
* Add a method to create image snapshots from nova
* Return extra information for debugging on failures
* Don't try to add an IP if there is one
* Revamp README file
* Add hasExtension method to check cloud capabilities
* Don't compare images when image is None
* Add service_catalog property
* Remove unnecessary container creation
* Make is_object_stale() a public method
* Fix broken object hashing
* Adds some more swift operations
* Adds get_network() and list_networks function
* Add support for creating/deleting volumes
* Get auth token lazily
* Pass service_name to nova_client constructor
* Create a neutron client
* Port to use keystone sessions and auth plugins
* Add consistent methods for returning dicts
* Add get_flavor method
* Make get_image return None
* Use the "iterate timeout" idiom from nodepool
* Fix obj_to_dict type filtering
* Adds a method to get security group
* Pull in improvements from nodepool
* Remove positional args to create_server
* Don't include deleted images by default
* Add image upload support
* Refactor glance version call into method
* Support uploading swift objects
* Debug log any time we re-raise an exception
* Remove py26 support
* Explain obj_to_dict
* Fix python3 unittests
* Change meta info to be an Infra project
* Fix flake8 errors and turn off hacking
* Fix up copyright headers
* Add better caching around volumes
* Support boot from volume
* Make get_image work on name or id
* Add some additional server meta munging
* Support injecting mount-point meta info
* Move ironic node create/delete logic into shade
* Refactor ironic commands into OperatorCloud class
* fix typo in create_server
* Don't die if we didn't grab a floating ip
* Process flavor and image names
* Stop prefixing values with slugify
* Don't access object members on a None
* Make all of the compute logic work
* Add delete and get server name
* Fixed up a bunch of flake8 warnings
* Add in server metadata routines
* Plumb through a small name change for args
* Consume project_name from os-client-config
* add Ironic client
* Updates to use keystone session
* Discover Trove API version
* Offload config to the os-client-config library
* Add example code to README
* Add volumes and config file parsing
* Fix log invocations
* Remove some extra lines from the README
* Add the initial library code
* Initial cookiecutter repo
shade-1.7.0/README.rst0000664000567000056710000000330112677256557015456 0ustar jenkinsjenkins00000000000000
Introduction
============

shade is a simple client library for operating OpenStack clouds. The
key word here is *simple*. Clouds can do many many many things - but
there are probably only about 10 of them that most people care about with
any regularity. If you want to do complicated things, you should probably
use the lower level client libraries - or even the REST API directly.
However, if what you want is to be able to write an application that talks
to clouds no matter what crazy choices the deployer has made in an attempt
to be more hipster than their self-entitled narcissist peers, then shade
is for you.

shade started its life as some code inside of ansible. ansible has a bunch
of different OpenStack related modules, and there was a ton of duplicated
code. Eventually, between refactoring that duplication into an internal
library, and adding logic and features that the OpenStack Infra team had
developed to run client applications at scale, it turned out that we'd
written nine-tenths of what we'd need to have a standalone library.

Example
=======

Sometimes an example is nice. ::

  import shade

  # Initialize and turn on debug logging
  shade.simple_logging(debug=True)

  # Initialize cloud
  # Cloud configs are read with os-client-config
  cloud = shade.openstack_cloud(cloud='mordred')

  # Upload an image to the cloud
  image = cloud.create_image(
      'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

  # Find a flavor with at least 512M of RAM
  flavor = cloud.get_flavor_by_ram(512)

  # Boot a server, wait for it to boot, and then do whatever is needed
  # to get a public ip for it.
  cloud.create_server(
      'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)
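The ``cloud='mordred'`` argument above names a cloud defined in an
os-client-config ``clouds.yaml`` file. As a rough sketch only - the cloud
name, auth URL, credentials and region below are placeholder values for
illustration, not anything shade ships - such an entry might look like::

  clouds:
    mordred:
      region_name: RegionOne
      auth:
        auth_url: 'https://example.com:5000/v2.0'
        username: mordred
        password: my-wonderful-password
        project_name: mordred

With that file in place (os-client-config looks in the current directory,
``~/.config/openstack`` and ``/etc/openstack``), the example needs no
credentials in the code itself.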