==> shade-1.31.0/.coveragerc <==

[run]
branch = True
source = shade
omit = shade/tests/*

[report]
ignore_errors = True

==> shade-1.31.0/.stestr.conf <==

[DEFAULT]
test_path=./shade/tests/unit
top_dir=./

==> shade-1.31.0/lower-constraints.txt <==

appdirs==1.4.3
certifi==2018.1.18
chardet==3.0.4
coverage==4.0
decorator==4.2.1
deprecation==2.0
dogpile.cache==0.6.5
extras==1.0.0
fixtures==3.0.0
future==0.16.0
idna==2.6
iso8601==0.1.12
jmespath==0.9.3
jsonpatch==1.21
jsonpointer==2.0
keystoneauth1==3.8.0
linecache2==1.0.0
mock==2.0.0
mox3==0.20.0
munch==2.2.0
netifaces==0.10.6
openstacksdk==0.15.0
os-client-config==1.28.0
os-service-types==1.2.0
oslotest==3.2.0
packaging==17.1
pbr==2.0.0
pyparsing==2.2.0
python-mimeparse==1.6.0
python-subunit==1.0.0
PyYAML==3.12
requests-mock==1.2.0
requests==2.18.4
requestsexceptions==1.4.0
six==1.11.0
stestr==1.0.0
stevedore==1.28.0
testrepository==0.0.18
testscenarios==0.4
testtools==2.2.0
traceback2==1.4.0
unittest2==1.1.0
urllib3==1.22

==> shade-1.31.0/tox.ini <==

[tox]
minversion = 2.0
envlist = py36,py35,py27,pep8
skipsdist = True

[testenv]
usedevelop = True
passenv = UPPER_CONSTRAINTS_FILE
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
         LANG=en_US.UTF-8
         LANGUAGE=en_US:en
         LC_ALL=C
deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
       -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
           stestr slowest

[testenv:functional]
basepython = {env:SHADE_TOX_PYTHON:python2}
passenv = OS_* SHADE_* UPPER_CONSTRAINTS_FILE
commands = stestr --test-path ./shade/tests/functional run --serial {posargs}
           stestr slowest

[testenv:pep8]
basepython = python3
usedevelop = False
skip_install = True
deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
       doc8
       hacking
       pygments
commands = doc8 doc/source
           flake8 shade

[testenv:venv]
basepython = python3
commands = {posargs}

[testenv:debug]
basepython = python3
whitelist_externals = find
commands = find . -type f -name "*.pyc" -delete
           oslo_debug_helper {posargs}
-type f -name "*.pyc" -delete oslo_debug_helper {posargs} [testenv:cover] basepython = python3 setenv = {[testenv]setenv} PYTHON=coverage run --source shade --parallel-mode commands = stestr run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml [testenv:ansible] basepython = python3 # Need to pass some env vars for the Ansible playbooks passenv = HOME USER deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt ansible<2.6 commands = {toxinidir}/extras/run-ansible-tests.sh -e {envdir} {posargs} [testenv:docs] basepython = python3 deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -r{toxinidir}/requirements.txt -r{toxinidir}/doc/requirements.txt commands = sphinx-build -W -d doc/build/doctrees -b html doc/source/ doc/build/html [testenv:releasenotes] basepython = python3 skip_install = True deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -r{toxinidir}/doc/requirements.txt commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [flake8] # The following are ignored on purpose - please do not submit patches to "fix" # without first verifying with a core that fixing them is non-disruptive. # H103 Is about the Apache license. It's strangely strict about the use of # single vs double quotes in the license text. Fixing is not worth it # H306 Is about alphabetical imports - there's a lot to fix # H4 Are about docstrings - and there's just too many of them to fix # W503 Is supposed to be off by default but a bug has it on by default. It's # also a categorially terrible idea and Donald Knuth agrees with me. ignore = H103,H306,H4,W503 show-source = True builtins = _ exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build [testenv:lower-constraints] basepython = python3 deps = -c{toxinidir}/lower-constraints.txt -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt shade-1.31.0/HACKING.rst0000666000175000017500000000262113440327640014506 0ustar zuulzuul00000000000000shade Style Commandments ======================== Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ Indentation ----------- PEP-8 allows for 'visual' indentation. Do not use it. Visual indentation looks like this: .. code-block:: python return_value = self.some_method(arg1, arg1, arg3, arg4) Visual indentation makes refactoring the code base unneccesarily hard. Instead of visual indentation, use this: .. code-block:: python return_value = self.some_method( arg1, arg1, arg3, arg4) That way, if some_method ever needs to be renamed, the only line that needs to be touched is the line with some_method. Additionaly, if you need to line break at the top of a block, please indent the continuation line an additional 4 spaces, like this: .. code-block:: python for val in self.some_method( arg1, arg1, arg3, arg4): self.do_something_awesome() Neither of these are 'mandated' by PEP-8. However, they are prevailing styles within this code base. Unit Tests ---------- Unit tests should be virtually instant. If a unit test takes more than 1 second to run, it is a bad unit test. Honestly, 1 second is too slow. All unit test classes should subclass `shade.tests.unit.base.BaseTestCase`. 
==> shade-1.31.0/PKG-INFO <==

Metadata-Version: 1.1
Name: shade
Version: 1.31.0
Summary: Simple client library for interacting with OpenStack clouds
Home-page: http://docs.openstack.org/shade/latest
Author: OpenStack
Author-email: openstack-discuss@lists.openstack.org
License: UNKNOWN
Description: Introduction
        ============

        .. warning::
          shade has been superseded by `openstacksdk`_ and no longer takes
          new features. The existing code will continue to be maintained
          indefinitely for bugfixes as necessary, but improvements will be
          deferred to `openstacksdk`_. Please update your applications to
          use `openstacksdk`_ directly.

        shade is a simple client library for interacting with OpenStack
        clouds. The key word here is *simple*. Clouds can do many many many
        things - but there are probably only about 10 of them that most
        people care about with any regularity. If you want to do complicated
        things, you should probably use the lower level client libraries -
        or even the REST API directly. However, if what you want is to be
        able to write an application that talks to clouds no matter what
        crazy choices the deployer has made in an attempt to be more hipster
        than their self-entitled narcissist peers, then shade is for you.

        shade started its life as some code inside of ansible. ansible has a
        bunch of different OpenStack related modules, and there was a ton of
        duplicated code. Eventually, between refactoring that duplication
        into an internal library, and adding logic and features that the
        OpenStack Infra team had developed to run client applications at
        scale, it turned out that we'd written nine-tenths of what we'd need
        to have a standalone library.

        .. _usage_example:

        Example
        =======

        Sometimes an example is nice.

        #. Create a ``clouds.yml`` file::

             clouds:
              mordred:
                region_name: RegionOne
                auth:
                  username: 'mordred'
                  password: XXXXXXX
                  project_name: 'shade'
                  auth_url: 'https://montytaylor-sjc.openstack.blueboxgrid.com:5001/v2.0'

           Please note: *os-client-config* will look for a file called
           ``clouds.yaml`` in the following locations:

           * Current Directory
           * ``~/.config/openstack``
           * ``/etc/openstack``

           More information at https://pypi.org/project/os-client-config

        #. Create a server with *shade*, configured with the ``clouds.yml``
           file::

             import shade

             # Initialize and turn on debug logging
             shade.simple_logging(debug=True)

             # Initialize cloud
             # Cloud configs are read with os-client-config
             cloud = shade.openstack_cloud(cloud='mordred')

             # Upload an image to the cloud
             image = cloud.create_image(
                 'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

             # Find a flavor with at least 512M of RAM
             flavor = cloud.get_flavor_by_ram(512)

             # Boot a server, wait for it to boot, and then do whatever is
             # needed to get a public ip for it.
             cloud.create_server(
                 'my-server', image=image, flavor=flavor, wait=True,
                 auto_ip=True)

        Links
        =====

        * `Issue Tracker `_
        * `Code Review `_
        * `Documentation `_
        * `PyPI `_
        * `Mailing list `_
        * `Release notes `_
        .. _openstacksdk: https://docs.openstack.org/openstacksdk/latest/user/

Platform: UNKNOWN
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5

==> shade-1.31.0/releasenotes/source/queens.rst <==

===================================
Queens Series Release Notes
===================================

.. release-notes::
   :branch: stable/queens

==> shade-1.31.0/releasenotes/source/conf.py <==

# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# oslo.config Release Notes documentation build configuration file, created
# by sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'openstackdocstheme',
    'reno.sphinxext',
]

# openstackdocstheme options
repository_name = 'openstack-infra/shade'
bug_project = '760'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Shade Release Notes'
copyright = u'2017, Shade Developers'

# Release notes are version independent
version_info = ''
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built
# documents.
# keep_warnings = False

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'

# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or
# 32x32 pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin static
# files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_domain_indices = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer.
# Default is True.
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer.
# Default is True.
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it. The value of this option must
# be the base URL from which the finished HTML is served.
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'shadeReleaseNotesdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'shadeReleaseNotes.tex',
     u'Shade Release Notes Documentation',
     u'Shade Developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the
# top of the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are
# parts, not chapters.
# latex_use_parts = False

# If true, show page references after internal links.
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'shadereleasenotes',
     u'shade Release Notes Documentation',
     [u'shade Developers'], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'shadeReleaseNotes',
     u'shade Release Notes Documentation',
     u'shade Developers', 'shadeReleaseNotes',
     u'A client library for interacting with OpenStack clouds',
     u'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False

# -- Options for Internationalization output ------------------------------

locale_dirs = ['locale/']

==> shade-1.31.0/releasenotes/source/unreleased.rst <==

=========================
Mainline Release Series
=========================

.. release-notes::
==> shade-1.31.0/releasenotes/source/_templates/.placeholder <==

(empty file)

==> shade-1.31.0/releasenotes/source/pike.rst <==

===================================
Pike Series Release Notes
===================================

.. release-notes::
   :branch: stable/pike

==> shade-1.31.0/releasenotes/source/index.rst <==

=====================
Shade Release Notes
=====================

.. toctree::
   :maxdepth: 1

   unreleased
   rocky
   queens
   pike

==> shade-1.31.0/releasenotes/source/rocky.rst <==

===================================
Rocky Series Release Notes
===================================

.. release-notes::
   :branch: stable/rocky

==> shade-1.31.0/releasenotes/source/_static/.placeholder <==

(empty file)

==> shade-1.31.0/releasenotes/notes/add-jmespath-support-f47b7a503dbbfda1.yaml <==

---
features:
  - All get and search functions can now take a jmespath expression in
    their filters parameter.

==> shade-1.31.0/releasenotes/notes/add_host_aggregate_support-471623faf45ec3c3.yaml <==

---
features:
  - Add support for host aggregates and host aggregate membership.

==> shade-1.31.0/releasenotes/notes/router_ext_gw-b86582317bca8b39.yaml <==

---
fixes:
  - No longer fail in list_router_interfaces() if a router does not have
    the external_gateway_info key.

==> shade-1.31.0/releasenotes/notes/net_provider-dd64b697476b7094.yaml <==

---
features:
  - Network provider options are now accepted in create_network().

==> shade-1.31.0/releasenotes/notes/stack-update-5886e91fd6e423bf.yaml <==

---
features:
  - Implement update_stack to perform the update action on existing
    orchestration stacks.

==> shade-1.31.0/releasenotes/notes/compute-usage-defaults-5f5b2936f17ff400.yaml <==

---
features:
  - get_compute_usage now has a default value for the start parameter of
    2010-07-06. That was the date the OpenStack project started. It's
    completely impossible for someone to have Nova usage data that goes
    back further in time. Also, both the start and end date parameters
    now also accept strings which will be parsed, and timezones will be
    properly converted to UTC, which is what Nova expects.

==> shade-1.31.0/releasenotes/notes/fix-missing-futures-a0617a1c1ce6e659.yaml <==

---
fixes:
  - Added missing dependency on futures library for python 2. The depend
    was missed in testing due to it having been listed in
    test-requirements already.
==> shade-1.31.0/releasenotes/notes/fix-update-domain-af47b066ac52eb7f.yaml <==

---
fixes:
  - Fix for update_domain() where 'name' was not updatable.

==> shade-1.31.0/releasenotes/notes/service_enabled_flag-c917b305d3f2e8fd.yaml <==

---
fixes:
  - Keystone service descriptions were missing an attribute describing
    whether or not the service was enabled. A new 'enabled' boolean
    attribute has been added to the service data.

==> shade-1.31.0/releasenotes/notes/v4-fixed-ip-325740fdae85ffa9.yaml <==

---
fixes:
  - |
    Re-added support for `v4-fixed-ip` and `v6-fixed-ip` in the `nics`
    parameter to `create_server`. These are aliases for `fixed_ip`
    provided by novaclient which shade used to use. The switch to REST
    didn't include support for these aliases, resulting in a behavior
    regression.

==> shade-1.31.0/releasenotes/notes/fix-supplemental-fips-c9cd58aac12eb30e.yaml <==

---
fixes:
  - Fixed an issue where shade could report a floating IP being attached
    to a server erroneously due to only matching on fixed ip. Changed the
    lookup to match on port ids. This adds an API call in the case where
    the workaround is needed because of a bug in the cloud, but in most
    cases it should have no difference.

==> shade-1.31.0/releasenotes/notes/add_server_group_support-dfa472e3dae7d34d.yaml <==

---
features:
  - Adds support to create and delete server groups.

==> shade-1.31.0/releasenotes/notes/log-request-ids-37507cb6eed9a7da.yaml <==

---
other:
  - The contents of x-openstack-request-id are no longer added to the
    object returned. Instead, they are logged to a logger named
    'shade.request_ids'.

==> shade-1.31.0/releasenotes/notes/add_magnum_services_support-3d95f9dcc60b5573.yaml <==

---
features:
  - Add support for listing Magnum services.

==> shade-1.31.0/releasenotes/notes/list-az-names-a38c277d1192471b.yaml <==

---
features:
  - Added list_availability_zone_names API call.

==> shade-1.31.0/releasenotes/notes/wait_for_server-8dc8446b7c673d36.yaml <==

---
features:
  - New wait_for_server() API call to wait for a server to reach ACTIVE
    status.

==> shade-1.31.0/releasenotes/notes/meta-passthrough-d695bff4f9366b65.yaml <==

---
features:
  - Added a parameter to create_image, 'meta', which allows for providing
    parameters to the API that will not have any type conversions
    performed. For the simple case, the existing kwargs approach to image
    metadata is still the best bet.

==> shade-1.31.0/releasenotes/notes/false-not-attribute-error-49484d0fdc61f75d.yaml <==

---
fixes:
  - delete_image used to fail with an AttributeError if an invalid image
    name or id was passed, rather than returning False which was the
    intent. This is worthy of note because it's a behavior change, but
    the previous behavior was a bug.
==> shade-1.31.0/releasenotes/notes/workaround-transitive-deps-1e7a214f3256b77e.yaml <==

---
fixes:
  - Added requests and Babel to the direct dependencies list to work
    around issues with pip installation, entrypoints and transitive
    dependencies with conflicting exclusion ranges. Packagers of shade do
    not need to add these two new requirements to shade's dependency
    list - they are transitive depends and should be satisfied by the
    other things in the requirements list. Both will be removed from the
    list again once the python client libraries that pull them in have
    been removed.

==> shade-1.31.0/releasenotes/notes/delete_project-399f9b3107014dde.yaml <==

---
fixes:
  - The delete_project() API now conforms to our standard of returning
    True when the delete succeeds, or False when the project was not
    found. It would previously raise an exception if the project was not
    found.

==> shade-1.31.0/releasenotes/notes/delete-image-objects-9d4b4e0fff36a23f.yaml <==

---
fixes:
  - Delete swift objects uploaded in service of uploading images at the
    time that the corresponding image is deleted. On some clouds, image
    uploads are accomplished by uploading the image to swift and then
    running a task-import. As shade does this action on behalf of the
    user, it is not reasonable to assume that the user would then be
    aware of or manage the swift objects shade created, which led to an
    ongoing leak of swift objects.
  - Upload swift Large Objects as Static Large Objects by default. Shade
    automatically uploads objects as Large Objects when they are over a
    segment_size threshold. It had been doing this as Dynamic Large
    Objects, which sound great, but which have the downside of not
    deleting their sub-segments when the primary object is deleted.
    Since nothing in the shade interface exposes that the object was
    segmented, the user would not know they would also need to find and
    delete the segments. Instead, we now upload as Static Large Objects
    which behave as expected and delete segments when the object is
    deleted.

==> shade-1.31.0/releasenotes/notes/create_service_norm-319a97433d68fa6a.yaml <==

---
fixes:
  - The returned data from a create_service() call was not being
    normalized.

==> shade-1.31.0/releasenotes/notes/boot-on-server-group-a80e51850db24b3d.yaml <==

---
features:
  - Added ``group`` parameter to create_server to allow booting a server
    into a specific server group.

==> shade-1.31.0/releasenotes/notes/get_object_api-968483adb016bce1.yaml <==

---
features:
  - Added a new API call, OpenStackCloud.get_object(), to download
    objects from swift.

==> shade-1.31.0/releasenotes/notes/list-servers-all-projects-349e6dc665ba2e8d.yaml <==

---
features:
  - Add 'all_projects' parameter to list_servers and search_servers which
    will tell Nova to return servers for all projects rather than just
    for the current project. This is only available to cloud admins.
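A sketch of the new parameter in use; the cloud name is illustrative, and
the call only succeeds with admin credentials::

    import shade

    cloud = shade.openstack_cloud(cloud='admin-account')
    # Ask Nova for every project's servers, not just our own.
    servers = cloud.list_servers(all_projects=True)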
==> shade-1.31.0/releasenotes/notes/add-server-console-078ed2696e5b04d9.yaml <==

---
features:
  - Added get_server_console method to fetch the console log from a
    Server. On clouds that do not expose this feature, a debug line will
    be logged and an empty string will be returned.

==> shade-1.31.0/releasenotes/notes/neutron_availability_zone_extension-675c2460ebb50a09.yaml <==

---
features:
  - |
    availability_zone_hints now accepted for create_network() when
    network_availability_zone extension is enabled on the target cloud.
  - |
    availability_zone_hints now accepted for create_router() when
    router_availability_zone extension is enabled on the target cloud.

==> shade-1.31.0/releasenotes/notes/new-floating-attributes-213cdf5681d337e1.yaml <==

---
features:
  - Added support for created_at, updated_at, description and
    revision_number attributes for floating ips.

==> shade-1.31.0/releasenotes/notes/nova-flavor-to-rest-0a5757e35714a690.yaml <==

---
upgrade:
  - Nova flavor operations are now handled via REST calls instead of via
    novaclient. There should be no noticeable difference.

==> shade-1.31.0/releasenotes/notes/switch-nova-to-created_at-45b7b50af6a2d59e.yaml <==

---
features:
  - The `created` field which was returned by the Nova API is now
    returned as `created_at` as well when not using strict mode, for
    consistency with other models.

==> shade-1.31.0/releasenotes/notes/nova-old-microversion-5e4b8e239ba44096.yaml <==

---
upgrade:
  - Nova microversion is being requested. Since shade is not yet actively
    microversion aware, but has been dealing with the 2.0 structures
    anyway, this should not affect anyone.

==> shade-1.31.0/releasenotes/notes/multiple-updates-b48cc2f6db2e526d.yaml <==

---
features:
  - Removed unneeded calls that were made when deleting servers with
    floating ips.
  - Added pagination support for volume listing.
upgrade:
  - Removed designateclient as a dependency. All designate operations are
    now performed with direct REST calls using keystoneauth Adapter.
  - Server creation calls are now done with direct REST calls.
fixes:
  - Fixed a bug related to neutron endpoints that did not have trailing
    slashes.
  - Fixed issue with ports not having a created_at attribute.

==> shade-1.31.0/releasenotes/notes/grant-revoke-assignments-231d3f9596a1ae75.yaml <==

---
features:
  - add granting and revoking of roles from groups and users

==> shade-1.31.0/releasenotes/notes/started-using-reno-242e2b0cd27f9480.yaml <==

---
other:
  - Started using reno for release notes.

==> shade-1.31.0/releasenotes/notes/fix-config-drive-a148b7589f7e1022.yaml <==

---
issues:
  - Fixed an issue where nodepool could cause config_drive to be passed
    explicitly as None, which was getting directly passed through to the
    JSON. Also fix the same logic for key_name and scheduler_hints while
    we're in there.
==> shade-1.31.0/releasenotes/notes/fixed-magnum-type-7406f0a60525f858.yaml <==

---
fixes:
  - Fixed magnum service_type. shade was using it as 'container' but the
    correct type is 'container-infra'. It's possible that on old clouds
    with magnum shade may now do the wrong thing. If that occurs, please
    file a bug.

==> shade-1.31.0/releasenotes/notes/add_update_server-8761059d6de7e68b.yaml <==

---
features:
  - Add update_server method to update name or description of a server.

==> shade-1.31.0/releasenotes/notes/use-interface-ip-c5cb3e7c91150096.yaml <==

---
fixes:
  - shade now correctly does not try to attach a floating ip with auto_ip
    if the cloud has given a public IPv6 address and the calling context
    supports IPv6 routing. shade has always used this logic to determine
    the server 'interface_ip', but the auto floating ip was incorrectly
    only looking at the 'public_v4' value to determine whether the server
    needed additional networking.
upgrade:
  - If your cloud presents a default split IPv4/IPv6 stack with a public
    v6 and a private v4 address, and you have the expectation that
    auto_ip should procure a v4 floating ip, you need to set 'force_ipv4'
    to True in your clouds.yaml entry for the cloud.

==> shade-1.31.0/releasenotes/notes/data-model-cf50d86982646370.yaml <==

---
features:
  - Explicit data model contracts are now defined for Flavors, Images,
    Security Groups, Security Group Rules, and Servers.
  - Resources with data model contracts are now being returned with a
    'location' attribute. The location carries cloud name, region name
    and information about the project that owns the resource.

==> shade-1.31.0/releasenotes/notes/add_update_service-28e590a7a7524053.yaml <==

---
features:
  - Add the ability to update a keystone service information. This
    feature is not available on keystone v2.0. The new function,
    update_service(), allows the user to update description, name of
    service, service type, and enabled status.

==> shade-1.31.0/releasenotes/notes/fix-properties-key-conflict-2161ca1faaad6731.yaml <==

---
issues:
  - Images in the cloud with a string property named "properties" caused
    image normalization to bomb.

==> shade-1.31.0/releasenotes/notes/add-list_flavor_access-e038253e953e6586.yaml <==

---
features:
  - Add a list_flavor_access method to list all the projects/tenants
    allowed to access a given flavor.

==> shade-1.31.0/releasenotes/notes/fip_timeout-035c4bb3ff92fa1f.yaml <==

---
fixes:
  - When creating a new server, the timeout was not being passed through
    to floating IP creation, which could also timeout.

==> shade-1.31.0/releasenotes/notes/normalize-images-1331bea7bfffa36a.yaml <==

---
features:
  - Image dicts that are returned are now normalized across glance v1 and
    glance v2. Extra key/value properties are now both in the root dict
    and in a properties dict. Additionally, cloud and region have been
    added like they are for server.
==> shade-1.31.0/releasenotes/notes/volume-types-a07a14ae668e7dd2.yaml <==

---
features:
  - Add support for listing volume types.
  - Add support for managing volume type access.

==> shade-1.31.0/releasenotes/notes/version-discovery-a501c4e9e9869f77.yaml <==

---
features:
  - Version discovery is now done via the keystoneauth library. shade
    still has one behavioral difference from default keystoneauth
    behavior, which is that shade will use a version it understands if it
    can find one, even if the user has requested a different version.
    This change opens the door for shade to start being able to consume
    API microversions as needed.
upgrade:
  - keystoneauth version 3.2.0 or higher is required because of version
    discovery.

==> shade-1.31.0/releasenotes/notes/less-file-hashing-d2497337da5acbef.yaml <==

---
upgrade:
  - shade will now only generate file hashes for glance images if both
    hashes are empty. If only one is given, the other will be treated as
    an empty string.

==> shade-1.31.0/releasenotes/notes/glance-image-pagination-0b4dfef22b25852b.yaml <==

---
issues:
  - Fixed an issue where glance image list pagination was being ignored,
    leading to truncated image lists.

==> shade-1.31.0/releasenotes/notes/get-limits-c383c512f8e01873.yaml <==

---
features:
  - Allow to retrieve the limits of a specific project

==> shade-1.31.0/releasenotes/notes/always-detail-cluster-templates-3eb4b5744ba327ac.yaml <==

---
upgrade:
  - Cluster Templates have data model and normalization now. As a result,
    the detail parameter is now ignored and detailed records are always
    returned.

==> shade-1.31.0/releasenotes/notes/volume-quotas-5b674ee8c1f71eb6.yaml <==

---
features:
  - Add new APIs, OperatorCloud.get_volume_quotas(),
    OperatorCloud.set_volume_quotas() and
    OperatorCloud.delete_volume_quotas(), to manage cinder quotas for
    projects and users.

==> shade-1.31.0/releasenotes/notes/fix-list-networks-a592725df64c306e.yaml <==

---
fixes:
  - Fix for list_networks() ignoring any filters.

==> shade-1.31.0/releasenotes/notes/delete-obj-return-a3ecf0415b7a2989.yaml <==

---
fixes:
  - The delete_object() method was not returning True/False, similar to
    other delete methods. It is now consistent with the other delete
    APIs.

==> shade-1.31.0/releasenotes/notes/make_object_metadata_easier.yaml-e9751723e002e06f.yaml <==

---
features:
  - create_object() now has a "metadata" parameter that can be used to
    create an object with metadata of each key and value pair in that
    dictionary
  - Add an update_object() function that updates the metadata of a swift
    object

==> shade-1.31.0/releasenotes/notes/fnmatch-name-or-id-f658fe26f84086c8.yaml <==

---
features:
  - name_or_id parameters to search/get methods now support filename-like
    globbing. This means search_servers('nb0*') will return all servers
    whose names start with 'nb0'.
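A sketch of the globbing behavior just described; the cloud name is
illustrative::

    import shade

    cloud = shade.openstack_cloud(cloud='mordred')
    # Matches every server whose name starts with 'nb0'.
    servers = cloud.search_servers('nb0*')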
This means search_servers('nb0*') will return all servers whose names start with 'nb0'. shade-1.31.0/releasenotes/notes/get-usage-72d249ff790d1b8f.yaml0000666000175000017500000000010413440327640023522 0ustar zuulzuul00000000000000--- features: - Allow to retrieve the usage of a specific project shade-1.31.0/releasenotes/notes/wait-on-image-snapshot-27cd2eacab2fabd8.yaml0000666000175000017500000000044513440327640026474 0ustar zuulzuul00000000000000--- features: - Adds a new pair of options to create_image_snapshot(), wait and timeout, to have the function wait until the image snapshot being created goes into an active state. - Adds a new function wait_for_image() which will wait for an image to go into an active state. shade-1.31.0/releasenotes/notes/add-support-for-setting-static-routes-b3ce6cac2c5e9e51.yaml0000666000175000017500000000071313440327640031354 0ustar zuulzuul00000000000000--- features: - | The networking API v2 specification, which is implemented by OpenStack Neutron, features an optional routes parameter - when updating a router (PUT requests). Static routes are crucial for routers to handle traffic from subnets not directly connected to a router. The routes parameter has now been added to the OpenStackCloud.update_router method as a list of dictionaries with destination and nexthop parameters. shade-1.31.0/releasenotes/notes/fix-delete-ips-1d4eebf7bc4d4733.yaml0000666000175000017500000000033613440327640024612 0ustar zuulzuul00000000000000--- issues: - Fixed the logic in delete_ips and added regression tests to cover it. The old logic was incorrectly looking for floating ips using port syntax. It was also not swallowing errors when it should. shade-1.31.0/releasenotes/notes/domain_operations_name_or_id-baba4cac5b67234d.yaml0000666000175000017500000000017613440327640027725 0ustar zuulzuul00000000000000--- features: - Added name_or_id parameter to domain operations, allowing an admin to update/delete/get by domain name. shade-1.31.0/releasenotes/notes/create-stack-fix-12dbb59a48ac7442.yaml0000666000175000017500000000021613440327640024755 0ustar zuulzuul00000000000000--- fixes: - The create_stack() call was fixed to call the correct iterator method and to return the updated stack object when waiting. shade-1.31.0/releasenotes/notes/update_endpoint-f87c1f42d0c0d1ef.yaml0000666000175000017500000000046613440327640025155 0ustar zuulzuul00000000000000--- features: - Added update_endpoint as a new function that allows the user to update a created endpoint with new values rather than deleting and recreating that endpoint. This feature only works with keystone v3, with v2 it will raise an exception stating the feature is not available. shade-1.31.0/releasenotes/notes/dual-stack-networks-8a81941c97d28deb.yaml0000666000175000017500000000063313440327640025551 0ustar zuulzuul00000000000000--- features: - Added support for dual stack networks where the IPv4 subnet and the IPv6 subnet have opposite public/private qualities. It is now possible to add configuration to clouds.yaml that will indicate that a network is public for v6 and private for v4, which is otherwise very difficult to correctly infer while setting server attributes like private_v4, public_v4 and public_v6. shade-1.31.0/releasenotes/notes/server-create-error-id-66c698c7e633fb8b.yaml0000666000175000017500000000016513440327640026146 0ustar zuulzuul00000000000000--- features: - server creation errors now include the server id in the Exception to allow people to clean up. 
==> shade-1.31.0/releasenotes/notes/set-bootable-volume-454a7a41e7e77d08.yaml <==

---
features:
  - Added a ``set_volume_bootable`` call to allow toggling the bootable
    state of a volume.

==> shade-1.31.0/releasenotes/notes/delete-autocreated-1839187b0aa35022.yaml <==

---
features:
  - Added new method, delete_autocreated_image_objects, that can be used
    to delete any leaked objects shade may have created on behalf of the
    user.

==> shade-1.31.0/releasenotes/notes/no-more-troveclient-0a4739c21432ac63.yaml <==

---
upgrade:
  - troveclient is no longer a hard dependency. Users who were using
    shade to construct a troveclient Client object should use
    os_client_config.make_legacy_client instead.

==> shade-1.31.0/releasenotes/notes/remove-novaclient-3f8d4db20d5f9582.yaml <==

---
upgrade:
  - All Nova interactions are done via direct REST calls.
    python-novaclient is no longer a direct dependency of shade.

==> shade-1.31.0/releasenotes/notes/list-role-assignments-keystone-v2-b127b12b4860f50c.yaml <==

---
features:
  - Implement list_role_assignments for keystone v2, using
    roles_for_user.

==> shade-1.31.0/releasenotes/notes/feature-server-metadata-50caf18cec532160.yaml <==

---
features:
  - Add new APIs, OpenStackCloud.set_server_metadata() and
    OpenStackCloud.delete_server_metadata(), to manage metadata of
    existing nova compute instances

==> shade-1.31.0/releasenotes/notes/removed-glanceclient-105c7fba9481b9be.yaml <==

---
prelude: >
    This release marks the beginning of the path towards removing all of
    the 'python-*client' libraries as dependencies. Subsequent releases
    should expect to have fewer and fewer library dependencies.
upgrade:
  - Removed glanceclient as a dependency. All glance operations are now
    performed with direct REST calls using keystoneauth Adapter.

==> shade-1.31.0/releasenotes/notes/bug-2001080-de52ead3c5466792.yaml <==

---
prelude: >
    Fixed a bug where a project was always enabled upon update, unless
    ``enabled=False`` is passed explicitly.
fixes:
  - |
    [`bug 2001080 `_]
    Project update will only update the enabled field of projects when
    ``enabled=True`` or ``enabled=False`` is passed explicitly. The
    previous behavior had ``enabled=True`` as the default.

==> shade-1.31.0/releasenotes/notes/config-flavor-specs-ca712e17971482b6.yaml <==

---
features:
  - Adds ability to add a config setting to clouds.yaml to disable
    fetching extra_specs from flavors.
==> shade-1.31.0/releasenotes/notes/add_heat_tag_support-0668933506135082.yaml <==

---
features:
  - |
    Add tags support when creating a stack, as specified by the OpenStack
    orchestration API:
    https://developer.openstack.org/api-ref/orchestration/v1/#create-stack

==> shade-1.31.0/releasenotes/notes/add_designate_zones_support-35fa9b8b09995b43.yaml <==

---
features:
  - Add support for Designate zones resources, with the usual methods
    (search/list/get/create/update/delete).

==> shade-1.31.0/releasenotes/notes/network-quotas-b98cce9ffeffdbf4.yaml <==

---
features:
  - Add new APIs, OperatorCloud.get_network_quotas(),
    OperatorCloud.set_network_quotas() and
    OperatorCloud.delete_network_quotas(), to manage neutron quotas for
    projects and users.

==> shade-1.31.0/releasenotes/notes/cache-in-use-volumes-c7fa8bb378106fe3.yaml <==

---
fixes:
  - Fixed caching the volume list when volumes are in use.

==> shade-1.31.0/releasenotes/notes/lifesupport-d6e700c3226e35d6.yaml <==

---
prelude: >
    The relationship between shade and openstacksdk has been decoupled
    and the shade library is now officially in maintenance mode. Bugfixes
    are welcome, and the library remains supported, but no new features
    will be accepted. Developers are strongly encouraged to migrate to
    openstacksdk, which has the same code but is being actively driven
    forward.

==> shade-1.31.0/releasenotes/notes/toggle-port-security-f5bc606e82141feb.yaml <==

---
features:
  - |
    Added a new property, 'port_security_enabled', which is a boolean to
    enable or disable port_security during network creation. The default
    behavior will enable port security; security groups and anti-spoofing
    will act as before. When the attribute is set to False, security
    groups and anti-spoofing are disabled on the ports created on this
    network.

==> shade-1.31.0/releasenotes/notes/stream-to-file-91f48d6dcea399c6.yaml <==

---
features:
  - get_object now supports streaming output directly to a file.

==> shade-1.31.0/releasenotes/notes/image-flavor-by-name-54865b00ebbf1004.yaml <==

---
features:
  - The image and flavor parameters for create_server now accept name in
    addition to id and dict. If given as a name or id, shade will do a
    get_image or a get_flavor to find the matching image or flavor. If
    you have an id already and are not using any caching and the extra
    lookup is annoying, passing the id in as "dict(id='my-id')" will
    avoid the lookup.

==> shade-1.31.0/releasenotes/notes/add_description_create_user-0ddc9a0ef4da840d.yaml <==

---
features:
  - Add description parameter to create_user, available on Keystone v3

==> shade-1.31.0/releasenotes/notes/remove-magnumclient-875b3e513f98f57c.yaml <==

---
upgrade:
  - magnumclient is no longer a direct dependency as magnum API calls are
    now made directly via REST.
==> shade-1.31.0/releasenotes/notes/swift-upload-lock-d18f3d42b3a0719a.yaml <==

---
fixes:
  - Fixed an issue where a section of code that was supposed to be
    resetting the SwiftService object was instead resetting the
    protective mutex around the SwiftService object, leading to an
    exception of "__exit__"

==> shade-1.31.0/releasenotes/notes/norm_role_assignments-a13f41768e62d40c.yaml <==

---
fixes:
  - Role assignments were being returned as plain dicts instead of Munch
    objects. This has been corrected.

==> shade-1.31.0/releasenotes/notes/compute-quotas-b07a0f24dfac8444.yaml <==

---
features:
  - Add new APIs, OperatorCloud.get_compute_quotas(),
    OperatorCloud.set_compute_quotas() and
    OperatorCloud.delete_compute_quotas(), to manage nova quotas for
    projects and users.

==> shade-1.31.0/releasenotes/notes/mtu-settings-8ce8b54d096580a2.yaml <==

---
features:
  - |
    create_network now exposes the mtu api option in accordance to
    network v2 api. This allows the operator to adjust the given MTU
    value, which is needed in various complex network deployments.

==> shade-1.31.0/releasenotes/notes/strict-mode-d493abc0c3e87945.yaml <==

---
features:
  - Added 'strict' mode, which is set by passing strict=True to the
    OpenStackCloud constructor. strict mode tells shade to only return
    values in resources that are part of shade's declared data model
    contract.

==> shade-1.31.0/releasenotes/notes/server-security-groups-840ab28c04f359de.yaml <==

---
features:
  - Add the `add_server_security_groups` and
    `remove_server_security_groups` functions to add and remove security
    groups from a specific server.

==> shade-1.31.0/releasenotes/notes/cleanup-objects-f99aeecf22ac13dd.yaml <==

---
features:
  - If shade has to create objects in swift to upload an image, it will
    now delete those objects upon successful image creation as they are
    no longer needed. They will also be deleted on fatal import errors.

==> shade-1.31.0/releasenotes/notes/infer-secgroup-source-58d840aaf1a1f485.yaml <==

---
features:
  - If a cloud does not have a neutron service, it is now assumed that
    Nova will be the source of security groups. To handle clouds that
    have nova-network and do not have the security group extension,
    setting secgroup_source to None will prevent attempting to use them
    at all. If the cloud has neutron but it is not a functional source of
    security groups, set secgroup_source to nova.

==> shade-1.31.0/releasenotes/notes/change-attach-vol-return-value-4834a1f78392abb1.yaml <==

---
upgrade:
  - |
    The ``attach_volume`` method now always returns a
    ``volume_attachment`` object. Previously, ``attach_volume`` would
    return a ``volume`` object if it was called with ``wait=True`` and a
    ``volume_attachment`` object otherwise.
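A sketch of the call after this change; the cloud, server and volume names
are illustrative::

    import shade

    cloud = shade.openstack_cloud(cloud='mordred')
    server = cloud.get_server('my-server')
    volume = cloud.get_volume('my-volume')
    # Now a volume_attachment object even with wait=True; it used to be
    # a volume object in that case.
    attachment = cloud.attach_volume(server, volume, wait=True)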
==> shade-1.31.0/releasenotes/notes/orch_timeout-a3953376a9a96343.yaml <==

---
fixes:
  - |
    Allow None for timeout in create_stack() and update_stack() calls.

==> shade-1.31.0/releasenotes/notes/removed-swiftclient-aff22bfaeee5f59f.yaml <==

---
upgrade:
  - Removed swiftclient as a dependency. All swift operations are now
    performed with direct REST calls using keystoneauth Adapter.

==> shade-1.31.0/releasenotes/notes/cinder_volume_backups_support-6f7ceab440853833.yaml <==

---
features:
  - Add support for Cinder volume backup resources, with the usual
    methods (search/list/get/create/delete).

==> shade-1.31.0/releasenotes/notes/endpoint-from-catalog-bad36cb0409a4e6a.yaml <==

---
features:
  - Add new method, 'endpoint_for', which will return the raw endpoint
    for a given service from the current catalog.

==> shade-1.31.0/releasenotes/notes/flavor_fix-a53c6b326dc34a2c.yaml <==

---
features:
  - Flavors will always contain an 'extra_specs' attribute. Client cruft,
    such as 'links', 'HUMAN_ID', etc. has been removed.
fixes:
  - Setting and unsetting flavor extra specs now works. This had been
    broken since the 1.2.0 release.

==> shade-1.31.0/releasenotes/notes/add_magnum_baymodel_support-e35e5aab0b14ff75.yaml <==

---
features:
  - Add support for Magnum baymodels, with the usual methods
    (search/list/get/create/update/delete). Due to an upcoming rename in
    Magnum from baymodel to cluster_template, the shade functionality
    uses the term cluster_template. However, baymodel aliases are
    provided for each api call.

==> shade-1.31.0/releasenotes/notes/create_server_network_fix-c4a56b31d2850a4b.yaml <==

---
fixes:
  - The create_server() API call would not use the supplied 'network'
    parameter if the 'nics' parameter was also supplied, even though it
    would be an empty list. It now uses 'network' if 'nics' is not
    supplied or if it is an empty list.

==> shade-1.31.0/releasenotes/notes/add_designate_recordsets_support-69af0a6b317073e7.yaml <==

---
features:
  - Add support for Designate recordsets resources, with the usual
    methods (search/list/get/create/update/delete).

==> shade-1.31.0/releasenotes/notes/add-show-all-images-flag-352748b6c3d99f3f.yaml <==

---
features:
  - Added flag "show_all" to list_images. The behavior of Glance v2 to
    only show shared images if they have been accepted by the user can be
    confusing, and the only way to change it is to use
    search_images(filters=dict(member_status='all')) which isn't terribly
    obvious. "show_all=True" will set that flag, as well as disabling the
    filtering of images in "deleted" state.

==> shade-1.31.0/releasenotes/notes/add-current-user-id-49b6463e6bcc3b31.yaml <==

---
features:
  - Added a new property, 'current_user_id', which contains the id of the
    currently authenticated user from the token.
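A short sketch of reading the new property; the cloud name is
illustrative::

    import shade

    cloud = shade.openstack_cloud(cloud='mordred')
    # The id of the user the current token was issued to.
    print(cloud.current_user_id)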
shade-1.31.0/releasenotes/notes/cinderv2-norm-fix-037189c60b43089f.yaml0000666000175000017500000000012113440327640024661 0ustar zuulzuul00000000000000--- fixes: - Fixed the volume normalization function when used with cinder v2. shade-1.31.0/releasenotes/notes/alternate-auth-context-3939f1492a0e1355.yaml0000666000175000017500000000026713440327640026020 0ustar zuulzuul00000000000000--- features: - Added methods for making new cloud connections based on the current OpenStackCloud. This should enable working more easily across projects or user accounts. shade-1.31.0/releasenotes/notes/image-from-volume-9acf7379f5995b5b.yaml0000666000175000017500000000010213440327640025202 0ustar zuulzuul00000000000000--- features: - Added ability to create an image from a volume. shade-1.31.0/.mailmap0000666000175000017500000000020713440327640014327 0ustar zuulzuul00000000000000# Format is: # # shade-1.31.0/playbooks/0000775000175000017500000000000013440330010014671 5ustar zuulzuul00000000000000shade-1.31.0/playbooks/devstack/0000775000175000017500000000000013440330010016475 5ustar zuulzuul00000000000000shade-1.31.0/playbooks/devstack/legacy-git.yaml0000666000175000017500000000040113440327640021422 0ustar zuulzuul00000000000000- hosts: all tasks: - name: Set shade libraries to master branch before functional tests command: git checkout master args: chdir: "src/git.openstack.org/openstack/{{ item }}" with_items: - keystoneauth - os-client-config shade-1.31.0/playbooks/devstack/post.yaml0000666000175000017500000000011013440327640020357 0ustar zuulzuul00000000000000- hosts: all roles: - fetch-tox-output - fetch-subunit-output shade-1.31.0/ChangeLog0000664000175000017500000017337213440330007014463 0ustar zuulzuul00000000000000CHANGES ======= 1.31.0 ------ * Fix dogpile.cache 0.7.0 interaction * Restrict inventory test to devstack-admin * Change openstack-dev to openstack-discuss * Split parser creation and parser for inventory * Fix ansible stable pin in tox test * Fix grant\_role() when user not in default domain * Use openSUSE 15.0 for testing * change default python 3 env in tox to 3.5 * Add python 3.6 unit test job * Update the min version of tox to 2.0 1.30.0 ------ * Support v4-fixed-ip and v6-fixed-ip in create\_server * Add release note about decoupling * Decouple OpenStackCloud from Connection * Trim away the cover and py35 jobs * Remove the task manager * Update the url in doc * Cleanup .zuul.yaml * Replace assertRaisesRegexp with assertRaisesRegex * add python 3.6 unit test job * switch documentation job to new PTI * import zuul job settings from project-config * Fix format in release notes * Update reno for stable/rocky * Add support for static routes * python-shade expose MTU setting 1.29.0 ------ * Remove redundant target in README * Use valid filters to list floating IPs in neutron * Fix doc mistake * Make OpenStackCloud a subclass of Connection * Fix for passing dict for get\_\* methods * Switch bifrost jobs to nonvoting * Finish migrating image tests to requests-mock * Convert image\_client mocks in test\_shade\_operator * Convert test\_caching to requests-mock * Convert domain params tests to requests\_mock * Use RequestsMockTestCase everywhere * Improve Magnum cluster templates functions * add release notes to README.rst * Change 'Member' role reference to 'member' * fix tox python3 overrides * Remove shade-ansible-devel job * Update ansible test job to run against stable-2.5 * Allow explicitly setting enable\_snat to either value * Switch to iterable version of server listing (and expose 
iterable method) * Fix recent pep8 issues * Use openstack.config directly for config * remove redundant information from release notes build * Make name setting in connect\_as more resilient 1.28.0 ------ * Backport connect\_as fix from openstacksdk * Add get\_volume\_limits() support * Trivial: Update pypi url to new url * Build universal wheels * add lower-constraints job * list\_servers pagination support * Rename python-openstacksdk to openstacksdk * Updated from global requirements * Updated from global requirements * create\_subnet: Add filter on tenant\_id if specified * Updated from global requirements * Adds toggle port security on network create * Add proper return value for validate\_node * Add extra failure codes to bad request exception * Make shade-tox-tips actually run shade tests * Fix private\_v4 selection related to floating ip matching * Functional test robustness and performance fix * Update the invalid url in pages * Fix for timeout=None in orchestration API calls * Updated from global requirements * Fetch tox dir and html reports * Fix functional test about port * Use openstacksdk for most transitive depends * Allow not resolving outputs on get stacks * Zuul: Remove project name * Fix get\_server to work with use\_direct\_get * Plumb use-direct-get through factory functions * Switch to providing created\_at field for servers * Shift voting flag and test\_matrix\_branch for ansible-devel job * Updated from global requirements * Update reno for stable/queens * Add devel branches and override-checkout for ansible-devel job * Shift doc requirements to doc/requirements.txt * Use devstack functional test base job * Updated from global requirements * Remove inner\_exceptions plumbing * Make meta.find\_best\_address() more generic * Make floating IP to be prefered over fixed when looking for IP * Updated from global requirements * Fix address format used in find\_best\_address() * Use Zuul v3 fetch-subunit-output * List ansible/ansible in required-projects * Add supported method for checking the network exts 1.26.0 ------ * Throw OpenStackCloudCreateException on create errors * Revert "Allow grant\_role to select users outside default domain" * Add tag support to create\_stack * Pass through all\_projects for get\_server * Baremetal NIC list should return a list * Fix batching for floating ips and ports * Add retry logic mechanism * Remove shade-functional-devstack-legacy * Remove python-ironicclient * De-client-ify fixed method get\_nic\_by\_mac * Fix operator cloud get\_nic\_by\_mac * De-client-ify many baremetal calls * Add helper to wait until baremetal locks clear * Add bifrost jobs * Remove version arg from updated ironic calls * De-client-ify machine patch operations * De-client-ify baremetal machine port list * De-clientify baremetal create/delete * Avoid tox\_install.sh for constraints support * Fix basepython setting in tox.ini * Catch attrbute error for other APIs * de-client-ify baremetal get\_machine * De-client-ify baremetal node\_set\_provision\_state 1.25.0 ------ * Implement availability\_zone\_hints for networks and routers * Allow grant\_role to select users outside default domain * Switch baremetal nics/ports tests over * Complete move of baremetal machine tests * Add method to cleanup autocreated image objects * Remove setting of version/release from releasenotes * Cleanup objects that we create on behalf of images * Updated from global requirements * Remove unnecessary roles reference * Fix the devstack role for base functional job * Document 
current\_user\_id in a release note * Updated from global requirements * Remove reference to context-managers from release note * Fix creating a server with specifying scheduler\_hints * Add helper property to get the current user id * Add ability to work in other auth contexts * Fix regression for list\_router\_interfaces * Zuul: add file extension to playbook path * Add project-template for functional tips jobs * Turn on voting for functional tips jobs * Add devstack jobs for zuul v3 * Add unittest tips jobs * Support filtering servers in list\_servers using arbitrary parameters * Handle glance image pagination links better 1.24.0 ------ * Move role normalization to normalize.py * Allow domain\_id for roles * Add method to set bootable flag on volumes * Image should be optional * Add group parameter to create\_server * Fix image task uploads * Temporarily disable volume and os\_image functional tests * Consume publish-openstack-sphinx-docs * Record server.id in server creation exception * Stop using openstack-doc-build * Add support for network quota details command * Add pypi and doc publication templates * Updated from global requirements * Fix search\_groups * Remove EndpointCreate and \_project\_manager * Remove use of legacy keystone client in functional tests * Updated from global requirements * Remove keystoneclient dependency * De-client-ify Endpoint Create * Refactor the create endpoint code * Reorganize endpoint create code * Switch to constraints version of tox job * Convert test\_baremetal\_machine\_patch to testscenarios * Add openstack-doc-build to shade * Switch to normal tox-py35 job * Switch to using stestr * Migrate machine tests related to state transitions * Migrate machine inspection tests to requests\_mock * Migrate additional machine tests * De-client-ify Endpoint Update * De-client-ify List Role Assignments * De-client-ify Endpoint List * De-client-ify List Roles for User in v2.0 * De-client-ify Role Grant and Revoke * De-client-ify Endpoint Delete * De-client-ify User Password Update * Begin converting baremetal node tests * Remove improper exc handling in is\_user\_in\_group 1.23.0 ------ * De-client-ify Remove User from Group * Correct baremetal fake data model * De-client-ify Check User in Group * De-client-ify Add User to Group * Use direct calls to get\_\_by\_id * De-client-ify User Update * Use new keystoneauth version discovery * Fix typo in tox.ini * Updated from global requirements * Add tox\_install.sh to deal with upper-constraints * Support domain\_id for user operations * Add domain\_id to groups * Add handling timeout in servers cleanup function * Fix handling timeouts in volume functional tests cleanup * Fix switched params * Switch to \_is\_client\_version in list\_services * De-client-ify Service Delete * De-client-ify Service Update * Fix cleaning of Cinder volumes in functional tests * De-client-ify Service List * Add option to force delete cinder volume * Updated from global requirements * Fix determining if IPv6 is supported when it's disabled * Don't determine local IPv6 support if force\_ip4=True * Consolidate client version checks in an utility method * Add functional tests for Neutron QoS policies and rules * Support to get resource by id * Make get\_server\_console tests more resilient * Make QoS rules required parameters to be not optional * Use valid\_kwargs decorator in QoS related functions * Add support for get details of available QoS rule type * Use more specific asserts in tests * Add Neutron QoS minimum bandwidth rule commands * 
Update reno for stable/pike * Add Neutron QoS dscp marking rule commands * Updated from global requirements * router: Ignore L3 HA ports when listing interfaces * Initial commit of zuulv3 jobs * Update the documentation link for doc migration * Replace six.itervalues with dict.values() * Consolidate the use of self.\_get\_and\_munchify * De-client-ify Role Delete * De-client-ify Role List * De-client-ify Role Create * De-client-ify Group Delete * De-client-ify Group Update * De-client-ify Group List * De-client-ify Group Create * Updated from global requirements * Don't remove top-container element in the adapter * Improve doc formatting a bit * Added useful links to README * Add Neutron QoS bandwidth limit rule commands * De-client-ify Service Create * Add debug to tox environment * Remove hard-coding of timeout from API * Make sure we don't fail open on bad input to validate * Make sure we pass propert dicts to validate * Add flag to include all images in image list * Add support for list available QoS rule types * Add validation of required QoS extensions in Neutron * De-client-ify Domain Search * De-client-ify Domain Get * De-client-ify Domain List * De-client-ify User Create * Use the right variable name in userdata encoding * Add searching for Neutron API extensions * Add Neutron QoS policies commands * De-client-ify Domain Update and Delete * De-client-ify Domain Create * switch from oslosphinx to openstackdocstheme * reorganize docs using the new standard layout * Don't remove top-container element for flavor, zones and server groups * Updated from global requirements * Don't remove top-container element for flavors and clusters * Project update to change enabled only when provided 1.22.2 ------ * Fix mismatch between port and port-id for REST call * Remove a direct mocking of \_image\_client * Fix image normalization when image has properties property * Fix delete\_ips on delete\_server and add tests * Fix config\_drive, scheduler\_hints and key\_name in create\_server * Don't fail hard on 404 from neutron FIP listing * Only search for floating ips if the server has them 1.22.1 ------ * Don't try to delete fips on non-fip clouds * Return an empty list on FIP listing failure 1.22.0 ------ * Don't remove top-container element for server REST API calls * base64 encode user\_data sent to create server * Remove novaclient from shade's dependencies * Translate final nova calls to REST * Convert remaining nova tests to requests\_mock * Convert host aggregates calls to REST * Convert host aggregate tests to requests\_mock * Convert hypervisor list to REST * Convert hypervisor test to requests\_mock * Convert Server Groups to REST * Convert server group tests to requests\_mock * Convert FakeSecGroup to dict * Remove use of FakeServer from tests * Don't remove top-container element for user and project REST API calls * Convert keypairs calls to REST * Add normalization and functional tests for keypairs * Remove future document * Add text about microversions * Convert keypairs tests to requests\_mock * Convert list\_servers to REST * Convert list servers tests to requests\_mock * Remove some unused mocks * Break early from volume cleanup loop * Add some release notes we forgot to add * Retry to fetch paginated volumes if we get 404 for next link * docs: make the first example easier to understand * Properly expand server dicts after rebuild and update * Migrate non-list server interactions to REST * Increase timeout for volume tests * Skip pagination test for now * Fix urljoin for neutron 
endpoint * Remove py34 and pypy in tox * Replace six.iteritems() with .items() * Update tests for server calls that aren't list * Convert delete server calls to REST * Convert delete server mocks to requests\_mock * Convert get\_server\_by\_id * RESTify create\_server * Don't fetch extra\_specs in functional tests * Convert create\_server mocks to request\_mock * Add boot from volume unit tests * Cleanup volumes in functional tests in parallel * De-client-ify Project Update * De-client-ify Project Create * De-client-ify Project Delete * De-client-ify Project List * Don't remove top-container element for sec group REST API calls * Improve grant docs on when and how use domain arg * Don't remove top-container for stack and zone REST API calls * Updated from global requirements * Rename obj\_to\_dict and obj\_list\_to\_dict * Don't remove top-container element for network REST API calls * Convert data from raw clients to Munch objects * Remove unneeded calls to shade\_exceptions * Don't remove top-container element for volume REST API calls * Use get\_discovery from keystoneauth * De-client-ify User Ops * Add links to user list dict * Avoid keystoneclient making yet another discovery call * Use shade discovery for keystone * Updated from global requirements * Migrate dns to new discovery method * Generalize version discovery for re-use * Pass hints to Cinder scheduler in create\_volume * Remove designate client from shade's dependencies * Do less work when deleting a server and floating ips * Remove designateclient from commands related to recordsets * Add pagination for the list\_volumes call * Handle ports with no 'created\_at' attribute * Update test\_user\_update\_password to overlay clouds.yaml * Fix legacy clients helpers * Remove unused occ version tie * Remove designateclient from commands related to zones * Add documentation about shade's use of logging * Add novaclient interactions to http\_debug * Set some logger names explicitly * Add logging of non-standard error message documents * Log specific error message from RetriableConnectionFailure * Updated from global requirements * Fix python3 issues in functional tests * Add time reporting to Connection Retry message * Log cloud name on Connection retry issues * Use catalog endpoint on any errors in image version discovery 1.21.0 ------ * Pick most recent rather than first fixed address * Allow a user to submit start and end time as strings * Fix get\_compute\_limits error message * Fix get\_compute\_usage normalization problem * Find private ip addr based on fip attachment * Add ability to run any tox env in python3 * Fix issue with list\_volumes when pagination is used * Make sure security\_groups is always a list * Updated from global requirements * Remove direct uses of nova\_client in functional tests * Updated from global requirements * Remove designateclient mock from recordset tests * Convert list\_server\_security\_groups to REST * Remove two unused nova tasks * Include error message from server if one exists * Optimize the case of versioned image endpoint in catalog * Fix broken version discovery endpoints * Remove cinderclient from install-tips.sh * Fix tips jobs and convert Nova Floating IP calls * Convert first ironic\_client test to REST * Move mocks of designate API discovery calls to base test class * Fix exception when using boot\_from\_volume for create\_server * Move legacy client constructors to mixin * Fix pep8 errors that were lurking * Remove cinder client * Make deprecated client helper method * Add 'public' 
as a default interface for get\_mock\_url * Add super basic machine normalization * Remove designateclient mock from zones tests * Remove direct calls to cinderclient * Add "Multi Cloud with Shade" presentation * Use REST API for volume quotas calls * Add pprint and pformat helper methods * extend security\_group and \_rule with project id * Remove neutronclient from shade's dependencies * Remove cinderclient mocks from quotas tests * Fix Neutron floating IP test * Use REST API for volume snapshot calls * Remove usage of neutron\_client from functional tests * Enable neutron service in server create and rebuild tests * Replace neutronclient with REST API calls in FIP commands * Updated from global requirements * Add assert\_calls check testing volume calls with timeout enabled * Remove has\_service mock from Neutron FIP tests * Remove cinderclient mocks from snapshot tests * Remove neutronclient mocks from floating ips tests * Use REST API for volume attach and volume backup calls * Replace neutronclient with REST API calls in ports commands * Don't get ports info from unavailable neutron service * Removing unsed fake methods and classes * Replace neutronclient with REST API calls in quotas commands * Replace neutronclient with REST API calls in security groups commands * Use REST API for volume delete and detach calls * Use REST API for volume type\_access and volume create * Refactor the test\_create\_volume\_invalidates test * Replace neutronclient with REST API calls in router commands * Move REST error\_messages to error\_message argument * Remove two lines that are leftover and broken * Convert test\_role\_assignments to requests mock * Remove neutronclient mocks from sec groups tests * Remove neutronclient mocks from quotas tests * Remove neutronclient mocks from ports tests * Add optional error\_message to adapter.request * Add in a bunch of TODOs about interface=admin * Set interface=admin for keystonev2 keystone tests * Add a \_normalize\_volume\_backups method * Use requests-mock for the volume backup tests * Remove neutronclient mocks from router tests * Replace neutronclient with REST API calls in subnet commands * Define a base function to remove unneeded attributes * Remove neutronclient mocks from subnet tests * Replace neutronclient with REST API calls in network commands * Move router related tests to separate module * Updated from global requirements * Move subnet related tests to separate module * Fix list\_servers tests to not need a ton of neutron * Remove neutronclient mocks from network create tests * Remove neutronclient mocks from network exceptions tests * Remove neutronclient mocks from network delete tests * Remove neutronclient mocks from network list tests * Use requests-mock for the list/add/remove volume types tests * Fix create/rebuild tests to not need a ton of neutron * Don't do all the network stuff in the rebuild poll * Move unit tests for list networks to test\_network.py file * Include two transitive dependencies to work around conflicts * Use requests-mock for all the attach/detach/delete tests * Remove stray line * Strip trailing slashes in test helper method 1.20.0 ------ * Clarify some variable names in glance discovery * \_discover\_latest\_version is private and not used * Remove extra unneeded API calls * Change versioned\_endpoint to endpoint\_uri * Futureproof keystone unit tests against new occ * Actually fix the app\_name protection * Replace nova security groups with REST * Transition nova security group tests to REST * Remove dead 
ImageSnapshotCreate task * Pass in app\_name information to keystoneauth * Use REST for cinder list volumes * Upgrade list volumes tests to use requests-mock * Updated from global requirements * Pass shade version info to session user\_agent * Use keystone\_session in \_get\_raw\_client * Don't fail on security\_groups=None * Stop defaulting container\_format to ovf for vhd * Don't run extra server info on every server in list * Use REST for neutron floating IP list * Migrate create\_image\_snapshot to REST * Add ability to configure extra\_specs to be off * Migrate server snapshot tests to requests\_mock 1.19.0 ------ * Add test to validate multi \_ heat stack\_status * Fixed stack\_status.split() exception * Add server security groups to shade * Updated from global requirements * Add bare parameter to get/list/search server * Update tox build settings * Take care of multiple imports and update explanation * Reenable hacking tests that already pass * Enable H201 - don't throw bare exceptions * Enable H238 - classes should be subclasses of object * Fix a few minor annoyances that snuck in * Don't use project-id in catalog tests * Change metadata to align with team affiliation 1.18.1 ------ * Move futures to requirements 1.18.0 ------ * Remove python-heatclient and replace with REST * Replace heatclient testing with requests\_mock * Add normalization for heat stacks * Add list\_availability\_zone\_names method * Switch list\_floating\_ip\_pools to REST * Strip out novaclient extra attributes * Convert floating\_ip\_pools unittest to requests\_mock * Migrate get\_server\_console to REST * Migrate server console tests to requests\_mock * Fix old-style mocking of nova\_client * Accept device\_id option when updating ports * Get rid of magnumclient dependency * attach\_volume should always return a vol attachment * wait\_for\_server: ensure we sleep a bit when waiting for server * delete\_server: make sure we sleep a bit when waiting for server deletion * Convert magnum service to requests\_mock * RESTify cluster template tests * Add normalization for cluster templates * Get the ball rolling on magnumclient * Use data when the request has a non-json content type * Cleanup some workarounds for old OCC versions * add separate releasenotes build * Update sphinx and turn on warnings-is-error * Convert test\_identity\_roles to requests mock * change test\_endpoints to use requests mock 1.17.0 ------ * Depend on pbr>=2.0.0 * Convert test\_services to requests\_mock * Only do fnmatch compilation and logging once per loop * Put fnmatch code back, but safely this time * Replace keystone\_client mock in test\_groups * Use unicode match for name\_or\_id * Raise a more specific exception on nova 400 errors * Don't glob match name\_or\_id * Rename ClusterTemplate in OpenStackCloud docs * Fix OpenStack and ID misspellings * Remove service names in OpenStackCloud docs * Convert test\_object to use .register\_uris * Convert use of .register\_uri to .register\_uris * Change request\_id logging to match nova format * Actually normalize nova usage data * Fix several concurrent shade gate issues * Wait for volumes to detach before deleting them * Add accessor method to pull URLs from the catalog * Convert use of .register\_uri to .register\_uris * Remove keystoneclient mocks in test\_caching for users * Remove mock of keystoneclient for test\_caching for projects * Remove mock of keystone where single projects are consumed * Rename demo\_cloud to user\_cloud * Add all\_projects parameter to list and search servers 
* Convert test\_project to requests\_mock * convert test\_domain to use requests\_mock * Move mock utilies into base * Convert test\_users to requests\_mock * Add request validation to user v2 test * Convert first V3 keystone test to requests\_mock * Cleanup new requests\_mock stuff for test\_users * First keystone test using request\_mock 1.16.0 ------ * Add test of attaching a volume at boot time * pass -1 for boot\_index of non-boot volumes * Pass task to post\_task\_run hook * Rename ENDPOINT to COMPUTE\_ENDPOINT * Transition half of test\_floating\_ip\_neutron to requests\_mock * Start switching neutron tests * Port in log-on-failure code from zuul v3 * Honor cloud.private in the check for public connectivity * Support globbing in name or id checks * Stop spamming logs with unreachable address message * Remove troveclient from the direct dependency list * Move nova flavor interactions to REST * Migrate flavor usage in test\_create\_server to request\_mock * Migrate final flavor tests to requests\_mock * Move flavor cache tests to requests\_mock * Transition nova flavor tests to requests\_mock * Add ability to create image from volume 1.15.0 ------ * Use port list to find missing floating ips * Process json based on content-type * Copy in needed template processing utils from heatclient * Upload images to swift as application/octet-stream * Add ability to stream object directly to file * Update coding document to mention direct REST calls * Skip discovery for neutron * Add helper test method for registering REST calls * Do neutron version discovery and change one test * Add raw client constructors for all the things * Replace SwiftService with direct REST uploads * Fix spin-lock behavior in \_iterate\_timeout * Add helper script to install branch tips * Basic volume\_type access * Add support to task manager for async tasks * Added list\_flavor\_access * Removes unnecessary utf-8 encoding * Log request ids when debug logging is enabled * Honor image\_endpoint\_override for image discovery * Rework limits normalization 1.14.1 ------ * Handle pagination for glance images * Add support for using the default subnetpool * Remove link to modindex 1.14.0 ------ * Fix exception name typo * Add failure check to node\_set\_provision\_state * Add test to verify devstack keystone config * Make assert\_calls a bit more readable * Update swift exception tests to use 416 * Make delete\_object return True and False * Switch swift calls to REST * Stop using full\_listing in prep for REST calls * Stop calling HEAD before DELETE for objects * Replace mocks of swiftclient with request\_mock * Put in magnumclient service\_type workaround * Let use\_glance handle adding the entry to self.calls * Combine list of calls with list of request assertions * Extract helper methods and change test default to v3 * Make munch aware assertEqual test method * Extract assertion method for asserting calls made * Change get\_object\_metadata to use REST * Update test of object metadata to mock requests * Add release notes and an error message for release * Add total image import time to debug log * Clear the exception stack when we catch and continue * Magnum's keystone id is container-infra, not container * Stop double-reporting extra\_data in exceptions * Pass md5 and sha256 to create\_object sanely * Convert glance parts of task test to requests\_mock * Collapse base classes in test\_image * Skip volume backup tests on clouds without swift * Add new attributes to floating ips * Add test to trap for missing services * 
Change fixtures to use https * Honor image\_api\_version when doing version discovery * Replace swift capabilities call with REST * Change register\_uri to use the per-method calls * Convert test\_create\_image\_put\_v2 to requests\_mock * Remove caching config from test\_image * Move image tests from caching to image test file * Remove glanceclient and warlock from shade * Remove a few glance client mocks we missed * Change image update to REST * Make available\_floating\_ips use normalized keys * Fix \_neutron\_available\_floating\_ips filtering * Stop telling users to check logs * Plumb nat\_destination through for ip\_pool case * Update image downloads to use direct REST * Move image tasks to REST * Add support for limits * Tox: optimize the \`docs\` target * Replace Image Create/Delete v2 PUT with REST calls * Replace Image Creation v1 with direct REST calls * Remove test of having a thundering herd * Pull service\_type directly off of the Adapter * Add compute usage support 1.13.2 ------ * Re-add metadata to image in non-strict mode * Added documentation for delete\_image() * Add an e to the word therefore * Allow server to be snapshot to be name, id or dict * Add docstring for create\_image\_snapshot * Allow security\_groups to be a scalar * Remove stray debugging line * Start using requests-mock for REST unit tests * Have OpenStackHTTPError inherit from HTTPError * Use REST for listing images * Create and use a Adapter wrapper for REST in TaskManager * Normalize volumes * Expose visibility on images 1.13.1 ------ * Be specific about protected being bool * Remove pointless and fragile unittest * Fail up to date check on one out of sync value * Normalize projects * Cache file checksums by filename and mtime * Only generate checksums if neither is given * Make search\_projects a special case of list\_projects * Make a private method more privater 1.13.0 ------ * Add unit test to show herd protection in action * Refactor out the fallback-to-router logic * Update floating ip polling to account for DOWN status * Use floating-ip-by-router * Don't fail on trying to delete non-existant images * Allow server-side filtering of Neutron floating IPs * list\_servers(): thread safety: never return bogus data * Depend on normalization in list\_flavors * Add unit tests for image and flavor normalization * Add strict mode for trimming out non-API data * list\_security\_groups: enable server-side filtering * Don't fail image create on failure of cleanup * Try to return working IP if we get more than one * Add test for os\_keystone\_role Ansible module * Document and be more explicit in normalization * Add external\_ipv4\_floating\_networks * Logging: avoid string interpolation when not needed * Add a devstack plugin for shade * Allow setting env variables for functional options * Add test for os\_keystone\_domain Ansible module * Add abililty to find floating IP network by subnet * Remove useless mocking in tests/unit/test\_shade.py * Fix TypeError in list\_router\_interfaces * Fix a NameError exc in operatorcloud.py * Fix some docstrings * Fix a NameError exception in \_nat\_destination\_port * Implement create/get/list/delete volume backups * Move normalize\_neutron\_floating\_ips to \_normalize * Delete image if we timeout waiting for it to upload * Add description field to create\_user method * Allow boolean values to pass through to glance * Update location info to include object owner * Move and fix security group normalization * Add location field to flavors * Move normalize\_flavors to 
\_normalize * Move image normalize calls to \_normalize * Add location to server record * Start splitting normalize functions into a mixin * Make sure we're matching image status properly * Normalize images * Add helper properties to generate location info * Update simple\_logging to not not log request ids by default * Add simple field for disabled flavors * List py35 in the default tox env list * remove\_router\_interface: check subnet\_id or port\_id is provided * Add test for os\_group Ansible module * Add support for jmespath filter expressions 1.12.1 ------ * Add libffi-dev to bindep.txt 1.12.0 ------ * Use list\_servers for polling rather than get\_server\_by\_id * Fix up image and flavor by name in create\_server * Batch calls to list\_floating\_ips * Allow str for ip\_version param in create\_subnet * Skip test creating provider network if one exists * Revert per-resource dogpile.cache work * Fix two minor bugs in generate\_task\_class * Change naming style of submitTask * Add submit\_function method to TaskManager * Refactor TaskManager to be more generic * Poll for image to be ready for PUT protocol * Cleanup old internal/external network handling * Support dual-stack neutron networks * Rename \_get\_free\_fixed\_port to \_nat\_destination\_port * Log request ids * Detect the need for FIPs better in auto\_ip * Delete objname in image\_delete * Move list\_server cache to dogpile * Ensure per-resource caches work without global cache * Support more than one network in create\_server * Add support for fetching console logs from servers * Allow image and flavor by name for create\_server * Allow object storage endpoint to return 404 for missing /info endpoint * Batch calls to list\_floating\_ips * Get the status of the ip with ip.get('status') * Stop getting extra flavor specs where they're useless * Change deprecated assertEquals to assertEqual * Use cloud fixtures from the unittest base class * Add debug logging to unit test base class * Update HACKING.rst with a couple of shade specific notes * Only run flake8 on shade directory * Add bindep.txt file listing distro depends * Set physical\_network to public in devstack test * Use "image" as argument for Glance V1 upload error path * Honor default\_interface OCC setting in create\_server * Validate config vs reality better than length of list * Base auto\_ip on interface\_ip not public\_v4 * Add tests to show IP inference in missed conditions 1.11.1 ------ * Deal with clouds that don't have fips betterer 1.11.0 ------ * Infer nova-net security groups better * Add update\_endpoint() * Protect cinderclient import * Do not instantiate logging on import * Don't supplement floating ip list on clouds without * Move list\_ports to using dogpile.cache * Create and return per-resource caches * Lay the groundwork for per-resource cache 1.10.0 ------ * Rename baymodel to cluster\_template * Make shared an optional keyword param to create\_network * Add a 'meta' passthrough parameter for glance images * Allow creating a floating ip on an arbitrary port * Add ability to upload duplicate images * Fix requirements for broken os-client-config * Add new test with betamax for create flavors * Stop creating cloud objects in functional tests * Move list\_magnum\_services to OperatorCloud * Go ahead and admit that we return Munch objects * Depend on python-heatclient>=1.0.0 * Add update\_server method * Remove discover from test-requirements * Update hacking version * Change operating to interacting with in README * Add floating IPs to server dict 
ourselves * Treat DELETE\_COMPLETE stacks as NotFound * Add support for changing metadata of compute instances * Use keystoneauth.betamax for shade mocks * Add network quotas support * Add reno note for create\_object and update\_object * Add magnum services call to shade * Add function to update object metadata * incorporate unit test in test\_shade.py, remove test\_router.py fix tenant\_id in router add functional test test\_create\_router\_project to functional/test\_router.py add unit/test\_router.py add project\_id to create\_router * Add magnum baymodel calls to shade * Make it easier to give swift objects metadata * Add volume quotas support * Add quotas support 1.9.0 ----- * Add error logging around FIP delete * Be more precise in our detection of provider networks * Rework delete\_unattached\_floating\_ips function * Make sure Ansible tests only use cirros images * Don't fail getting flavors if extra\_specs is off * Add initial setup for magnum in shade * Amend the valid fields to update on recordsets * Move cloud fixtures to independent yaml files * Add support for host aggregates * Add support for server groups * Add release note doc to dev guide 1.8.0 ----- * Add Designate recordsets support * Add support for Designate zones * Fail if FIP doens't have the requested port\_id * Add public helper method for cleaning floating ips * Rework floating ip use test to be neutron based * Delete floating IP on nova refresh failure * Retry floating ip deletion before deleting server * Have delete\_server use the timed server list cache * Document create\_stack * delete\_stack add wait argument * Implement update\_stack * Fix string formatting * Add domain\_id param to project operations * Remove get\_extra parameter from get\_flavor * Honor floating\_ip\_source: nova everywhere * Use configured overrides for internal/external * Start stamping the has\_service debug messages * Consume floating\_ip\_source config value * Honor default\_network for interface\_ip * Refactor the port search logic * Allow passing nat\_destination to get\_active\_server * Add nat\_destination filter to floating IP creation * Refactor guts of \_find\_interesting\_networks * Search subnets for gateway\_ip to discover NAT dest * Consume config values for NAT destination * Return boolean from delete\_project * Correct error message when domain is required * Add release note about the swift Large Object changes * Delete image objects after failed upload * Delete uploaded swift objects on image delete * Add option to control whether SLO or DLO is used * Upload large objects as SLOs * Set min\_segment\_size from the swift capabilities * Don't use singleton dicts unwittingly * Update func tests for latest devstack flavors * Fix search\_domains when not passing filters * Wrap stack operations in a heat\_exceptions * Use event\_utils.poll\_for\_events for stack polling * Follow name\_or\_id pattern on domain operations 1.7.0 ----- * Remove conditional blocking on server list * Cache ports like servers * Workaround multiple private network ports * Reset network caches after network create/delete * Fix test\_list\_servers unit test * Fix test\_get\_server\_ip unit test * Remove duplicate FakeServer class from unit tests * Mutex protect internal/external network detection * Support provider networks in public network detection * Re-allow list of networks for FIP assignment * Support InsecureRequestWarning == None * Add release notes for new create\_image\_snapshot() args * Split waiting for images into its own method 1.6.2 
----- * Add wait support to create\_image\_snapshot() * Also add server interfaces for server get * Import os module as it is referenced in line 2097 * Fix grant\_role docstring 1.6.1 ----- * Add default value to wait parameter 1.6.0 ----- * Use OpenStackCloudException when \_delete\_server() raises * Always do network interface introspection * Fix race condition in deleting volumes * Use direct requests for flavor extra\_specs set/unset * Fix search\_projects docstring * Fix search\_users docstring * Add new tasks to os\_port playbook * Deal with is\_public and ephemeral in normalize\_flavors * Create clouds in Functional Test base class * Run extra specs through TaskManager and use requests * Bug fix: Make set/unset of flavor specs work again * Refactor unit tests to construct cloud in base * Add constructor param to turn on inner logging * Log inner\_exception in test runs * Pass specific cloud to openstack\_clouds function * Make get\_stack fetch a single full stack * Add environment\_files to stack\_create * Add normalize stack function for heat stack\_list * Add wait\_for\_server API call * Update create\_endpoint() * Make delete\_project to call get\_project * Test v3 params on v2.0 endpoint; Add v3 unit * Add update\_service() * Use network in neutron\_available\_floating\_ips * Allow passing project\_id to create\_network 1.5.1 ----- * In the service lock, reset the service, not the lock * Bug fix: Do not fail on routers with no ext gw 1.5.0 ----- * Mock glance v1 image with object not dict * Use warlock in the glance v2 tests * Fixes for latest cinder and neutron clients * Add debug message about file hash calculation * Pass username/password to SwiftService * Also reset swift service object at upload time * Invalidate volume cache when waiting for attach * Use isinstance() for result type checking * Add test for os\_server Ansible module * Fix create\_server() with a named network * os\_router playbook cleanup * Fix heat create\_stack and delete\_stack * Catch failures with particular clouds * Allow testing against Ansible dev branch * Recognize subclasses of list types * Add ability to pass just filename to create\_image * Add support for provider network options * Remove mock testing of os-client-config for swift * Add a method to download an image from glance * Add test option to use Ansible source repo * Add enabled flag to keystone service data * Clarify Munch object usage in documentation * Add docs tox target * create\_service() should normalize return value * Prepare functional test subunit stream for collection * Use release version of Ansible for testing * Modify test workaround for extra\_dhcp\_opts * Fix for stable/liberty job * granting and revoking privs to users and groups * Add release note for FIP timeout fix * include keystonev2 role assignments * Add release note for new get\_object() API call * Pass timeout through to floating ip creation * Fix normalize\_role\_assignments() return value * Remove a done todo list item * add the ability to get an object back from swift * allow for updating passwords in keystone v2 * Support neutron subnets without gateway IPs * Save the adminPass if returned on server create * Fix unit tests that validate client call arguments * Allow inventory filtering by cloud name * Add range search functionality 1.4.0 ----- * correct rpmlint errors * Add tests for stack search API * Fix filtering in search\_stacks() * Bug fix: Cinder v2 returns bools now * Normalize server objects * Make server variable expansion optional * Have inventory 
use os-client-config extra\_config * Fix unittest stack status * Fix shade tests with OCC 1.13.0 * No Mutable Defaults * Add option to enable HTTP tracing * Add support for querying role assignments * Add inventory unit tests * Fix server deletes when cinder isn't available * Pedantic spelling correction * Bug fix: create\_stack() fails when waiting * Stack API improvements * Bug fix: delete\_object() returns True/False * Add wait support for ironic node [de]activate * Improve test coverage: container/object list API * Make a new swift client prior to each image upload * Improve test coverage: volume attach/detach API * Bug fix: Allow name update for domains * Improve test coverage: network delete API * Bug fix: Fix pass thru filtering in list\_networks * Consider 'in-use' a non-pending volume for caching * Improve test coverage: private extension API * Improve test coverage: hypervisor list * Use reno for release notes * Improve test coverage: list\_router\_interfaces API * Change the client imports to stop shadowing * Use non-versioned cinderclient constructor * Improve test coverage: server secgroup API * Improve test coverage: container API 1.3.0 ----- * Improve test coverage: project API * Improve test coverage: user API * Provide a better comment for the object short-circuit * Remove cinderclient version pin * Add functional tests for boot from volume * Enable running tests against RAX and IBM * Don't double-print exception subjects * Accept objects in name\_or\_id parameter * Normalize volume objects * Fix argument sequences for boot from volume * Make delete\_server() return True/False * Adjust conditions when enable\_snat is specified * Only log errors in exceptions on demand * Fix resource leak in test\_compute * Clean up compute functional tests * Stop using nova client in test\_compute * Retry API calls if they get a Retryable failure * Fix call to shade\_exceptions in update\_project * Add test for os\_volume Ansible module 1.2.0 ----- * Fix for min\_disk/min\_ram in create\_image API * Add test for os\_image Ansible module * Fix warnings.filterwarnings call * boot-from-volume and network params for server create * Do not send 'router:external' unless it is set * Add test for os\_port Ansible module * Allow specifying cloud name to ansible tests * Fix a 60 second unit test * Make sure timeouts are floats * Remove default values from innner method * Bump os-client-config requirement * Add test for os\_user\_group Ansible module * Add user group assignment API * Add test for os\_user Ansible module * Add test for os\_nova\_flavor Ansible module * Stop using uuid in functional tests * Make functional object tests actually run * Add Ansible object role * Fix for create\_object * Four minor fixes that make debugging better * Add new context manager for shade exceptions, final * Add ability to selectively run ansible tests * Add Ansible testing infrastructure * Add new context manager for shade exceptions, cont. 
again * Pull server list cache setting via API * Plumb fixed\_address through add\_ips\_to\_server * Let os-client-config handle session creation * Remove designate support * Remove test reference to api\_versions * Update dated project methods * Fix incorrect variable name * Add CRUD methods for keystone groups * Bump ironicclient depend * Make sure cache expiration time is an int * Add new context manager for shade exceptions, cont * Use the requestsexceptions library * Don't warn on configured insecure certs * Normalize domain data * Normalization methods should return Munch * Fix keystone domain searching * Add new context manager for shade exceptions * teach shade how to list\_hypervisors * Update ansible router playbook * Stop calling obj\_to\_dict everwhere * Always return a munch from Tasks * Make raw-requests calls behave like client calls * Minor logging improvements * Remove another extraneous get for create\_server * Don't wrap wrapped exception in create\_server * Skip an extra unneeded server get * Don't wrap wrapped exceptions in operatorcloud.py * Add docs for create\_server * Update README to not reference client passthrough * Move ironic client attribute to correct class * Move \_neutron\_exceptions context manager to \_utils * Fix misspelling of ironic state name * Timeout too aggressive for inspection tests * Split out OpenStackCloud and OperatorCloud classes * Adds volume snapshot functionality to shade * Fix the return values of create and delete volume * Remove removal of jenkins clouds.yaml * Consume /etc/openstack/clouds.yaml * Add logic to support baremetal inspection 1.0.0 ----- * node\_set\_provision\_state wait/timeout support * Add warning suppression for keystoneauth loggers * Suppress Rackspace SAN warnings again * return additional detail about servers * expand security groups in get\_hostvars\_from\_server * add list\_server\_security\_groups method * Add swift object and container list functionality * Translate task name in log message always * Add debug logging to iterate timeout * Change the fallback on server wait to 2 seconds * Add entry for James Blair to .mailmap * handle routers without an external gateway in list\_router\_interfaces * Fix projects list/search/get interface * Remove unused parameter from create\_stack * Move valid\_kwargs decorator to \_utils * Add heat support * Abstract out the name of the name key * Add heatclient support * Use OCC to create clouds in inventory * novaclient 2.32.0 does not work against rackspace * Support private address override in inventory * Normalize user information * Set cache information from clouds.yaml * Make designate record methods private for now * Rely on devstack for clouds.yaml * Rename identity\_domain to domain * Rename designate domains to zones * Replace Bunch with compatible fork Munch * Make a few IP methods private 0.16.0 ------ * Push filtering down into neutron * Make floating IP func tests less racey * Make router func tests less racey * Create neutron floating ips with server info * Undecorate cache decorated methods on null cache * Tweak create\_server to use list\_servers cache * Add API method to list router interfaces * Handle list\_servers caching more directly * Split the nova server active check out * Pass wait to add\_ips\_to\_server * Fix floating ip removal on delete server * Document filters for get methods * Add some more docstrings * Remove shared=False from get\_internal\_network * Make attach\_instance return updated volume object * Tell git to ignore .eggs directory * 
Align users with list/search/get interface * Add script to document deleting private networks * Add create/delete for keystone roles * Accept and emit union of keystone v2/v3 service * Use keystone v3 service type argument * Add get/list/search methods for identity roles * Add methods to update internal router interfaces * Add get\_server\_by\_id optmization * Add option to floating ip creation to not reuse * Provide option to delete floating IP with server * Update python-troveclient requirement * Add a private method for nodepool server vars * Update required ironicclient version * Split get\_hostvars\_from\_server into two * Invalidate image cache everytime we make a change * Use the ipaddress library for ip calculations * Optimize network finding * Fix create\_image\_snapshot 0.15.0 ------ * Return IPv6 address for interface\_ip on request * Plumb wait and timout down to add\_auto\_ip * Pass parameters correctly for image snapshots * Fix mis-named has\_service entry * Provide shortcut around has\_service * Provide short-circuit for finding server networks * Update fake to match latest OCC * Dont throw exception on missing service * Add functional test for private\_v4 * Attempt to use glanceclient strip\_version * Fix baremetal port deletion 0.14.0 ------ * Add router ansible test and update network role * Trap exceptions in helper functions * Add more info to some exceptions * Allow more complex router updates * Allow more complex router creation * Allow creating externally accessible networks * Handle glance v1 and v2 difference with is\_public * Get defaults for image type from occ * Use the get\_auth function from occ * Add a NullHandler to all of our loggers * Remove many redundant debug logs * Make inner\_exception a private member * Just do the error logging in the base exception * Store the inner exception when creating an OSCException * Start using keystoneauth for keystone sessions * Move keystone to common identity client interface * Bump the default API version for python-ironicclient * Avoid 2.27.0 of novaclient * unregister\_machine blocking logic * Fix exception lists in functional tests * Migrate neutron to the common client interface * Remove last vestige of glanceclient being different * Pass timeout to session, not constructors * Delete floating ip by ID instead of name 0.13.0 ------ * Move glanceclient to new common interface * Addition of shade unregister\_machine timeout * Initial support for ironic enroll state * Add flavor access API * Make client constructor calls consistent * Change functional testing to use clouds.yaml * Add a developer coding standards doc 0.12.0 ------ * Add flavor functional tests * Bug fix for obj\_to\_dict() * Add log message for when IP addresses fail * Add methods to set and unset flavor extra specs * Listing flavors should pull all flavors * Be consistent with accessing server dict * Throw an exception on a server without an IP * Be smarter finding private IP * Clarify future changes in docs * Remove meta.get\_server\_public\_ip() function * Document create\_object * Remove unused server functions * Fix two typos and one readablity on shade documentation * Pass socket timeout to swiftclient * Process config options via os-client-config * Update ansible subnet test * Fix test\_object.py test class name * Fix for swift servers older than 1.11.0 * Use disable\_vendor\_agent flags in create\_image * Use os-client-config SSL arg processing * Correctly pass the server ID to add\_ip\_from\_pool * Add initial designate read-only operations * 
Always use a fixed address when attaching a floating IP to a server * Catch leaky exceptions from create\_image() 0.11.0 ------ * Add flavor admin support * Fix debug logging lines * Account for Error 396 on Rackspace * Fix small error in README.rst * Allow use of admin tokens in keystone * Fix identity domain methods * Update ansible module playbooks * Rework how we get domains * Fix "Bad floatingip request" when multiple fixed IPs are present * Add Ansible module test for subnet * Add Ansible module test for networks * Add a testing framework for the Ansible modules * Support project/tenant and domain vs. None * Add CRUD methods for Keystone domains 0.10.0 ------ * Raise exception for nova egress secgroup rule * Modify secgroup rule processing * Make sure we are returning floating IPs in current domain * Correctly name the functional TestImage class 0.9.0 ----- * Locking ironic API microversion * Add Neutron/Nova Floating IP tests * Adding SSL arguments to glance client * Remove list\_keypair\_dicts method * Do not use environment for Swift unit tests * Add Neutron/Nova Floating IP attach/detach * Fix available\_floating\_ip when using Nova network * Skip Swift functional tests if needed * Fix AttributeError in keystone functional tests * Update keypair APIs to latest standards * Add Neutron/Nova Floating IP delete (i.e. deallocate from project) * Add Neutron/Nova Floating IP create (i.e. allocate to project) * Convert ironicclient node.update() call to Task * Convert ironicclient node.get() call to Task * Move TestShadeOperator in a separate file * Fix intermittent error in unit tests * Pin cinderclient * Add comment explaining why finding an IP is hard * Add IPv6 to the server information too * Use accessIPv4 and accessIPv6 if they're there * Add Neutron/Nova Floating IP list/search/get 0.8.2 ----- * Catch all exceptions around port for ip finding * Centralize exception management for Neutron 0.8.1 ----- * Fix MD5 headers regression * Ensure that service values are strings * Pass token and endpoint to swift os\_options * Convert ironicclient node.validate() call to Task * Convert ironicclient node.list() call to Task * Return True/False for delete methods 0.8.0 ----- * Add delete method for security group rules * Add get\_server\_external\_ipv6() to meta * Refactor find\_nova\_addresses() * Replace get\_server\_public\_ip() with get\_server\_external\_ipv4() * Add get\_server\_external\_ipv4() to meta * Add more parameters to update\_port() * Improve documentation for create\_port() * Correct get\_machine\_by\_mac and add test * Add create method for secgroup rule * Coalesce port values in secgroup rules * Move \_utils unit testing to separate file 0.7.0 ----- * Add secgroup update API * Add very initial support for passing in occ object * Don't emit volume tracebacks in inventory debug * Return new secgroup object * Port ironic client port.get\_by\_address() to a Task * Port ironic client port.get() to a Task * Add inventory command to shade * Extract logging config into a helper function * Add create method for security groups * Add delete method for security groups * Switch to SwiftService for segmented uploads * Add support to get a SwiftService object * Add port resource methods * Split security group list operations * Add keystone endpoint resource methods * Add Keystone service resource methods * Rely on defaults being present * Consume os\_client\_config defaults as base defaults * Remove hacking select line * Add design for an object interface * Port ironic client 
node.list\_ports() to a Task * Port ironic client port.list() to a Task * Split list filtering into \_utils 0.6.5 ----- * Cast nova server object to dict after refetch * Split iterate\_timeout into \_utils * Cleanup OperatorCloud doc errors/warnings * Update pbr version pins 0.6.4 ----- * Set metadata headers on object create 0.6.3 ----- * Always refresh glanceclient for tokens validity * Don't cache keystone tokens as KSC does it for us * Make sure glance image list actually runs in Tasks 0.6.2 ----- * Make caching work when cloud name is None * Handle novaclient exception in delete\_server wait * Support PUT in Image v2 API * Make ironic use the API version system * Catch client exceptions during list ops * Replace ci.o.o links with docs.o.o/infra * Pass OS\_ variables through to functional tests * Improve error message on auth\_plugin failure * Handle novaclient exceptions during delete\_server * Add floating IP pool resource methods * Don't error on missing certs 0.6.1 ----- * Stop leaking server objects * Use fakes instead of mocks for data objects * Update images API for get/list/search interface * Rewrite extension checking methods * Update server API for get/list/search interface * Fix delete\_server when wait=True * Return Bunch objects instead of plain dicts 0.6.0 ----- * Switch tasks vs put on a boolean config flag * Enhance the OperatorCloud constructor * Convert node\_set\_provision\_state to task * Update recent Ironic exceptions * Enhance error message in update\_machine * Rename get\_endpoint() to get\_session\_endpoint() * Make warlock filtering match dict filtering * Fix exception re-raise during task execution for py34 * Add more tests for server metadata processing * Add thread sync points to Task * Add early service fail and active check method * Add a method for getting an endpoint * Raise a shade exception on broken volumes * Split exceptions into their own file * Add minor OperatorCloud documentation * Allow for int or string ID comparisons * Change ironic maintenance method to align with power method * Add Ironic machine power state pass-through * Update secgroup API for new get/list/search API * Fix functional tests to run against live clouds * Add functional tests for create\_image * Do not cache unsteady state images * Add tests and invalidation for glance v2 upload * Allow complex filtering with embedded dicts * Call super in OpenStackCloudException * Add Ironic maintenance state pass-through * Add update\_machine method * Replace e.message with str(e) * Update flavor API for new get/list/search API * Add a docstring to the Task class * Remove REST links from inventory metadata * Have get\_image\_name return an image\_name * Fix get\_hostvars\_from\_server for volume API update * Add test for create\_image with glance v1 * Explicitly request cloud name in test\_caching * Add test for caching in list\_images * Test flavor cache and add invalidation * Fix major update\_user issues * create\_user should return the user created * Test that deleting user invalidates user cache * Use new getters in update\_subnet and update\_router * Update volume API for new getters and dict retval * Search methods for networks, subnets and routers * Update unregister\_machine to use tasks * Invalidate user cache on user create * Update register\_machine to use tasks * Add test of OperatorCloud auth\_type=None * Allow name or ID for update\_router() * Allow name or ID for update\_subnet() * Add test for user\_cache * MonkeyPatch time.sleep in unit tests to avoid wait * Add 
patch\_machine method and operator unit test substrate * Wrap ironicclient methods that leak objects * Basic test for meta method obj\_list\_to\_dict * Change Ironic node lookups to support names * Add meta method obj\_list\_to\_dict * Add test for invalidation after delete * Deprecate use of cache in list\_volumes * Invalidate volume list cache when creating * Make cache key generator ignore cache argument * Add get\_subnet() method * Add API method update\_subnet() * Add API method delete\_subnet() * Add API method create\_subnet() * Unsteady state in volume list should prevent cache * Test volume list caching * Allow passing config into shade.openstack\_cloud * Refactor caching to allow per-method invalidate * Add tests for caching * Rename auth\_plugin to auth\_type * Update os-client-config min version * Fix volume operations * Fix exception in update\_router() * Add API auto-generation based on docstrings 0.5.0 ----- * Fix docs nit - make it clear the arg is a string * Poll on the actual image showing up * Add delete\_image call * Skip passing in timeout to glance if it's not set * Add some unit test for create\_server * Migrate API calls to task management * Fix naming inconsistencies in rebuild\_server tests * Add task management framework * Namespace caching per cloud * Allow for passing cache class in as a parameter * Add 'rebuild' to shade * Let router update to specify external gw net ID * Create swift container if it does not exist * Fix a use of in where it should be equality * Disable warnings about old Rackspace certificates * Pass socket timeout to all of the Client objects * Add methods for logical router management * Add api-level timeout parameter * Custom exception needs str representation 0.4.0 ----- * Add basic unit test for shade.openstack\_cloud * Small fixes found working on ansible modules * Disable dogpile.cache if cache\_interval is None * Add support for keystone projects * Fix up and document input parameters * Handle image name for boot from volume * Clean up race condition in functional tests * Add initial compute functional tests to Shade * Add cover to .gitignore * Add ironic node deployment support * Align cert, key, cacert and verify with requests * Add methods to create and delete networks * Add behavior to enable ironic noauth mode * Reorder envlist to avoid the rm -fr .testrepository when running tox -epy34 0.3.0 ----- * Make image processing work for v2 * Utilize dogpile.cache for caching * Add support for volume attach/detach * Do not allow to pass \*-cache on init * Import from v2 instead of v1\_1 * Add unit test for meta.get\_groups\_from\_server * Add unit tests for meta module * Add a method to create image snapshots from nova * Return extra information for debugging on failures * Don't try to add an IP if there is one * Revamp README file * Add hasExtension method to check cloud capabilities * Don't compare images when image is None * Add service\_catalog property * Remove unnecessary container creation * Make is\_object\_stale() a public method * Fix broken object hashing * Adds some more swift operations * Adds get\_network() and list\_networks function * Add support for creating/deleting volumes * Get auth token lazily * Pass service\_name to nova\_client constructor * Create a neutron client * Port to use keystone sessions and auth plugins * Add consistent methods for returning dicts * Add get\_flavor method * Make get\_image return None * Use the "iterate timeout" idiom from nodepool * Fix obj\_to\_dict type filtering * Adds a method to 
get security group * Pull in improvements from nodepool * Remove positional args to create\_server * Don't include deleted images by default * Add image upload support * Refactor glance version call into method * Support uploading swift objects * Debug log any time we re-raise an exception * Remove py26 support * Explain obj\_to\_dict * Fix python3 unittests * Change meta info to be an Infra project * Fix flake8 errors and turn off hacking * Fix up copyright headers * Add better caching around volumes * Support boot from volume * Make get\_image work on name or id * Add some additional server meta munging * Support injecting mount-point meta info * Move ironic node create/delete logic into shade * Refactor ironic commands into OperatorCloud class * fix typo in create\_server * Don't die if we didn't grab a floating ip * Process flavor and image names * Stop prefixing values with slugify * Don't access object members on a None * Make all of the compute logic work * Add delete and get server name * Fixed up a bunch of flake8 warnings * Add in server metadata routines * Plumb through a small name change for args * Consume project\_name from os-client-config * add Ironic client * Updates to use keystone session * Discover Trove API version * Offload config to the os-client-config library * Add example code to README * Add volumes and config file parsing * Fix log invocations * Remove some extra lines from the README * Add the initial library code * Initial cookiecutter repo shade-1.31.0/MANIFEST.in0000666000175000017500000000013513440327640014444 0ustar zuulzuul00000000000000include AUTHORS include ChangeLog exclude .gitignore exclude .gitreview global-exclude *.pycshade-1.31.0/CONTRIBUTING.rst0000666000175000017500000000213313440327640015347 0ustar zuulzuul00000000000000.. _contributing: ===================== Contributing to shade ===================== If you're interested in contributing to the shade project, the following will help get you started. Contributor License Agreement ----------------------------- .. index:: single: license; agreement In order to contribute to the shade project, you need to have signed OpenStack's contributor's agreement. .. seealso:: * http://wiki.openstack.org/HowToContribute * http://wiki.openstack.org/CLA Project Hosting Details ------------------------- Project Documentation http://docs.openstack.org/infra/shade/ Bug tracker http://storyboard.openstack.org Mailing list (prefix subjects with ``[shade]`` for faster responses) http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra Code Hosting https://git.openstack.org/cgit/openstack-infra/shade Code Review https://review.openstack.org/#/q/status:open+project:openstack-infra/shade,n,z Please read `GerritWorkflow`_ before sending your first patch for review. .. 
_GerritWorkflow: https://wiki.openstack.org/wiki/GerritWorkflow shade-1.31.0/doc/0000775000175000017500000000000013440330010013433 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/0000775000175000017500000000000013440330010014733 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/user/0000775000175000017500000000000013440330010015711 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/user/examples/0000775000175000017500000000000013440330010017527 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/user/examples/create-server-dict.py0000666000175000017500000000156613440327640023622 0ustar zuulzuul00000000000000import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name, image, flavor_id in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', '5cf64088-893b-46b5-9bb1-ee020277635d'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '0dab10b5-42a2-438e-be7b-505741a7ffcc'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. server = cloud.create_server( 'my-server', image=image, flavor=dict(id=flavor_id), wait=True, auto_ip=True) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) shade-1.31.0/doc/source/user/examples/user-agent.py0000666000175000017500000000025413440327640022175 0ustar zuulzuul00000000000000import shade shade.simple_logging(http_debug=True) cloud = shade.openstack_cloud( cloud='datacentred', app_name='AmazingApp', app_version='1.0') cloud.list_networks() shade-1.31.0/doc/source/user/examples/service-conditional-overrides.py0000666000175000017500000000022113440327640026056 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='rax', region_name='DFW') print(cloud.has_service('network')) shade-1.31.0/doc/source/user/examples/upload-object.py0000666000175000017500000000052313440327640022652 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d', segment_size=1000000) cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') shade-1.31.0/doc/source/user/examples/strict-mode.py0000666000175000017500000000036013440327640022353 0ustar zuulzuul00000000000000import shade shade.simple_logging() cloud = shade.openstack_cloud( cloud='fuga', region_name='cystack', strict=True) image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) shade-1.31.0/doc/source/user/examples/service-conditionals.py0000666000175000017500000000031213440327640024242 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='kiss', region_name='region1') print(cloud.has_service('network')) print(cloud.has_service('container-orchestration')) shade-1.31.0/doc/source/user/examples/normalization.py0000666000175000017500000000033613440327640023012 0ustar zuulzuul00000000000000import shade shade.simple_logging() cloud = shade.openstack_cloud(cloud='fuga', region_name='cystack') image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) 
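# Illustrative follow-up (a sketch): the normalized image is a munch.Munch,
# so the documented fields work with attribute or dict access, and any
# cloud-specific extras are collected under the 'properties' key.
# get_image returns None when nothing matches, so guard before poking at it.
if image is not None:
    print(image.visibility)
    print(sorted(image['properties'].keys()))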
shade-1.31.0/doc/source/user/examples/cleanup-servers.py0000666000175000017500000000067313440327640023246 0ustar zuulzuul00000000000000import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) for server in cloud.search_servers('my-server'): cloud.delete_server(server, wait=True, delete_ips=True) shade-1.31.0/doc/source/user/examples/upload-large-object.py0000666000175000017500000000052313440327640023742 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d', segment_size=1000000) cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') shade-1.31.0/doc/source/user/examples/find-an-image.py0000666000175000017500000000031213440327640022512 0ustar zuulzuul00000000000000import shade shade.simple_logging() cloud = shade.openstack_cloud(cloud='fuga', region_name='cystack') cloud.pprint([ image for image in cloud.list_images() if 'ubuntu' in image.name.lower()]) shade-1.31.0/doc/source/user/examples/debug-logging.py0000666000175000017500000000026213440327640022634 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') shade-1.31.0/doc/source/user/examples/munch-dict-object.py0000666000175000017500000000027513440327640023425 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='ovh', region_name='SBG1') image = cloud.get_image('Ubuntu 16.10') print(image.name) print(image['name']) shade-1.31.0/doc/source/user/examples/create-server-name-or-id.py0000666000175000017500000000167313440327640024626 0ustar zuulzuul00000000000000import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name, image, flavor in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', 'v1-standard-4'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '4C-4GB-100GB'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) cloud.delete_server('my-server', wait=True, delete_ips=True) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. 
server = cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) print(server.name) print(server['name']) cloud.pprint(server) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) shade-1.31.0/doc/source/user/examples/server-information.py0000666000175000017500000000123713440327640023756 0ustar zuulzuul00000000000000import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='my-citycloud', region_name='Buf1') server = None try: server = cloud.create_server( 'my-server', image='Ubuntu 16.04 Xenial Xerus', flavor=dict(id='0dab10b5-42a2-438e-be7b-505741a7ffcc'), wait=True, auto_ip=True) print("\n\nFull Server\n\n") cloud.pprint(server) print("\n\nTurn Detailed Off\n\n") cloud.pprint(cloud.get_server('my-server', detailed=False)) print("\n\nBare Server\n\n") cloud.pprint(cloud.get_server('my-server', bare=True)) finally: # Delete it - this is a demo; create_server may have failed above if server: cloud.delete_server(server, wait=True, delete_ips=True) shade-1.31.0/doc/source/user/examples/http-debug-logging.py0000666000175000017500000000026713440327640023616 0ustar zuulzuul00000000000000import shade shade.simple_logging(http_debug=True) cloud = shade.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') shade-1.31.0/doc/source/user/usage.rst0000666000175000017500000000106613440327640017573 0ustar zuulzuul00000000000000===== Usage ===== To use shade in a project: .. code-block:: python import shade For a simple example, see :ref:`usage_example`. .. note:: API methods that return a description of an OpenStack resource (e.g., server instance, image, volume, etc.) do so using a `munch.Munch` object from the `Munch library <https://github.com/Infinidat/munch>`_. `Munch` objects can be accessed using either dictionary or object notation (e.g., ``server.id``, ``image.name`` and ``server['id']``, ``image['name']``) .. autoclass:: shade.OpenStackCloud :members: shade-1.31.0/doc/source/user/index.rst0000666000175000017500000000032513440327640017573 0ustar zuulzuul00000000000000================== Shade User Guide ================== .. toctree:: :maxdepth: 2 usage logging model microversions Presentations ============= .. toctree:: :maxdepth: 1 multi-cloud-demo shade-1.31.0/doc/source/user/microversions.rst0000666000175000017500000001006313440327640021366 0ustar zuulzuul00000000000000============= Microversions ============= As shade rolls out support for consuming microversions, it will do so on a call by call basis as needed. Just like with major versions, shade should have logic to handle each microversion for a given REST call it makes, with the following rules in mind: * If an activity shade performs can be done differently or more efficiently with a new microversion, the support should be added to shade. * shade should always attempt to use the latest microversion it is aware of for a given call, unless a microversion removes important data. * Microversion selection should under no circumstances be exposed to the user, except in the case of missing feature error messages. * If a feature is only exposed for a given microversion and cannot be simulated for older clouds without that microversion, it is ok to add it to shade but a clear error message should be given to the user that the given feature is not available on their cloud. (A message such as "This cloud only supports a maximum microversion of XXX for service YYY and this feature only exists on clouds with microversion ZZZ.
Please contact your cloud provider for information about when this feature might be available") * When adding a feature to shade that only exists behind a new microversion, every effort should be made to figure out how to provide the same functionality if at all possible, even if doing so is inefficient. If an inefficient workaround is employed, a warning should be provided to the user. (The user's workaround to skip the inefficient behavior would be to stop using that shade API call) * If shade is aware of logic for more than one microversion, it should always attempt to use the latest version available for the service for that call. * Objects returned from shade should always go through normalization and thus should always conform to shade's documented data model and should never look different to the shade user regardless of the microversion used for the REST call. * If a microversion adds new fields to an object, those fields should be added to shade's data model contract for that object and the data should either be filled in by performing additional REST calls if the data is available that way, or the field should have a default value of None which the user can be expected to test for when attempting to use the new value. * If a microversion removes fields from an object that are part of shade's existing data model contract, care should be taken to not use the new microversion for that call unless forced to by lack of availability of the old microversion on the cloud in question. In the case where an old microversion is no longer available, care must be taken to either find the data from another source and fill it in, or to put a value of None into the field and document for the user that on some clouds the value may not exist. * If a microversion removes a field and the outcome is particularly intractable and impossible to work around without fundamentally breaking shade's users, an issue should be raised with the service team in question. Hopefully a resolution can be found during the period while clouds still have the old microversion. * As new calls or objects are added to shade, it is important to check in with the service team in question on the expected stability of the object. If there are known changes expected in the future, even if they may be a few years off, shade should take care to not add commitments to its data model for those fields/features. It is ok for shade to not have something. .. note:: shade does not currently have any sort of "experimental" opt-in API that would allow shade to expose things to a user that may not be supportable under shade's normal compatibility contract. If a conflict arises in the future where there is a strong desire for a feature but also a lack of certainty about its stability over time, an experimental API may want to be explored ... but concrete use cases should arise before such a thing is started. shade-1.31.0/doc/source/user/multi-cloud-demo.rst0000666000175000017500000005174513440327640021650 0ustar zuulzuul00000000000000================ Multi-Cloud Demo ================ This document contains a presentation in `presentty`_ format. If you want to walk through it like a presentation, install `presentty` and run: .. code:: bash presentty doc/source/user/multi-cloud-demo.rst The content is hopefully helpful even if it's not being narrated, so it's being included in the `shade` docs. ..
_presentty: https://pypi.org/project/presentty Using Multiple OpenStack Clouds Easily with Shade ================================================= Who am I? ========= Monty Taylor * OpenStack Infra Core * irc: mordred * twitter: @e_monty What are we going to talk about? ================================ `shade` * a task and end-user oriented Python library * abstracts deployment differences * designed for multi-cloud * simple to use * massive scale * optional advanced features to handle 20k servers a day * Initial logic/design extracted from nodepool * Librified to re-use in Ansible shade is Free Software ====================== * https://git.openstack.org/cgit/openstack-infra/shade * openstack-dev@lists.openstack.org * #openstack-shade on freenode This talk is Free Software, too =============================== * Written for presentty (https://pypi.org/project/presentty) * doc/source/multi-cloud-demo.rst * examples in doc/source/examples * Paths subject to change - this is the first presentation in tree! Complete Example ================ .. code:: python import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) # Upload an image to the cloud image = cloud.create_image( 'devuan-jessie', filename='devuan-jessie.qcow2', wait=True) # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Let's Take a Few Steps Back =========================== Multi-cloud is easy, but you need to know a few things. * Terminology * Config * Shade API Cloud Terminology ================= Let's define a few terms, so that we can use them with ease: * `cloud` - logically related collection of services * `region` - completely independent subset of a given cloud * `patron` - human who has an account * `user` - account on a cloud * `project` - logical collection of cloud resources * `domain` - collection of users and projects Cloud Terminology Relationships =============================== * A `cloud` has one or more `regions` * A `patron` has one or more `users` * A `patron` has one or more `projects` * A `cloud` has one or more `domains` * In a `cloud` with one `domain` it is named "default" * Each `patron` may have their own `domain` * Each `user` is in one `domain` * Each `project` is in one `domain` * A `user` has one or more `roles` on one or more `projects` HTTP Sessions ============= * HTTP interactions are authenticated via keystone * Authenticating returns a `token` * An authenticated HTTP Session is shared across a `region` Cloud Regions ============= A `cloud region` is the basic unit of REST interaction. * A `cloud` has a `service catalog` * The `service catalog` is returned in the `token` * The `service catalog` lists `endpoint` for each `service` in each `region` * A `region` is completely autonomous Users, Projects and Domains =========================== In clouds with multiple domains, project and user names are only unique within a domain. * Names require `domain` information for uniqueness. IDs do not. * Providing `domain` information when not needed is fine.
* `project_name` requires `project_domain_name` or `project_domain_id` * `project_id` does not * `username` requires `user_domain_name` or `user_domain_id` * `user_id` does not Confused Yet? ============= Don't worry - you don't have to deal with most of that. Auth per cloud, select per region ================================= In general, the thing you need to know is: * Configure authentication per `cloud` * Select config to use by `cloud` and `region` clouds.yaml =========== Information about the clouds you want to connect to is stored in a file called `clouds.yaml`. `clouds.yaml` can be in your homedir: `~/.config/openstack/clouds.yaml` or system-wide: `/etc/openstack/clouds.yaml`. Information in your homedir, if it exists, takes precedence. Full docs on `clouds.yaml` are at https://docs.openstack.org/os-client-config/latest/ What about Mac and Windows? =========================== `USER_CONFIG_DIR` is different on Linux, OSX and Windows. * Linux: `~/.config/openstack` * OSX: `~/Library/Application Support/openstack` * Windows: `C:\\Users\\USERNAME\\AppData\\Local\\OpenStack\\openstack` `SITE_CONFIG_DIR` is different on Linux, OSX and Windows. * Linux: `/etc/openstack` * OSX: `/Library/Application Support/openstack` * Windows: `C:\\ProgramData\\OpenStack\\openstack` Config Terminology ================== For multi-cloud, think of two types: * `profile` - Facts about the `cloud` that are true for everyone * `cloud` - Information specific to a given `user` Apologies for the use of `cloud` twice. Environment Variables and Simple Usage ====================================== * Environment variables starting with `OS_` go into a cloud called `envvars` * If you only have one cloud, you don't have to specify it * `OS_CLOUD` and `OS_REGION_NAME` are default values for `cloud` and `region_name` TOO MUCH TALKING - NOT ENOUGH CODE ================================== basic clouds.yaml for the example code ====================================== Simple example of a clouds.yaml * Config for a named `cloud` "my-citycloud" * Reference a well-known "named" profile: `citycloud` * `os-client-config` has a built-in list of profiles at https://docs.openstack.org/os-client-config/latest/user/vendor-support.html * Vendor profiles contain various advanced config * `cloud` name can match `profile` name (using different names for clarity) .. code:: yaml clouds: my-citycloud: profile: citycloud auth: username: mordred project_id: 65222a4d09ea4c68934fa1028c77f394 user_domain_id: d0919bd5e8d74e49adf0e145807ffc38 project_domain_id: d0919bd5e8d74e49adf0e145807ffc38 Where's the password? secure.yaml =========== * Optional additional file just like `clouds.yaml` * Values overlaid on `clouds.yaml` * Useful if you want to protect secrets more stringently Example secure.yaml =================== * No, my password isn't XXXXXXXX * `cloud` name should match `clouds.yaml` * Optional - I actually keep mine in my `clouds.yaml` .. code:: yaml clouds: my-citycloud: auth: password: XXXXXXXX more clouds.yaml ================ More information can be provided. * Use v3 of the `identity` API - even if others are present * Use `https://image-ca-ymq-1.vexxhost.net/v2` for `image` API instead of what's in the catalog .. 
code:: yaml my-vexxhost: identity_api_version: 3 image_endpoint_override: https://image-ca-ymq-1.vexxhost.net/v2 profile: vexxhost auth: user_domain_id: default project_domain_id: default project_name: d8af8a8f-a573-48e6-898a-af333b970a2d username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1 Much more complex clouds.yaml example ===================================== * Not using a profile - all settings included * In the `ams01` `region` there are two networks with undiscoverable qualities * Each one is labeled here so choices can be made * Any of the settings can be specific to a `region` if needed * `region` settings override `cloud` settings * `cloud` does not support `floating-ips` .. code:: yaml my-internap: auth: auth_url: https://identity.api.cloud.iweb.com username: api-55f9a00fb2619 project_name: inap-17037 identity_api_version: 3 floating_ip_source: None regions: - name: ams01 values: networks: - name: inap-17037-WAN1654 routes_externally: true default_interface: true - name: inap-17037-LAN3631 routes_externally: false Complete Example Again ====================== .. code:: python import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) # Upload an image to the cloud image = cloud.create_image( 'devuan-jessie', filename='devuan-jessie.qcow2', wait=True) # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Step By Step ============ Import the library ================== .. code:: python import shade Logging ======= * `shade` uses standard Python logging * Special `shade.request_ids` logger for API request IDs * `simple_logging` does easy defaults * Squelches some meaningless warnings * `debug` * Logs shade loggers at debug level * Includes `shade.request_ids` debug logging * `http_debug` Implies `debug`, turns on HTTP tracing .. code:: python # Initialize and turn on debug logging shade.simple_logging(debug=True) Example with Debug Logging ========================== * doc/source/examples/debug-logging.py .. code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') Example with HTTP Debug Logging =============================== * doc/source/examples/http-debug-logging.py .. code:: python import shade shade.simple_logging(http_debug=True) cloud = shade.openstack_cloud( cloud='my-vexxhost', region_name='ca-ymq-1') cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]') Cloud Regions ============= * `cloud` constructor needs `cloud` and `region_name` * `shade.openstack_cloud` is a helper factory function .. code:: python for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) Upload an Image =============== * Picks the correct upload mechanism * **SUGGESTION** Always upload your own base images .. code:: python # Upload an image to the cloud image = cloud.create_image( 'devuan-jessie', filename='devuan-jessie.qcow2', wait=True) Always Upload an Image ====================== Ok.
You don't have to. But, for multi-cloud... * Images with same content are named differently on different clouds * Images with same name on different clouds can have different content * Upload your own to all clouds, both problems go away * Download from OS vendor or build with `diskimage-builder` Find a flavor ============= * Flavors are all named differently on clouds * Flavors can be found via RAM * `get_flavor_by_ram` finds the smallest matching flavor .. code:: python # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) Create a server =============== * my-vexxhost * Boot server * Wait for `status==ACTIVE` * my-internap * Boot server on network `inap-17037-WAN1654` * Wait for `status==ACTIVE` * my-citycloud * Boot server * Wait for `status==ACTIVE` * Find the `port` for the `fixed_ip` for `server` * Create `floating-ip` on that `port` * Wait for `floating-ip` to attach .. code:: python # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Wow. We didn't even deploy Wordpress! ===================================== Image and Flavor by Name or ID ============================== * Pass string to image/flavor * Image/Flavor will be found by name or ID * Common pattern * doc/source/examples/create-server-name-or-id.py .. code:: python import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name, image, flavor in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', 'v1-standard-4'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '4C-4GB-100GB'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. server = cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) print(server.name) print(server['name']) cloud.pprint(server) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) cloud.pprint method was just added this morning =============================================== Delete Servers ============== * `delete_ips` Delete any `floating_ips` the server may have .. code:: python cloud.delete_server('my-server', wait=True, delete_ips=True) Image and Flavor by Dict ======================== * Pass dict to image/flavor * If you know whether the value is Name or ID * Common pattern * doc/source/examples/create-server-dict.py .. code:: python import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name, image, flavor_id in [ ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]', '5cf64088-893b-46b5-9bb1-ee020277635d'), ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus', '0dab10b5-42a2-438e-be7b-505741a7ffcc'), ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. server = cloud.create_server( 'my-server', image=image, flavor=dict(id=flavor_id), wait=True, auto_ip=True) # Delete it - this is a demo cloud.delete_server(server, wait=True, delete_ips=True) Munch Objects ============= * Behave like a dict and an object * doc/source/examples/munch-dict-object.py ..
code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='zetta', region_name='no-osl1') image = cloud.get_image('Ubuntu 14.04 (AMD64) [Local Storage]') print(image.name) print(image['name']) API Organized by Logical Resource ================================= * list_servers * search_servers * get_server * create_server * delete_server * update_server For other things, it's still {verb}_{noun} * attach_volume * wait_for_server * add_auto_ip Cleanup Script ============== * Sometimes my examples had bugs * doc/source/examples/cleanup-servers.py .. code:: python import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) for cloud_name, region_name in [ ('my-vexxhost', 'ca-ymq-1'), ('my-citycloud', 'Buf1'), ('my-internap', 'ams01')]: # Initialize cloud cloud = shade.openstack_cloud(cloud=cloud_name, region_name=region_name) for server in cloud.search_servers('my-server'): cloud.delete_server(server, wait=True, delete_ips=True) Normalization ============= * https://docs.openstack.org/shade/latest/user/model.html#image * doc/source/examples/normalization.py .. code:: python import shade shade.simple_logging() cloud = shade.openstack_cloud(cloud='fuga', region_name='cystack') image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) Strict Normalized Results ========================= * Return only the declared model * doc/source/examples/strict-mode.py .. code:: python import shade shade.simple_logging() cloud = shade.openstack_cloud( cloud='fuga', region_name='cystack', strict=True) image = cloud.get_image( 'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image') cloud.pprint(image) How Did I Find the Image Name for the Last Example? =================================================== * I often make stupid little utility scripts * doc/source/examples/find-an-image.py .. code:: python import shade shade.simple_logging() cloud = shade.openstack_cloud(cloud='fuga', region_name='cystack') cloud.pprint([ image for image in cloud.list_images() if 'ubuntu' in image.name.lower()]) Added / Modified Information ============================ * Servers need more extra help * Fetch addresses dict from neutron * Figure out which IPs are good * `detailed` - defaults to True, add everything * `bare` - no extra calls - don't even fix broken things * `bare` is still normalized * doc/source/examples/server-information.py .. code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='my-citycloud', region_name='Buf1') server = None try: server = cloud.create_server( 'my-server', image='Ubuntu 16.04 Xenial Xerus', flavor=dict(id='0dab10b5-42a2-438e-be7b-505741a7ffcc'), wait=True, auto_ip=True) print("\n\nFull Server\n\n") cloud.pprint(server) print("\n\nTurn Detailed Off\n\n") cloud.pprint(cloud.get_server('my-server', detailed=False)) print("\n\nBare Server\n\n") cloud.pprint(cloud.get_server('my-server', bare=True)) finally: # Delete it - this is a demo; create_server may have failed above if server: cloud.delete_server(server, wait=True, delete_ips=True) Exceptions ========== * All shade exceptions are subclasses of `OpenStackCloudException` * Direct REST calls throw `OpenStackCloudHTTPError` * `OpenStackCloudHTTPError` subclasses `OpenStackCloudException` and `requests.exceptions.HTTPError` * `OpenStackCloudURINotFound` for 404 * `OpenStackCloudBadRequest` for 400 User Agent Info =============== * Set `app_name` and `app_version` for User Agents * (sssh ...
`region_name` is optional if the cloud has one region) * doc/source/examples/user-agent.py .. code:: python import shade shade.simple_logging(http_debug=True) cloud = shade.openstack_cloud( cloud='datacentred', app_name='AmazingApp', app_version='1.0') cloud.list_networks() Uploading Large Objects ======================= * swift has a maximum object size * Large Objects are uploaded specially * shade figures this out and does it * multi-threaded * doc/source/examples/upload-object.py .. code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d') cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') Uploading Large Objects ======================= * Default max_file_size is 5G * This is a conference demo * Let's force a segment_size * One MILLION bytes * doc/source/examples/upload-object.py .. code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='ovh', region_name='SBG1') cloud.create_object( container='my-container', name='my-object', filename='/home/mordred/briarcliff.sh3d', segment_size=1000000) cloud.delete_object('my-container', 'my-object') cloud.delete_container('my-container') Service Conditionals ==================== .. code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='kiss', region_name='region1') print(cloud.has_service('network')) print(cloud.has_service('container-orchestration')) Service Conditional Overrides ============================= * Sometimes clouds are weird and figuring that out won't work .. code:: python import shade shade.simple_logging(debug=True) cloud = shade.openstack_cloud(cloud='rax', region_name='DFW') print(cloud.has_service('network')) .. code:: yaml clouds: rax: profile: rackspace auth: username: mordred project_id: 245018 # This is already in profile: rackspace has_network: false Coming Soon =========== * Completion of RESTification * Full version discovery support * Multi-cloud facade layer * Microversion support (talk tomorrow) * Completion of caching tier (talk tomorrow) * All of you helping hacking on shade!!! (we're friendly) shade-1.31.0/doc/source/user/logging.rst0000666000175000017500000000645013440327640020117 0ustar zuulzuul00000000000000======= Logging ======= `shade` uses `Python Logging`_. As `shade` is a library, it does not configure logging handlers automatically, expecting instead for that to be the purview of the consuming application. Simple Usage ------------ For consumers who just want to get a basic logging setup without thinking about it too deeply, there is a helper method. If used, it should be called before any other `shade` functionality. .. code-block:: python import shade shade.simple_logging() `shade.simple_logging` takes two optional boolean arguments: debug Turns on debug logging. http_debug Turns on debug logging as well as debug logging of the underlying HTTP calls. `shade.simple_logging` also sets up a few other loggers and squelches some warnings or log messages that are otherwise uninteresting or unactionable by a `shade` user. Advanced Usage -------------- `shade` logs to a set of different named loggers. Most of the logging is set up to log to the root `shade` logger. There are additional sub-loggers that are used at times, primarily so that a user can decide to turn on or off a specific type of logging. They are listed below. 
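As a minimal sketch of tuning them by hand with nothing but the standard library (the logger names used here are the ones documented below):

.. code-block:: python

    import logging

    # Configure handlers yourself instead of calling shade.simple_logging().
    logging.basicConfig(level=logging.INFO)
    # Turn on the Task trace for external actions...
    logging.getLogger('shade.task_manager').setLevel(logging.DEBUG)
    # ...but silence the chatty per-iteration polling messages.
    logging.getLogger('shade.iterate_timeout').setLevel(logging.WARNING)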
shade.task_manager `shade` uses a Task Manager to perform remote calls. The `shade.task_manager` logger emits messages at the start and end of each Task announcing what it is going to run and then what it ran and how long it took. Logging `shade.task_manager` is a good way to get a trace of external actions shade is taking without full `HTTP Tracing`_. shade.request_ids The `shade.request_ids` logger emits a log line at the end of each HTTP interaction with the OpenStack Request ID associated with the interaction. This can be useful for tracking actions taken on the server side if one does not want `HTTP Tracing`_. shade.iterate_timeout When `shade` needs to poll a resource, it does so in a loop that waits between iterations and ultimately times out. The `shade.iterate_timeout` logger emits messages for each iteration indicating it is waiting and for how long. These can be useful to see for long-running tasks so that one can know things are not stuck, but can also be noisy. shade.http `shade` will sometimes log additional information about HTTP interactions to the `shade.http` logger. This can be verbose, as it sometimes logs entire response bodies. shade.fnmatch `shade` will try to use `fnmatch`_ on given `name_or_id` arguments. It's a best-effort attempt, so pattern misses are logged to `shade.fnmatch`. A user may not be intending to use an fnmatch pattern - such as if they are trying to find an image named ``Fedora 24 [official]``, so these messages are logged separately. .. _fnmatch: https://pymotw.com/2/fnmatch/ HTTP Tracing ------------ HTTP Interactions are handled by `keystoneauth`. If you want to enable HTTP tracing while using `shade` and are not using `shade.simple_logging`, set the log level of the `keystoneauth` logger to `DEBUG`. Python Logging -------------- Python logging is a standard feature of Python and is documented fully in the Python Documentation, which varies by version of Python. For more information on Python Logging for Python v2, see https://docs.python.org/2/library/logging.html. For more information on Python Logging for Python v3, see https://docs.python.org/3/library/logging.html. shade-1.31.0/doc/source/user/model.rst0000666000175000017500000002766113440327640017574 0ustar zuulzuul00000000000000========== Data Model ========== shade has a very strict policy on not breaking backward compatibility ever. However, with the data structures returned from OpenStack, there are places where the resource structures from OpenStack are returned to the user somewhat directly, leaving a shade user open to changes/differences in result content. To combat that, shade 'normalizes' the return structure from OpenStack in many places, and the results of that normalization are listed below. Where shade performs normalization, a user can count on any fields declared in the docs as being completely safe to use - they are as much a part of shade's API contract as any other Python method. Some OpenStack objects allow for arbitrary attributes at the root of the object. shade will pass those through so as not to break anyone who may be counting on them, but as they are arbitrary shade can make no guarantees as to their existence. As part of normalization, shade will put any attribute from an OpenStack resource that is not in its data model contract into an attribute called 'properties'. The contents of properties are defined to be an arbitrary collection of key value pairs with no promises as to any particular key ever existing.
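For example, code that needs a cloud-specific value can reach into ``properties`` while treating the key as optional (a sketch; the ``os_distro`` key here is hypothetical and cloud-dependent):

.. code-block:: python

    image = cloud.get_image('my-image')
    # Keys under 'properties' are cloud-specific; never assume one exists.
    distro = image['properties'].get('os_distro')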
If a user passes `strict=True` to the shade constructor, shade will not pass through arbitrary objects to the root of the resource, and will instead only put them in the properties dict. If a user is worried about accidentally writing code that depends on an attribute that is not part of the API contract, this can be a useful tool. Keep in mind all data can still be accessed via the properties dict, but any code touching anything in the properties dict should be aware that the keys found there are highly user/cloud specific. Any key that is transformed as part of the shade data model contract will not wind up with an entry in properties - only keys that are unknown. Location -------- A Location defines where a resource lives. It includes a cloud name and a region name, an availability zone as well as information about the project that owns the resource. The project information may contain a project id, or a combination of one or more of a project name with a domain name or id. If a project id is present, it should be considered correct. Some resources do not carry ownership information with them. For those, the project information will be filled in from the project the user currently has a token for. Some resources do not have information about availability zones, or may exist region wide. Those resources will have None as their availability zone. .. code-block:: python Location = dict( cloud=str(), region=str(), zone=str() or None, project=dict( id=str() or None, name=str() or None, domain_id=str() or None, domain_name=str() or None)) Resources ========= Flavor ------ A flavor for a Nova Server. .. code-block:: python Flavor = dict( location=Location(), id=str(), name=str(), is_public=bool(), is_disabled=bool(), ram=int(), vcpus=int(), disk=int(), ephemeral=int(), swap=int(), rxtx_factor=float(), extra_specs=dict(), properties=dict()) Flavor Access ------------- An access entry for a Nova Flavor. .. code-block:: python FlavorAccess = dict( flavor_id=str(), project_id=str()) Image ----- A Glance Image. .. code-block:: python Image = dict( location=Location(), id=str(), name=str(), min_ram=int(), min_disk=int(), size=int(), virtual_size=int(), container_format=str(), disk_format=str(), checksum=str(), created_at=str(), updated_at=str(), owner=str(), is_public=bool(), is_protected=bool(), visibility=str(), status=str(), locations=list(), direct_url=str() or None, tags=list(), properties=dict()) Keypair ------- A keypair for a Nova Server. .. code-block:: python Keypair = dict( location=Location(), name=str(), id=str(), public_key=str(), fingerprint=str(), type=str(), user_id=str(), private_key=str() or None, properties=dict()) Security Group -------------- A Security Group from either Nova or Neutron .. code-block:: python SecurityGroup = dict( location=Location(), id=str(), name=str(), description=str(), security_group_rules=list(), properties=dict()) Security Group Rule ------------------- A Security Group Rule from either Nova or Neutron .. code-block:: python SecurityGroupRule = dict( location=Location(), id=str(), direction=str(), # oneof('ingress', 'egress') ethertype=str(), port_range_min=int() or None, port_range_max=int() or None, protocol=str() or None, remote_ip_prefix=str() or None, security_group_id=str() or None, remote_group_id=str() or None, properties=dict()) Server ------ A Server from Nova ..
code-block:: python Server = dict( location=Location(), id=str(), name=str(), image=dict() or str(), flavor=dict(), volumes=list(), # Volume interface_ip=str(), has_config_drive=bool(), accessIPv4=str(), accessIPv6=str(), addresses=dict(), # string, list(Address) created=str(), created_at=str(), key_name=str(), metadata=dict(), # string, string private_v4=str(), progress=int(), public_v4=str(), public_v6=str(), security_groups=list(), # SecurityGroup status=str(), updated=str(), user_id=str(), host_id=str() or None, power_state=str() or None, task_state=str() or None, vm_state=str() or None, launched_at=str() or None, terminated_at=str() or None, properties=dict()) ComputeLimits ------------- Limits and current usage for a project in Nova .. code-block:: python ComputeLimits = dict( location=Location(), max_personality=int(), max_personality_size=int(), max_server_group_members=int(), max_server_groups=int(), max_server_meta=int(), max_total_cores=int(), max_total_instances=int(), max_total_keypairs=int(), max_total_ram_size=int(), total_cores_used=int(), total_instances_used=int(), total_ram_used=int(), total_server_groups_used=int(), properties=dict()) ComputeUsage ------------ Current usage for a project in Nova .. code-block:: python ComputeUsage = dict( location=Location(), started_at=str(), stopped_at=str(), server_usages=list(), max_personality=int(), max_personality_size=int(), max_server_group_members=int(), max_server_groups=int(), max_server_meta=int(), max_total_cores=int(), max_total_instances=int(), max_total_keypairs=int(), max_total_ram_size=int(), total_cores_used=int(), total_hours=int(), total_instances_used=int(), total_local_gb_usage=int(), total_memory_mb_usage=int(), total_ram_used=int(), total_server_groups_used=int(), total_vcpus_usage=int(), properties=dict()) ServerUsage ----------- Current usage for a server in Nova .. code-block:: python ServerUsage = dict( started_at=str(), ended_at=str(), flavor=str(), hours=int(), instance_id=str(), local_gb=int(), memory_mb=int(), name=str(), state=str(), uptime=int(), vcpus=int(), properties=dict()) Floating IP ----------- A Floating IP from Neutron or Nova .. code-block:: python FloatingIP = dict( location=Location(), id=str(), description=str(), attached=bool(), fixed_ip_address=str() or None, floating_ip_address=str() or None, network=str() or None, port=str() or None, router=str(), status=str(), created_at=str() or None, updated_at=str() or None, revision_number=int() or None, properties=dict()) Volume ------ A volume from cinder. .. code-block:: python Volume = dict( location=Location(), id=str(), name=str(), description=str(), size=int(), attachments=list(), status=str(), migration_status=str() or None, host=str() or None, replication_driver=str() or None, replication_status=str() or None, replication_extended_status=str() or None, snapshot_id=str() or None, created_at=str(), updated_at=str() or None, source_volume_id=str() or None, consistencygroup_id=str() or None, volume_type=str() or None, metadata=dict(), is_bootable=bool(), is_encrypted=bool(), can_multiattach=bool(), properties=dict()) VolumeType ---------- A volume type from cinder. .. code-block:: python VolumeType = dict( location=Location(), id=str(), name=str(), description=str() or None, is_public=bool(), qos_specs_id=str() or None, extra_specs=dict(), properties=dict()) VolumeTypeAccess ---------------- A volume type access from cinder. ..
code-block:: python VolumeTypeAccess = dict( location=Location(), volume_type_id=str(), project_id=str(), properties=dict()) ClusterTemplate --------------- A Cluster Template from magnum. .. code-block:: python ClusterTemplate = dict( location=Location(), apiserver_port=int(), cluster_distro=str(), coe=str(), created_at=str(), dns_nameserver=str(), docker_volume_size=int(), external_network_id=str(), fixed_network=str() or None, flavor_id=str(), http_proxy=str() or None, https_proxy=str() or None, id=str(), image_id=str(), insecure_registry=str(), is_public=bool(), is_registry_enabled=bool(), is_tls_disabled=bool(), keypair_id=str(), labels=dict(), master_flavor_id=str() or None, name=str(), network_driver=str(), no_proxy=str() or None, server_type=str(), updated_at=str() or None, volume_driver=str(), properties=dict()) MagnumService ------------- A Magnum Service from magnum .. code-block:: python MagnumService = dict( location=Location(), binary=str(), created_at=str(), disabled_reason=str() or None, host=str(), id=str(), report_count=int(), state=str(), properties=dict()) Stack ----- A Stack from Heat .. code-block:: python Stack = dict( location=Location(), id=str(), name=str(), created_at=str(), deleted_at=str(), updated_at=str(), description=str(), action=str(), identifier=str(), is_rollback_enabled=bool(), notification_topics=list(), outputs=list(), owner=str(), parameters=dict(), parent=str(), stack_user_project_id=str(), status=str(), status_reason=str(), tags=dict(), tempate_description=str(), timeout_mins=int(), properties=dict()) Identity Resources ================== Identity Resources are slightly different. They are global to a cloud, so location.availability_zone and location.region_name will always be None. If a deployer happens to deploy OpenStack in such a way that users and projects are not shared amongst regions, that necessitates treating each of those regions as separate clouds from shade's POV. The Identity Resources that are not Project do not exist within a Project, so all of the values in ``location.project`` will be None. Project ------- A Project from Keystone (or a tenant if Keystone v2) Location information for Project has some additional specific semantics. If the project has a parent project, that will be in ``location.project.id``, and if it doesn't, that should be ``None``. If the Project is associated with a domain, that will be in ``location.project.domain_id`` in addition to the normal ``domain_id`` regardless of the current user's token scope. .. code-block:: python Project = dict( location=Location(), id=str(), name=str(), description=str(), is_enabled=bool(), is_domain=bool(), domain_id=str(), properties=dict()) Role ---- A Role from Keystone .. code-block:: python Role = dict( location=Location(), id=str(), name=str(), domain_id=str(), properties=dict()) shade-1.31.0/doc/source/contributor/0000775000175000017500000000000013440330010017305 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/contributor/contributing.rst0000666000175000017500000000004713440327640022570 0ustar zuulzuul00000000000000.. include:: ../../../CONTRIBUTING.rst shade-1.31.0/doc/source/contributor/index.rst0000666000175000017500000000020613440327640021165 0ustar zuulzuul00000000000000========================= Shade Contributor Guide ========================= ..
toctree:: :maxdepth: 2 contributing coding shade-1.31.0/doc/source/contributor/coding.rst0000666000175000017500000001060713440327640021327 0ustar zuulzuul00000000000000******************************** Shade Developer Coding Standards ******************************** In the beginning, there were no guidelines. And it was good. But that didn't last long. As more and more people added more and more code, we realized that we needed a set of coding standards to make sure that the shade API at least *attempted* to display some form of consistency. Thus, these coding standards/guidelines were developed. Note that not all of shade adheres to these standards just yet. Some older code has not been updated because we need to maintain backward compatibility. Some of it just hasn't been changed yet. But be clear, all new code *must* adhere to these guidelines. Below are the patterns that we expect Shade developers to follow. Release Notes ============= Shade uses `reno <https://docs.openstack.org/reno/latest/>`_ for managing its release notes. A new release note should be added to your contribution anytime you add new API calls, fix significant bugs, add new functionality or parameters to existing API calls, or make any other significant changes to the code base that we should draw attention to for the user base. It is *not* necessary to add release notes for minor fixes, such as correction of documentation typos, minor code cleanup or reorganization, or any other change that a user would not notice through normal usage. API Methods =========== - When an API call acts on a resource that has both a unique ID and a name, that API call should accept either identifier with a name_or_id parameter. - All resources should adhere to the get/list/search interface that controls retrieval of those resources. E.g., `get_image()`, `list_images()`, `search_images()`. - Resources should have `create_RESOURCE()`, `delete_RESOURCE()`, `update_RESOURCE()` API methods (as it makes sense). - For those methods that should behave differently for omitted or None-valued parameters, use the `_utils.valid_kwargs` decorator. Notably: all Neutron `update_*` functions. - Deleting a resource should return True if the delete succeeded, or False if the resource was not found. Exceptions ========== All underlying client exceptions must be captured and converted to an `OpenStackCloudException` or one of its derivatives. REST Calls ============ All interactions with the cloud should be done with direct REST using the appropriate `keystoneauth1.adapter.Adapter`. See Glance and Swift calls for examples. Returned Resources ================== Complex objects returned to the caller must be a `munch.Munch` type. The `shade._adapter.Adapter` class makes resources into `munch.Munch`. All objects should be normalized. It is shade's purpose in life to make OpenStack consistent for end users, and this means not trusting the clouds to return consistent objects. There should be a normalize function in `shade/_normalize.py` that is applied to objects before returning them to the user. See :doc:`../user/model` for further details on object model requirements. Fields should not be in the normalization contract if we cannot commit to providing them to all users. Fields should be renamed in normalization to be consistent with the rest of shade. For instance, nothing in shade exposes the legacy OpenStack concept of "tenant" to a user, but instead uses "project" even if the cloud uses tenant. Nova vs.
Neutron ================ - Recognize that not all cloud providers support Neutron, so never assume it will be present. If a task can be handled by either Neutron or Nova, code it to be handled by either. - For methods that accept either a Nova pool or Neutron network, the parameter should just refer to the network, but documentation of it should explain about the pool. See: `create_floating_ip()` and `available_floating_ip()` methods. Tests ===== - New API methods *must* have unit tests! - New unit tests should only mock at the REST layer using `requests_mock`. Any mocking of shade itself or of legacy client libraries should be considered legacy and to be avoided. - Functional tests should be added, when possible. - In functional tests, always use unique names (for resources that have this attribute) and use it for clean up (see next point). - In functional tests, always define cleanup functions to delete data added by your test, should something go wrong. Data removal should be wrapped in a try except block and try to delete as many entries added by the test as possible. shade-1.31.0/doc/source/conf.py0000777000175000017500000000222013440327640016252 0ustar zuulzuul00000000000000import os import sys sys.path.insert(0, os.path.abspath('../..')) extensions = [ 'sphinx.ext.autodoc', 'openstackdocstheme', 'reno.sphinxext' ] # openstackdocstheme options repository_name = 'openstack-infra/shade' bug_project = '760' bug_tag = '' html_last_updated_fmt = '%Y-%m-%d %H:%M' html_theme = 'openstackdocs' # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'shade' copyright = u'2014 Hewlett-Packard Development Company, L.P.' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', '%s.tex' % project, u'%s Documentation' % project, u'Monty Taylor', 'manual'), ] shade-1.31.0/doc/source/releasenotes/0000775000175000017500000000000013440330010017424 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/releasenotes/index.rst0000666000175000017500000000007613440327640021311 0ustar zuulzuul00000000000000============= Release Notes ============= .. release-notes:: shade-1.31.0/doc/source/index.rst0000666000175000017500000000120713440327640016615 0ustar zuulzuul00000000000000.. shade documentation master file, created by sphinx-quickstart on Tue Jul 9 22:26:36 2013. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. ================================= Welcome to shade's documentation! ================================= Contents: .. toctree:: :maxdepth: 2 install/index user/index contributor/index .. releasenotes contains a lot of sections, toctree with maxdepth 1 is used. .. toctree:: :maxdepth: 1 releasenotes/index .. 
include:: ../../README.rst Indices and tables ================== * :ref:`genindex` * :ref:`search` shade-1.31.0/doc/source/install/0000775000175000017500000000000013440330010016401 5ustar zuulzuul00000000000000shade-1.31.0/doc/source/install/index.rst0000666000175000017500000000027113440327640020263 0ustar zuulzuul00000000000000============ Installation ============ At the command line:: $ pip install shade Or, if you have virtualenv wrapper installed:: $ mkvirtualenv shade $ pip install shade shade-1.31.0/doc/requirements.txt0000666000175000017500000000046713440327640016747 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD openstackdocstheme>=1.18.1 # Apache-2.0 reno>=2.5.0 # Apache-2.0 shade-1.31.0/setup.cfg0000666000175000017500000000165313440330010014516 0ustar zuulzuul00000000000000[metadata] name = shade summary = Simple client library for interacting with OpenStack clouds description-file = README.rst author = OpenStack author-email = openstack-discuss@lists.openstack.org home-page = http://docs.openstack.org/shade/latest classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 [entry_points] console_scripts = shade-inventory = shade.cmd.inventory:main [bdist_wheel] universal = 1 [build_sphinx] source-dir = doc/source build-dir = doc/build all_files = 1 warning-is-error = 1 [upload_sphinx] upload-dir = doc/build/html [egg_info] tag_build = tag_date = 0 shade-1.31.0/devstack/0000775000175000017500000000000013440330010014472 5ustar zuulzuul00000000000000shade-1.31.0/devstack/plugin.sh0000666000175000017500000000202113440327640016340 0ustar zuulzuul00000000000000# Install and configure **shade** library in devstack # # To enable shade in devstack add an entry to local.conf that looks like # # [[local|localrc]] # enable_plugin shade git://git.openstack.org/openstack-infra/shade function preinstall_shade { : } function install_shade { if use_library_from_git "shade"; then # don't clone, it'll be done by the plugin install setup_dev_lib "shade" else pip_install "shade" fi } function configure_shade { : } function initialize_shade { : } function unstack_shade { : } function clean_shade { : } # This is the main for plugin.sh if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then preinstall_shade elif [[ "$1" == "stack" && "$2" == "install" ]]; then install_shade elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then configure_shade elif [[ "$1" == "stack" && "$2" == "extra" ]]; then initialize_shade fi if [[ "$1" == "unstack" ]]; then unstack_shade fi if [[ "$1" == "clean" ]]; then clean_shade fi shade-1.31.0/shade/0000775000175000017500000000000013440330010013752 5ustar zuulzuul00000000000000shade-1.31.0/shade/task_manager.py0000666000175000017500000002510013440327640016777 0ustar zuulzuul00000000000000# Copyright (C) 2011-2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. import abc import concurrent.futures import sys import threading import time import types import keystoneauth1.exceptions import six from shade import _log from shade import exc from shade import meta def _is_listlike(obj): # NOTE(Shrews): Since the client API might decide to subclass one # of these result types, we use isinstance() here instead of type(). return ( isinstance(obj, list) or isinstance(obj, types.GeneratorType)) def _is_objlike(obj): # NOTE(Shrews): Since the client API might decide to subclass one # of these result types, we use isinstance() here instead of type(). return ( not isinstance(obj, bool) and not isinstance(obj, int) and not isinstance(obj, float) and not isinstance(obj, six.string_types) and not isinstance(obj, set) and not isinstance(obj, tuple)) @six.add_metaclass(abc.ABCMeta) class BaseTask(object): """Represent a task to be performed on an OpenStack Cloud. Some consumers need to inject things like rate-limiting or auditing around each external REST interaction. Task provides an interface to encapsulate each such interaction. Also, although shade itself operates normally in a single-threaded direct action manner, consuming programs may provide a multi-threaded TaskManager themselves. For that reason, Task uses threading events to ensure appropriate wait conditions. These should be a no-op in single-threaded applications. A consumer is expected to overload the main method. :param dict kw: Any args that are expected to be passed to something in the main payload at execution time. """ def __init__(self, **kw): self._exception = None self._traceback = None self._result = None self._response = None self._finished = threading.Event() self.run_async = False self.args = kw self.name = type(self).__name__ @abc.abstractmethod def main(self, client): """ Override this method with the actual workload to be performed """ def done(self, result): self._result = result self._finished.set() def exception(self, e, tb): self._exception = e self._traceback = tb self._finished.set() def wait(self, raw=False): self._finished.wait() if self._exception: six.reraise(type(self._exception), self._exception, self._traceback) return self._result def run(self, client): self._client = client try: # Retry one time if we get a retriable connection failure try: # Keep time for connection retrying logging start = time.time() self.done(self.main(client)) except keystoneauth1.exceptions.RetriableConnectionFailure as e: end = time.time() dt = end - start if client.region_name: client.log.debug(str(e)) client.log.debug( "Connection failure on %(cloud)s:%(region)s" " for %(name)s after %(secs)s seconds, retrying", {'cloud': client.name, 'region': client.region_name, 'secs': dt, 'name': self.name}) else: client.log.debug( "Connection failure on %(cloud)s for %(name)s after" " %(secs)s seconds, retrying", {'cloud': client.name, 'name': self.name, 'secs': dt}) self.done(self.main(client)) except Exception: raise except Exception as e: self.exception(e, sys.exc_info()[2]) class Task(BaseTask): """ Shade specific additions to the BaseTask Interface. 
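    As an illustrative sketch only (``ServerList`` and ``manager`` are
    hypothetical names, not part of shade), a subclass overrides ``main``
    and is handed to a TaskManager::

        class ServerList(Task):
            def main(self, client):
                # client is whatever object the TaskManager was built
                # with, e.g. a legacy novaclient Client
                return client.servers.list()

        servers = manager.submit_task(ServerList())

    ``wait()`` below then converts the raw result to munch objects.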
""" def wait(self, raw=False): super(Task, self).wait() if raw: # Do NOT convert the result. return self._result if _is_listlike(self._result): return meta.obj_list_to_munch(self._result) elif _is_objlike(self._result): return meta.obj_to_munch(self._result) else: return self._result class RequestTask(BaseTask): """ Extensions to the Shade Tasks to handle raw requests """ # It's totally legit for calls to not return things result_key = None # keystoneauth1 throws keystoneauth1.exceptions.http.HttpError on !200 def done(self, result): self._response = result try: result_json = self._response.json() except Exception as e: result_json = self._response.text self._client.log.debug( 'Could not decode json in response: %(e)s', {'e': str(e)}) self._client.log.debug(result_json) if self.result_key: self._result = result_json[self.result_key] else: self._result = result_json self._request_id = self._response.headers.get('x-openstack-request-id') self._finished.set() def wait(self, raw=False): super(RequestTask, self).wait() if raw: # Do NOT convert the result. return self._result if _is_listlike(self._result): return meta.obj_list_to_munch( self._result, request_id=self._request_id) elif _is_objlike(self._result): return meta.obj_to_munch(self._result, request_id=self._request_id) return self._result def _result_filter_cb(result): return result def generate_task_class(method, name, result_filter_cb): if name is None: if callable(method): name = method.__name__ else: name = method class RunTask(Task): def __init__(self, **kw): super(RunTask, self).__init__(**kw) self.name = name self._method = method def wait(self, raw=False): super(RunTask, self).wait() if raw: # Do NOT convert the result. return self._result return result_filter_cb(self._result) def main(self, client): if callable(self._method): return method(**self.args) else: meth = getattr(client, self._method) return meth(**self.args) return RunTask class TaskManager(object): log = _log.setup_logging('shade.task_manager') def __init__( self, client, name, result_filter_cb=None, workers=5, **kwargs): self.name = name self._client = client self._executor = concurrent.futures.ThreadPoolExecutor( max_workers=workers) if not result_filter_cb: self._result_filter_cb = _result_filter_cb else: self._result_filter_cb = result_filter_cb def set_client(self, client): self._client = client def stop(self): """ This is a direct action passthrough TaskManager """ self._executor.shutdown(wait=True) def run(self): """ This is a direct action passthrough TaskManager """ pass def submit_task(self, task, raw=False): """Submit and execute the given task. :param task: The task to execute. :param bool raw: If True, return the raw result as received from the underlying client call. 
""" return self.run_task(task=task, raw=raw) def _run_task_async(self, task, raw=False): self.log.debug( "Manager %s submitting task %s", self.name, task.name) return self._executor.submit(self._run_task, task, raw=raw) def run_task(self, task, raw=False): if hasattr(task, 'run_async') and task.run_async: return self._run_task_async(task, raw=raw) else: return self._run_task(task, raw=raw) def _run_task(self, task, raw=False): self.log.debug( "Manager %s running task %s", self.name, task.name) start = time.time() task.run(self._client) end = time.time() dt = end - start self.log.debug( "Manager %s ran task %s in %ss", self.name, task.name, dt) self.post_run_task(dt, task) return task.wait(raw) def post_run_task(self, elasped_time, task): pass # Backwards compatibility submitTask = submit_task def submit_function( self, method, name=None, result_filter_cb=None, **kwargs): """ Allows submitting an arbitrary method for work. :param method: Method to run in the TaskManager. Can be either the name of a method to find on self.client, or a callable. """ if not result_filter_cb: result_filter_cb = self._result_filter_cb task_class = generate_task_class(method, name, result_filter_cb) return self._executor.submit_task(task_class(**kwargs)) def wait_for_futures(futures, raise_on_error=True, log=None): '''Collect results or failures from a list of running future tasks.''' results = [] retries = [] # Check on each result as its thread finishes for completed in concurrent.futures.as_completed(futures): try: result = completed.result() # We have to do this here because munch_response doesn't # get called on async job results exc.raise_from_response(result) results.append(result) except (keystoneauth1.exceptions.RetriableConnectionFailure, exc.OpenStackCloudException) as e: if log: log.debug( "Exception processing async task: {e}".format( e=str(e)), exc_info=True) # If we get an exception, put the result into a list so we # can try again if raise_on_error: raise else: retries.append(result) return results, retries shade-1.31.0/shade/__init__.py0000666000175000017500000001143113440327640016104 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import logging import warnings import keystoneauth1.exceptions import os_client_config import pbr.version import requestsexceptions # The star import is for backwards compat reasons from shade.exc import * # noqa from shade import exc from shade.openstackcloud import OpenStackCloud from shade.operatorcloud import OperatorCloud from shade import _log __version__ = pbr.version.VersionInfo('shade').version_string() if requestsexceptions.SubjectAltNameWarning: warnings.filterwarnings( 'ignore', category=requestsexceptions.SubjectAltNameWarning) def _get_openstack_config(app_name=None, app_version=None): # Protect against older versions of os-client-config that don't expose this try: return os_client_config.OpenStackConfig( app_name=app_name, app_version=app_version) except Exception: return os_client_config.OpenStackConfig() def simple_logging(debug=False, http_debug=False): if http_debug: debug = True if debug: log_level = logging.DEBUG else: log_level = logging.INFO if http_debug: # Enable HTTP level tracing log = _log.setup_logging('keystoneauth') log.addHandler(logging.StreamHandler()) log.setLevel(log_level) # We only want extra shade HTTP tracing in http debug mode log = _log.setup_logging('shade.http') log.setLevel(log_level) else: # We only want extra shade HTTP tracing in http debug mode log = _log.setup_logging('shade.http') log.setLevel(logging.WARNING) # Simple case - we only care about request id log during debug log = _log.setup_logging('shade.request_ids') log.setLevel(log_level) log = _log.setup_logging('shade') log.addHandler(logging.StreamHandler()) log.setLevel(log_level) # Suppress warning about keystoneauth loggers log = _log.setup_logging('keystoneauth.identity.base') log = _log.setup_logging('keystoneauth.identity.generic.base') def openstack_clouds( config=None, debug=False, cloud=None, strict=False, app_name=None, app_version=None, use_direct_get=False): if not config: config = _get_openstack_config(app_name, app_version) try: if cloud is None: return [ OpenStackCloud( cloud=f.name, debug=debug, cloud_config=f, strict=strict, use_direct_get=use_direct_get, **f.config) for f in config.get_all_clouds() ] else: return [ OpenStackCloud( cloud=f.name, debug=debug, cloud_config=f, strict=strict, use_direct_get=use_direct_get, **f.config) for f in config.get_all_clouds() if f.name == cloud ] except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e: raise exc.OpenStackCloudException( "Invalid cloud configuration: {exc}".format(exc=str(e))) def openstack_cloud( config=None, strict=False, app_name=None, app_version=None, use_direct_get=False, **kwargs): if not config: config = _get_openstack_config(app_name, app_version) try: cloud_config = config.get_one_cloud(**kwargs) except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e: raise exc.OpenStackCloudException( "Invalid cloud configuration: {exc}".format(exc=str(e))) return OpenStackCloud( cloud_config=cloud_config, strict=strict, use_direct_get=use_direct_get) def operator_cloud( config=None, strict=False, app_name=None, app_version=None, use_direct_get=False, **kwargs): if not config: config = _get_openstack_config(app_name, app_version) try: cloud_config = config.get_one_cloud(**kwargs) except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e: raise exc.OpenStackCloudException( "Invalid cloud configuration: {exc}".format(exc=str(e))) return OperatorCloud( cloud_config=cloud_config, strict=strict, use_direct_get=use_direct_get) 
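# NOTE: illustrative usage sketch only, not part of the module. The cloud
# name 'mycloud' is an assumption and must exist in your clouds.yaml:
#
#     import shade
#
#     shade.simple_logging(debug=True)
#     cloud = shade.openstack_cloud(cloud='mycloud')
#     for server in cloud.list_servers():
#         print(server['name'])
#
# Keyword arguments to openstack_cloud() are forwarded to os-client-config's
# get_one_cloud(), so config overrides such as region_name can be passed the
# same way.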
shade-1.31.0/shade/cmd/0000775000175000017500000000000013440330010014515 5ustar zuulzuul00000000000000shade-1.31.0/shade/cmd/__init__.py0000666000175000017500000000000013440327640016635 0ustar zuulzuul00000000000000shade-1.31.0/shade/cmd/inventory.py0000777000175000017500000000504113440327640017150 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import json import sys import yaml import shade import shade.inventory def output_format_dict(data, use_yaml): if use_yaml: return yaml.safe_dump(data, default_flow_style=False) else: return json.dumps(data, sort_keys=True, indent=2) def get_parser(): parser = argparse.ArgumentParser(description='OpenStack Inventory Module', prog='shade-inventory') parser.add_argument('--refresh', action='store_true', help='Refresh cached information') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--list', action='store_true', help='List active servers') group.add_argument('--host', help='List details about the specific host') parser.add_argument('--private', action='store_true', default=False, help='Use private IPs for interface_ip') parser.add_argument('--cloud', default=None, help='Return data for one cloud only') parser.add_argument('--yaml', action='store_true', default=False, help='Output data in nicely readable yaml') parser.add_argument('--debug', action='store_true', default=False, help='Enable debug output') return parser def parse_args(): return get_parser().parse_args() def main(): args = parse_args() try: shade.simple_logging(debug=args.debug) inventory = shade.inventory.OpenStackInventory( refresh=args.refresh, private=args.private, cloud=args.cloud) if args.list: output = inventory.list_hosts() elif args.host: output = inventory.get_host(args.host) print(output_format_dict(output, args.yaml)) except shade.OpenStackCloudException as e: sys.stderr.write(e.message + '\n') sys.exit(1) sys.exit(0) if __name__ == '__main__': main() shade-1.31.0/shade/meta.py0000666000175000017500000005637213440327640015310 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import munch
import ipaddress
import six
import socket

from shade import _log
from shade import exc


NON_CALLABLES = (six.string_types, bool, dict, int, float, list, type(None))


def find_nova_interfaces(addresses, ext_tag=None, key_name=None, version=4,
                         mac_addr=None):
    ret = []
    for (k, v) in iter(addresses.items()):
        if key_name is not None and k != key_name:
            # key_name is specified and it doesn't match the current network.
            # Continue with the next one
            continue

        for interface_spec in v:
            if ext_tag is not None:
                if 'OS-EXT-IPS:type' not in interface_spec:
                    # ext_tag is specified, but this interface has no tag
                    # We could actually return right away as this means that
                    # this cloud doesn't support OS-EXT-IPS. Nevertheless,
                    # it would be better to perform an explicit check. e.g.:
                    # cloud._has_nova_extension('OS-EXT-IPS')
                    # But this needs cloud to be passed to this function.
                    continue
                elif interface_spec['OS-EXT-IPS:type'] != ext_tag:
                    # Type doesn't match, continue with next one
                    continue

            if mac_addr is not None:
                if 'OS-EXT-IPS-MAC:mac_addr' not in interface_spec:
                    # mac_addr is specified, but this interface has no
                    # mac_addr.
                    # We could actually return right away as this means that
                    # this cloud doesn't support OS-EXT-IPS-MAC. Nevertheless,
                    # it would be better to perform an explicit check. e.g.:
                    # cloud._has_nova_extension('OS-EXT-IPS-MAC')
                    # But this needs cloud to be passed to this function.
                    continue
                elif interface_spec['OS-EXT-IPS-MAC:mac_addr'] != mac_addr:
                    # MAC doesn't match, continue with next one
                    continue

            if interface_spec['version'] == version:
                ret.append(interface_spec)
    return ret


def find_nova_addresses(addresses, ext_tag=None, key_name=None, version=4,
                        mac_addr=None):
    interfaces = find_nova_interfaces(addresses, ext_tag, key_name, version,
                                      mac_addr)
    floating_addrs = []
    fixed_addrs = []
    for i in interfaces:
        if i.get('OS-EXT-IPS:type') == 'floating':
            floating_addrs.append(i['addr'])
        else:
            fixed_addrs.append(i['addr'])
    return floating_addrs + fixed_addrs


def get_server_ip(server, public=False, cloud_public=True, **kwargs):
    """Get an IP from the Nova addresses dict

    :param server: The server to pull the address from
    :param public: Whether the address we're looking for should be considered
                   'public' and therefore reachability tests should be used.
                   (defaults to False)
    :param cloud_public: Whether the cloud has been configured to use private
                         IPs from servers as the interface_ip. This inverts
                         the public reachability logic, as in this case it's
                         the private ip we expect shade to be able to reach
    """
    addrs = find_nova_addresses(server['addresses'], **kwargs)
    return find_best_address(
        addrs, public=public, cloud_public=cloud_public)


def get_server_private_ip(server, cloud=None):
    """Find the private IP address

    If Neutron is available, search for a port on a network where
    `router:external` is False and `shared` is False. This combination
    indicates a private network with private IP addresses. This port should
    have the private IP.

    If Neutron is not available, or something goes wrong communicating with
    it, as a fallback, try the list of addresses associated with the server
    dict, looking for an IP type tagged as 'fixed' in the network named
    'private'.

    Last resort, ignore the IP type and just look for an IP on the 'private'
    network (e.g., Rackspace).
    """
    if cloud and not cloud.use_internal_network():
        return None

    # Try to get a floating IP interface. If we have one then return the
    # private IP address associated with that floating IP for consistency.
fip_ints = find_nova_interfaces(server['addresses'], ext_tag='floating') fip_mac = None if fip_ints: fip_mac = fip_ints[0].get('OS-EXT-IPS-MAC:mac_addr') # Short circuit the ports/networks search below with a heavily cached # and possibly pre-configured network name if cloud: int_nets = cloud.get_internal_ipv4_networks() for int_net in int_nets: int_ip = get_server_ip( server, key_name=int_net['name'], ext_tag='fixed', cloud_public=not cloud.private, mac_addr=fip_mac) if int_ip is not None: return int_ip # Try a second time without the fixed tag. This is for old nova-network # results that do not have the fixed/floating tag. for int_net in int_nets: int_ip = get_server_ip( server, key_name=int_net['name'], cloud_public=not cloud.private, mac_addr=fip_mac) if int_ip is not None: return int_ip ip = get_server_ip( server, ext_tag='fixed', key_name='private', mac_addr=fip_mac) if ip: return ip # Last resort, and Rackspace return get_server_ip( server, key_name='private') def get_server_external_ipv4(cloud, server): """Find an externally routable IP for the server. There are 5 different scenarios we have to account for: * Cloud has externally routable IP from neutron but neutron APIs don't work (only info available is in nova server record) (rackspace) * Cloud has externally routable IP from neutron (runabove, ovh) * Cloud has externally routable IP from neutron AND supports optional private tenant networks (vexxhost, unitedstack) * Cloud only has private tenant network provided by neutron and requires floating-ip for external routing (dreamhost, hp) * Cloud only has private tenant network provided by nova-network and requires floating-ip for external routing (auro) :param cloud: the cloud we're working with :param server: the server dict from which we want to get an IPv4 address :return: a string containing the IPv4 address or None """ if not cloud.use_external_network(): return None if server['accessIPv4']: return server['accessIPv4'] # Short circuit the ports/networks search below with a heavily cached # and possibly pre-configured network name ext_nets = cloud.get_external_ipv4_networks() for ext_net in ext_nets: ext_ip = get_server_ip( server, key_name=ext_net['name'], public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip # Try to get a floating IP address # Much as I might find floating IPs annoying, if it has one, that's # almost certainly the one that wants to be used ext_ip = get_server_ip( server, ext_tag='floating', public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip # The cloud doesn't support Neutron or Neutron can't be contacted. The # server might have fixed addresses that are reachable from outside the # cloud (e.g. 
Rax) or have plain ol' floating IPs # Try to get an address from a network named 'public' ext_ip = get_server_ip( server, key_name='public', public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip # Nothing else works, try to find a globally routable IP address for interfaces in server['addresses'].values(): for interface in interfaces: try: ip = ipaddress.ip_address(interface['addr']) except Exception: # Skip any error, we're looking for a working ip - if the # cloud returns garbage, it wouldn't be the first weird thing # but it still doesn't meet the requirement of "be a working # ip address" continue if ip.version == 4 and not ip.is_private: return str(ip) return None def find_best_address(addresses, public=False, cloud_public=True): do_check = public == cloud_public if not addresses: return None if len(addresses) == 1: return addresses[0] if len(addresses) > 1 and do_check: # We only want to do this check if the address is supposed to be # reachable. Otherwise we're just debug log spamming on every listing # of private ip addresses for address in addresses: # Return the first one that is reachable try: for res in socket.getaddrinfo( address, 22, socket.AF_UNSPEC, socket.SOCK_STREAM, 0): family, socktype, proto, _, sa = res connect_socket = socket.socket(family, socktype, proto) connect_socket.settimeout(1) connect_socket.connect(sa) return address except Exception: pass # Give up and return the first - none work as far as we can tell if do_check: log = _log.setup_logging('shade') log.debug( 'The cloud returned multiple addresses, and none of them seem' ' to work. That might be what you wanted, but we have no clue' " what's going on, so we just picked one at random") return addresses[0] def get_server_external_ipv6(server): """ Get an IPv6 address reachable from outside the cloud. This function assumes that if a server has an IPv6 address, that address is reachable from outside the cloud. :param server: the server from which we want to get an IPv6 address :return: a string containing the IPv6 address or None """ if server['accessIPv6']: return server['accessIPv6'] addresses = find_nova_addresses(addresses=server['addresses'], version=6) return find_best_address(addresses, public=True) def get_server_default_ip(cloud, server): """ Get the configured 'default' address It is possible in clouds.yaml to configure for a cloud a network that is the 'default_interface'. This is the network that should be used to talk to instances on the network. :param cloud: the cloud we're working with :param server: the server dict from which we want to get the default IPv4 address :return: a string containing the IPv4 address or None """ ext_net = cloud.get_default_network() if ext_net: if (cloud._local_ipv6 and not cloud.force_ipv4): # try 6 first, fall back to four versions = [6, 4] else: versions = [4] for version in versions: ext_ip = get_server_ip( server, key_name=ext_net['name'], version=version, public=True, cloud_public=not cloud.private) if ext_ip is not None: return ext_ip return None def _get_interface_ip(cloud, server): """ Get the interface IP for the server Interface IP is the IP that should be used for communicating with the server. 
It is: - the IP on the configured default_interface network - if cloud.private, the private ip if it exists - if the server has a public ip, the public ip """ default_ip = get_server_default_ip(cloud, server) if default_ip: return default_ip if cloud.private and server['private_v4']: return server['private_v4'] if (server['public_v6'] and cloud._local_ipv6 and not cloud.force_ipv4): return server['public_v6'] else: return server['public_v4'] def get_groups_from_server(cloud, server, server_vars): groups = [] region = cloud.region_name cloud_name = cloud.name # Create a group for the cloud groups.append(cloud_name) # Create a group on region groups.append(region) # And one by cloud_region groups.append("%s_%s" % (cloud_name, region)) # Check if group metadata key in servers' metadata group = server['metadata'].get('group') if group: groups.append(group) for extra_group in server['metadata'].get('groups', '').split(','): if extra_group: groups.append(extra_group) groups.append('instance-%s' % server['id']) for key in ('flavor', 'image'): if 'name' in server_vars[key]: groups.append('%s-%s' % (key, server_vars[key]['name'])) for key, value in iter(server['metadata'].items()): groups.append('meta-%s_%s' % (key, value)) az = server_vars.get('az', None) if az: # Make groups for az, region_az and cloud_region_az groups.append(az) groups.append('%s_%s' % (region, az)) groups.append('%s_%s_%s' % (cloud.name, region, az)) return groups def expand_server_vars(cloud, server): """Backwards compatibility function.""" return add_server_interfaces(cloud, server) def _make_address_dict(fip, port): address = dict(version=4, addr=fip['floating_ip_address']) address['OS-EXT-IPS:type'] = 'floating' address['OS-EXT-IPS-MAC:mac_addr'] = port['mac_address'] return address def _get_supplemental_addresses(cloud, server): fixed_ip_mapping = {} for name, network in server['addresses'].items(): for address in network: if address['version'] == 6: continue if address.get('OS-EXT-IPS:type') == 'floating': # We have a floating IP that nova knows about, do nothing return server['addresses'] fixed_ip_mapping[address['addr']] = name try: # Don't bother doing this before the server is active, it's a waste # of an API call while polling for a server to come up if (cloud.has_service('network') and cloud._has_floating_ips() and server['status'] == 'ACTIVE'): for port in cloud.search_ports( filters=dict(device_id=server['id'])): for fip in cloud.search_floating_ips( filters=dict(port_id=port['id'])): # This SHOULD return one and only one FIP - but doing # it as a search/list lets the logic work regardless if fip['fixed_ip_address'] not in fixed_ip_mapping: log = _log.setup_logging('shade') log.debug( "The cloud returned floating ip %(fip)s attached" " to server %(server)s but the fixed ip associated" " with the floating ip in the neutron listing" " does not exist in the nova listing. Something" " is exceptionally broken.", dict(fip=fip['id'], server=server['id'])) fixed_net = fixed_ip_mapping[fip['fixed_ip_address']] server['addresses'][fixed_net].append( _make_address_dict(fip, port)) except exc.OpenStackCloudException: # If something goes wrong with a cloud call, that's cool - this is # an attempt to provide additional data and should not block forward # progress pass return server['addresses'] def add_server_interfaces(cloud, server): """Add network interface information to server. Query the cloud as necessary to add information to the server record about the network information needed to interface with the server. 
    Ensures that public_v4, public_v6, private_v4, private_v6, interface_ip,
    accessIPv4 and accessIPv6 are always set.
    """
    # First, add an IP address. Set it to '' rather than None if it does
    # not exist to remain consistent with the pre-existing missing values
    server['addresses'] = _get_supplemental_addresses(cloud, server)
    server['public_v4'] = get_server_external_ipv4(cloud, server) or ''
    server['public_v6'] = get_server_external_ipv6(server) or ''
    server['private_v4'] = get_server_private_ip(server, cloud) or ''
    server['interface_ip'] = _get_interface_ip(cloud, server) or ''

    # Some clouds do not set these, but they're a regular part of the Nova
    # server record. Since we know them, go ahead and set them. In the case
    # where they were set previously, we use the values, so this will not
    # break clouds that provide the information
    if cloud.private and server['private_v4']:
        server['accessIPv4'] = server['private_v4']
    else:
        server['accessIPv4'] = server['public_v4']
    server['accessIPv6'] = server['public_v6']

    return server


def expand_server_security_groups(cloud, server):
    try:
        groups = cloud.list_server_security_groups(server)
    except exc.OpenStackCloudException:
        groups = []
    server['security_groups'] = groups or []


def get_hostvars_from_server(cloud, server, mounts=None):
    """Expand additional server information useful for ansible inventory.

    This function may make additional cloud queries to flesh out possibly
    interesting info, making it more expensive to call than
    expand_server_vars if caching is not set up. If caching is set up,
    the extra cost should be minimal.
    """
    server_vars = add_server_interfaces(cloud, server)

    flavor_id = server['flavor']['id']
    flavor_name = cloud.get_flavor_name(flavor_id)
    if flavor_name:
        server_vars['flavor']['name'] = flavor_name

    expand_server_security_groups(cloud, server)

    # OpenStack can return image as a string when you've booted from volume
    if str(server['image']) == server['image']:
        image_id = server['image']
        server_vars['image'] = dict(id=image_id)
    else:
        image_id = server['image'].get('id', None)
    if image_id:
        image_name = cloud.get_image_name(image_id)
        if image_name:
            server_vars['image']['name'] = image_name

    volumes = []
    if cloud.has_service('volume'):
        try:
            for volume in cloud.get_volumes(server):
                # Make things easier to consume elsewhere
                volume['device'] = volume['attachments'][0]['device']
                volumes.append(volume)
        except exc.OpenStackCloudException:
            pass
    server_vars['volumes'] = volumes
    if mounts:
        for mount in mounts:
            for vol in server_vars['volumes']:
                if vol['display_name'] == mount['display_name']:
                    if 'mount' in mount:
                        vol['mount'] = mount['mount']

    return server_vars


def _log_request_id(obj, request_id):
    if request_id:
        # Log the request id and object id in a specific logger. This way
        # someone can turn it on if they're interested in this kind of
        # tracing.
        log = _log.setup_logging('shade.request_ids')
        obj_id = None
        if isinstance(obj, dict):
            obj_id = obj.get('id', obj.get('uuid'))
        if obj_id:
            log.debug("Retrieved object %(id)s. Request ID %(request_id)s",
                      {'id': obj.get('id', obj.get('uuid')),
                       'request_id': request_id})
        else:
            log.debug("Retrieved a response. Request ID %(request_id)s",
                      {'request_id': request_id})

    return obj


def obj_to_munch(obj):
    """ Turn an object with attributes into a dict suitable for serializing.

    Some of the things that are returned in OpenStack are objects with
    attributes. That's awesome - except when you want to expose them as JSON
    structures.
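    As a quick illustrative sketch: an object with attributes ``id`` and
    ``name`` (plus assorted methods) comes back as
    ``munch.Munch({'id': ..., 'name': ...})`` - callables and
    underscore-prefixed attributes are dropped.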
    We use this as the basis of get_hostvars_from_server above so that we
    can just have a plain dict of all of the values that exist in the nova
    metadata for a server.
    """
    if obj is None:
        return None
    elif isinstance(obj, munch.Munch) or hasattr(obj, 'mock_add_spec'):
        # If we obj_to_munch twice, don't fail, just return the munch
        # Also, don't try to modify Mock objects - that way lies madness
        return obj
    elif isinstance(obj, dict):
        # The new request-id tracking spec:
        # https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/log-request-id-mappings.html
        # adds a request-ids attribute to returned objects. It does this even
        # with dicts, which now become dict subclasses. So we want to convert
        # the dict we get, but we also want it to fall through to object
        # attribute processing so that we can also get the request_ids
        # data into our resulting object.
        instance = munch.Munch(obj)
    else:
        instance = munch.Munch()

    for key in dir(obj):
        try:
            value = getattr(obj, key)
        # some attributes can be defined as a @property, so we can't be sure
        # we have a valid value
        # e.g. id in python-novaclient/tree/novaclient/v2/quotas.py
        except AttributeError:
            continue
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            instance[key] = value
    return instance


obj_to_dict = obj_to_munch


def obj_list_to_munch(obj_list):
    """Enumerate through lists of objects and return lists of dictionaries.

    Some of the objects returned in OpenStack are actually lists of objects,
    and in order to expose the data structures as JSON, we need to facilitate
    the conversion to lists of dictionaries.
    """
    return [obj_to_munch(obj) for obj in obj_list]


obj_list_to_dict = obj_list_to_munch


def warlock_to_dict(obj):
    # This function is unused in shade - but it is a public function, so
    # removing it would be rude. We don't actually have to depend on warlock
    # ourselves to keep this - so just leave it here.
    #
    # glanceclient v2 uses warlock to construct its objects. Warlock does
    # deep black magic for attribute lookup to support validation, which
    # means we cannot use normal obj_to_munch
    obj_dict = munch.Munch()
    for (key, value) in obj.items():
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            obj_dict[key] = value
    return obj_dict


def get_and_munchify(key, data):
    """Get the value associated with key and convert it.

    The value will be converted into a Munch object or a list of Munch
    objects based on its type
    """
    result = data.get(key, []) if key else data
    if isinstance(result, list):
        return obj_list_to_munch(result)
    elif isinstance(result, dict):
        return obj_to_munch(result)
    return result
shade-1.31.0/shade/_log.py0000666000175000017500000000147613440327640015265 0ustar zuulzuul00000000000000# Copyright (c) 2015 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging


class NullHandler(logging.Handler):
    def emit(self, record):
        pass


def setup_logging(name):
    log = logging.getLogger(name)
    if len(log.handlers) == 0:
        h = NullHandler()
        log.addHandler(h)
    return log
shade-1.31.0/shade/_legacy_clients.py0000666000175000017500000002541613440327640017471 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import importlib
import warnings

from keystoneauth1 import plugin
from os_client_config import constructors

from shade import _utils
from shade import exc


class LegacyClientFactoryMixin(object):
    """Mixin Class containing factory functions for legacy client objects.

    Methods in this class exist for backwards compatibility so will not go
    away any time soon - but they are all things whose use is discouraged.
    They're in a mixin to unclutter the main class file.
    """

    def _create_legacy_client(
            self, client, service, deprecated=True,
            module_name=None, **kwargs):
        if client not in self._legacy_clients:
            if deprecated:
                self._deprecated_import_check(client)
            if module_name:
                constructors.get_constructor_mapping()[service] = module_name
            self._legacy_clients[client] = self._get_client(service, **kwargs)
        return self._legacy_clients[client]

    def _deprecated_import_check(self, client):
        module_name = '{client}client'.format(client=client)
        warnings.warn(
            'Using shade to get a {module_name} object is deprecated. If you'
            ' need a {module_name} object, please use make_legacy_client in'
            ' os-client-config instead'.format(module_name=module_name))
        try:
            importlib.import_module(module_name)
        except ImportError:
            self.log.error(
                '{module_name} is no longer a dependency of shade. You need'
                ' to install python-{module_name} directly.'.format(
                    module_name=module_name))
            raise

    @property
    def trove_client(self):
        return self._create_legacy_client('trove', 'database')

    @property
    def magnum_client(self):
        return self._create_legacy_client('magnum', 'container-infra')

    @property
    def neutron_client(self):
        return self._create_legacy_client('neutron', 'network')

    @property
    def nova_client(self):
        return self._create_legacy_client('nova', 'compute', version='2.0')

    @property
    def glance_client(self):
        return self._create_legacy_client('glance', 'image')

    @property
    def heat_client(self):
        return self._create_legacy_client('heat', 'orchestration')

    @property
    def swift_client(self):
        return self._create_legacy_client('swift', 'object-store')

    @property
    def cinder_client(self):
        return self._create_legacy_client('cinder', 'volume')

    @property
    def designate_client(self):
        return self._create_legacy_client('designate', 'dns')

    @property
    def keystone_client(self):
        # Trigger discovery from ksa
        self._identity_client

        # Skip broken discovery in ksc. We're good thanks.
from keystoneclient.v2_0 import client as v2_client from keystoneclient.v3 import client as v3_client if self.cloud_config.config['identity_api_version'] == '3': client_class = v3_client else: client_class = v2_client return self._create_legacy_client( 'keystone', 'identity', client_class=client_class.Client, deprecated=True, endpoint=self._identity_client.get_endpoint(), endpoint_override=self._identity_client.get_endpoint()) # Set the ironic API microversion to a known-good # supported/tested with the contents of shade. # # NOTE(TheJulia): Defaulted to version 1.6 as the ironic # state machine changes which will increment the version # and break an automatic transition of an enrolled node # to an available state. Locking the version is intended # to utilize the original transition until shade supports # calling for node inspection to allow the transition to # take place automatically. # NOTE(mordred): shade will handle microversions more # directly in the REST layer. This microversion property # will never change. When we implement REST, we should # start at 1.6 since that's what we've been requesting # via ironic_client @property def ironic_api_microversion(self): # NOTE(mordred) Abuse _legacy_clients to only show # this warning once if 'ironic-microversion' not in self._legacy_clients: warnings.warn( 'shade is transitioning to direct REST calls which' ' will handle microversions with no action needed' ' on the part of the user. The ironic_api_microversion' ' property is only used by the legacy ironic_client' ' constructor and will never change. If you are using' ' it for any reason, either switch to just using' ' shade ironic-related API calls, or use os-client-config' ' make_legacy_client directly and pass os_ironic_api_version' ' to it as an argument. It is highly recommended to' ' stop using this property.') self._legacy_clients['ironic-microversion'] = True return self._get_legacy_ironic_microversion() def _get_legacy_ironic_microversion(self): return '1.6' def _join_ksa_version(self, version): return ".".join([str(x) for x in version]) @property def ironic_client(self): # Trigger discovery from ksa. This will make ironicclient and # keystoneauth1.adapter.Adapter code paths both go through discovery. # ironicclient does its own magic with discovery, so we won't # pass an endpoint_override here like we do for keystoneclient. # Just so it's not wasted though, make sure we can handle the # min microversion we need. needed = self._get_legacy_ironic_microversion() # TODO(mordred) Bug in ksa - don't do microversion matching for # auth_type = admin_token. Remove this if when the fix lands. 
        if (hasattr(plugin.BaseAuthPlugin, 'get_endpoint_data')
                or self.cloud_config.config['auth_type'] not in (
                    'admin_token', 'none')):
            # TODO(mordred) once we're on REST properly, we need a better
            # method for matching requested and available microversion
            endpoint_data = self._baremetal_client.get_endpoint_data()
            if not endpoint_data.min_microversion:
                raise exc.OpenStackCloudException(
                    "shade needs an ironic that supports microversions")
            if endpoint_data.min_microversion[1] > int(needed[-1]):
                raise exc.OpenStackCloudException(
                    "shade needs an ironic that supports microversion"
                    " {needed} but the ironic found has a minimum"
                    " microversion of {found}".format(
                        needed=needed,
                        found=self._join_ksa_version(
                            endpoint_data.min_microversion)))
            if endpoint_data.max_microversion[1] < int(needed[-1]):
                raise exc.OpenStackCloudException(
                    "shade needs an ironic that supports microversion"
                    " {needed} but the ironic found has a maximum"
                    " microversion of {found}".format(
                        needed=needed,
                        found=self._join_ksa_version(
                            endpoint_data.max_microversion)))

        return self._create_legacy_client(
            'ironic', 'baremetal', module_name='ironicclient.client.Client',
            os_ironic_api_version=self._get_legacy_ironic_microversion())

    def _get_swift_kwargs(self):
        auth_version = self.cloud_config.get_api_version('identity')
        auth_args = self.cloud_config.config.get('auth', {})
        os_options = {'auth_version': auth_version}
        if auth_version == '2.0':
            os_options['os_tenant_name'] = auth_args.get('project_name')
            os_options['os_tenant_id'] = auth_args.get('project_id')
        else:
            os_options['os_project_name'] = auth_args.get('project_name')
            os_options['os_project_id'] = auth_args.get('project_id')

        for key in (
                'username',
                'password',
                'auth_url',
                'user_id',
                'project_domain_id',
                'project_domain_name',
                'user_domain_id',
                'user_domain_name'):
            os_options['os_{key}'.format(key=key)] = auth_args.get(key)
        return os_options

    @property
    def swift_service(self):
        suppress_warning = 'swift-service' not in self._legacy_clients
        return self.make_swift_service(suppress_warning)

    def make_swift_service(self, suppress_warning=False):
        # NOTE(mordred): Not using helper functions because the
        # error message needs to be different
        if not suppress_warning:
            warnings.warn(
                'Using shade to get a SwiftService object is deprecated.'
                ' shade will automatically do the things SwiftServices does'
                ' as part of the normal object resource calls. If you are'
                ' having trouble using those such that you still need to use'
                ' SwiftService, please file a bug with shade.'
                ' If you understand the issues and want to make this warning'
                ' go away, use cloud.make_swift_service(True) instead of'
                ' cloud.swift_service')
            # Abuse self._legacy_clients so that we only give the warning
            # once. We don't cache SwiftService objects.
            self._legacy_clients['swift-service'] = True
        try:
            import swiftclient.service
        except ImportError:
            self.log.error(
                'swiftclient is no longer a dependency of shade. You need to'
                ' install python-swiftclient directly.')
        with _utils.shade_exceptions("Error constructing SwiftService"):
            endpoint = self.get_session_endpoint(
                service_key='object-store')
            options = dict(os_auth_token=self.auth_token,
                           os_storage_url=endpoint,
                           os_region_name=self.region_name)
            options.update(self._get_swift_kwargs())
            return swiftclient.service.SwiftService(options=options)
shade-1.31.0/shade/_normalize.py0000666000175000017500000011235413440327640016506 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime

import munch
import six

_IMAGE_FIELDS = (
    'checksum',
    'container_format',
    'direct_url',
    'disk_format',
    'file',
    'id',
    'name',
    'owner',
    'virtual_size',
)

_SERVER_FIELDS = (
    'accessIPv4',
    'accessIPv6',
    'addresses',
    'adminPass',
    'created',
    'key_name',
    'metadata',
    'networks',
    'private_v4',
    'public_v4',
    'public_v6',
    'status',
    'updated',
    'user_id',
)

_KEYPAIR_FIELDS = (
    'fingerprint',
    'name',
    'private_key',
    'public_key',
    'user_id',
)

_KEYPAIR_USELESS_FIELDS = (
    'deleted',
    'deleted_at',
    'id',
    'updated_at',
)

_COMPUTE_LIMITS_FIELDS = (
    ('maxPersonality', 'max_personality'),
    ('maxPersonalitySize', 'max_personality_size'),
    ('maxServerGroupMembers', 'max_server_group_members'),
    ('maxServerGroups', 'max_server_groups'),
    ('maxServerMeta', 'max_server_meta'),
    ('maxTotalCores', 'max_total_cores'),
    ('maxTotalInstances', 'max_total_instances'),
    ('maxTotalKeypairs', 'max_total_keypairs'),
    ('maxTotalRAMSize', 'max_total_ram_size'),
    ('totalCoresUsed', 'total_cores_used'),
    ('totalInstancesUsed', 'total_instances_used'),
    ('totalRAMUsed', 'total_ram_used'),
    ('totalServerGroupsUsed', 'total_server_groups_used'),
)


_pushdown_fields = {
    'project': [
        'domain_id'
    ]
}


def _split_filters(obj_name='', filters=None, **kwargs):
    # Handle jmespath filters
    if not filters:
        filters = {}
    if not isinstance(filters, dict):
        return {}, filters
    # Filter out None values from extra kwargs, because those are
    # defaults. If you want to search for things with None values,
    # they're going to need to go into the filters dict
    for (key, value) in kwargs.items():
        if value is not None:
            filters[key] = value
    pushdown = {}
    client = {}
    for (key, value) in filters.items():
        if key in _pushdown_fields.get(obj_name, {}):
            pushdown[key] = value
        else:
            client[key] = value
    return pushdown, client


def _to_bool(value):
    if isinstance(value, six.string_types):
        if not value:
            return False
        prospective = value.lower().capitalize()
        return prospective == 'True'
    return bool(value)


def _pop_int(resource, key):
    return int(resource.pop(key, 0) or 0)


def _pop_float(resource, key):
    return float(resource.pop(key, 0) or 0)


def _pop_or_get(resource, key, default, strict):
    if strict:
        return resource.pop(key, default)
    else:
        return resource.get(key, default)


class Normalizer(object):
    '''Mix-in class to provide the normalization functions.

    This is in a separate class just for on-disk source code organization
    reasons.
    '''

    def _normalize_compute_limits(self, limits, project_id=None):
        """ Normalize a limits object.

        Limits are modified in this method and shouldn't be modified
        afterwards.
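        As an illustrative example (values made up), an incoming
        ``{'absolute': {'maxTotalCores': 20, 'unknownKey': 1}}`` comes back
        as a munch with ``max_total_cores=20``, a ``location``, and a
        ``properties`` dict carrying ``unknownKey``.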
""" # Copy incoming limits because of shared dicts in unittests limits = limits['absolute'].copy() new_limits = munch.Munch() new_limits['location'] = self._get_current_location( project_id=project_id) for field in _COMPUTE_LIMITS_FIELDS: new_limits[field[1]] = limits.pop(field[0], None) new_limits['properties'] = limits.copy() return new_limits def _remove_novaclient_artifacts(self, item): # Remove novaclient artifacts item.pop('links', None) item.pop('NAME_ATTR', None) item.pop('HUMAN_ID', None) item.pop('human_id', None) item.pop('request_ids', None) item.pop('x_openstack_request_ids', None) def _normalize_flavors(self, flavors): """ Normalize a list of flavor objects """ ret = [] for flavor in flavors: ret.append(self._normalize_flavor(flavor)) return ret def _normalize_flavor(self, flavor): """ Normalize a flavor object """ new_flavor = munch.Munch() # Copy incoming group because of shared dicts in unittests flavor = flavor.copy() # Discard noise self._remove_novaclient_artifacts(flavor) flavor.pop('links', None) ephemeral = int(_pop_or_get( flavor, 'OS-FLV-EXT-DATA:ephemeral', 0, self.strict_mode)) ephemeral = flavor.pop('ephemeral', ephemeral) is_public = _to_bool(_pop_or_get( flavor, 'os-flavor-access:is_public', True, self.strict_mode)) is_public = _to_bool(flavor.pop('is_public', is_public)) is_disabled = _to_bool(_pop_or_get( flavor, 'OS-FLV-DISABLED:disabled', False, self.strict_mode)) extra_specs = _pop_or_get( flavor, 'OS-FLV-WITH-EXT-SPECS:extra_specs', {}, self.strict_mode) extra_specs = flavor.pop('extra_specs', extra_specs) extra_specs = munch.Munch(extra_specs) new_flavor['location'] = self.current_location new_flavor['id'] = flavor.pop('id') new_flavor['name'] = flavor.pop('name') new_flavor['is_public'] = is_public new_flavor['is_disabled'] = is_disabled new_flavor['ram'] = _pop_int(flavor, 'ram') new_flavor['vcpus'] = _pop_int(flavor, 'vcpus') new_flavor['disk'] = _pop_int(flavor, 'disk') new_flavor['ephemeral'] = ephemeral new_flavor['swap'] = _pop_int(flavor, 'swap') new_flavor['rxtx_factor'] = _pop_float(flavor, 'rxtx_factor') new_flavor['properties'] = flavor.copy() new_flavor['extra_specs'] = extra_specs # Backwards compat with nova - passthrough values if not self.strict_mode: for (k, v) in new_flavor['properties'].items(): new_flavor.setdefault(k, v) return new_flavor def _normalize_keypairs(self, keypairs): """Normalize Nova Keypairs""" ret = [] for keypair in keypairs: ret.append(self._normalize_keypair(keypair)) return ret def _normalize_keypair(self, keypair): """Normalize Ironic Machine""" new_keypair = munch.Munch() keypair = keypair.copy() # Discard noise self._remove_novaclient_artifacts(keypair) new_keypair['location'] = self.current_location for key in _KEYPAIR_FIELDS: new_keypair[key] = keypair.pop(key, None) # These are completely meaningless fields for key in _KEYPAIR_USELESS_FIELDS: keypair.pop(key, None) new_keypair['type'] = keypair.pop('type', 'ssh') # created_at isn't returned from the keypair creation. (what?) 
new_keypair['created_at'] = keypair.pop( 'created_at', datetime.datetime.now().isoformat()) # Don't even get me started on this new_keypair['id'] = new_keypair['name'] new_keypair['properties'] = keypair.copy() return new_keypair def _normalize_images(self, images): ret = [] for image in images: ret.append(self._normalize_image(image)) return ret def _normalize_image(self, image): new_image = munch.Munch( location=self._get_current_location(project_id=image.get('owner'))) # This copy is to keep things from getting epically weird in tests image = image.copy() # Discard noise self._remove_novaclient_artifacts(image) # If someone made a property called "properties" that contains a # string (this has happened at least one time in the wild), the # rest of the normalization here goes belly up. properties = image.pop('properties', {}) if not isinstance(properties, dict): properties = {'properties': properties} visibility = image.pop('visibility', None) protected = _to_bool(image.pop('protected', False)) if visibility: is_public = (visibility == 'public') else: is_public = image.pop('is_public', False) visibility = 'public' if is_public else 'private' new_image['size'] = image.pop('OS-EXT-IMG-SIZE:size', 0) new_image['size'] = image.pop('size', new_image['size']) new_image['min_ram'] = image.pop('minRam', 0) new_image['min_ram'] = image.pop('min_ram', new_image['min_ram']) new_image['min_disk'] = image.pop('minDisk', 0) new_image['min_disk'] = image.pop('min_disk', new_image['min_disk']) new_image['created_at'] = image.pop('created', '') new_image['created_at'] = image.pop( 'created_at', new_image['created_at']) new_image['updated_at'] = image.pop('updated', '') new_image['updated_at'] = image.pop( 'updated_at', new_image['updated_at']) for field in _IMAGE_FIELDS: new_image[field] = image.pop(field, None) new_image['tags'] = image.pop('tags', []) new_image['status'] = image.pop('status').lower() for field in ('min_ram', 'min_disk', 'size', 'virtual_size'): new_image[field] = _pop_int(new_image, field) new_image['is_protected'] = protected new_image['locations'] = image.pop('locations', []) metadata = image.pop('metadata', {}) for key, val in metadata.items(): properties.setdefault(key, val) for key, val in image.items(): properties.setdefault(key, val) new_image['properties'] = properties new_image['is_public'] = is_public new_image['visibility'] = visibility # Backwards compat with glance if not self.strict_mode: for key, val in properties.items(): if key != 'properties': new_image[key] = val new_image['protected'] = protected new_image['metadata'] = properties new_image['created'] = new_image['created_at'] new_image['updated'] = new_image['updated_at'] new_image['minDisk'] = new_image['min_disk'] new_image['minRam'] = new_image['min_ram'] return new_image def _normalize_secgroups(self, groups): """Normalize the structure of security groups This makes security group dicts, as returned from nova, look like the security group dicts as returned from neutron. This does not make them look exactly the same, but it's pretty close. :param list groups: A list of security group dicts. :returns: A list of normalized dicts.
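Illustrative example (ids are made up): a nova-style group such as::

    {'id': 'sg1', 'name': 'web', 'description': '',
     'tenant_id': 'abc123', 'rules': [...]}

comes back with the rules normalized under 'security_group_rules' and the tenant id folded into 'location' (and, outside strict mode, mirrored as 'project_id'/'tenant_id').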
""" ret = [] for group in groups: ret.append(self._normalize_secgroup(group)) return ret def _normalize_secgroup(self, group): ret = munch.Munch() # Copy incoming group because of shared dicts in unittests group = group.copy() # Discard noise self._remove_novaclient_artifacts(group) rules = self._normalize_secgroup_rules( group.pop('security_group_rules', group.pop('rules', []))) project_id = group.pop('tenant_id', '') project_id = group.pop('project_id', project_id) ret['location'] = self._get_current_location(project_id=project_id) ret['id'] = group.pop('id') ret['name'] = group.pop('name') ret['security_group_rules'] = rules ret['description'] = group.pop('description') ret['properties'] = group # Backwards compat with Neutron if not self.strict_mode: ret['tenant_id'] = project_id ret['project_id'] = project_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_secgroup_rules(self, rules): """Normalize the structure of nova security group rules Note that nova uses -1 for non-specific port values, but neutron represents these with None. :param list rules: A list of security group rule dicts. :returns: A list of normalized dicts. """ ret = [] for rule in rules: ret.append(self._normalize_secgroup_rule(rule)) return ret def _normalize_secgroup_rule(self, rule): ret = munch.Munch() # Copy incoming rule because of shared dicts in unittests rule = rule.copy() ret['id'] = rule.pop('id') ret['direction'] = rule.pop('direction', 'ingress') ret['ethertype'] = rule.pop('ethertype', 'IPv4') port_range_min = rule.get( 'port_range_min', rule.pop('from_port', None)) if port_range_min == -1: port_range_min = None if port_range_min is not None: port_range_min = int(port_range_min) ret['port_range_min'] = port_range_min port_range_max = rule.pop( 'port_range_max', rule.pop('to_port', None)) if port_range_max == -1: port_range_max = None if port_range_min is not None: port_range_min = int(port_range_min) ret['port_range_max'] = port_range_max ret['protocol'] = rule.pop('protocol', rule.pop('ip_protocol', None)) ret['remote_ip_prefix'] = rule.pop( 'remote_ip_prefix', rule.pop('ip_range', {}).get('cidr', None)) ret['security_group_id'] = rule.pop( 'security_group_id', rule.pop('parent_group_id', None)) ret['remote_group_id'] = rule.pop('remote_group_id', None) project_id = rule.pop('tenant_id', '') project_id = rule.pop('project_id', project_id) ret['location'] = self._get_current_location(project_id=project_id) ret['properties'] = rule # Backwards compat with Neutron if not self.strict_mode: ret['tenant_id'] = project_id ret['project_id'] = project_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_servers(self, servers): # Here instead of _utils because we need access to region and cloud # name from the cloud object ret = [] for server in servers: ret.append(self._normalize_server(server)) return ret def _normalize_server(self, server): ret = munch.Munch() # Copy incoming server because of shared dicts in unittests server = server.copy() self._remove_novaclient_artifacts(server) ret['id'] = server.pop('id') ret['name'] = server.pop('name') server['flavor'].pop('links', None) ret['flavor'] = server.pop('flavor') # OpenStack can return image as a string when you've booted # from volume if str(server['image']) != server['image']: server['image'].pop('links', None) ret['image'] = server.pop('image') project_id = server.pop('tenant_id', '') project_id = server.pop('project_id', project_id) az = _pop_or_get( server, 
'OS-EXT-AZ:availability_zone', None, self.strict_mode) ret['location'] = self._get_current_location( project_id=project_id, zone=az) # Ensure volumes is always in the server dict, even if empty ret['volumes'] = _pop_or_get( server, 'os-extended-volumes:volumes_attached', [], self.strict_mode) config_drive = server.pop('config_drive', False) ret['has_config_drive'] = _to_bool(config_drive) host_id = server.pop('hostId', None) ret['host_id'] = host_id ret['progress'] = _pop_int(server, 'progress') # Leave these in so that the general properties handling works ret['disk_config'] = _pop_or_get( server, 'OS-DCF:diskConfig', None, self.strict_mode) for key in ( 'OS-EXT-STS:power_state', 'OS-EXT-STS:task_state', 'OS-EXT-STS:vm_state', 'OS-SRV-USG:launched_at', 'OS-SRV-USG:terminated_at'): short_key = key.split(':')[1] ret[short_key] = _pop_or_get(server, key, None, self.strict_mode) # Protect against security_groups being None ret['security_groups'] = server.pop('security_groups', None) or [] # NOTE(mnaser): The Nova API returns the creation date in `created`, # however the Shade contract returns `created_at` for # all resources. ret['created_at'] = server.get('created') for field in _SERVER_FIELDS: ret[field] = server.pop(field, None) if not ret['networks']: ret['networks'] = {} ret['interface_ip'] = '' ret['properties'] = server.copy() # Backwards compat if not self.strict_mode: ret['hostId'] = host_id ret['config_drive'] = config_drive ret['project_id'] = project_id ret['tenant_id'] = project_id ret['region'] = self.region_name ret['cloud'] = self.name ret['az'] = az for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_floating_ips(self, ips): """Normalize the structure of floating IPs Unfortunately, not all the Neutron floating_ip attributes are available with Nova and not all Nova floating_ip attributes are available with Neutron. This function extracts attributes that are common to Nova and Neutron floating IP resources. If the whole structure is needed inside shade, shade provides private methods that return "original" objects (e.g. _neutron_allocate_floating_ip) :param list ips: A list of Neutron floating IPs. :returns: A list of normalized dicts with the following attributes:: [ { "id": "this-is-a-floating-ip-id", "fixed_ip_address": "192.0.2.10", "floating_ip_address": "198.51.100.10", "network": "this-is-a-net-or-pool-id", "attached": True, "status": "ACTIVE" }, ... ] """ return [ self._normalize_floating_ip(ip) for ip in ips ] def _normalize_floating_ip(self, ip): ret = munch.Munch() # Copy incoming floating ip because of shared dicts in unittests ip = ip.copy() fixed_ip_address = ip.pop('fixed_ip_address', ip.pop('fixed_ip', None)) floating_ip_address = ip.pop('floating_ip_address', ip.pop('ip', None)) network_id = ip.pop( 'floating_network_id', ip.pop('network', ip.pop('pool', None))) project_id = ip.pop('tenant_id', '') project_id = ip.pop('project_id', project_id) instance_id = ip.pop('instance_id', None) router_id = ip.pop('router_id', None) id = ip.pop('id') port_id = ip.pop('port_id', None) created_at = ip.pop('created_at', None) updated_at = ip.pop('updated_at', None) # Note - description may not always be on the underlying cloud. # Normalizing it here is easy - what do we do when people want to # set a description?
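# Illustrative sketch (ids are hypothetical): under neutron,
#     {'id': 'fip1', 'port_id': 'port1',
#      'floating_ip_address': '198.51.100.10', 'status': 'ACTIVE'}
# is reported as attached because a port is bound; under nova the same
# decision keys off instance_id and the status is always reported as
# ACTIVE, as the branch below shows.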
description = ip.pop('description', '') revision_number = ip.pop('revision_number', None) if self._use_neutron_floating(): attached = bool(port_id) status = ip.pop('status', 'UNKNOWN') else: attached = bool(instance_id) # In neutron's terms, Nova floating IPs are always ACTIVE status = 'ACTIVE' ret = munch.Munch( attached=attached, fixed_ip_address=fixed_ip_address, floating_ip_address=floating_ip_address, id=id, location=self._get_current_location(project_id=project_id), network=network_id, port=port_id, router=router_id, status=status, created_at=created_at, updated_at=updated_at, description=description, revision_number=revision_number, properties=ip.copy(), ) # Backwards compat if not self.strict_mode: ret['port_id'] = port_id ret['router_id'] = router_id ret['project_id'] = project_id ret['tenant_id'] = project_id ret['floating_network_id'] = network_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_projects(self, projects): """Normalize the structure of projects This makes tenants from keystone v2 look like projects from v3. :param list projects: A list of projects to normalize :returns: A list of normalized dicts. """ ret = [] for project in projects: ret.append(self._normalize_project(project)) return ret def _normalize_project(self, project): # Copy incoming project because of shared dicts in unittests project = project.copy() # Discard noise self._remove_novaclient_artifacts(project) # In both v2 and v3 project_id = project.pop('id') name = project.pop('name', '') description = project.pop('description', '') is_enabled = project.pop('enabled', True) # v3 additions domain_id = project.pop('domain_id', 'default') parent_id = project.pop('parent_id', None) is_domain = project.pop('is_domain', False) # Projects have a special relationship with location location = self._get_identity_location() location['project']['domain_id'] = domain_id location['project']['id'] = parent_id ret = munch.Munch( location=location, id=project_id, name=name, description=description, is_enabled=is_enabled, is_domain=is_domain, domain_id=domain_id, properties=project.copy() ) # Backwards compat if not self.strict_mode: ret['enabled'] = is_enabled ret['parent_id'] = parent_id for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_volume_type_access(self, volume_type_access): volume_type_access = volume_type_access.copy() volume_type_id = volume_type_access.pop('volume_type_id') project_id = volume_type_access.pop('project_id') ret = munch.Munch( location=self.current_location, project_id=project_id, volume_type_id=volume_type_id, properties=volume_type_access.copy(), ) return ret def _normalize_volume_type_accesses(self, volume_type_accesses): ret = [] for volume_type_access in volume_type_accesses: ret.append(self._normalize_volume_type_access(volume_type_access)) return ret def _normalize_volume_type(self, volume_type): volume_type = volume_type.copy() volume_id = volume_type.pop('id') description = volume_type.pop('description', None) name = volume_type.pop('name', None) old_is_public = volume_type.pop('os-volume-type-access:is_public', False) is_public = volume_type.pop('is_public', old_is_public) qos_specs_id = volume_type.pop('qos_specs_id', None) extra_specs = volume_type.pop('extra_specs', {}) ret = munch.Munch( location=self.current_location, is_public=is_public, id=volume_id, name=name, description=description, qos_specs_id=qos_specs_id, extra_specs=extra_specs, properties=volume_type.copy(), ) return ret def 
_normalize_volume_types(self, volume_types): ret = [] for volume in volume_types: ret.append(self._normalize_volume_type(volume)) return ret def _normalize_volumes(self, volumes): """Normalize the structure of volumes This makes volumes from cinder v1 look like volumes from v2. :param list volumes: A list of volumes to normalize :returns: A list of normalized dicts. """ ret = [] for volume in volumes: ret.append(self._normalize_volume(volume)) return ret def _normalize_volume(self, volume): volume = volume.copy() # Discard noise self._remove_novaclient_artifacts(volume) volume_id = volume.pop('id') name = volume.pop('display_name', None) name = volume.pop('name', name) description = volume.pop('display_description', None) description = volume.pop('description', description) is_bootable = _to_bool(volume.pop('bootable', True)) is_encrypted = _to_bool(volume.pop('encrypted', False)) can_multiattach = _to_bool(volume.pop('multiattach', False)) project_id = _pop_or_get( volume, 'os-vol-tenant-attr:tenant_id', None, self.strict_mode) az = volume.pop('availability_zone', None) location = self._get_current_location(project_id=project_id, zone=az) host = _pop_or_get( volume, 'os-vol-host-attr:host', None, self.strict_mode) replication_extended_status = _pop_or_get( volume, 'os-volume-replication:extended_status', None, self.strict_mode) migration_status = _pop_or_get( volume, 'os-vol-mig-status-attr:migstat', None, self.strict_mode) migration_status = volume.pop('migration_status', migration_status) _pop_or_get(volume, 'user_id', None, self.strict_mode) source_volume_id = _pop_or_get( volume, 'source_volid', None, self.strict_mode) replication_driver = _pop_or_get( volume, 'os-volume-replication:driver_data', None, self.strict_mode) ret = munch.Munch( location=location, id=volume_id, name=name, description=description, size=_pop_int(volume, 'size'), attachments=volume.pop('attachments', []), status=volume.pop('status'), migration_status=migration_status, host=host, replication_driver=replication_driver, replication_status=volume.pop('replication_status', None), replication_extended_status=replication_extended_status, snapshot_id=volume.pop('snapshot_id', None), created_at=volume.pop('created_at'), updated_at=volume.pop('updated_at', None), source_volume_id=source_volume_id, consistencygroup_id=volume.pop('consistencygroup_id', None), volume_type=volume.pop('volume_type', None), metadata=volume.pop('metadata', {}), is_bootable=is_bootable, is_encrypted=is_encrypted, can_multiattach=can_multiattach, properties=volume.copy(), ) # Backwards compat if not self.strict_mode: ret['display_name'] = name ret['display_description'] = description ret['bootable'] = is_bootable ret['encrypted'] = is_encrypted ret['multiattach'] = can_multiattach ret['availability_zone'] = az for key, val in ret['properties'].items(): ret.setdefault(key, val) return ret def _normalize_volume_attachment(self, attachment): """ Normalize a volume attachment object""" attachment = attachment.copy() # Discard noise self._remove_novaclient_artifacts(attachment) return munch.Munch(**attachment) def _normalize_volume_backups(self, backups): ret = [] for backup in backups: ret.append(self._normalize_volume_backup(backup)) return ret def _normalize_volume_backup(self, backup): """ Normalize a volume backup object""" backup = backup.copy() # Discard noise self._remove_novaclient_artifacts(backup) return munch.Munch(**backup) def _normalize_compute_usage(self, usage): """ Normalize a compute usage object """ usage = usage.copy() # Discard
noise self._remove_novaclient_artifacts(usage) project_id = usage.pop('tenant_id', None) ret = munch.Munch( location=self._get_current_location(project_id=project_id), ) for key in ( 'max_personality', 'max_personality_size', 'max_server_group_members', 'max_server_groups', 'max_server_meta', 'max_total_cores', 'max_total_instances', 'max_total_keypairs', 'max_total_ram_size', 'total_cores_used', 'total_hours', 'total_instances_used', 'total_local_gb_usage', 'total_memory_mb_usage', 'total_ram_used', 'total_server_groups_used', 'total_vcpus_usage'): ret[key] = usage.pop(key, 0) ret['started_at'] = usage.pop('start', None) ret['stopped_at'] = usage.pop('stop', None) ret['server_usages'] = self._normalize_server_usages( usage.pop('server_usages', [])) ret['properties'] = usage return ret def _normalize_server_usage(self, server_usage): """ Normalize a server usage object """ server_usage = server_usage.copy() # TODO(mordred) Right now there is already a location on the usage # object. Including one here seems verbose. server_usage.pop('tenant_id') ret = munch.Munch() ret['ended_at'] = server_usage.pop('ended_at', None) ret['started_at'] = server_usage.pop('started_at', None) for key in ( 'flavor', 'instance_id', 'name', 'state'): ret[key] = server_usage.pop(key, '') for key in ( 'hours', 'local_gb', 'memory_mb', 'uptime', 'vcpus'): ret[key] = server_usage.pop(key, 0) ret['properties'] = server_usage return ret def _normalize_server_usages(self, server_usages): ret = [] for server_usage in server_usages: ret.append(self._normalize_server_usage(server_usage)) return ret def _normalize_cluster_templates(self, cluster_templates): ret = [] for cluster_template in cluster_templates: ret.append(self._normalize_cluster_template(cluster_template)) return ret def _normalize_cluster_template(self, cluster_template): """Normalize Magnum cluster_templates.""" cluster_template = cluster_template.copy() # Discard noise cluster_template.pop('links', None) cluster_template.pop('human_id', None) # model_name is a magnumclient-ism cluster_template.pop('model_name', None) ct_id = cluster_template.pop('uuid') ret = munch.Munch( id=ct_id, location=self._get_current_location(), ) ret['is_public'] = cluster_template.pop('public') ret['is_registry_enabled'] = cluster_template.pop('registry_enabled') ret['is_tls_disabled'] = cluster_template.pop('tls_disabled') # pop floating_ip_enabled since we want to hide it in a future patch fip_enabled = cluster_template.pop('floating_ip_enabled', None) if not self.strict_mode: ret['uuid'] = ct_id if fip_enabled is not None: ret['floating_ip_enabled'] = fip_enabled ret['public'] = ret['is_public'] ret['registry_enabled'] = ret['is_registry_enabled'] ret['tls_disabled'] = ret['is_tls_disabled'] # Optional keys for (key, default) in ( ('fixed_network', None), ('fixed_subnet', None), ('http_proxy', None), ('https_proxy', None), ('labels', {}), ('master_flavor_id', None), ('no_proxy', None)): if key in cluster_template: ret[key] = cluster_template.pop(key, default) for key in ( 'apiserver_port', 'cluster_distro', 'coe', 'created_at', 'dns_nameserver', 'docker_volume_size', 'external_network_id', 'flavor_id', 'image_id', 'insecure_registry', 'keypair_id', 'name', 'network_driver', 'server_type', 'updated_at', 'volume_driver'): ret[key] = cluster_template.pop(key) ret['properties'] = cluster_template return ret def _normalize_magnum_services(self, magnum_services): ret = [] for magnum_service in magnum_services: ret.append(self._normalize_magnum_service(magnum_service)) return ret def 
_normalize_magnum_service(self, magnum_service): """Normalize Magnum magnum_services.""" magnum_service = magnum_service.copy() # Discard noise magnum_service.pop('links', None) magnum_service.pop('human_id', None) # model_name is a magnumclient-ism magnum_service.pop('model_name', None) ret = munch.Munch(location=self._get_current_location()) for key in ( 'binary', 'created_at', 'disabled_reason', 'host', 'id', 'report_count', 'state', 'updated_at'): ret[key] = magnum_service.pop(key) ret['properties'] = magnum_service return ret def _normalize_stacks(self, stacks): """Normalize Heat Stacks""" ret = [] for stack in stacks: ret.append(self._normalize_stack(stack)) return ret def _normalize_stack(self, stack): """Normalize Heat Stack""" stack = stack.copy() # Discard noise self._remove_novaclient_artifacts(stack) # Discard things heatclient adds that aren't in the REST response stack.pop('action', None) stack.pop('status', None) stack.pop('identifier', None) stack_status = stack.pop('stack_status') (action, status) = stack_status.split('_', 1) ret = munch.Munch( id=stack.pop('id'), location=self._get_current_location(), action=action, status=status, ) if not self.strict_mode: ret['stack_status'] = stack_status for (new_name, old_name) in ( ('name', 'stack_name'), ('created_at', 'creation_time'), ('deleted_at', 'deletion_time'), ('updated_at', 'updated_time'), ('description', 'description'), ('is_rollback_enabled', 'disable_rollback'), ('parent', 'parent'), ('notification_topics', 'notification_topics'), ('parameters', 'parameters'), ('outputs', 'outputs'), ('owner', 'stack_owner'), ('status_reason', 'stack_status_reason'), ('stack_user_project_id', 'stack_user_project_id'), ('template_description', 'template_description'), ('timeout_mins', 'timeout_mins'), ('tags', 'tags')): value = stack.pop(old_name, None) ret[new_name] = value if not self.strict_mode: ret[old_name] = value ret['identifier'] = '{name}/{id}'.format( name=ret['name'], id=ret['id']) ret['properties'] = stack return ret def _normalize_machines(self, machines): """Normalize Ironic Machines""" ret = [] for machine in machines: ret.append(self._normalize_machine(machine)) return ret def _normalize_machine(self, machine): """Normalize Ironic Machine""" machine = machine.copy() # Discard noise self._remove_novaclient_artifacts(machine) # TODO(mordred) Normalize this resource return machine def _normalize_roles(self, roles): """Normalize Keystone roles""" ret = [] for role in roles: ret.append(self._normalize_role(role)) return ret def _normalize_role(self, role): """Normalize Identity roles.""" return munch.Munch( id=role.get('id'), name=role.get('name'), domain_id=role.get('domain_id'), location=self._get_identity_location(), properties={}, ) shade-1.31.0/shade/tests/0000775000175000017500000000000013440330010015114 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/__init__.py0000666000175000017500000000000013440327640017234 0ustar zuulzuul00000000000000shade-1.31.0/shade/tests/fakes.py0000666000175000017500000003525613440327640016603 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. """ fakes ---------------------------------- Fakes used for testing """ import datetime import json import uuid from shade._heat import template_format from shade import meta PROJECT_ID = '1c36b64c840a42cd9e9b931a369337f0' FLAVOR_ID = u'0c1d9008-f546-4608-9e8f-f8bdaec8dddd' CHOCOLATE_FLAVOR_ID = u'0c1d9008-f546-4608-9e8f-f8bdaec8ddde' STRAWBERRY_FLAVOR_ID = u'0c1d9008-f546-4608-9e8f-f8bdaec8dddf' COMPUTE_ENDPOINT = 'https://compute.example.com/v2.1' ORCHESTRATION_ENDPOINT = 'https://orchestration.example.com/v1/{p}'.format( p=PROJECT_ID) NO_MD5 = '93b885adfe0da089cdf634904fd59f71' NO_SHA256 = '6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d' FAKE_PUBLIC_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkF3MX59OrlBs3dH5CU7lNmvpbrgZxSpyGjlnE8Flkirnc/Up22lpjznoxqeoTAwTW034k7Dz6aYIrZGmQwe2TkE084yqvlj45Dkyoj95fW/sZacm0cZNuL69EObEGHdprfGJQajrpz22NQoCD8TFB8Wv+8om9NH9Le6s+WPe98WC77KLw8qgfQsbIey+JawPWl4O67ZdL5xrypuRjfIPWjgy/VH85IXg/Z/GONZ2nxHgSShMkwqSFECAC5L3PHB+0+/12M/iikdatFSVGjpuHvkLOs3oe7m6HlOfluSJ85BzLWBbvva93qkGmLg4ZAc8rPh2O+YIsBUHNLLMM/oQp Generated-by-Nova\n" # flake8: noqa def make_fake_flavor(flavor_id, name, ram=100, disk=1600, vcpus=24): return { u'OS-FLV-DISABLED:disabled': False, u'OS-FLV-EXT-DATA:ephemeral': 0, u'disk': disk, u'id': flavor_id, u'links': [{ u'href': u'{endpoint}/flavors/{id}'.format( endpoint=COMPUTE_ENDPOINT, id=flavor_id), u'rel': u'self' }, { u'href': u'{endpoint}/flavors/{id}'.format( endpoint=COMPUTE_ENDPOINT, id=flavor_id), u'rel': u'bookmark' }], u'name': name, u'os-flavor-access:is_public': True, u'ram': ram, u'rxtx_factor': 1.0, u'swap': u'', u'vcpus': vcpus } FAKE_FLAVOR = make_fake_flavor(FLAVOR_ID, 'vanilla') FAKE_CHOCOLATE_FLAVOR = make_fake_flavor( CHOCOLATE_FLAVOR_ID, 'chocolate', ram=200) FAKE_STRAWBERRY_FLAVOR = make_fake_flavor( STRAWBERRY_FLAVOR_ID, 'strawberry', ram=300) FAKE_FLAVOR_LIST = [FAKE_FLAVOR, FAKE_CHOCOLATE_FLAVOR, FAKE_STRAWBERRY_FLAVOR] FAKE_TEMPLATE = '''heat_template_version: 2014-10-16 parameters: length: type: number default: 10 resources: my_rand: type: OS::Heat::RandomString properties: length: {get_param: length} outputs: rand: value: get_attr: [my_rand, value] ''' FAKE_TEMPLATE_CONTENT = template_format.parse(FAKE_TEMPLATE) def make_fake_server( server_id, name, status='ACTIVE', admin_pass=None, addresses=None, image=None, flavor=None): if addresses is None: if status == 'ACTIVE': addresses = { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:b0:8d", "version": 6, "addr": "fddb:b018:307:0:f816:3eff:fedf:b08d", "OS-EXT-IPS:type": "fixed"}, { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:b0:8d", "version": 4, "addr": "10.1.0.9", "OS-EXT-IPS:type": "fixed"}, { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:df:b0:8d", "version": 4, "addr": "172.24.5.5", "OS-EXT-IPS:type": "floating"}]} else: addresses = {} if image is None: image = {"id": "217f3ab1-03e0-4450-bf27-63d52b421e9e", "links": []} if flavor is None: flavor = {"id": "64", "links": []} server = { "OS-EXT-STS:task_state": None, "addresses": addresses, "links": [], "image": image, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-03-23T23:57:38.000000", "flavor": flavor, "id": server_id, "security_groups": [{"name": "default"}], "user_id": "9c119f4beaaa438792ce89387362b3ad", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "metadata": {}, "status": status,
"updated": "2017-03-23T23:57:39Z", "hostId": "89d165f04384e3ffa4b6536669eb49104d30d6ca832bba2684605dbc", "OS-SRV-USG:terminated_at": None, "key_name": None, "name": name, "created": "2017-03-23T23:57:12Z", "tenant_id": PROJECT_ID, "os-extended-volumes:volumes_attached": [], "config_drive": "True"} if admin_pass: server['adminPass'] = admin_pass return json.loads(json.dumps(server)) def make_fake_keypair(name): # Note: this is literally taken from: # https://developer.openstack.org/api-ref/compute/ return { "fingerprint": "7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd", "name": name, "type": "ssh", "public_key": FAKE_PUBLIC_KEY, "created_at": datetime.datetime.now().isoformat(), } def make_fake_stack(id, name, description=None, status='CREATE_COMPLETE'): return { 'creation_time': '2017-03-23T23:57:12Z', 'deletion_time': '2017-03-23T23:57:12Z', 'description': description, 'id': id, 'links': [], 'parent': None, 'stack_name': name, 'stack_owner': None, 'stack_status': status, 'stack_user_project_id': PROJECT_ID, 'tags': None, 'updated_time': '2017-03-23T23:57:12Z', } def make_fake_stack_event( id, name, status='CREATE_COMPLETED', resource_name='id'): event_id = uuid.uuid4().hex self_url = "{endpoint}/stacks/{name}/{id}/resources/{name}/events/{event}" resource_url = "{endpoint}/stacks/{name}/{id}/resources/{name}" return { "resource_name": id if resource_name == 'id' else name, "event_time": "2017-03-26T19:38:18", "links": [ { "href": self_url.format( endpoint=ORCHESTRATION_ENDPOINT, name=name, id=id, event=event_id), "rel": "self" }, { "href": resource_url.format( endpoint=ORCHESTRATION_ENDPOINT, name=name, id=id), "rel": "resource" }, { "href": "{endpoint}/stacks/{name}/{id}".format( endpoint=ORCHESTRATION_ENDPOINT, name=name, id=id), "rel": "stack" }], "logical_resource_id": name, "resource_status": status, "resource_status_reason": "", "physical_resource_id": id, "id": event_id, } def make_fake_image( image_id=None, md5=NO_MD5, sha256=NO_SHA256, status='active'): return { u'image_state': u'available', u'container_format': u'bare', u'min_ram': 0, u'ramdisk_id': None, u'updated_at': u'2016-02-10T05:05:02Z', u'file': '/v2/images/' + image_id + '/file', u'size': 3402170368, u'image_type': u'snapshot', u'disk_format': u'qcow2', u'id': image_id, u'schema': u'/v2/schemas/image', u'status': status, u'tags': [], u'visibility': u'private', u'locations': [{ u'url': u'http://127.0.0.1/images/' + image_id, u'metadata': {}}], u'min_disk': 40, u'virtual_size': None, u'name': u'fake_image', u'checksum': u'ee36e35a297980dee1b514de9803ec6d', u'created_at': u'2016-02-10T05:03:11Z', u'owner_specified.shade.md5': NO_MD5, u'owner_specified.shade.sha256': NO_SHA256, u'owner_specified.shade.object': 'images/fake_image', u'protected': False} def make_fake_machine(machine_name, machine_id=None): if not machine_id: machine_id = uuid.uuid4().hex return meta.obj_to_munch(FakeMachine( id=machine_id, name=machine_name)) def make_fake_port(address, node_id=None, port_id=None): if not node_id: node_id = uuid.uuid4().hex if not port_id: port_id = uuid.uuid4().hex return meta.obj_to_munch(FakeMachinePort( id=port_id, address=address, node_id=node_id)) class FakeFloatingIP(object): def __init__(self, id, pool, ip, fixed_ip, instance_id): self.id = id self.pool = pool self.ip = ip self.fixed_ip = fixed_ip self.instance_id = instance_id def make_fake_server_group(id, name, policies): return json.loads(json.dumps({ 'id': id, 'name': name, 'policies': policies, 'members': [], 'metadata': {}, })) def make_fake_hypervisor(id, 
name): return json.loads(json.dumps({ 'id': id, 'hypervisor_hostname': name, 'state': 'up', 'status': 'enabled', "cpu_info": { "arch": "x86_64", "model": "Nehalem", "vendor": "Intel", "features": [ "pge", "clflush" ], "topology": { "cores": 1, "threads": 1, "sockets": 4 } }, "current_workload": 0, "disk_available_least": 0, "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, "hypervisor_type": "fake", "hypervisor_version": 1000, "local_gb": 1028, "local_gb_used": 0, "memory_mb": 8192, "memory_mb_used": 512, "running_vms": 0, "service": { "host": "host1", "id": 7, "disabled_reason": None }, "vcpus": 1, "vcpus_used": 0 })) class FakeVolume(object): def __init__( self, id, status, name, attachments=[], size=75): self.id = id self.status = status self.name = name self.attachments = attachments self.size = size self.snapshot_id = 'id:snapshot' self.description = 'description' self.volume_type = 'type:volume' self.availability_zone = 'az1' self.created_at = '1900-01-01 12:34:56' self.source_volid = '12345' self.metadata = {} class FakeVolumeSnapshot(object): def __init__( self, id, status, name, description, size=75): self.id = id self.status = status self.name = name self.description = description self.size = size self.created_at = '1900-01-01 12:34:56' self.volume_id = '12345' self.metadata = {} class FakeMachine(object): def __init__(self, id, name=None, driver=None, driver_info=None, chassis_uuid=None, instance_info=None, instance_uuid=None, properties=None, reservation=None, last_error=None, provision_state=None): self.uuid = id self.name = name self.driver = driver self.driver_info = driver_info self.chassis_uuid = chassis_uuid self.instance_info = instance_info self.instance_uuid = instance_uuid self.properties = properties self.reservation = reservation self.last_error = last_error self.provision_state = provision_state class FakeMachinePort(object): def __init__(self, id, address, node_id): self.uuid = id self.address = address self.node_uuid = node_id def make_fake_neutron_security_group( id, name, description, rules, project_id=None): if not rules: rules = [] if not project_id: project_id = PROJECT_ID return json.loads(json.dumps({ 'id': id, 'name': name, 'description': description, 'project_id': project_id, 'tenant_id': project_id, 'security_group_rules': rules, })) def make_fake_nova_security_group_rule( id, from_port, to_port, ip_protocol, cidr): return json.loads(json.dumps({ 'id': id, 'from_port': int(from_port), 'to_port': int(to_port), 'ip_protocol': ip_protocol, 'ip_range': { 'cidr': cidr } })) def make_fake_nova_security_group(id, name, description, rules): if not rules: rules = [] return json.loads(json.dumps({ 'id': id, 'name': name, 'description': description, 'tenant_id': PROJECT_ID, 'rules': rules, })) class FakeNovaSecgroupRule(object): def __init__(self, id, from_port=None, to_port=None, ip_protocol=None, cidr=None, parent_group_id=None): self.id = id self.from_port = from_port self.to_port = to_port self.ip_protocol = ip_protocol if cidr: self.ip_range = {'cidr': cidr} self.parent_group_id = parent_group_id class FakeHypervisor(object): def __init__(self, id, hostname): self.id = id self.hypervisor_hostname = hostname class FakeZone(object): def __init__(self, id, name, type_, email, description, ttl, masters): self.id = id self.name = name self.type_ = type_ self.email = email self.description = description self.ttl = ttl self.masters = masters class FakeRecordset(object): def __init__(self, zone, id, name, type_,
description, ttl, records): self.zone = zone self.id = id self.name = name self.type_ = type_ self.description = description self.ttl = ttl self.records = records def make_fake_aggregate(id, name, availability_zone='nova', metadata=None, hosts=None): if not metadata: metadata = {} if not hosts: hosts = [] return json.loads(json.dumps({ "availability_zone": availability_zone, "created_at": datetime.datetime.now().isoformat(), "deleted": False, "deleted_at": None, "hosts": hosts, "id": int(id), "metadata": { "availability_zone": availability_zone, }, "name": name, "updated_at": None, })) shade-1.31.0/shade/tests/ansible/0000775000175000017500000000000013440330010016531 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/hooks/0000775000175000017500000000000013440330010017654 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/hooks/post_test_hook.sh0000777000175000017500000000176713440327640023313 0ustar zuulzuul00000000000000#!/bin/sh # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. export SHADE_DIR="$BASE/new/shade" cd $SHADE_DIR sudo chown -R jenkins:stack $SHADE_DIR echo "Running shade Ansible test suite" if [ ${SHADE_ANSIBLE_DEV:-0} -eq 1 ] then # Use the upstream development version of Ansible set +e sudo -E -H -u jenkins tox -eansible -- -d EXIT_CODE=$? set -e else # Use the release version of Ansible set +e sudo -E -H -u jenkins tox -eansible EXIT_CODE=$? set -e fi exit $EXIT_CODE shade-1.31.0/shade/tests/ansible/run.yml0000666000175000017500000000162313440327640020103 0ustar zuulzuul00000000000000--- - hosts: localhost connection: local gather_facts: true roles: - { role: auth, tags: auth } - { role: client_config, tags: client_config } - { role: group, tags: group } # TODO(mordred) Reenable this once the fixed os_image winds up in an # upstream ansible release. # - { role: image, tags: image } - { role: keypair, tags: keypair } - { role: keystone_domain, tags: keystone_domain } - { role: keystone_role, tags: keystone_role } - { role: network, tags: network } - { role: nova_flavor, tags: nova_flavor } - { role: object, tags: object } - { role: port, tags: port } - { role: router, tags: router } - { role: security_group, tags: security_group } - { role: server, tags: server } - { role: subnet, tags: subnet } - { role: user, tags: user } - { role: user_group, tags: user_group } - { role: volume, tags: volume } shade-1.31.0/shade/tests/ansible/README.txt0000666000175000017500000000211113440327640020243 0ustar zuulzuul00000000000000This directory contains a testing infrastructure for the Ansible OpenStack modules. You will need a clouds.yaml file in order to run the tests. You must provide a value for the `cloud` variable for each run (using the -e option) as a default is not currently provided. If you want to run these tests against devstack, it is easiest to use the tox target. This assumes you have a devstack-admin cloud defined in your clouds.yaml file that points to devstack. 
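A minimal clouds.yaml sketch for that setup might look like this (illustrative only -- the auth URL and credentials are placeholders, not values shipped with these tests):

  clouds:
    devstack-admin:
      auth:
        auth_url: http://203.0.113.10/identity
        username: admin
        password: secret
        project_name: admin
      region_name: RegionOne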
Some examples of using tox: tox -e ansible tox -e ansible keypair security_group If you want to run these tests directly, or against different clouds, then you'll need to use the ansible-playbook command that comes with the Ansible distribution and feed it the run.yml playbook. Some examples: # Run all module tests against a provider ansible-playbook run.yml -e "cloud=hp" # Run only the keypair and security_group tests ansible-playbook run.yml -e "cloud=hp" --tags "keypair,security_group" # Run all tests except security_group ansible-playbook run.yml -e "cloud=hp" --skip-tags "security_group" shade-1.31.0/shade/tests/ansible/roles/0000775000175000017500000000000013440330010017655 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/image/0000775000175000017500000000000013440330010020737 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/image/tasks/0000775000175000017500000000000013440330010022064 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/image/tasks/main.yml0000666000175000017500000000215113440327640023553 0ustar zuulzuul00000000000000--- - name: Create a test image file shell: mktemp register: tmp_file - name: Fill test image file to 1MB shell: truncate -s 1048576 {{ tmp_file.stdout }} - name: Create raw image (defaults) os_image: cloud: "{{ cloud }}" state: present name: "{{ image_name }}" filename: "{{ tmp_file.stdout }}" disk_format: raw register: image - debug: var=image - name: Delete raw image (defaults) os_image: cloud: "{{ cloud }}" state: absent name: "{{ image_name }}" - name: Create raw image (complex) os_image: cloud: "{{ cloud }}" state: present name: "{{ image_name }}" filename: "{{ tmp_file.stdout }}" disk_format: raw is_public: True min_disk: 10 min_ram: 1024 kernel: cirros-vmlinuz ramdisk: cirros-initrd properties: cpu_arch: x86_64 distro: ubuntu register: image - debug: var=image - name: Delete raw image (complex) os_image: cloud: "{{ cloud }}" state: absent name: "{{ image_name }}" - name: Delete test image file file: name: "{{ tmp_file.stdout }}" state: absent shade-1.31.0/shade/tests/ansible/roles/image/vars/0000775000175000017500000000000013440330010021712 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/image/vars/main.yml0000666000175000017500000000003213440327640023375 0ustar zuulzuul00000000000000image_name: ansible_image shade-1.31.0/shade/tests/ansible/roles/user/0000775000175000017500000000000013440330010020633 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/user/tasks/0000775000175000017500000000000013440330010021760 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/user/tasks/main.yml0000666000175000017500000000107013440327640023446 0ustar zuulzuul00000000000000--- - name: Create user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: ansible.user@nowhere.net domain: default default_project: demo register: user - debug: var=user - name: Update user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: updated.ansible.user@nowhere.net register: updateduser - debug: var=updateduser - name: Delete user os_user: cloud: "{{ cloud }}" state: absent name: ansible_user shade-1.31.0/shade/tests/ansible/roles/security_group/0000775000175000017500000000000013440330010022740 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/security_group/tasks/0000775000175000017500000000000013440330010024065 5ustar 
zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/security_group/tasks/main.yml0000666000175000017500000000570613440327640025565 0ustar zuulzuul00000000000000--- - name: Create security group os_security_group: cloud: "{{ cloud }}" name: "{{ secgroup_name }}" state: present description: Created from Ansible playbook - name: Create empty ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: icmp remote_ip_prefix: 0.0.0.0/0 - name: Create -1 ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: icmp port_range_min: -1 port_range_max: -1 remote_ip_prefix: 0.0.0.0/0 - name: Create empty TCP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp remote_ip_prefix: 0.0.0.0/0 - name: Create empty UDP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: udp remote_ip_prefix: 0.0.0.0/0 - name: Create HTTP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp port_range_min: 80 port_range_max: 80 remote_ip_prefix: 0.0.0.0/0 - name: Create egress rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: present protocol: tcp port_range_min: 30000 port_range_max: 30001 remote_ip_prefix: 0.0.0.0/0 direction: egress - name: Delete empty ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: icmp remote_ip_prefix: 0.0.0.0/0 - name: Delete -1 ICMP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: icmp port_range_min: -1 port_range_max: -1 remote_ip_prefix: 0.0.0.0/0 - name: Delete empty TCP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp remote_ip_prefix: 0.0.0.0/0 - name: Delete empty UDP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: udp remote_ip_prefix: 0.0.0.0/0 - name: Delete HTTP rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp port_range_min: 80 port_range_max: 80 remote_ip_prefix: 0.0.0.0/0 - name: Delete egress rule os_security_group_rule: cloud: "{{ cloud }}" security_group: "{{ secgroup_name }}" state: absent protocol: tcp port_range_min: 30000 port_range_max: 30001 remote_ip_prefix: 0.0.0.0/0 direction: egress - name: Delete security group os_security_group: cloud: "{{ cloud }}" name: "{{ secgroup_name }}" state: absent shade-1.31.0/shade/tests/ansible/roles/security_group/vars/0000775000175000017500000000000013440330010023713 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/security_group/vars/main.yml0000666000175000017500000000003613440327640025402 0ustar zuulzuul00000000000000secgroup_name: shade_secgroup shade-1.31.0/shade/tests/ansible/roles/auth/0000775000175000017500000000000013440330010020616 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/auth/tasks/0000775000175000017500000000000013440330010021743 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/auth/tasks/main.yml0000666000175000017500000000014513440327640023433 0ustar zuulzuul00000000000000--- - name: Authenticate to the cloud os_auth: cloud={{ cloud }} - debug: var=service_catalog 
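# Illustrative invocation (not part of the playbook itself; the cloud
# name is a placeholder): this role alone can be exercised with
#   ansible-playbook run.yml -e "cloud=devstack-admin" --tags "auth"
# after which the debug task above dumps the service_catalog fact that
# os_auth sets on success.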
shade-1.31.0/shade/tests/ansible/roles/port/0000775000175000017500000000000013440330010020641 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/port/tasks/0000775000175000017500000000000013440330010021766 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/port/tasks/main.yml0000666000175000017500000000426013440327640023460 0ustar zuulzuul00000000000000--- - name: Create network os_network: cloud: "{{ cloud }}" state: present name: "{{ network_name }}" external: True - name: Create subnet os_subnet: cloud: "{{ cloud }}" state: present name: "{{ subnet_name }}" network_name: "{{ network_name }}" cidr: 10.5.5.0/24 - name: Create port (no security group) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" no_security_groups: True fixed_ips: - ip_address: 10.5.5.69 register: port - debug: var=port - name: Delete port (no security group) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Create security group os_security_group: cloud: "{{ cloud }}" state: present name: "{{ secgroup_name }}" description: Test group - name: Create port (with security group) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" fixed_ips: - ip_address: 10.5.5.69 security_groups: - "{{ secgroup_name }}" register: port - debug: var=port - name: Delete port (with security group) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Create port (with allowed_address_pairs and extra_dhcp_opts) os_port: cloud: "{{ cloud }}" state: present name: "{{ port_name }}" network: "{{ network_name }}" no_security_groups: True allowed_address_pairs: - ip_address: 10.6.7.0/24 extra_dhcp_opts: - opt_name: "bootfile-name" opt_value: "testfile.1" register: port - debug: var=port - name: Delete port (with allowed_address_pairs and extra_dhcp_opts) os_port: cloud: "{{ cloud }}" state: absent name: "{{ port_name }}" - name: Delete security group os_security_group: cloud: "{{ cloud }}" state: absent name: "{{ secgroup_name }}" - name: Delete subnet os_subnet: cloud: "{{ cloud }}" state: absent name: "{{ subnet_name }}" - name: Delete network os_network: cloud: "{{ cloud }}" state: absent name: "{{ network_name }}" shade-1.31.0/shade/tests/ansible/roles/port/vars/0000775000175000017500000000000013440330010021614 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/port/vars/main.yml0000666000175000017500000000020113440327640023275 0ustar zuulzuul00000000000000network_name: ansible_port_network subnet_name: ansible_port_subnet port_name: ansible_port secgroup_name: ansible_port_secgroup shade-1.31.0/shade/tests/ansible/roles/keystone_domain/0000775000175000017500000000000013440330010023045 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keystone_domain/tasks/0000775000175000017500000000000013440330010024172 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keystone_domain/tasks/main.yml0000666000175000017500000000070413440327640025663 0ustar zuulzuul00000000000000--- - name: Create keystone domain os_keystone_domain: cloud: "{{ cloud }}" state: present name: "{{ domain_name }}" description: "test description" - name: Update keystone domain os_keystone_domain: cloud: "{{ cloud }}" name: "{{ domain_name }}" description: "updated description" - name: Delete keystone domain os_keystone_domain: cloud: "{{ cloud }}" state: absent name: "{{ domain_name }}" 
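# Illustrative idempotence check (not part of the shipped role; the
# registered variable name is hypothetical): re-running the update and
# asserting nothing changed would look like
#   - name: Update keystone domain again
#     os_keystone_domain:
#       cloud: "{{ cloud }}"
#       name: "{{ domain_name }}"
#       description: "updated description"
#     register: redo
#   - assert:
#       that: not redo.changed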
shade-1.31.0/shade/tests/ansible/roles/keystone_domain/vars/0000775000175000017500000000000013440330010024020 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keystone_domain/vars/main.yml0000666000175000017500000000003413440327640025505 0ustar zuulzuul00000000000000domain_name: ansible_domain shade-1.31.0/shade/tests/ansible/roles/group/0000775000175000017500000000000013440330010021011 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/group/tasks/0000775000175000017500000000000013440330010022136 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/group/tasks/main.yml0000666000175000017500000000056413440327640023633 0ustar zuulzuul00000000000000--- - name: Create group os_group: cloud: "{{ cloud }}" state: present name: "{{ group_name }}" - name: Update group os_group: cloud: "{{ cloud }}" state: present name: "{{ group_name }}" description: "updated description" - name: Delete group os_group: cloud: "{{ cloud }}" state: absent name: "{{ group_name }}" shade-1.31.0/shade/tests/ansible/roles/group/vars/0000775000175000017500000000000013440330010021764 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/group/vars/main.yml0000666000175000017500000000003213440327640023447 0ustar zuulzuul00000000000000group_name: ansible_group shade-1.31.0/shade/tests/ansible/roles/server/0000775000175000017500000000000013440330010021163 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/server/tasks/0000775000175000017500000000000013440330010022310 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/server/tasks/main.yml0000666000175000017500000000370413440327640024004 0ustar zuulzuul00000000000000--- - name: Create server with meta as CSV os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" auto_floating_ip: false meta: "key1=value1,key2=value2" wait: true register: server - debug: var=server - name: Delete server with meta as CSV os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server with meta as dict os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" auto_floating_ip: false network: "{{ server_network }}" meta: key1: value1 key2: value2 wait: true register: server - debug: var=server - name: Delete server with meta as dict os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server (FIP from pool/network) os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" floating_ip_pools: - public wait: true register: server - debug: var=server - name: Delete server (FIP from pool/network) os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true - name: Create server from volume os_server: cloud: "{{ cloud }}" state: present name: "{{ server_name }}" image: "{{ image }}" flavor: "{{ flavor }}" network: "{{ server_network }}" auto_floating_ip: false boot_from_volume: true volume_size: 5 terminate_volume: true wait: true register: server - debug: var=server - name: Delete server with volume os_server: cloud: "{{ cloud }}" state: absent name: "{{ server_name }}" wait: true shade-1.31.0/shade/tests/ansible/roles/server/vars/0000775000175000017500000000000013440330010022136 5ustar 
zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/server/vars/main.yaml0000666000175000017500000000010413440327640023762 0ustar zuulzuul00000000000000server_network: private server_name: ansible_server flavor: m1.tiny shade-1.31.0/shade/tests/ansible/roles/nova_flavor/0000775000175000017500000000000013440330010022171 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/nova_flavor/tasks/0000775000175000017500000000000013440330010023316 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/nova_flavor/tasks/main.yml0000666000175000017500000000204013440327640025002 0ustar zuulzuul00000000000000--- - name: Create public flavor os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_public_flavor is_public: True ram: 1024 vcpus: 1 disk: 10 ephemeral: 10 swap: 1 flavorid: 12345 - name: Delete public flavor os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_public_flavor - name: Create private flavor os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_private_flavor is_public: False ram: 1024 vcpus: 1 disk: 10 ephemeral: 10 swap: 1 flavorid: 12345 - name: Delete private flavor os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_private_flavor - name: Create flavor (defaults) os_nova_flavor: cloud: "{{ cloud }}" state: present name: ansible_defaults_flavor ram: 1024 vcpus: 1 disk: 10 - name: Delete flavor (defaults) os_nova_flavor: cloud: "{{ cloud }}" state: absent name: ansible_defaults_flavor shade-1.31.0/shade/tests/ansible/roles/user_group/0000775000175000017500000000000013440330010022047 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/user_group/tasks/0000775000175000017500000000000013440330010023174 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/user_group/tasks/main.yml0000666000175000017500000000116513440327640024667 0ustar zuulzuul00000000000000--- - name: Create user os_user: cloud: "{{ cloud }}" state: present name: ansible_user password: secret email: ansible.user@nowhere.net domain: default default_project: demo register: user - name: Assign user to nonadmins group os_user_group: cloud: "{{ cloud }}" state: present user: ansible_user group: nonadmins - name: Remove user from nonadmins group os_user_group: cloud: "{{ cloud }}" state: absent user: ansible_user group: nonadmins - name: Delete user os_user: cloud: "{{ cloud }}" state: absent name: ansible_user shade-1.31.0/shade/tests/ansible/roles/object/0000775000175000017500000000000013440330010021123 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/object/tasks/0000775000175000017500000000000013440330010022250 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/object/tasks/main.yml0000666000175000017500000000136513440327640023745 0ustar zuulzuul00000000000000--- - name: Create a test object file shell: mktemp register: tmp_file - name: Create container os_object: cloud: "{{ cloud }}" state: present container: ansible_container container_access: private - name: Put object os_object: cloud: "{{ cloud }}" state: present name: ansible_object filename: "{{ tmp_file.stdout }}" container: ansible_container - name: Delete object os_object: cloud: "{{ cloud }}" state: absent name: ansible_object container: ansible_container - name: Delete container os_object: cloud: "{{ cloud }}" state: absent container: ansible_container - name: Delete test object file file: name: "{{ tmp_file.stdout }}" state: absent 
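# Illustrative variation (not part of the test run; the registered
# variable name is hypothetical): registering the os_object result
# lets you inspect the uploaded object, e.g.
#   - name: Put object and keep the result
#     os_object:
#       cloud: "{{ cloud }}"
#       state: present
#       name: ansible_object
#       filename: "{{ tmp_file.stdout }}"
#       container: ansible_container
#     register: obj
#   - debug: var=obj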
shade-1.31.0/shade/tests/ansible/roles/volume/0000775000175000017500000000000013440330010021164 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/volume/tasks/0000775000175000017500000000000013440330010022311 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/volume/tasks/main.yml0000666000175000017500000000047713440327640024011 0ustar zuulzuul00000000000000--- - name: Create volume os_volume: cloud: "{{ cloud }}" state: present size: 1 display_name: ansible_volume display_description: Test volume register: vol - debug: var=vol - name: Delete volume os_volume: cloud: "{{ cloud }}" state: absent display_name: ansible_volume shade-1.31.0/shade/tests/ansible/roles/client_config/0000775000175000017500000000000013440330010022460 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/client_config/tasks/0000775000175000017500000000000013440330010023605 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/client_config/tasks/main.yml0000666000175000017500000000023313440327640025273 0ustar zuulzuul00000000000000--- - name: List all profiles os_client_config: register: list # WARNING: This will output sensitive authentication information!!!! - debug: var=list shade-1.31.0/shade/tests/ansible/roles/keystone_role/0000775000175000017500000000000013440330010022537 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keystone_role/tasks/0000775000175000017500000000000013440330010023664 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keystone_role/tasks/main.yml0000666000175000017500000000037413440327640025360 0ustar zuulzuul00000000000000--- - name: Create keystone role os_keystone_role: cloud: "{{ cloud }}" state: present name: "{{ role_name }}" - name: Delete keystone role os_keystone_role: cloud: "{{ cloud }}" state: absent name: "{{ role_name }}" shade-1.31.0/shade/tests/ansible/roles/keystone_role/vars/0000775000175000017500000000000013440330010023512 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keystone_role/vars/main.yml0000666000175000017500000000004113440327640025175 0ustar zuulzuul00000000000000role_name: ansible_keystone_role shade-1.31.0/shade/tests/ansible/roles/subnet/0000775000175000017500000000000013440330010021155 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/subnet/tasks/0000775000175000017500000000000013440330010022302 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/subnet/tasks/main.yml0000666000175000017500000000176013440327640023776 0ustar zuulzuul00000000000000--- - name: Create network {{ network_name }} os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: present - name: Create subnet {{ subnet_name }} on network {{ network_name }} os_subnet: cloud: "{{ cloud }}" network_name: "{{ network_name }}" name: "{{ subnet_name }}" state: present enable_dhcp: false dns_nameservers: - 8.8.8.7 - 8.8.8.8 cidr: 192.168.0.0/24 gateway_ip: 192.168.0.1 allocation_pool_start: 192.168.0.2 allocation_pool_end: 192.168.0.254 - name: Update subnet os_subnet: cloud: "{{ cloud }}" network_name: "{{ network_name }}" name: "{{ subnet_name }}" state: present dns_nameservers: - 8.8.8.7 cidr: 192.168.0.0/24 - name: Delete subnet {{ subnet_name }} os_subnet: cloud: "{{ cloud }}" name: "{{ subnet_name }}" state: absent - name: Delete network {{ network_name }} os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: absent shade-1.31.0/shade/tests/ansible/roles/subnet/vars/0000775000175000017500000000000013440330010022130 
5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/subnet/vars/main.yml0000666000175000017500000000003213440327640023613 0ustar zuulzuul00000000000000subnet_name: shade_subnet shade-1.31.0/shade/tests/ansible/roles/keypair/0000775000175000017500000000000013440330010021321 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keypair/tasks/0000775000175000017500000000000013440330010022446 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keypair/tasks/main.yml0000666000175000017500000000240613440327640024140 0ustar zuulzuul00000000000000--- - name: Create keypair (non-existing) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present - name: Delete keypair (non-existing) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Generate test key file user: name: "{{ ansible_env.USER }}" generate_ssh_key: yes ssh_key_file: .ssh/shade_id_rsa - name: Create keypair (file) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present public_key_file: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa.pub" - name: Delete keypair (file) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Create keypair (key) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: present public_key: "{{ lookup('file', '~/.ssh/shade_id_rsa.pub') }}" - name: Delete keypair (key) os_keypair: cloud: "{{ cloud }}" name: "{{ keypair_name }}" state: absent - name: Delete test key pub file file: name: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa.pub" state: absent - name: Delete test key pvt file file: name: "{{ ansible_env.HOME }}/.ssh/shade_id_rsa" state: absent shade-1.31.0/shade/tests/ansible/roles/keypair/vars/0000775000175000017500000000000013440330010022274 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/keypair/vars/main.yml0000666000175000017500000000003413440327640023761 0ustar zuulzuul00000000000000keypair_name: shade_keypair shade-1.31.0/shade/tests/ansible/roles/router/0000775000175000017500000000000013440330010021175 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/router/tasks/0000775000175000017500000000000013440330010022322 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/router/tasks/main.yml0000666000175000017500000000307113440327640024013 0ustar zuulzuul00000000000000--- - name: Create external network os_network: cloud: "{{ cloud }}" state: present name: "{{ external_network_name }}" external: true - name: Create internal network os_network: cloud: "{{ cloud }}" state: present name: "{{ network_name }}" external: false - name: Create subnet1 os_subnet: cloud: "{{ cloud }}" state: present network_name: "{{ external_network_name }}" name: shade_subnet1 cidr: 10.6.6.0/24 - name: Create subnet2 os_subnet: cloud: "{{ cloud }}" state: present network_name: "{{ network_name }}" name: shade_subnet2 cidr: 10.7.7.0/24 - name: Create router os_router: cloud: "{{ cloud }}" state: present name: "{{ router_name }}" network: "{{ external_network_name }}" - name: Update router os_router: cloud: "{{ cloud }}" state: present name: "{{ router_name }}" network: "{{ external_network_name }}" interfaces: - shade_subnet2 - name: Delete router os_router: cloud: "{{ cloud }}" state: absent name: "{{ router_name }}" - name: Delete subnet1 os_subnet: cloud: "{{ cloud }}" state: absent name: shade_subnet1 - name: Delete subnet2 os_subnet: cloud: "{{ cloud }}" state: absent name: shade_subnet2 - name: Delete internal network os_network: 
cloud: "{{ cloud }}" state: absent name: "{{ network_name }}" - name: Delete external network os_network: cloud: "{{ cloud }}" state: absent name: "{{ external_network_name }}" shade-1.31.0/shade/tests/ansible/roles/router/vars/0000775000175000017500000000000013440330010022150 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/router/vars/main.yml0000666000175000017500000000011013440327640023630 0ustar zuulzuul00000000000000external_network_name: ansible_external_net router_name: ansible_router shade-1.31.0/shade/tests/ansible/roles/network/0000775000175000017500000000000013440330010021346 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/network/tasks/0000775000175000017500000000000013440330010022473 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/network/tasks/main.yml0000666000175000017500000000046613440327640024171 0ustar zuulzuul00000000000000--- - name: Create network os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: present shared: "{{ network_shared }}" external: "{{ network_external }}" - name: Delete network os_network: cloud: "{{ cloud }}" name: "{{ network_name }}" state: absent shade-1.31.0/shade/tests/ansible/roles/network/vars/0000775000175000017500000000000013440330010022321 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/ansible/roles/network/vars/main.yml0000666000175000017500000000011213440327640024003 0ustar zuulzuul00000000000000network_name: shade_network network_shared: false network_external: false shade-1.31.0/shade/tests/unit/0000775000175000017500000000000013440330010016073 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/unit/test_rebuild_server.py0000666000175000017500000002207513440327640022547 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_rebuild_server ---------------------------------- Tests for the `rebuild_server` command. """ import uuid from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestRebuildServer(base.RequestsMockTestCase): def setUp(self): super(TestRebuildServer, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) self.rebuild_server = fakes.make_fake_server( self.server_id, self.server_name, 'REBUILD') self.error_server = fakes.make_fake_server( self.server_id, self.server_name, 'ERROR') def test_rebuild_server_rebuild_exception(self): """ Test that an exception in the rebuild raises an exception in rebuild_server. 
""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), status_code=400, validate=dict( json={ 'rebuild': { 'imageRef': 'a', 'adminPass': 'b'}})), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.rebuild_server, self.fake_server['id'], "a", "b") self.assert_calls() def test_rebuild_server_server_error(self): """ Test that a server error while waiting for the server to rebuild raises an exception in rebuild_server. """ self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.error_server]}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.rebuild_server, self.fake_server['id'], "a", wait=True) self.assert_calls() def test_rebuild_server_timeout(self): """ Test that a timeout while waiting for the server to rebuild raises an exception in rebuild_server. """ self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.rebuild_server]}), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.rebuild_server, self.fake_server['id'], "a", wait=True, timeout=0.001) self.assert_calls(do_count=False) def test_rebuild_server_no_wait(self): """ Test that rebuild_server with no wait and no exception in the rebuild call returns the server instance. 
""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( self.rebuild_server['status'], self.cloud.rebuild_server(self.fake_server['id'], "a")['status']) self.assert_calls() def test_rebuild_server_with_admin_pass_no_wait(self): """ Test that a server with an admin_pass passed returns the password """ password = self.getUniqueString('password') rebuild_server = self.rebuild_server.copy() rebuild_server['adminPass'] = password self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a', 'adminPass': password}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( password, self.cloud.rebuild_server( self.fake_server['id'], 'a', admin_pass=password)['adminPass']) self.assert_calls() def test_rebuild_server_with_admin_pass_wait(self): """ Test that a server with an admin_pass passed returns the password """ password = self.getUniqueString('password') rebuild_server = self.rebuild_server.copy() rebuild_server['adminPass'] = password self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a', 'adminPass': password}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.rebuild_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( password, self.cloud.rebuild_server( self.fake_server['id'], 'a', admin_pass=password, wait=True)['adminPass']) self.assert_calls() def test_rebuild_server_wait(self): """ Test that rebuild_server with a wait returns the server instance when its status changes to "ACTIVE". """ self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id, 'action']), json={'server': self.rebuild_server}, validate=dict( json={ 'rebuild': { 'imageRef': 'a'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.rebuild_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( 'ACTIVE', self.cloud.rebuild_server( self.fake_server['id'], 'a', wait=True)['status']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_floating_ip_common.py0000666000175000017500000001771713440327640023405 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip_common ---------------------------------- Tests floating IP resource methods for Neutron and Nova-network. """ from mock import patch from shade import meta from shade import OpenStackCloud from shade.tests import fakes from shade.tests.unit import base class TestFloatingIP(base.RequestsMockTestCase): @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_attach_ip_to_server') @patch.object(OpenStackCloud, 'available_floating_ip') def test_add_auto_ip( self, mock_available_floating_ip, mock_attach_ip_to_server, mock_get_floating_ip): server_dict = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={} ) floating_ip_dict = { "id": "this-is-a-floating-ip-id", "fixed_ip_address": None, "internal_network": None, "floating_ip_address": "203.0.113.29", "network": "this-is-a-net-or-pool-id", "attached": False, "status": "ACTIVE" } mock_available_floating_ip.return_value = floating_ip_dict self.cloud.add_auto_ip(server=server_dict) mock_attach_ip_to_server.assert_called_with( timeout=60, wait=False, server=server_dict, floating_ip=floating_ip_dict, skip_attach=False) @patch.object(OpenStackCloud, '_add_ip_from_pool') def test_add_ips_to_server_pool(self, mock_add_ip_from_pool): server_dict = fakes.make_fake_server( server_id='romeo', name='test-server', status="ACTIVE", addresses={}) pool = 'nova' self.cloud.add_ips_to_server(server_dict, ip_pool=pool) mock_add_ip_from_pool.assert_called_with( server_dict, pool, reuse=True, wait=False, timeout=60, fixed_address=None, nat_destination=None) @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_ipv6_only( self, mock_add_auto_ip, mock_get_floating_ip, mock_has_service): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True mock_has_service.return_value = False server = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42', u'OS-EXT-IPS:type': u'fixed', 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } ) server_dict = meta.add_server_interfaces(self.cloud, server) new_server = self.cloud.add_ips_to_server(server=server_dict) mock_get_floating_ip.assert_not_called() mock_add_auto_ip.assert_not_called() self.assertEqual( new_server['interface_ip'], '2001:4800:7819:103:be76:4eff:fe05:8525') self.assertEqual(new_server['private_v4'], '10.223.160.141') self.assertEqual(new_server['public_v4'], '') self.assertEqual( new_server['public_v6'], '2001:4800:7819:103:be76:4eff:fe05:8525') @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_rackspace( self, mock_add_auto_ip, mock_get_floating_ip, mock_has_service): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True 
mock_has_service.return_value = False server = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } ) server_dict = meta.add_server_interfaces(self.cloud, server) new_server = self.cloud.add_ips_to_server(server=server_dict) mock_get_floating_ip.assert_not_called() mock_add_auto_ip.assert_not_called() self.assertEqual( new_server['interface_ip'], '2001:4800:7819:103:be76:4eff:fe05:8525') @patch.object(OpenStackCloud, 'has_service') @patch.object(OpenStackCloud, 'get_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_rackspace_local_ipv4( self, mock_add_auto_ip, mock_get_floating_ip, mock_has_service): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = False mock_has_service.return_value = False server = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } ) server_dict = meta.add_server_interfaces(self.cloud, server) new_server = self.cloud.add_ips_to_server(server=server_dict) mock_get_floating_ip.assert_not_called() mock_add_auto_ip.assert_not_called() self.assertEqual(new_server['interface_ip'], '104.130.246.91') @patch.object(OpenStackCloud, 'add_ip_list') def test_add_ips_to_server_ip_list(self, mock_add_ip_list): server_dict = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={}) ips = ['203.0.113.29', '172.24.4.229'] self.cloud.add_ips_to_server(server_dict, ips=ips) mock_add_ip_list.assert_called_with( server_dict, ips, wait=False, timeout=60, fixed_address=None) @patch.object(OpenStackCloud, '_needs_floating_ip') @patch.object(OpenStackCloud, '_add_auto_ip') def test_add_ips_to_server_auto_ip( self, mock_add_auto_ip, mock_needs_floating_ip): server_dict = fakes.make_fake_server( server_id='server-id', name='test-server', status="ACTIVE", addresses={}) # TODO(mordred) REMOVE THIS MOCK WHEN THE NEXT PATCH LANDS # SERIOUSLY THIS TIME. NEXT PATCH - WHICH SHOULD ADD MOCKS FOR # list_ports AND list_networks AND list_subnets. BUT THAT WOULD # BE NOT ACTUALLY RELATED TO THIS PATCH. SO DO IT NEXT PATCH mock_needs_floating_ip.return_value = True self.cloud.add_ips_to_server(server_dict) mock_add_auto_ip.assert_called_with( server_dict, wait=False, timeout=60, reuse=True) shade-1.31.0/shade/tests/unit/test_inventory.py0000666000175000017500000001262613440327640021571 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import os_client_config from os_client_config import exceptions as occ_exc from shade import exc from shade import inventory from shade.tests import fakes from shade.tests.unit import base class TestInventory(base.RequestsMockTestCase): def setUp(self): super(TestInventory, self).setUp() @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test__init(self, mock_cloud, mock_config): mock_config.return_value.get_all_clouds.return_value = [{}] inv = inventory.OpenStackInventory() mock_config.assert_called_once_with( config_files=os_client_config.config.CONFIG_FILES ) self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) self.assertTrue(mock_config.return_value.get_all_clouds.called) @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test__init_one_cloud(self, mock_cloud, mock_config): mock_config.return_value.get_one_cloud.return_value = [{}] inv = inventory.OpenStackInventory(cloud='supercloud') mock_config.assert_called_once_with( config_files=os_client_config.config.CONFIG_FILES ) self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) self.assertFalse(mock_config.return_value.get_all_clouds.called) mock_config.return_value.get_one_cloud.assert_called_once_with( 'supercloud') @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test__raise_exception_on_no_cloud(self, mock_cloud, mock_config): """ Test that when os-client-config can't find a named cloud, a shade exception is emitted. """ mock_config.return_value.get_one_cloud.side_effect = ( occ_exc.OpenStackConfigException() ) self.assertRaises(exc.OpenStackCloudException, inventory.OpenStackInventory, cloud='supercloud') mock_config.return_value.get_one_cloud.assert_called_once_with( 'supercloud') @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test_list_hosts(self, mock_cloud, mock_config): mock_config.return_value.get_all_clouds.return_value = [{}] inv = inventory.OpenStackInventory() server = dict(id='server_id', name='server_name') self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) inv.clouds[0].list_servers.return_value = [server] inv.clouds[0].get_openstack_vars.return_value = server ret = inv.list_hosts() inv.clouds[0].list_servers.assert_called_once_with(detailed=True) self.assertFalse(inv.clouds[0].get_openstack_vars.called) self.assertEqual([server], ret) @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test_list_hosts_no_detail(self, mock_cloud, mock_config): mock_config.return_value.get_all_clouds.return_value = [{}] inv = inventory.OpenStackInventory() server = self.cloud._normalize_server( fakes.make_fake_server( '1234', 'test', 'ACTIVE', addresses={})) self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) inv.clouds[0].list_servers.return_value = [server] inv.list_hosts(expand=False) inv.clouds[0].list_servers.assert_called_once_with(detailed=False) self.assertFalse(inv.clouds[0].get_openstack_vars.called) @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test_search_hosts(self, mock_cloud, mock_config): mock_config.return_value.get_all_clouds.return_value = [{}] inv = inventory.OpenStackInventory() server = dict(id='server_id', name='server_name') self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) 
inv.clouds[0].list_servers.return_value = [server] inv.clouds[0].get_openstack_vars.return_value = server ret = inv.search_hosts('server_id') self.assertEqual([server], ret) @mock.patch("os_client_config.config.OpenStackConfig") @mock.patch("shade.OpenStackCloud") def test_get_host(self, mock_cloud, mock_config): mock_config.return_value.get_all_clouds.return_value = [{}] inv = inventory.OpenStackInventory() server = dict(id='server_id', name='server_name') self.assertIsInstance(inv.clouds, list) self.assertEqual(1, len(inv.clouds)) inv.clouds[0].list_servers.return_value = [server] inv.clouds[0].get_openstack_vars.return_value = server ret = inv.get_host('server_id') self.assertEqual(server, ret) shade-1.31.0/shade/tests/unit/test_image_snapshot.py0000666000175000017500000000774313440327640022541 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import uuid from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestImageSnapshot(base.RequestsMockTestCase): def setUp(self): super(TestImageSnapshot, self).setUp() self.server_id = str(uuid.uuid4()) self.image_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_create_image_snapshot_wait_until_active_never_active(self): snapshot_name = 'test-snapshot' fake_image = fakes.make_fake_image(self.image_id, status='pending') self.register_uris([ dict( method='POST', uri='{endpoint}/servers/{server_id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, server_id=self.server_id), headers=dict( Location='{endpoint}/images/{image_id}'.format( endpoint='https://images.example.com', image_id=self.image_id)), validate=dict( json={ "createImage": { "name": snapshot_name, "metadata": {}, }})), self.get_glance_discovery_mock_dict(), dict( method='GET', uri='https://image.example.com/v2/images', json=dict(images=[fake_image])), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.create_image_snapshot, snapshot_name, dict(id=self.server_id), wait=True, timeout=0.01) # After the fifth call, we just keep polling get images for status. # Due to mocking sleep, we have no clue how many times we'll call it. 
        self.assert_calls(stop_after=5, do_count=False)

    def test_create_image_snapshot_wait_active(self):
        snapshot_name = 'test-snapshot'
        pending_image = fakes.make_fake_image(self.image_id, status='pending')
        fake_image = fakes.make_fake_image(self.image_id)
        self.register_uris([
            dict(
                method='POST',
                uri='{endpoint}/servers/{server_id}/action'.format(
                    endpoint=fakes.COMPUTE_ENDPOINT,
                    server_id=self.server_id),
                headers=dict(
                    Location='{endpoint}/images/{image_id}'.format(
                        endpoint='https://images.example.com',
                        image_id=self.image_id)),
                validate=dict(
                    json={
                        "createImage": {
                            "name": snapshot_name,
                            "metadata": {},
                        }})),
            self.get_glance_discovery_mock_dict(),
            dict(
                method='GET',
                uri='https://image.example.com/v2/images',
                json=dict(images=[pending_image])),
            dict(
                method='GET',
                uri='https://image.example.com/v2/images',
                json=dict(images=[fake_image])),
        ])

        image = self.cloud.create_image_snapshot(
            'test-snapshot', dict(id=self.server_id), wait=True, timeout=2)
        self.assertEqual(image['id'], self.image_id)

        self.assert_calls()
shade-1.31.0/shade/tests/unit/test_flavors.py0000666000175000017500000002421113440327640021201 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import shade
from shade.tests import fakes
from shade.tests.unit import base


class TestFlavors(base.RequestsMockTestCase):

    def test_create_flavor(self):
        self.register_uris([
            dict(method='POST',
                 uri='{endpoint}/flavors'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavor': fakes.FAKE_FLAVOR},
                 validate=dict(
                     json={
                         'flavor': {
                             "name": "vanilla",
                             "ram": 65536,
                             "vcpus": 24,
                             "swap": 0,
                             "os-flavor-access:is_public": True,
                             "rxtx_factor": 1.0,
                             "OS-FLV-EXT-DATA:ephemeral": 0,
                             "disk": 1600,
                             "id": None}}))])
        self.op_cloud.create_flavor(
            'vanilla', ram=65536, disk=1600, vcpus=24,
        )
        self.assert_calls()

    def test_delete_flavor(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
            dict(method='DELETE',
                 uri='{endpoint}/flavors/{id}'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=fakes.FLAVOR_ID))])
        self.assertTrue(self.op_cloud.delete_flavor('vanilla'))
        self.assert_calls()

    def test_delete_flavor_not_found(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST})])
        self.assertFalse(self.op_cloud.delete_flavor('invalid'))
        self.assert_calls()

    def test_delete_flavor_exception(self):
        self.register_uris([
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT),
                 json={'flavors': fakes.FAKE_FLAVOR_LIST}),
            dict(method='DELETE',
                 uri='{endpoint}/flavors/{id}'.format(
                     endpoint=fakes.COMPUTE_ENDPOINT, id=fakes.FLAVOR_ID),
                 status_code=503)])
        self.assertRaises(shade.OpenStackCloudException,
                          self.op_cloud.delete_flavor, 'vanilla')

    def test_list_flavors(self):
        uris_to_mock = [
            dict(method='GET',
                 uri='{endpoint}/flavors/detail?is_public=None'.format(
endpoint=fakes.COMPUTE_ENDPOINT), json={'flavors': fakes.FAKE_FLAVOR_LIST}), ] uris_to_mock.extend([ dict(method='GET', uri='{endpoint}/flavors/{id}/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']), json={'extra_specs': {}}) for flavor in fakes.FAKE_FLAVOR_LIST]) self.register_uris(uris_to_mock) flavors = self.cloud.list_flavors() # test that new flavor is created correctly found = False for flavor in flavors: if flavor['name'] == 'vanilla': found = True break self.assertTrue(found) needed_keys = {'name', 'ram', 'vcpus', 'id', 'is_public', 'disk'} if found: # check flavor content self.assertTrue(needed_keys.issubset(flavor.keys())) self.assert_calls() def test_get_flavor_by_ram(self): uris_to_mock = [ dict(method='GET', uri='{endpoint}/flavors/detail?is_public=None'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'flavors': fakes.FAKE_FLAVOR_LIST}), ] uris_to_mock.extend([ dict(method='GET', uri='{endpoint}/flavors/{id}/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']), json={'extra_specs': {}}) for flavor in fakes.FAKE_FLAVOR_LIST]) self.register_uris(uris_to_mock) flavor = self.cloud.get_flavor_by_ram(ram=250) self.assertEqual(fakes.STRAWBERRY_FLAVOR_ID, flavor['id']) def test_get_flavor_by_ram_and_include(self): uris_to_mock = [ dict(method='GET', uri='{endpoint}/flavors/detail?is_public=None'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'flavors': fakes.FAKE_FLAVOR_LIST}), ] uris_to_mock.extend([ dict(method='GET', uri='{endpoint}/flavors/{id}/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']), json={'extra_specs': {}}) for flavor in fakes.FAKE_FLAVOR_LIST]) self.register_uris(uris_to_mock) flavor = self.cloud.get_flavor_by_ram(ram=150, include='strawberry') self.assertEqual(fakes.STRAWBERRY_FLAVOR_ID, flavor['id']) def test_get_flavor_by_ram_not_found(self): self.register_uris([ dict(method='GET', uri='{endpoint}/flavors/detail?is_public=None'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'flavors': []})]) self.assertRaises( shade.OpenStackCloudException, self.cloud.get_flavor_by_ram, ram=100) def test_get_flavor_string_and_int(self): flavor_list_uri = '{endpoint}/flavors/detail?is_public=None'.format( endpoint=fakes.COMPUTE_ENDPOINT) flavor_resource_uri = '{endpoint}/flavors/1/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT) flavor_list_json = {'flavors': [fakes.make_fake_flavor( '1', 'vanilla')]} flavor_json = {'extra_specs': {}} self.register_uris([ dict(method='GET', uri=flavor_list_uri, json=flavor_list_json), dict(method='GET', uri=flavor_resource_uri, json=flavor_json), dict(method='GET', uri=flavor_list_uri, json=flavor_list_json), dict(method='GET', uri=flavor_resource_uri, json=flavor_json)]) flavor1 = self.cloud.get_flavor('1') self.assertEqual('1', flavor1['id']) flavor2 = self.cloud.get_flavor(1) self.assertEqual('1', flavor2['id']) def test_set_flavor_specs(self): extra_specs = dict(key1='value1') self.register_uris([ dict(method='POST', uri='{endpoint}/flavors/{id}/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=1), json=dict(extra_specs=extra_specs))]) self.op_cloud.set_flavor_specs(1, extra_specs) self.assert_calls() def test_unset_flavor_specs(self): keys = ['key1', 'key2'] self.register_uris([ dict(method='DELETE', uri='{endpoint}/flavors/{id}/os-extra_specs/{key}'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=1, key=key)) for key in keys]) self.op_cloud.unset_flavor_specs(1, keys) self.assert_calls() def test_add_flavor_access(self): self.register_uris([ 
dict(method='POST', uri='{endpoint}/flavors/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id='flavor_id'), json={ 'flavor_access': [{ 'flavor_id': 'flavor_id', 'tenant_id': 'tenant_id'}]}, validate=dict( json={'addTenantAccess': {'tenant': 'tenant_id'}}))]) self.op_cloud.add_flavor_access('flavor_id', 'tenant_id') self.assert_calls() def test_remove_flavor_access(self): self.register_uris([ dict(method='POST', uri='{endpoint}/flavors/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id='flavor_id'), json={'flavor_access': []}, validate=dict( json={'removeTenantAccess': {'tenant': 'tenant_id'}}))]) self.op_cloud.remove_flavor_access('flavor_id', 'tenant_id') self.assert_calls() def test_list_flavor_access(self): self.register_uris([ dict(method='GET', uri='{endpoint}/flavors/vanilla/os-flavor-access'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={ 'flavor_access': [ {'flavor_id': 'vanilla', 'tenant_id': 'tenant_id'}]}) ]) self.op_cloud.list_flavor_access('vanilla') self.assert_calls() def test_get_flavor_by_id(self): flavor_uri = '{endpoint}/flavors/1'.format( endpoint=fakes.COMPUTE_ENDPOINT) flavor_extra_uri = '{endpoint}/flavors/1/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT) flavor_json = {'flavor': fakes.make_fake_flavor('1', 'vanilla')} flavor_extra_json = {'extra_specs': {'name': 'test'}} self.register_uris([ dict(method='GET', uri=flavor_uri, json=flavor_json), dict(method='GET', uri=flavor_extra_uri, json=flavor_extra_json), ]) flavor1 = self.cloud.get_flavor_by_id('1') self.assertEqual('1', flavor1['id']) self.assertEqual({'name': 'test'}, flavor1.extra_specs) flavor2 = self.cloud.get_flavor_by_id('1', get_extra=False) self.assertEqual('1', flavor2['id']) self.assertEqual({}, flavor2.extra_specs) shade-1.31.0/shade/tests/unit/test_domains.py0000666000175000017500000002325713440327640021170 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import uuid import testtools from testtools import matchers import shade from shade.tests.unit import base class TestDomains(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource='domains', append=None, base_url_append='v3'): return super(TestDomains, self).get_mock_url( service_type=service_type, interface=interface, resource=resource, append=append, base_url_append=base_url_append) def test_list_domains(self): domain_data = self._get_domain_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'domains': [domain_data.json_response['domain']]})]) domains = self.op_cloud.list_domains() self.assertThat(len(domains), matchers.Equals(1)) self.assertThat(domains[0].name, matchers.Equals(domain_data.domain_name)) self.assertThat(domains[0].id, matchers.Equals(domain_data.domain_id)) self.assert_calls() def test_get_domain(self): domain_data = self._get_domain_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(append=[domain_data.domain_id]), status_code=200, json=domain_data.json_response)]) domain = self.op_cloud.get_domain(domain_id=domain_data.domain_id) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assert_calls() def test_get_domain_with_name_or_id(self): domain_data = self._get_domain_data() response = {'domains': [domain_data.json_response['domain']]} self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json=response), dict(method='GET', uri=self.get_mock_url(), status_code=200, json=response)]) domain = self.op_cloud.get_domain(name_or_id=domain_data.domain_id) domain_by_name = self.op_cloud.get_domain( name_or_id=domain_data.domain_name) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assertThat(domain_by_name.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain_by_name.name, matchers.Equals(domain_data.domain_name)) self.assert_calls() def test_create_domain(self): domain_data = self._get_domain_data(description=uuid.uuid4().hex, enabled=True) self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=domain_data.json_response, validate=dict(json=domain_data.json_request))]) domain = self.op_cloud.create_domain( domain_data.domain_name, domain_data.description) self.assertThat(domain.id, matchers.Equals(domain_data.domain_id)) self.assertThat(domain.name, matchers.Equals(domain_data.domain_name)) self.assertThat( domain.description, matchers.Equals(domain_data.description)) self.assert_calls() def test_create_domain_exception(self): domain_data = self._get_domain_data(domain_name='domain_name', enabled=True) with testtools.ExpectedException( shade.OpenStackCloudBadRequest, "Failed to create domain domain_name" ): self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=400, json=domain_data.json_response, validate=dict(json=domain_data.json_request))]) self.op_cloud.create_domain('domain_name') self.assert_calls() def test_delete_domain(self): domain_data = self._get_domain_data() new_resp = domain_data.json_response.copy() new_resp['domain']['enabled'] = False domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id]) self.register_uris([ dict(method='PATCH', uri=domain_resource_uri, status_code=200, json=new_resp, validate=dict(json={'domain': {'enabled': False}})), 
            dict(method='DELETE', uri=domain_resource_uri, status_code=204)])
        self.op_cloud.delete_domain(domain_data.domain_id)
        self.assert_calls()

    def test_delete_domain_name_or_id(self):
        domain_data = self._get_domain_data()
        new_resp = domain_data.json_response.copy()
        new_resp['domain']['enabled'] = False
        domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id])
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'domains': [domain_data.json_response['domain']]}),
            dict(method='PATCH',
                 uri=domain_resource_uri,
                 status_code=200,
                 json=new_resp,
                 validate=dict(json={'domain': {'enabled': False}})),
            dict(method='DELETE', uri=domain_resource_uri, status_code=204)])
        self.op_cloud.delete_domain(name_or_id=domain_data.domain_id)
        self.assert_calls()

    def test_delete_domain_exception(self):
        # NOTE(notmorgan): This test does not reflect the case where the
        # domain cannot be updated to be disabled; Shade raises that as a
        # failure to update the domain even though it is called via
        # delete_domain. This should be fixed in shade to catch a failure
        # on the PATCH, the subsequent GET, or the DELETE call(s).
        domain_data = self._get_domain_data()
        new_resp = domain_data.json_response.copy()
        new_resp['domain']['enabled'] = False
        domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id])
        self.register_uris([
            dict(method='PATCH',
                 uri=domain_resource_uri,
                 status_code=200,
                 json=new_resp,
                 validate=dict(json={'domain': {'enabled': False}})),
            dict(method='DELETE',
                 uri=domain_resource_uri,
                 status_code=404)])
        with testtools.ExpectedException(
            shade.OpenStackCloudURINotFound,
            "Failed to delete domain %s" % domain_data.domain_id
        ):
            self.op_cloud.delete_domain(domain_data.domain_id)
        self.assert_calls()

    def test_update_domain(self):
        domain_data = self._get_domain_data(
            description=self.getUniqueString('domainDesc'))
        domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id])
        self.register_uris([
            dict(method='PATCH',
                 uri=domain_resource_uri,
                 status_code=200,
                 json=domain_data.json_response,
                 validate=dict(json=domain_data.json_request))])
        domain = self.op_cloud.update_domain(
            domain_data.domain_id,
            name=domain_data.domain_name,
            description=domain_data.description)
        self.assertThat(domain.id, matchers.Equals(domain_data.domain_id))
        self.assertThat(domain.name, matchers.Equals(domain_data.domain_name))
        self.assertThat(
            domain.description, matchers.Equals(domain_data.description))
        self.assert_calls()

    def test_update_domain_name_or_id(self):
        domain_data = self._get_domain_data(
            description=self.getUniqueString('domainDesc'))
        domain_resource_uri = self.get_mock_url(append=[domain_data.domain_id])
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(),
                 status_code=200,
                 json={'domains': [domain_data.json_response['domain']]}),
            dict(method='PATCH',
                 uri=domain_resource_uri,
                 status_code=200,
                 json=domain_data.json_response,
                 validate=dict(json=domain_data.json_request))])
        domain = self.op_cloud.update_domain(
            name_or_id=domain_data.domain_id,
            name=domain_data.domain_name,
            description=domain_data.description)
        self.assertThat(domain.id, matchers.Equals(domain_data.domain_id))
        self.assertThat(domain.name, matchers.Equals(domain_data.domain_name))
        self.assertThat(
            domain.description, matchers.Equals(domain_data.description))
        self.assert_calls()

    def test_update_domain_exception(self):
        domain_data = self._get_domain_data(
            description=self.getUniqueString('domainDesc'))
        self.register_uris([
            dict(method='PATCH',
                 uri=self.get_mock_url(append=[domain_data.domain_id]),
                 status_code=409,
                 json=domain_data.json_response,
                 validate=dict(json={'domain': {'enabled': False}}))])
        with testtools.ExpectedException(
            shade.OpenStackCloudHTTPError,
            "Error in updating domain %s" % domain_data.domain_id
        ):
            self.op_cloud.delete_domain(domain_data.domain_id)
        self.assert_calls()
shade-1.31.0/shade/tests/unit/test_operator_noauth.py0000666000175000017500000000536513440327640022747 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import shade
from shade.tests.unit import base


class TestShadeOperatorNoAuth(base.RequestsMockTestCase):
    def setUp(self):
        """Setup Noauth OperatorCloud tests

        Setup the test to utilize no authentication and an endpoint
        URL in the auth data. This permits testing of the basic
        mechanism that enables Ironic noauth mode to be utilized with
        Shade.

        Uses base.RequestsMockTestCase instead of IronicTestCase because
        we need to do completely different things with discovery.
        """
        super(TestShadeOperatorNoAuth, self).setUp()
        # By clearing the URI registry, we remove all calls to a keystone
        # catalog or getting a token
        self._uri_registry.clear()
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     service_type='baremetal', base_url_append='v1',
                     resource='nodes'),
                 json={'nodes': []}),
        ])

    def test_ironic_noauth_none_auth_type(self):
        """Test noauth selection for Ironic in OperatorCloud

        The new way of doing this is with the keystoneauth none plugin.
        """
        # NOTE(TheJulia): When we are using the python-ironicclient
        # library, the library will automatically prepend the URI path
        # with 'v1'. As such, since we are overriding the endpoint,
        # we must explicitly do the same as we move away from the
        # client library.
        self.cloud_noauth = shade.operator_cloud(
            auth_type='none',
            baremetal_endpoint_override="https://bare-metal.example.com/v1")

        self.cloud_noauth.list_machines()

        self.assert_calls()

    def test_ironic_noauth_admin_token_auth_type(self):
        """Test noauth selection for Ironic in OperatorCloud

        The old way of doing this was to abuse admin_token.
        """
        self.cloud_noauth = shade.operator_cloud(
            auth_type='admin_token',
            auth=dict(
                endpoint='https://bare-metal.example.com/v1',
                token='ignored'))

        self.cloud_noauth.list_machines()

        self.assert_calls()
shade-1.31.0/shade/tests/unit/test_qos_bandwidth_limit_rule.py0000666000175000017500000004165513440327640024603 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy from shade import exc from shade.tests.unit import base class TestQosBandwidthLimitRule(base.RequestsMockTestCase): policy_name = 'qos test policy' policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486' project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9' rule_id = 'ed1a2b05-0ad7-45d7-873f-008b575a02b3' rule_max_kbps = 1000 rule_max_burst = 100 mock_policy = { 'id': policy_id, 'name': policy_name, 'description': '', 'rules': [], 'project_id': project_id, 'tenant_id': project_id, 'shared': False, 'is_default': False } mock_rule = { 'id': rule_id, 'max_kbps': rule_max_kbps, 'max_burst_kbps': rule_max_burst, 'direction': 'egress' } qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." } qos_bw_limit_direction_extension = { "updated": "2017-04-10T10:00:00-00:00", "name": "Direction for QoS bandwidth limit rule", "links": [], "alias": "qos-bw-limit-direction", "description": ("Allow to configure QoS bandwidth limit rule with " "specific direction: ingress or egress") } enabled_neutron_extensions = [qos_extension, qos_bw_limit_direction_extension] def test_get_qos_bandwidth_limit_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % self.rule_id]), json={'bandwidth_limit_rule': self.mock_rule}) ]) r = self.cloud.get_qos_bandwidth_limit_rule(self.policy_name, self.rule_id) self.assertDictEqual(self.mock_rule, r) self.assert_calls() def test_get_qos_bandwidth_limit_rule_no_qos_policy_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertRaises( exc.OpenStackCloudResourceNotFound, self.cloud.get_qos_bandwidth_limit_rule, self.policy_name, self.rule_id) self.assert_calls() def test_get_qos_bandwidth_limit_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_bandwidth_limit_rule, self.policy_name, self.rule_id) self.assert_calls() def test_create_qos_bandwidth_limit_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 
'policies.json']), json={'policies': [self.mock_policy]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules']), json={'bandwidth_limit_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_bandwidth_limit_rule( self.policy_name, max_kbps=self.rule_max_kbps) self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_create_qos_bandwidth_limit_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_bandwidth_limit_rule, self.policy_name, max_kbps=100) self.assert_calls() def test_create_qos_bandwidth_limit_rule_no_qos_direction_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules']), json={'bandwidth_limit_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_bandwidth_limit_rule( self.policy_name, max_kbps=self.rule_max_kbps, direction="ingress") self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_update_qos_bandwidth_limit_rule(self): expected_rule = copy.copy(self.mock_rule) expected_rule['max_kbps'] = self.rule_max_kbps + 100 self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % self.rule_id]), json={'bandwidth_limit_rule': self.mock_rule}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % self.rule_id]), json={'bandwidth_limit_rule': expected_rule}, validate=dict( json={'bandwidth_limit_rule': { 'max_kbps': self.rule_max_kbps + 100}})) ]) rule = self.cloud.update_qos_bandwidth_limit_rule( self.policy_id, self.rule_id, max_kbps=self.rule_max_kbps + 100) 
        self.assertDictEqual(expected_rule, rule)
        self.assert_calls()

    def test_update_qos_bandwidth_limit_rule_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
            self.cloud.update_qos_bandwidth_limit_rule,
            self.policy_id, self.rule_id, max_kbps=2000)
        self.assert_calls()

    def test_update_qos_bandwidth_limit_rule_no_qos_direction_extension(self):
        expected_rule = copy.copy(self.mock_rule)
        expected_rule['max_kbps'] = self.rule_max_kbps + 100
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': [self.qos_extension]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 json={'bandwidth_limit_rule': self.mock_rule}),
            dict(method='PUT',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 json={'bandwidth_limit_rule': expected_rule},
                 validate=dict(
                     json={'bandwidth_limit_rule': {
                         'max_kbps': self.rule_max_kbps + 100}}))
        ])
        rule = self.cloud.update_qos_bandwidth_limit_rule(
            self.policy_id, self.rule_id, max_kbps=self.rule_max_kbps + 100,
            direction="ingress")
        # Even if there was an attempt to change the direction to 'ingress',
        # it should not be changed in the returned rule
        self.assertDictEqual(expected_rule, rule)
        self.assert_calls()

    def test_delete_qos_bandwidth_limit_rule(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': self.enabled_neutron_extensions}),
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies.json']),
                 json={'policies': [self.mock_policy]}),
            dict(method='DELETE',
                 uri=self.get_mock_url(
                     'network', 'public',
                     append=['v2.0', 'qos', 'policies', self.policy_id,
                             'bandwidth_limit_rules',
                             '%s.json' % self.rule_id]),
                 json={})
        ])
        self.assertTrue(
            self.cloud.delete_qos_bandwidth_limit_rule(
                self.policy_name, self.rule_id))
        self.assert_calls()

    def test_delete_qos_bandwidth_limit_rule_no_qos_extension(self):
        self.register_uris([
            dict(method='GET',
                 uri=self.get_mock_url(
                     'network', 'public', append=['v2.0', 'extensions.json']),
                 json={'extensions': []})
        ])
        self.assertRaises(
            exc.OpenStackCloudException,
self.cloud.delete_qos_bandwidth_limit_rule, self.policy_name, self.rule_id) self.assert_calls() def test_delete_qos_bandwidth_limit_rule_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'bandwidth_limit_rules', '%s.json' % self.rule_id]), status_code=404) ]) self.assertFalse( self.cloud.delete_qos_bandwidth_limit_rule( self.policy_name, self.rule_id)) self.assert_calls() shade-1.31.0/shade/tests/unit/fixtures/0000775000175000017500000000000013440330010017744 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/unit/fixtures/image-version-broken.json0000666000175000017500000000204313440327640024702 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v2.3", "links": [ { "href": "http://localhost/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.2", "links": [ { "href": "http://localhost/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.1", "links": [ { "href": "http://localhost/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.0", "links": [ { "href": "http://localhost/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.1", "links": [ { "href": "http://localhost/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://localhost/v1/", "rel": "self" } ] } ] } shade-1.31.0/shade/tests/unit/fixtures/dns.json0000666000175000017500000000101113440327640021435 0ustar zuulzuul00000000000000{ "versions": { "values": [{ "id": "v1", "links": [ { "href": "https://dns.example.com/v1", "rel": "self" } ], "status": "DEPRECATED" }, { "id": "v2", "links": [ { "href": "https://dns.example.com/v2", "rel": "self" } ], "status": "CURRENT" }] } } shade-1.31.0/shade/tests/unit/fixtures/catalog-v3-suburl.json0000666000175000017500000001113613440327640024134 0ustar zuulzuul00000000000000{ "token": { "audit_ids": [ "Rvn7eHkiSeOwucBIPaKdYA" ], "catalog": [ { "endpoints": [ { "id": "32466f357f3545248c47471ca51b0d3a", "interface": "public", "region": "RegionOne", "url": "https://example.com/compute/v2.1/" } ], "name": "nova", "type": "compute" }, { "endpoints": [ { "id": "1e875ca2225b408bbf3520a1b8e1a537", "interface": "public", "region": "RegionOne", "url": "https://example.com/volumev2/v2/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinderv2", "type": "volumev2" }, { "endpoints": [ { "id": "5a64de3c4a614d8d8f8d1ba3dee5f45f", "interface": "public", "region": "RegionOne", "url": "https://example.com/image" } ], "name": "glance", "type": "image" }, { "endpoints": [ { "id": "3d15fdfc7d424f3c8923324417e1a3d1", "interface": "public", "region": "RegionOne", "url": "https://example.com/volume/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinder", "type": "volume" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://identity.example.com" }, { "id": "012322eeedcd459edabb4933021112bc", "interface": "admin", "region": "RegionOne", "url": "https://example.com/identity" } ], "endpoints_links": [], 
"name": "keystone", "type": "identity" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628d", "interface": "public", "region": "RegionOne", "url": "https://example.com/example" } ], "endpoints_links": [], "name": "neutron", "type": "network" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628e", "interface": "public", "region": "RegionOne", "url": "https://example.com/container-infra/v1" } ], "endpoints_links": [], "name": "magnum", "type": "container-infra" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://example.com/object-store/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "swift", "type": "object-store" }, { "endpoints": [ { "id": "652f0612744042bfbb8a8bb2c777a16d", "interface": "public", "region": "RegionOne", "url": "https://example.com/bare-metal" } ], "endpoints_links": [], "name": "ironic", "type": "baremetal" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://example.com/orchestration/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "heat", "type": "orchestration" }, { "endpoints": [ { "id": "10c76ffd2b744a67950ed1365190d352", "interface": "public", "region": "RegionOne", "url": "https://example.com/dns" } ], "endpoints_links": [], "name": "designate", "type": "dns" } ], "expires_at": "9999-12-31T23:59:59Z", "issued_at": "2016-12-17T14:25:05.000000Z", "methods": [ "password" ], "project": { "domain": { "id": "default", "name": "default" }, "id": "1c36b64c840a42cd9e9b931a369337f0", "name": "Default Project" }, "roles": [ { "id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_" }, { "id": "37071fc082e14c2284c32a2761f71c63", "name": "swiftoperator" } ], "user": { "domain": { "id": "default", "name": "default" }, "id": "c17534835f8f42bf98fc367e0bf35e09", "name": "mordred" } } } shade-1.31.0/shade/tests/unit/fixtures/image-version-suburl.json0000666000175000017500000000212313440327640024735 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v2.3", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.2", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.1", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.0", "links": [ { "href": "http://example.com/image/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.1", "links": [ { "href": "http://example.com/image/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://example.com/image/v1/", "rel": "self" } ] } ] } shade-1.31.0/shade/tests/unit/fixtures/discovery.json0000666000175000017500000000176213440327640022675 0ustar zuulzuul00000000000000{ "versions": { "values": [ { "status": "stable", "updated": "2016-04-04T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v3+json" } ], "id": "v3.6", "links": [ { "href": "https://identity.example.com/v3/", "rel": "self" } ] }, { "status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json" } ], "id": "v2.0", "links": [ { "href": "https://identity.example.com/v2.0/", "rel": "self" }, { "href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby" } ] } ] } } 
shade-1.31.0/shade/tests/unit/fixtures/image-version-v1.json0000666000175000017500000000057713440327640023762 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v1.1", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] } ] } shade-1.31.0/shade/tests/unit/fixtures/catalog-v3.json0000666000175000017500000001113513440327640022621 0ustar zuulzuul00000000000000{ "token": { "audit_ids": [ "Rvn7eHkiSeOwucBIPaKdYA" ], "catalog": [ { "endpoints": [ { "id": "32466f357f3545248c47471ca51b0d3a", "interface": "public", "region": "RegionOne", "url": "https://compute.example.com/v2.1/" } ], "name": "nova", "type": "compute" }, { "endpoints": [ { "id": "1e875ca2225b408bbf3520a1b8e1a537", "interface": "public", "region": "RegionOne", "url": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinderv2", "type": "volumev2" }, { "endpoints": [ { "id": "5a64de3c4a614d8d8f8d1ba3dee5f45f", "interface": "public", "region": "RegionOne", "url": "https://image.example.com" } ], "name": "glance", "type": "image" }, { "endpoints": [ { "id": "3d15fdfc7d424f3c8923324417e1a3d1", "interface": "public", "region": "RegionOne", "url": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "name": "cinder", "type": "volume" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://identity.example.com" }, { "id": "012322eeedcd459edabb4933021112bc", "interface": "admin", "region": "RegionOne", "url": "https://identity.example.com" } ], "endpoints_links": [], "name": "keystone", "type": "identity" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628d", "interface": "public", "region": "RegionOne", "url": "https://network.example.com" } ], "endpoints_links": [], "name": "neutron", "type": "network" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628e", "interface": "public", "region": "RegionOne", "url": "https://container-infra.example.com/v1" } ], "endpoints_links": [], "name": "magnum", "type": "container-infra" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "swift", "type": "object-store" }, { "endpoints": [ { "id": "652f0612744042bfbb8a8bb2c777a16d", "interface": "public", "region": "RegionOne", "url": "https://bare-metal.example.com/" } ], "endpoints_links": [], "name": "ironic", "type": "baremetal" }, { "endpoints": [ { "id": "4deb4d0504a044a395d4480741ba628c", "interface": "public", "region": "RegionOne", "url": "https://orchestration.example.com/v1/1c36b64c840a42cd9e9b931a369337f0" } ], "endpoints_links": [], "name": "heat", "type": "orchestration" }, { "endpoints": [ { "id": "10c76ffd2b744a67950ed1365190d352", "interface": "public", "region": "RegionOne", "url": "https://dns.example.com" } ], "endpoints_links": [], "name": "designate", "type": "dns" } ], "expires_at": "9999-12-31T23:59:59Z", "issued_at": "2016-12-17T14:25:05.000000Z", "methods": [ "password" ], "project": { "domain": { "id": "default", "name": "default" }, "id": "1c36b64c840a42cd9e9b931a369337f0", "name": "Default Project" }, "roles": [ { "id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_" }, { "id": "37071fc082e14c2284c32a2761f71c63", "name": "swiftoperator" } ], "user": { "domain": { "id": 
"default", "name": "default" }, "id": "c17534835f8f42bf98fc367e0bf35e09", "name": "mordred" } } } shade-1.31.0/shade/tests/unit/fixtures/baremetal.json0000666000175000017500000000115313440327640022614 0ustar zuulzuul00000000000000{ "default_version": { "id": "v1", "links": [ { "href": "https://bare-metal.example.com/v1/", "rel": "self" } ], "min_version": "1.1", "status": "CURRENT", "version": "1.33" }, "description": "Ironic is an OpenStack project which aims to provision baremetal machines.", "name": "OpenStack Ironic API", "versions": [ { "id": "v1", "links": [ { "href": "https://bare-metal.example.com/v1/", "rel": "self" } ], "min_version": "1.1", "status": "CURRENT", "version": "1.33" } ] } shade-1.31.0/shade/tests/unit/fixtures/clouds/0000775000175000017500000000000013440330010021235 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/unit/fixtures/clouds/clouds_cache.yaml0000666000175000017500000000135513440327640024562 0ustar zuulzuul00000000000000cache: max_age: 90 class: dogpile.cache.memory expiration: server: 1 clouds: _test_cloud_: auth: auth_url: https://identity.example.com password: password project_name: admin username: admin user_domain_name: default project_domain_name: default region_name: RegionOne _test_cloud_v2_: auth: auth_url: https://identity.example.com password: password project_name: admin username: admin identity_api_version: '2.0' region_name: RegionOne _bogus_test_: auth_type: bogus auth: auth_url: http://identity.example.com/v2.0 username: _test_user_ password: _test_pass_ project_name: _test_project_ region_name: _test_region_ shade-1.31.0/shade/tests/unit/fixtures/clouds/clouds.yaml0000666000175000017500000000123713440327640023436 0ustar zuulzuul00000000000000clouds: _test_cloud_: auth: auth_url: https://identity.example.com password: password project_name: admin username: admin user_domain_name: default project_domain_name: default region_name: RegionOne _test_cloud_v2_: auth: auth_url: https://identity.example.com password: password project_name: admin username: admin identity_api_version: '2.0' region_name: RegionOne _bogus_test_: auth_type: bogus auth: auth_url: https://identity.example.com/v2.0 username: _test_user_ password: _test_pass_ project_name: _test_project_ region_name: _test_region_ shade-1.31.0/shade/tests/unit/fixtures/image-version-v2.json0000666000175000017500000000135113440327640023752 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v2.3", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.2", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.1", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.0", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] } ] } shade-1.31.0/shade/tests/unit/fixtures/image-version.json0000666000175000017500000000212313440327640023423 0ustar zuulzuul00000000000000{ "versions": [ { "status": "CURRENT", "id": "v2.3", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.2", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.1", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v2.0", "links": [ { "href": "http://image.example.com/v2/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.1", "links": [ { "href": 
"http://image.example.com/v1/", "rel": "self" } ] }, { "status": "SUPPORTED", "id": "v1.0", "links": [ { "href": "http://image.example.com/v1/", "rel": "self" } ] } ] } shade-1.31.0/shade/tests/unit/fixtures/catalog-v2.json0000666000175000017500000001074613440327640022627 0ustar zuulzuul00000000000000{ "access": { "token": { "issued_at": "2016-04-14T10:09:58.014014Z", "expires": "9999-12-31T23:59:59Z", "id": "7fa3037ae2fe48ada8c626a51dc01ffd", "tenant": { "enabled": true, "description": "Bootstrap project for initializing the cloud.", "name": "admin", "id": "1c36b64c840a42cd9e9b931a369337f0" }, "audit_ids": [ "FgG3Q8T3Sh21r_7HyjHP8A" ] }, "serviceCatalog": [ { "endpoints_links": [], "endpoints": [ { "adminURL": "https://compute.example.com/v2.1/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://compute.example.com/v2.1/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://compute.example.com/v2.1/1c36b64c840a42cd9e9b931a369337f0", "id": "32466f357f3545248c47471ca51b0d3a" } ], "type": "compute", "name": "nova" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0", "id": "1e875ca2225b408bbf3520a1b8e1a537" } ], "type": "volumev2", "name": "cinderv2" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://image.example.com/v2", "region": "RegionOne", "publicURL": "https://image.example.com/v2", "internalURL": "https://image.example.com/v2", "id": "5a64de3c4a614d8d8f8d1ba3dee5f45f" } ], "type": "image", "name": "glance" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://volume.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "id": "3d15fdfc7d424f3c8923324417e1a3d1" } ], "type": "volume", "name": "cinder" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://identity.example.com/v2.0", "region": "RegionOne", "publicURL": "https://identity.example.com/v2.0", "internalURL": "https://identity.example.com/v2.0", "id": "4deb4d0504a044a395d4480741ba628c" } ], "type": "identity", "name": "keystone" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://network.example.com", "region": "RegionOne", "publicURL": "https://network.example.com", "internalURL": "https://network.example.com", "id": "4deb4d0504a044a395d4480741ba628d" } ], "type": "network", "name": "neutron" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "region": "RegionOne", "publicURL": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "internalURL": "https://object-store.example.com/v1/1c36b64c840a42cd9e9b931a369337f0", "id": "4deb4d0504a044a395d4480741ba628c" } ], "type": "object-store", "name": "swift" }, { "endpoints_links": [], "endpoints": [ { "adminURL": "https://dns.example.com", "region": "RegionOne", "publicURL": "https://dns.example.com", "internalURL": "https://dns.example.com", "id": "652f0612744042bfbb8a8bb2c777a16d" } ], "type": "dns", "name": "designate" } ], "user": { "username": "dummy", "roles_links": [], "id": "71675f719c3343e8ac441cc28f396474", "roles": [ { "name": "admin" } ], "name": "admin" }, "metadata": { 
"is_admin": 0, "roles": [ "6d813db50b6e4a1ababdbbb5a83c7de5" ] } } } shade-1.31.0/shade/tests/unit/test_router.py0000666000175000017500000003744013440327640021055 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import testtools from shade import exc from shade.tests.unit import base class TestRouter(base.RequestsMockTestCase): router_name = 'goofy' router_id = '57076620-dcfb-42ed-8ad6-79ccb4a79ed2' subnet_id = '1f1696eb-7f47-47f6-835c-4889bff88604' mock_router_rep = { 'admin_state_up': True, 'availability_zone_hints': [], 'availability_zones': [], 'description': u'', 'distributed': False, 'external_gateway_info': None, 'flavor_id': None, 'ha': False, 'id': router_id, 'name': router_name, 'project_id': u'861808a93da0484ea1767967c4df8a23', 'routes': [], 'status': u'ACTIVE', 'tenant_id': u'861808a93da0484ea1767967c4df8a23' } mock_router_interface_rep = { 'network_id': '53aee281-b06d-47fc-9e1a-37f045182b8e', 'subnet_id': '1f1696eb-7f47-47f6-835c-4889bff88604', 'tenant_id': '861808a93da0484ea1767967c4df8a23', 'subnet_ids': [subnet_id], 'port_id': '23999891-78b3-4a6b-818d-d1b713f67848', 'id': '57076620-dcfb-42ed-8ad6-79ccb4a79ed2', 'request_ids': ['req-f1b0b1b4-ae51-4ef9-b371-0cc3c3402cf7'] } router_availability_zone_extension = { "alias": "router_availability_zone", "updated": "2015-01-01T10:00:00-00:00", "description": "Availability zone support for router.", "links": [], "name": "Router Availability Zone" } enabled_neutron_extensions = [router_availability_zone_extension] def test_get_router(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [self.mock_router_rep]}) ]) r = self.cloud.get_router(self.router_name) self.assertIsNotNone(r) self.assertDictEqual(self.mock_router_rep, r) self.assert_calls() def test_get_router_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': []}) ]) r = self.cloud.get_router('mickey') self.assertIsNone(r) self.assert_calls() def test_create_router(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True}})) ]) new_router = self.cloud.create_router(name=self.router_name, admin_state_up=True) self.assertDictEqual(self.mock_router_rep, new_router) self.assert_calls() def test_create_router_specific_tenant(self): new_router_tenant_id = "project_id_value" mock_router_rep = copy.copy(self.mock_router_rep) mock_router_rep['tenant_id'] = new_router_tenant_id mock_router_rep['project_id'] = new_router_tenant_id self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 
'admin_state_up': True, 'tenant_id': new_router_tenant_id}})) ]) self.cloud.create_router(self.router_name, project_id=new_router_tenant_id) self.assert_calls() def test_create_router_with_availability_zone_hints(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True, 'availability_zone_hints': ['nova']}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True, availability_zone_hints=['nova']) self.assert_calls() def test_create_router_without_enable_snat(self): """Do not send enable_snat when not given.""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True) self.assert_calls() def test_create_router_with_enable_snat_True(self): """Send enable_snat when it is True.""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'admin_state_up': True, 'external_gateway_info': {'enable_snat': True}}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True, enable_snat=True) self.assert_calls() def test_create_router_with_enable_snat_False(self): """Send enable_snat when it is False.""" self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'router': self.mock_router_rep}, validate=dict( json={'router': { 'name': self.router_name, 'external_gateway_info': {'enable_snat': False}, 'admin_state_up': True}})) ]) self.cloud.create_router( name=self.router_name, admin_state_up=True, enable_snat=False) self.assert_calls() def test_create_router_wrong_availability_zone_hints_type(self): azh_opts = "invalid" with testtools.ExpectedException( exc.OpenStackCloudException, "Parameter 'availability_zone_hints' must be a list" ): self.cloud.create_router( name=self.router_name, admin_state_up=True, availability_zone_hints=azh_opts) def test_add_router_interface(self): self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', self.router_id, 'add_router_interface.json']), json={'port': self.mock_router_interface_rep}, validate=dict( json={'subnet_id': self.subnet_id})) ]) self.cloud.add_router_interface( {'id': self.router_id}, subnet_id=self.subnet_id) self.assert_calls() def test_remove_router_interface(self): self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', self.router_id, 'remove_router_interface.json']), json={'port': self.mock_router_interface_rep}, validate=dict( json={'subnet_id': self.subnet_id})) ]) self.cloud.remove_router_interface( {'id': self.router_id}, subnet_id=self.subnet_id) self.assert_calls() def test_remove_router_interface_missing_argument(self): self.assertRaises(ValueError, self.cloud.remove_router_interface, {'id': '123'}) def test_update_router(self): new_router_name = "mickey" expected_router_rep = 
copy.copy(self.mock_router_rep) expected_router_rep['name'] = new_router_name self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [self.mock_router_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', '%s.json' % self.router_id]), json={'router': expected_router_rep}, validate=dict( json={'router': { 'name': new_router_name}})) ]) new_router = self.cloud.update_router( self.router_id, name=new_router_name) self.assertDictEqual(expected_router_rep, new_router) self.assert_calls() def test_delete_router(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [self.mock_router_rep]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', '%s.json' % self.router_id]), json={}) ]) self.assertTrue(self.cloud.delete_router(self.router_name)) self.assert_calls() def test_delete_router_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': []}), ]) self.assertFalse(self.cloud.delete_router(self.router_name)) self.assert_calls() def test_delete_router_multiple_found(self): router1 = dict(id='123', name='mickey') router2 = dict(id='456', name='mickey') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [router1, router2]}), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.delete_router, 'mickey') self.assert_calls() def test_delete_router_multiple_using_id(self): router1 = dict(id='123', name='mickey') router2 = dict(id='456', name='mickey') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers.json']), json={'routers': [router1, router2]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'routers', '123.json']), json={}) ]) self.assertTrue(self.cloud.delete_router("123")) self.assert_calls() def _get_mock_dict(self, owner, json): return dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=["device_id=%s" % self.router_id, "device_owner=network:%s" % owner]), json=json) def _test_list_router_interfaces(self, router, interface_type, router_type="normal", expected_result=None): if router_type == "normal": device_owner = 'router_interface' elif router_type == "ha": device_owner = 'ha_router_replicated_interface' elif router_type == "dvr": device_owner = 'router_interface_distributed' internal_port = { 'id': 'internal_port_id', 'fixed_ips': [{ 'subnet_id': 'internal_subnet_id', 'ip_address': "10.0.0.1" }], 'device_id': self.router_id, 'device_owner': 'network:%s' % device_owner } external_port = { 'id': 'external_port_id', 'fixed_ips': [{ 'subnet_id': 'external_subnet_id', 'ip_address': "1.2.3.4" }], 'device_id': self.router_id, 'device_owner': 'network:router_gateway' } if expected_result is None: if interface_type == "internal": expected_result = [internal_port] elif interface_type == "external": expected_result = [external_port] else: expected_result = [internal_port, external_port] mock_uris = [] for port_type in ['router_interface', 'router_interface_distributed', 'ha_router_replicated_interface']: ports = {} if port_type == device_owner: ports = {'ports': [internal_port]} mock_uris.append(self._get_mock_dict(port_type, 
ports)) mock_uris.append(self._get_mock_dict('router_gateway', {'ports': [external_port]})) self.register_uris(mock_uris) ret = self.cloud.list_router_interfaces(router, interface_type) self.assertEqual(expected_result, ret) self.assert_calls() router = { 'id': router_id, 'external_gateway_info': { 'external_fixed_ips': [{ 'subnet_id': 'external_subnet_id', 'ip_address': '1.2.3.4'}] } } def test_list_router_interfaces_all(self): self._test_list_router_interfaces(self.router, interface_type=None) def test_list_router_interfaces_internal(self): self._test_list_router_interfaces(self.router, interface_type="internal") def test_list_router_interfaces_external(self): self._test_list_router_interfaces(self.router, interface_type="external") def test_list_router_interfaces_internal_ha(self): self._test_list_router_interfaces(self.router, router_type="ha", interface_type="internal") def test_list_router_interfaces_internal_dvr(self): self._test_list_router_interfaces(self.router, router_type="dvr", interface_type="internal") shade-1.31.0/shade/tests/unit/__init__.py0000666000175000017500000000000013440327640020213 0ustar zuulzuul00000000000000shade-1.31.0/shade/tests/unit/test_keypair.py0000666000175000017500000000722313440327640021175 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
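# Orientation for this module: TestKeypair drives shade's keypair calls
# against mocked compute endpoints registered with register_uris(). A
# minimal usage sketch of the API under test (illustrative only; the
# cloud name 'mycloud' is an assumed clouds.yaml entry, not part of this
# repo, and the public key string is a placeholder):
#
#   import shade
#   cloud = shade.openstack_cloud(cloud='mycloud')
#   key = cloud.create_keypair('example-key', 'ssh-rsa AAAA...')
#   print([k['name'] for k in cloud.list_keypairs()])
#   cloud.delete_keypair('example-key')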
from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestKeypair(base.RequestsMockTestCase): def setUp(self): super(TestKeypair, self).setUp() self.keyname = self.getUniqueString('key') self.key = fakes.make_fake_keypair(self.keyname) def test_create_keypair(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), json={'keypair': self.key}, validate=dict(json={ 'keypair': { 'name': self.key['name'], 'public_key': self.key['public_key']}})), ]) new_key = self.cloud.create_keypair( self.keyname, self.key['public_key']) self.assertEqual(new_key, self.cloud._normalize_keypair(self.key)) self.assert_calls() def test_create_keypair_exception(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), status_code=400, validate=dict(json={ 'keypair': { 'name': self.key['name'], 'public_key': self.key['public_key']}})), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_keypair, self.keyname, self.key['public_key']) self.assert_calls() def test_delete_keypair(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs', self.keyname]), status_code=202), ]) self.assertTrue(self.cloud.delete_keypair(self.keyname)) self.assert_calls() def test_delete_keypair_not_found(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs', self.keyname]), status_code=404), ]) self.assertFalse(self.cloud.delete_keypair(self.keyname)) self.assert_calls() def test_list_keypairs(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), json={'keypairs': [{'keypair': self.key}]}), ]) keypairs = self.cloud.list_keypairs() self.assertEqual(keypairs, self.cloud._normalize_keypairs([self.key])) self.assert_calls() def test_list_keypairs_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-keypairs']), status_code=400), ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.list_keypairs) self.assert_calls() shade-1.31.0/shade/tests/unit/test_security_groups.py0000666000175000017500000007255213440327640023006 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
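# Orientation for this module: these tests exercise security group CRUD
# against both backends shade supports, selected via
# cloud.secgroup_source ('neutron' or 'nova'); when neither is available
# the calls raise OpenStackCloudUnavailableFeature. A minimal usage
# sketch of the calls under test (illustrative only; 'mycloud' is an
# assumed clouds.yaml entry):
#
#   import shade
#   cloud = shade.openstack_cloud(cloud='mycloud')
#   sg = cloud.create_security_group('web', 'allow http')
#   cloud.create_security_group_rule(
#       sg['id'], port_range_min=80, port_range_max=80,
#       protocol='tcp', remote_ip_prefix='0.0.0.0/0')
#   cloud.delete_security_group('web')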
import copy import shade from shade.tests.unit import base from shade.tests import fakes # TODO(mordred): Move id and name to using a getUniqueString() value neutron_grp_dict = fakes.make_fake_neutron_security_group( id='1', name='neutron-sec-group', description='Test Neutron security group', rules=[ dict(id='1', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0') ] ) nova_grp_dict = fakes.make_fake_nova_security_group( id='2', name='nova-sec-group', description='Test Nova security group #1', rules=[ fakes.make_fake_nova_security_group_rule( id='2', from_port=8000, to_port=8001, ip_protocol='tcp', cidr='0.0.0.0/0'), ] ) class TestSecurityGroups(base.RequestsMockTestCase): def setUp(self): super(TestSecurityGroups, self).setUp() self.has_neutron = True def fake_has_service(*args, **kwargs): return self.has_neutron self.cloud.has_service = fake_has_service def test_list_security_groups_neutron(self): project_id = 42 self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json'], qs_elements=["project_id=%s" % project_id]), json={'security_groups': [neutron_grp_dict]}) ]) self.cloud.list_security_groups(filters={'project_id': project_id}) self.assert_calls() def test_list_security_groups_nova(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups?project_id=42'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}), ]) self.cloud.secgroup_source = 'nova' self.has_neutron = False self.cloud.list_security_groups(filters={'project_id': 42}) self.assert_calls() def test_list_security_groups_none(self): self.cloud.secgroup_source = None self.has_neutron = False self.assertRaises(shade.OpenStackCloudUnavailableFeature, self.cloud.list_security_groups) def test_delete_security_group_neutron(self): sg_id = neutron_grp_dict['id'] self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups', '%s.json' % sg_id]), json={}) ]) self.assertTrue(self.cloud.delete_security_group('1')) self.assert_calls() def test_delete_security_group_nova(self): self.cloud.secgroup_source = 'nova' self.has_neutron = False nova_return = [nova_grp_dict] self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': nova_return}), dict(method='DELETE', uri='{endpoint}/os-security-groups/2'.format( endpoint=fakes.COMPUTE_ENDPOINT)), ]) self.cloud.delete_security_group('2') self.assert_calls() def test_delete_security_group_neutron_not_found(self): self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}) ]) self.assertFalse(self.cloud.delete_security_group('10')) self.assert_calls() def test_delete_security_group_nova_not_found(self): self.cloud.secgroup_source = 'nova' self.has_neutron = False nova_return = [nova_grp_dict] self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': nova_return}), ]) self.assertFalse(self.cloud.delete_security_group('doesNotExist')) def 
test_delete_security_group_none(self): self.cloud.secgroup_source = None self.assertRaises(shade.OpenStackCloudUnavailableFeature, self.cloud.delete_security_group, 'doesNotExist') def test_create_security_group_neutron(self): self.cloud.secgroup_source = 'neutron' group_name = self.getUniqueString() group_desc = self.getUniqueString('description') new_group = fakes.make_fake_neutron_security_group( id='2', name=group_name, description=group_desc, rules=[]) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_group': new_group}, validate=dict( json={'security_group': { 'name': group_name, 'description': group_desc }})) ]) r = self.cloud.create_security_group(group_name, group_desc) self.assertEqual(group_name, r['name']) self.assertEqual(group_desc, r['description']) self.assert_calls() def test_create_security_group_neutron_specific_tenant(self): self.cloud.secgroup_source = 'neutron' project_id = "861808a93da0484ea1767967c4df8a23" group_name = self.getUniqueString() group_desc = 'security group from' \ ' test_create_security_group_neutron_specific_tenant' new_group = fakes.make_fake_neutron_security_group( id='2', name=group_name, description=group_desc, project_id=project_id, rules=[]) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_group': new_group}, validate=dict( json={'security_group': { 'name': group_name, 'description': group_desc, 'tenant_id': project_id }})) ]) r = self.cloud.create_security_group( group_name, group_desc, project_id ) self.assertEqual(group_name, r['name']) self.assertEqual(group_desc, r['description']) self.assertEqual(project_id, r['tenant_id']) self.assert_calls() def test_create_security_group_nova(self): group_name = self.getUniqueString() self.has_neutron = False group_desc = self.getUniqueString('description') new_group = fakes.make_fake_nova_security_group( id='2', name=group_name, description=group_desc, rules=[]) self.register_uris([ dict(method='POST', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_group': new_group}, validate=dict(json={ 'security_group': { 'name': group_name, 'description': group_desc }})), ]) self.cloud.secgroup_source = 'nova' r = self.cloud.create_security_group(group_name, group_desc) self.assertEqual(group_name, r['name']) self.assertEqual(group_desc, r['description']) self.assert_calls() def test_create_security_group_none(self): self.cloud.secgroup_source = None self.has_neutron = False self.assertRaises(shade.OpenStackCloudUnavailableFeature, self.cloud.create_security_group, '', '') def test_update_security_group_neutron(self): self.cloud.secgroup_source = 'neutron' new_name = self.getUniqueString() sg_id = neutron_grp_dict['id'] update_return = neutron_grp_dict.copy() update_return['name'] = new_name self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups', '%s.json' % sg_id]), json={'security_group': update_return}, validate=dict(json={ 'security_group': {'name': new_name}})) ]) r = self.cloud.update_security_group(sg_id, name=new_name) self.assertEqual(r['name'], new_name) self.assert_calls() def test_update_security_group_nova(self): self.has_neutron = False new_name = 
self.getUniqueString() self.cloud.secgroup_source = 'nova' nova_return = [nova_grp_dict] update_return = nova_grp_dict.copy() update_return['name'] = new_name self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': nova_return}), dict(method='PUT', uri='{endpoint}/os-security-groups/2'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_group': update_return}), ]) r = self.cloud.update_security_group( nova_grp_dict['id'], name=new_name) self.assertEqual(r['name'], new_name) self.assert_calls() def test_update_security_group_bad_kwarg(self): self.assertRaises(TypeError, self.cloud.update_security_group, 'doesNotExist', bad_arg='') def test_create_security_group_rule_neutron(self): self.cloud.secgroup_source = 'neutron' args = dict( port_range_min=-1, port_range_max=40000, protocol='tcp', remote_ip_prefix='0.0.0.0/0', remote_group_id='456', direction='egress', ethertype='IPv6' ) expected_args = copy.copy(args) # For neutron, -1 port should be converted to None expected_args['port_range_min'] = None expected_args['security_group_id'] = neutron_grp_dict['id'] expected_new_rule = copy.copy(expected_args) expected_new_rule['id'] = '1234' expected_new_rule['tenant_id'] = '' expected_new_rule['project_id'] = expected_new_rule['tenant_id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-group-rules.json']), json={'security_group_rule': expected_new_rule}, validate=dict(json={ 'security_group_rule': expected_args})) ]) new_rule = self.cloud.create_security_group_rule( secgroup_name_or_id=neutron_grp_dict['id'], **args) # NOTE(slaweq): don't check location and properties in new rule new_rule.pop("location") new_rule.pop("properties") self.assertEqual(expected_new_rule, new_rule) self.assert_calls() def test_create_security_group_rule_neutron_specific_tenant(self): self.cloud.secgroup_source = 'neutron' args = dict( port_range_min=-1, port_range_max=40000, protocol='tcp', remote_ip_prefix='0.0.0.0/0', remote_group_id='456', direction='egress', ethertype='IPv6', project_id='861808a93da0484ea1767967c4df8a23' ) expected_args = copy.copy(args) # For neutron, -1 port should be converted to None expected_args['port_range_min'] = None expected_args['security_group_id'] = neutron_grp_dict['id'] expected_args['tenant_id'] = expected_args['project_id'] expected_args.pop('project_id') expected_new_rule = copy.copy(expected_args) expected_new_rule['id'] = '1234' expected_new_rule['project_id'] = expected_new_rule['tenant_id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-group-rules.json']), json={'security_group_rule': expected_new_rule}, validate=dict(json={ 'security_group_rule': expected_args})) ]) new_rule = self.cloud.create_security_group_rule( secgroup_name_or_id=neutron_grp_dict['id'], ** args) # NOTE(slaweq): don't check location and properties in new rule new_rule.pop("location") new_rule.pop("properties") self.assertEqual(expected_new_rule, new_rule) self.assert_calls() def test_create_security_group_rule_nova(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' 
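# Mocked flow for this test: GET os-security-groups finds the parent
# group, then the POST validates that shade translated the neutron-style
# arguments into nova terms (port_range_min/max -> from_port/to_port,
# remote_ip_prefix -> cidr, remote_group_id -> group_id).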
nova_return = [nova_grp_dict] new_rule = fakes.make_fake_nova_security_group_rule( id='xyz', from_port=1, to_port=2000, ip_protocol='tcp', cidr='1.2.3.4/32') self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': nova_return}), dict(method='POST', uri='{endpoint}/os-security-group-rules'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_group_rule': new_rule}, validate=dict(json={ "security_group_rule": { "from_port": 1, "ip_protocol": "tcp", "to_port": 2000, "parent_group_id": "2", "cidr": "1.2.3.4/32", "group_id": "123"}})), ]) self.cloud.create_security_group_rule( '2', port_range_min=1, port_range_max=2000, protocol='tcp', remote_ip_prefix='1.2.3.4/32', remote_group_id='123') self.assert_calls() def test_create_security_group_rule_nova_no_ports(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' new_rule = fakes.make_fake_nova_security_group_rule( id='xyz', from_port=1, to_port=65535, ip_protocol='tcp', cidr='1.2.3.4/32') nova_return = [nova_grp_dict] self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': nova_return}), dict(method='POST', uri='{endpoint}/os-security-group-rules'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_group_rule': new_rule}, validate=dict(json={ "security_group_rule": { "from_port": 1, "ip_protocol": "tcp", "to_port": 65535, "parent_group_id": "2", "cidr": "1.2.3.4/32", "group_id": "123"}})), ]) self.cloud.create_security_group_rule( '2', protocol='tcp', remote_ip_prefix='1.2.3.4/32', remote_group_id='123') self.assert_calls() def test_create_security_group_rule_none(self): self.has_neutron = False self.cloud.secgroup_source = None self.assertRaises(shade.OpenStackCloudUnavailableFeature, self.cloud.create_security_group_rule, '') def test_delete_security_group_rule_neutron(self): rule_id = "xyz" self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-group-rules', '%s.json' % rule_id]), json={}) ]) self.assertTrue(self.cloud.delete_security_group_rule(rule_id)) self.assert_calls() def test_delete_security_group_rule_nova(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' self.register_uris([ dict(method='DELETE', uri='{endpoint}/os-security-group-rules/xyz'.format( endpoint=fakes.COMPUTE_ENDPOINT)), ]) r = self.cloud.delete_security_group_rule('xyz') self.assertTrue(r) self.assert_calls() def test_delete_security_group_rule_none(self): self.has_neutron = False self.cloud.secgroup_source = None self.assertRaises(shade.OpenStackCloudUnavailableFeature, self.cloud.delete_security_group_rule, '') def test_delete_security_group_rule_not_found(self): rule_id = "doesNotExist" self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}) ]) self.assertFalse(self.cloud.delete_security_group(rule_id)) self.assert_calls() def test_delete_security_group_rule_not_found_nova(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': [nova_grp_dict]}), ]) r = self.cloud.delete_security_group('doesNotExist') self.assertFalse(r) self.assert_calls() def 
test_nova_egress_security_group_rule(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': [nova_grp_dict]}), ]) self.assertRaises(shade.OpenStackCloudException, self.cloud.create_security_group_rule, secgroup_name_or_id='nova-sec-group', direction='egress') self.assert_calls() def test_list_server_security_groups_nova(self): self.has_neutron = False server = dict(id='server_id') self.register_uris([ dict( method='GET', uri='{endpoint}/servers/{id}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT, id='server_id'), json={'security_groups': [nova_grp_dict]}), ]) groups = self.cloud.list_server_security_groups(server) self.assertIn('location', groups[0]) self.assertEqual( groups[0]['security_group_rules'][0]['remote_ip_prefix'], nova_grp_dict['rules'][0]['ip_range']['cidr']) self.assert_calls() def test_list_server_security_groups_bad_source(self): self.has_neutron = False self.cloud.secgroup_source = 'invalid' server = dict(id='server_id') ret = self.cloud.list_server_security_groups(server) self.assertEqual([], ret) def test_add_security_group_to_server_nova(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' self.register_uris([ dict( method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT, id='server_id'), json={'security_groups': [nova_grp_dict]}), dict( method='POST', uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'), validate=dict( json={'addSecurityGroup': {'name': 'nova-sec-group'}}), status_code=202, ), ]) ret = self.cloud.add_server_security_groups( dict(id='1234'), 'nova-sec-group') self.assertTrue(ret) self.assert_calls() def test_add_security_group_to_server_neutron(self): # fake to get server by name, server-name must match fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE') # use neutron for secgroup list and return an existing fake self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}), dict(method='POST', uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'), validate=dict( json={'addSecurityGroup': {'name': 'neutron-sec-group'}}), status_code=202), ]) self.assertTrue(self.cloud.add_server_security_groups( 'server-name', 'neutron-sec-group')) self.assert_calls() def test_remove_security_group_from_server_nova(self): self.has_neutron = False self.cloud.secgroup_source = 'nova' self.register_uris([ dict( method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': [nova_grp_dict]}), dict( method='POST', uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'), validate=dict( json={'removeSecurityGroup': {'name': 'nova-sec-group'}}), ), ]) ret = self.cloud.remove_server_security_groups( dict(id='1234'), 'nova-sec-group') self.assertTrue(ret) self.assert_calls() def test_remove_security_group_from_server_neutron(self): # fake to get server by name, server-name must match fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE') # use neutron for secgroup list and return an existing fake self.cloud.secgroup_source = 'neutron' validate = {'removeSecurityGroup': {'name': 
'neutron-sec-group'}} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}), dict(method='POST', uri='%s/servers/%s/action' % (fakes.COMPUTE_ENDPOINT, '1234'), validate=dict(json=validate)), ]) self.assertTrue(self.cloud.remove_server_security_groups( 'server-name', 'neutron-sec-group')) self.assert_calls() def test_add_bad_security_group_to_server_nova(self): # fake to get server by name, server-name must match fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE') # use nova for secgroup list and return an existing fake self.has_neutron = False self.cloud.secgroup_source = 'nova' self.register_uris([ dict( method='GET', uri='{endpoint}/servers/detail'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'servers': [fake_server]}), dict( method='GET', uri='{endpoint}/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': [nova_grp_dict]}), ]) ret = self.cloud.add_server_security_groups('server-name', 'unknown-sec-group') self.assertFalse(ret) self.assert_calls() def test_add_bad_security_group_to_server_neutron(self): # fake to get server by name, server-name must match fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE') # use neutron for secgroup list and return an existing fake self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups.json']), json={'security_groups': [neutron_grp_dict]}) ]) self.assertFalse(self.cloud.add_server_security_groups( 'server-name', 'unknown-sec-group')) self.assert_calls() def test_add_security_group_to_bad_server(self): # fake to get server by name, server-name must match fake_server = fakes.make_fake_server('1234', 'server-name', 'ACTIVE') self.register_uris([ dict( method='GET', uri='{endpoint}/servers/detail'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'servers': [fake_server]}), ]) ret = self.cloud.add_server_security_groups('unknown-server-name', 'nova-sec-group') self.assertFalse(ret) self.assert_calls() def test_get_security_group_by_id_neutron(self): self.cloud.secgroup_source = 'neutron' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'security-groups', neutron_grp_dict['id']]), json={'security_group': neutron_grp_dict}) ]) ret_sg = self.cloud.get_security_group_by_id(neutron_grp_dict['id']) self.assertEqual(neutron_grp_dict['id'], ret_sg['id']) self.assertEqual(neutron_grp_dict['name'], ret_sg['name']) self.assertEqual(neutron_grp_dict['description'], ret_sg['description']) self.assert_calls() def test_get_security_group_by_id_nova(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-security-groups/{id}'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=nova_grp_dict['id']), json={'security_group': nova_grp_dict}), ]) self.cloud.secgroup_source = 'nova' self.has_neutron = False ret_sg = self.cloud.get_security_group_by_id(nova_grp_dict['id']) self.assertEqual(nova_grp_dict['id'], ret_sg['id']) self.assertEqual(nova_grp_dict['name'], ret_sg['name']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_volume.py0000666000175000017500000005422613440327640021045 
0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools import shade from shade import meta from shade.tests import fakes from shade.tests.unit import base class TestVolume(base.RequestsMockTestCase): def test_attach_volume(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) rattach = {'server_id': server['id'], 'device': 'device001', 'volumeId': volume['id'], 'id': 'attachmentId'} self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), json={'volumeAttachment': rattach}, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}}) )]) ret = self.cloud.attach_volume(server, volume, wait=False) self.assertEqual(rattach, ret) self.assert_calls() def test_attach_volume_exception(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), status_code=404, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}}) )]) with testtools.ExpectedException( shade.OpenStackCloudURINotFound, "Error attaching volume %s to server %s" % ( volume['id'], server['id']) ): self.cloud.attach_volume(server, volume, wait=False) self.assert_calls() def test_attach_volume_wait(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) vol['attachments'] = [{'server_id': server['id'], 'device': 'device001'}] vol['status'] = 'attached' attached_volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) rattach = {'server_id': server['id'], 'device': 'device001', 'volumeId': volume['id'], 'id': 'attachmentId'} self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), json={'volumeAttachment': rattach}, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [attached_volume]})]) # defaults to wait=True ret = self.cloud.attach_volume(server, volume) self.assertEqual(rattach, ret) self.assert_calls() def test_attach_volume_wait_error(self): server = dict(id='server001') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) vol['status'] = 'error' errored_volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) rattach = {'server_id': server['id'], 'device': 'device001', 'volumeId': volume['id'], 'id': 'attachmentId'} 
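# Mocked flow for this test: the attach POST succeeds, but the follow-up
# volume poll reports status 'error', so attach_volume() with the default
# wait=True must raise OpenStackCloudException.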
self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments']), json={'volumeAttachment': rattach}, validate=dict(json={ 'volumeAttachment': { 'volumeId': vol['id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [errored_volume]})]) with testtools.ExpectedException( shade.OpenStackCloudException, "Error in attaching volume %s" % errored_volume['id'] ): self.cloud.attach_volume(server, volume) self.assert_calls() def test_attach_volume_not_available(self): server = dict(id='server001') volume = dict(id='volume001', status='error', attachments=[]) with testtools.ExpectedException( shade.OpenStackCloudException, "Volume %s is not available. Status is '%s'" % ( volume['id'], volume['status']) ): self.cloud.attach_volume(server, volume) self.assertEqual(0, len(self.adapter.request_history)) def test_attach_volume_already_attached(self): device_id = 'device001' server = dict(id='server001') volume = dict(id='volume001', attachments=[ {'server_id': 'server001', 'device': device_id} ]) with testtools.ExpectedException( shade.OpenStackCloudException, "Volume %s already attached to server %s on device %s" % ( volume['id'], server['id'], device_id) ): self.cloud.attach_volume(server, volume) self.assertEqual(0, len(self.adapter.request_history)) def test_detach_volume(self): server = dict(id='server001') volume = dict(id='volume001', attachments=[ {'server_id': 'server001', 'device': 'device001'} ]) self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments', volume['id']]))]) self.cloud.detach_volume(server, volume, wait=False) self.assert_calls() def test_detach_volume_exception(self): server = dict(id='server001') volume = dict(id='volume001', attachments=[ {'server_id': 'server001', 'device': 'device001'} ]) self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments', volume['id']]), status_code=404)]) with testtools.ExpectedException( shade.OpenStackCloudURINotFound, "Error detaching volume %s from server %s" % ( volume['id'], server['id']) ): self.cloud.detach_volume(server, volume, wait=False) self.assert_calls() def test_detach_volume_wait(self): server = dict(id='server001') attachments = [{'server_id': 'server001', 'device': 'device001'}] vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': attachments} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) vol['status'] = 'available' vol['attachments'] = [] avail_volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', server['id'], 'os-volume_attachments', volume.id])), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [avail_volume]})]) self.cloud.detach_volume(server, volume) self.assert_calls() def test_detach_volume_wait_error(self): server = dict(id='server001') attachments = [{'server_id': 'server001', 'device': 'device001'}] vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': attachments} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) vol['status'] = 'error' vol['attachments'] = [] errored_volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 
'compute', 'public', append=['servers', server['id'], 'os-volume_attachments', volume.id])), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [errored_volume]})]) with testtools.ExpectedException( shade.OpenStackCloudException, "Error in detaching volume %s" % errored_volume['id'] ): self.cloud.detach_volume(server, volume) self.assert_calls() def test_delete_volume_deletes(self): vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', volume.id])), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': []})]) self.assertTrue(self.cloud.delete_volume(volume['id'])) self.assert_calls() def test_delete_volume_gone_away(self): vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', volume.id]), status_code=404)]) self.assertFalse(self.cloud.delete_volume(volume['id'])) self.assert_calls() def test_delete_volume_force(self): vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', volume.id, 'action']), validate=dict( json={'os-force_delete': None})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': []})]) self.assertTrue(self.cloud.delete_volume(volume['id'], force=True)) self.assert_calls() def test_set_volume_bootable(self): vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', volume.id, 'action']), json={'os-set_bootable': {'bootable': True}}), ]) self.cloud.set_volume_bootable(volume['id']) self.assert_calls() def test_set_volume_bootable_false(self): vol = {'id': 'volume001', 'status': 'attached', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [volume]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', volume.id, 'action']), json={'os-set_bootable': {'bootable': False}}), ]) self.cloud.set_volume_bootable(volume['id']) self.assert_calls() def test_list_volumes_with_pagination(self): vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1')) vol2 = meta.obj_to_munch(fakes.FakeVolume('02', 'available', 'vol2')) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', 
append=['volumes', 'detail']), json={ 'volumes': [vol1], 'volumes_links': [ {'href': self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), 'rel': 'next'}]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), json={ 'volumes': [vol2], 'volumes_links': [ {'href': self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=02']), 'rel': 'next'}]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=02']), json={'volumes': []})]) self.assertEqual( [self.cloud._normalize_volume(vol1), self.cloud._normalize_volume(vol2)], self.cloud.list_volumes()) self.assert_calls() def test_list_volumes_with_pagination_next_link_fails_once(self): vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1')) vol2 = meta.obj_to_munch(fakes.FakeVolume('02', 'available', 'vol2')) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={ 'volumes': [vol1], 'volumes_links': [ {'href': self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), 'rel': 'next'}]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), status_code=404), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={ 'volumes': [vol1], 'volumes_links': [ {'href': self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), 'rel': 'next'}]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), json={ 'volumes': [vol2], 'volumes_links': [ {'href': self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=02']), 'rel': 'next'}]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=02']), json={'volumes': []})]) self.assertEqual( [self.cloud._normalize_volume(vol1), self.cloud._normalize_volume(vol2)], self.cloud.list_volumes()) self.assert_calls() def test_list_volumes_with_pagination_next_link_fails_all_attempts(self): vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1')) uris = [] attempts = 5 for i in range(attempts): uris.extend([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={ 'volumes': [vol1], 'volumes_links': [ {'href': self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), 'rel': 'next'}]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail'], qs_elements=['marker=01']), status_code=404)]) self.register_uris(uris) # Check that found volumes are returned even if pagination didn't # complete because call to get next link 404'ed for all the allowed # attempts self.assertEqual( [self.cloud._normalize_volume(vol1)], self.cloud.list_volumes()) self.assert_calls() def test_get_volume_by_id(self): vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1')) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', '01']), json={'volume': vol1} ) ]) self.assertEqual( self.cloud._normalize_volume(vol1), self.cloud.get_volume_by_id('01')) self.assert_calls() def test_create_volume(self): vol1 = 
meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1')) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes']), json={'volume': vol1}, validate=dict(json={ 'volume': { 'size': 50, 'name': 'vol1', }})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [vol1]}), ]) self.cloud.create_volume(50, name='vol1') self.assert_calls() def test_create_bootable_volume(self): vol1 = meta.obj_to_munch(fakes.FakeVolume('01', 'available', 'vol1')) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes']), json={'volume': vol1}, validate=dict(json={ 'volume': { 'size': 50, 'name': 'vol1', }})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [vol1]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', '01', 'action']), validate=dict( json={'os-set_bootable': {'bootable': True}})), ]) self.cloud.create_volume(50, name='vol1', bootable=True) self.assert_calls() shade-1.31.0/shade/tests/unit/test_baremetal_ports.py0000666000175000017500000001034413440327640022712 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_baremetal_ports ---------------------------------- Tests for baremetal port-related operations """ from testscenarios import load_tests_apply_scenarios as load_tests # noqa from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestBaremetalPort(base.IronicTestCase): def setUp(self): super(TestBaremetalPort, self).setUp() self.fake_baremetal_node = fakes.make_fake_machine( self.name, self.uuid) # TODO(TheJulia): Some tests below have fake ports, # since they are required in some processes. Let's refactor # them at some point to use self.fake_baremetal_port.
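        # make_fake_port builds a minimal Ironic port document (a MAC
        # address tied to the fake node's uuid); two ports with distinct
        # MACs let the list tests below assert both ordering and count.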
self.fake_baremetal_port = fakes.make_fake_port( '00:01:02:03:04:05', node_id=self.uuid) self.fake_baremetal_port2 = fakes.make_fake_port( '0a:0b:0c:0d:0e:0f', node_id=self.uuid) def test_list_nics(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='ports'), json={'ports': [self.fake_baremetal_port, self.fake_baremetal_port2]}), ]) return_value = self.op_cloud.list_nics() self.assertEqual(2, len(return_value)) self.assertEqual(self.fake_baremetal_port, return_value[0]) self.assert_calls() def test_list_nics_failure(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='ports'), status_code=400) ]) self.assertRaises(exc.OpenStackCloudException, self.op_cloud.list_nics) self.assert_calls() def test_list_nics_for_machine(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'ports']), json={'ports': [self.fake_baremetal_port, self.fake_baremetal_port2]}), ]) return_value = self.op_cloud.list_nics_for_machine( self.fake_baremetal_node['uuid']) self.assertEqual(2, len(return_value)) self.assertEqual(self.fake_baremetal_port, return_value[0]) self.assert_calls() def test_list_nics_for_machine_failure(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'ports']), status_code=400) ]) self.assertRaises(exc.OpenStackCloudException, self.op_cloud.list_nics_for_machine, self.fake_baremetal_node['uuid']) self.assert_calls() def test_get_nic_by_mac(self): mac = self.fake_baremetal_port['address'] query = 'detail?address=%s' % mac self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='ports', append=[query]), json={'ports': [self.fake_baremetal_port]}), ]) return_value = self.op_cloud.get_nic_by_mac(mac) self.assertEqual(self.fake_baremetal_port, return_value) self.assert_calls() def test_get_nic_by_mac_failure(self): mac = self.fake_baremetal_port['address'] query = 'detail?address=%s' % mac self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='ports', append=[query]), json={'ports': []}), ]) self.assertIsNone(self.op_cloud.get_nic_by_mac(mac)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_port.py0000666000175000017500000003272313440327640020520 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
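# A minimal usage sketch of the port calls exercised in this module,
# assuming a clouds.yaml entry named 'mycloud' (illustrative only, not
# run by the tests):
#
#     import shade
#     cloud = shade.openstack_cloud(cloud='mycloud')
#     port = cloud.create_port(network_id='net-id', name='web-port',
#                              admin_state_up=True)
#     cloud.update_port(port['id'], name='web-port-renamed')
#     cloud.delete_port(port['id'])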
""" test_port ---------------------------------- Test port resource (managed by neutron) """ from shade.exc import OpenStackCloudException from shade.tests.unit import base class TestPort(base.RequestsMockTestCase): mock_neutron_port_create_rep = { 'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'test-port-name', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'test-net-id', 'tenant_id': 'test-tenant-id', 'binding:vif_details': {}, 'binding:vnic_type': 'normal', 'binding:vif_type': 'unbound', 'device_owner': '', 'mac_address': '50:1c:0d:e4:f0:0d', 'binding:profile': {}, 'fixed_ips': [ { 'subnet_id': 'test-subnet-id', 'ip_address': '29.29.29.29' } ], 'id': 'test-port-id', 'security_groups': [], 'device_id': '' } } mock_neutron_port_update_rep = { 'port': { 'status': 'DOWN', 'binding:host_id': '', 'name': 'test-port-name-updated', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'test-net-id', 'tenant_id': 'test-tenant-id', 'binding:vif_details': {}, 'binding:vnic_type': 'normal', 'binding:vif_type': 'unbound', 'device_owner': '', 'mac_address': '50:1c:0d:e4:f0:0d', 'binding:profile': {}, 'fixed_ips': [ { 'subnet_id': 'test-subnet-id', 'ip_address': '29.29.29.29' } ], 'id': 'test-port-id', 'security_groups': [], 'device_id': '' } } mock_neutron_port_list_rep = { 'ports': [ { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': 'first-port', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': '70c1db1f-b701-45bd-96e0-a313ee3430b3', 'tenant_id': '', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'network:router_gateway', 'mac_address': 'fa:16:3e:58:42:ed', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '008ba151-0b8c-4a67-98b5-0d2b87666062', 'ip_address': '172.24.4.2' } ], 'id': 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', 'security_groups': [], 'device_id': '9ae135f4-b6e0-4dad-9e91-3c223e385824' }, { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': '', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'f27aa545-cbdd-4907-b0c6-c9e8b039dcc2', 'tenant_id': 'd397de8a63f341818f198abb0966f6f3', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'network:router_interface', 'mac_address': 'fa:16:3e:bb:3c:e4', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '288bf4a1-51ba-43b6-9d0a-520e9005db17', 'ip_address': '10.0.0.1' } ], 'id': 'f71a6703-d6de-4be1-a91a-a570ede1d159', 'security_groups': [], 'device_id': '9ae135f4-b6e0-4dad-9e91-3c223e385824' } ] } def test_create_port(self): self.register_uris([ dict(method="POST", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_create_rep, validate=dict( json={'port': { 'network_id': 'test-net-id', 'name': 'test-port-name', 'admin_state_up': True}})) ]) port = self.cloud.create_port( network_id='test-net-id', name='test-port-name', admin_state_up=True) self.assertEqual(self.mock_neutron_port_create_rep['port'], port) self.assert_calls() def test_create_port_parameters(self): """Test that we detect invalid arguments passed to create_port""" self.assertRaises( TypeError, self.cloud.create_port, network_id='test-net-id', nome='test-port-name', stato_amministrativo_porta=True) def test_create_port_exception(self): self.register_uris([ dict(method="POST", 
uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), status_code=500, validate=dict( json={'port': { 'network_id': 'test-net-id', 'name': 'test-port-name', 'admin_state_up': True}})) ]) self.assertRaises( OpenStackCloudException, self.cloud.create_port, network_id='test-net-id', name='test-port-name', admin_state_up=True) self.assert_calls() def test_update_port(self): port_id = 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port_id]), json=self.mock_neutron_port_update_rep, validate=dict( json={'port': {'name': 'test-port-name-updated'}})) ]) port = self.cloud.update_port( name_or_id=port_id, name='test-port-name-updated') self.assertEqual(self.mock_neutron_port_update_rep['port'], port) self.assert_calls() def test_update_port_parameters(self): """Test that we detect invalid arguments passed to update_port""" self.assertRaises( TypeError, self.cloud.update_port, name_or_id='test-port-id', nome='test-port-name-updated') def test_update_port_exception(self): port_id = 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port_id]), status_code=500, validate=dict( json={'port': {'name': 'test-port-name-updated'}})) ]) self.assertRaises( OpenStackCloudException, self.cloud.update_port, name_or_id='d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b', name='test-port-name-updated') self.assert_calls() def test_list_ports(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.list_ports() self.assertItemsEqual(self.mock_neutron_port_list_rep['ports'], ports) self.assert_calls() def test_list_ports_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), status_code=500) ]) self.assertRaises(OpenStackCloudException, self.cloud.list_ports) def test_search_ports_by_id(self): port_id = 'f71a6703-d6de-4be1-a91a-a570ede1d159' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.search_ports(name_or_id=port_id) self.assertEqual(1, len(ports)) self.assertEqual('fa:16:3e:bb:3c:e4', ports[0]['mac_address']) self.assert_calls() def test_search_ports_by_name(self): port_name = "first-port" self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.search_ports(name_or_id=port_name) self.assertEqual(1, len(ports)) self.assertEqual('fa:16:3e:58:42:ed', ports[0]['mac_address']) self.assert_calls() def test_search_ports_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) ports = self.cloud.search_ports(name_or_id='non-existent') self.assertEqual(0, len(ports)) self.assert_calls() def test_delete_port(self): port_id = 'd80b1a3b-4fc1-49f3-952e-1e2ab7081d8b' 
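        # delete_port resolves the human-readable name 'first-port' to
        # this id via a ports list call before issuing the DELETE, which
        # is why two requests are mocked below.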
self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port_id]), json={}) ]) self.assertTrue(self.cloud.delete_port(name_or_id='first-port')) self.assert_calls() def test_delete_port_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json=self.mock_neutron_port_list_rep) ]) self.assertFalse(self.cloud.delete_port(name_or_id='non-existent')) self.assert_calls() def test_delete_port_multiple_found(self): port_name = "port-name" port1 = dict(id='123', name=port_name) port2 = dict(id='456', name=port_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json={'ports': [port1, port2]}) ]) self.assertRaises(OpenStackCloudException, self.cloud.delete_port, port_name) self.assert_calls() def test_delete_port_multiple_using_id(self): port_name = "port-name" port1 = dict(id='123', name=port_name) port2 = dict(id='456', name=port_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json']), json={'ports': [port1, port2]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', '%s.json' % port1['id']]), json={}) ]) self.assertTrue(self.cloud.delete_port(name_or_id=port1['id'])) self.assert_calls() def test_get_port_by_id(self): fake_port = dict(id='123', name='456') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports', fake_port['id']]), json={'port': fake_port}) ]) r = self.cloud.get_port_by_id(fake_port['id']) self.assertIsNotNone(r) self.assertDictEqual(fake_port, r) self.assert_calls() shade-1.31.0/shade/tests/unit/test_server_console.py0000666000175000017500000000541513440327640022562 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
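# A minimal sketch of driving the console API covered by this module
# against a real cloud; 'mycloud' is an assumed clouds.yaml entry and
# the helper is illustrative only -- the tests never call it.
def _example_fetch_console_log():  # pragma: no cover
    import shade
    cloud = shade.openstack_cloud(cloud='mycloud')
    # Accepts a server dict or a name/id; returns '' when the driver
    # exposes no console log, mirroring the 400 case tested below.
    return cloud.get_server_console('my-server')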
import uuid from shade.tests.unit import base from shade.tests import fakes class TestServerConsole(base.RequestsMockTestCase): def setUp(self): super(TestServerConsole, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.server = fakes.make_fake_server( server_id=self.server_id, name=self.server_name) self.output = self.getUniqueString('output') def test_get_server_console_dict(self): self.register_uris([ dict(method='POST', uri='{endpoint}/servers/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=self.server_id), json={"output": self.output}, validate=dict( json={'os-getConsoleOutput': {'length': None}})) ]) self.assertEqual( self.output, self.cloud.get_server_console(self.server)) self.assert_calls() def test_get_server_console_name_or_id(self): self.register_uris([ dict(method='GET', uri='{endpoint}/servers/detail'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={"servers": [self.server]}), dict(method='POST', uri='{endpoint}/servers/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=self.server_id), json={"output": self.output}, validate=dict( json={'os-getConsoleOutput': {'length': None}})) ]) self.assertEqual( self.output, self.cloud.get_server_console(self.server['id'])) self.assert_calls() def test_get_server_console_no_console(self): self.register_uris([ dict(method='POST', uri='{endpoint}/servers/{id}/action'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=self.server_id), status_code=400, validate=dict( json={'os-getConsoleOutput': {'length': None}})) ]) self.assertEqual('', self.cloud.get_server_console(self.server)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_limits.py0000666000175000017500000000754413440327640021040 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
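# Illustrative sketch, assuming a clouds.yaml entry named 'mycloud';
# shows the call shape covered below, which is never executed by the
# tests themselves.
def _example_show_limits():  # pragma: no cover
    import shade
    cloud = shade.openstack_cloud(cloud='mycloud')
    # The mocked replies below use nova's raw 'absolute' limit keys,
    # such as maxTotalCores alongside the totalCoresUsed counters.
    return cloud.get_compute_limits()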
from shade.tests.unit import base class TestLimits(base.RequestsMockTestCase): def test_get_compute_limits(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['limits']), json={ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxSecurityGroups": 10, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalSecurityGroupsUsed": 0, "totalFloatingIpsUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } }), ]) self.cloud.get_compute_limits() self.assert_calls() def test_other_get_compute_limits(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['limits'], qs_elements=[ 'tenant_id={id}'.format(id=project.project_id) ]), json={ "limits": { "absolute": { "maxImageMeta": 128, "maxPersonality": 5, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxSecurityGroups": 10, "maxServerMeta": 128, "maxTotalCores": 20, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "maxTotalKeypairs": 100, "maxTotalRAMSize": 51200, "maxServerGroups": 10, "maxServerGroupMembers": 10, "totalCoresUsed": 0, "totalInstancesUsed": 0, "totalRAMUsed": 0, "totalSecurityGroupsUsed": 0, "totalFloatingIpsUsed": 0, "totalServerGroupsUsed": 0 }, "rate": [] } }), ]) self.op_cloud.get_compute_limits(project.project_id) self.assert_calls() shade-1.31.0/shade/tests/unit/test_users.py0000666000175000017500000002115013440327640020665 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
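# A hedged usage sketch (assumed clouds.yaml entry 'mycloud') of the
# keystone user calls under test; illustrative only, never invoked here.
def _example_create_user():  # pragma: no cover
    import shade
    operator = shade.operator_cloud(cloud='mycloud')
    # With keystone v3 an explicit domain_id is required, as
    # test_create_user_v3_no_domain asserts below.
    return operator.create_user(name='alice', email='alice@example.com',
                                password='secret', domain_id='default')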
import uuid import testtools import shade from shade.tests.unit import base class TestUsers(base.RequestsMockTestCase): def _get_keystone_mock_url(self, resource, append=None, v3=True): base_url_append = None if v3: base_url_append = 'v3' return self.get_mock_url( service_type='identity', interface='admin', resource=resource, append=append, base_url_append=base_url_append) def _get_user_list(self, user_data): uri = self._get_keystone_mock_url(resource='users') return { 'users': [ user_data.json_response['user'], ], 'links': { 'self': uri, 'previous': None, 'next': None, } } def test_create_user_v2(self): self.use_keystone_v2() user_data = self._get_user_data() self.register_uris([ dict(method='POST', uri=self._get_keystone_mock_url(resource='users', v3=False), status_code=200, json=user_data.json_response, validate=dict(json=user_data.json_request)), ]) user = self.op_cloud.create_user( name=user_data.name, email=user_data.email, password=user_data.password) self.assertEqual(user_data.name, user.name) self.assertEqual(user_data.email, user.email) self.assertEqual(user_data.user_id, user.id) self.assert_calls() def test_create_user_v3(self): user_data = self._get_user_data( domain_id=uuid.uuid4().hex, description=self.getUniqueString('description')) self.register_uris([ dict(method='POST', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=user_data.json_response, validate=dict(json=user_data.json_request)), ]) user = self.op_cloud.create_user( name=user_data.name, email=user_data.email, password=user_data.password, description=user_data.description, domain_id=user_data.domain_id) self.assertEqual(user_data.name, user.name) self.assertEqual(user_data.email, user.email) self.assertEqual(user_data.description, user.description) self.assertEqual(user_data.user_id, user.id) self.assert_calls() def test_update_user_password_v2(self): self.use_keystone_v2() user_data = self._get_user_data(email='test@example.com') mock_user_resource_uri = self._get_keystone_mock_url( resource='users', append=[user_data.user_id], v3=False) mock_users_uri = self._get_keystone_mock_url( resource='users', v3=False) self.register_uris([ # GET list to find user id # PUT user with password update # PUT empty update (password change is different than update) # but is always chained together [keystoneclient oddity] dict(method='GET', uri=mock_users_uri, status_code=200, json=self._get_user_list(user_data)), dict(method='PUT', uri=self._get_keystone_mock_url( resource='users', v3=False, append=[user_data.user_id, 'OS-KSADM', 'password']), status_code=200, json=user_data.json_response, validate=dict( json={'user': {'password': user_data.password}})), dict(method='PUT', uri=mock_user_resource_uri, status_code=200, json=user_data.json_response, validate=dict(json={'user': {}}))]) user = self.op_cloud.update_user( user_data.user_id, password=user_data.password) self.assertEqual(user_data.name, user.name) self.assertEqual(user_data.email, user.email) self.assert_calls() def test_create_user_v3_no_domain(self): user_data = self._get_user_data(domain_id=uuid.uuid4().hex, email='test@example.com') with testtools.ExpectedException( shade.OpenStackCloudException, "User or project creation requires an explicit" " domain_id argument." 
): self.op_cloud.create_user( name=user_data.name, email=user_data.email, password=user_data.password) def test_delete_user(self): user_data = self._get_user_data(domain_id=uuid.uuid4().hex) user_resource_uri = self._get_keystone_mock_url( resource='users', append=[user_data.user_id]) self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=self._get_user_list(user_data)), dict(method='GET', uri=user_resource_uri, status_code=200, json=user_data.json_response), dict(method='DELETE', uri=user_resource_uri, status_code=204)]) self.op_cloud.delete_user(user_data.name) self.assert_calls() def test_delete_user_not_found(self): self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json={'users': []})]) self.assertFalse(self.op_cloud.delete_user(self.getUniqueString())) def test_add_user_to_group(self): user_data = self._get_user_data() group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=self._get_user_list(user_data)), dict(method='GET', uri=self._get_keystone_mock_url(resource='groups'), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='PUT', uri=self._get_keystone_mock_url( resource='groups', append=[group_data.group_id, 'users', user_data.user_id]), status_code=200)]) self.op_cloud.add_user_to_group(user_data.user_id, group_data.group_id) self.assert_calls() def test_is_user_in_group(self): user_data = self._get_user_data() group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), status_code=200, json=self._get_user_list(user_data)), dict(method='GET', uri=self._get_keystone_mock_url(resource='groups'), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='HEAD', uri=self._get_keystone_mock_url( resource='groups', append=[group_data.group_id, 'users', user_data.user_id]), status_code=204)]) self.assertTrue(self.op_cloud.is_user_in_group( user_data.user_id, group_data.group_id)) self.assert_calls() def test_remove_user_from_group(self): user_data = self._get_user_data() group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self._get_keystone_mock_url(resource='users'), json=self._get_user_list(user_data)), dict(method='GET', uri=self._get_keystone_mock_url(resource='groups'), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='DELETE', uri=self._get_keystone_mock_url( resource='groups', append=[group_data.group_id, 'users', user_data.user_id]), status_code=204)]) self.op_cloud.remove_user_from_group(user_data.user_id, group_data.group_id) self.assert_calls() shade-1.31.0/shade/tests/unit/test_stack.py0000666000175000017500000005624713440327640020650 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
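# Illustrative sketch (assumed clouds.yaml entry 'mycloud') of the heat
# stack lifecycle these tests mock end to end; not executed by the tests.
def _example_stack_lifecycle():  # pragma: no cover
    import shade
    cloud = shade.openstack_cloud(cloud='mycloud')
    # wait=True makes shade poll the stack event list until a terminal
    # status, matching the event mocks used in the *_wait tests below.
    cloud.create_stack('my-stack', template_file='my-template.yaml',
                       wait=True)
    cloud.update_stack('my-stack', template_file='my-template.yaml')
    return cloud.delete_stack('my-stack', wait=True)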
import tempfile import testtools import shade from shade import meta from shade.tests import fakes from shade.tests.unit import base class TestStack(base.RequestsMockTestCase): def setUp(self): super(TestStack, self).setUp() self.stack_id = self.getUniqueString('id') self.stack_name = self.getUniqueString('name') self.stack_tag = self.getUniqueString('tag') self.stack = fakes.make_fake_stack(self.stack_id, self.stack_name) def test_list_stacks(self): fake_stacks = [ self.stack, fakes.make_fake_stack( self.getUniqueString('id'), self.getUniqueString('name')) ] self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stacks": fake_stacks}), ]) stacks = self.cloud.list_stacks() self.assertEqual( [f.toDict() for f in self.cloud._normalize_stacks(fake_stacks)], [f.toDict() for f in stacks]) self.assert_calls() def test_list_stacks_exception(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), status_code=404) ]) with testtools.ExpectedException(shade.OpenStackCloudURINotFound): self.cloud.list_stacks() self.assert_calls() def test_search_stacks(self): fake_stacks = [ self.stack, fakes.make_fake_stack( self.getUniqueString('id'), self.getUniqueString('name')) ] self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stacks": fake_stacks}), ]) stacks = self.cloud.search_stacks() self.assertEqual( self.cloud._normalize_stacks(meta.obj_list_to_munch(fake_stacks)), stacks) self.assert_calls() def test_search_stacks_filters(self): fake_stacks = [ self.stack, fakes.make_fake_stack( self.getUniqueString('id'), self.getUniqueString('name'), status='CREATE_FAILED') ] self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stacks": fake_stacks}), ]) filters = {'status': 'FAILED'} stacks = self.cloud.search_stacks(filters=filters) self.assertEqual( self.cloud._normalize_stacks( meta.obj_list_to_munch(fake_stacks[1:])), stacks) self.assert_calls() def test_search_stacks_exception(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), status_code=404) ]) with testtools.ExpectedException(shade.OpenStackCloudURINotFound): self.cloud.search_stacks() def test_delete_stack(self): resolve = 'resolve_outputs=False' self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{name}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name, resolve=resolve), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve), json={"stack": self.stack}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id)), ]) self.assertTrue(self.cloud.delete_stack(self.stack_name)) self.assert_calls() def test_delete_stack_not_found(self): resolve = 'resolve_outputs=False' self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/stack_name?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, resolve=resolve), status_code=404), ]) self.assertFalse(self.cloud.delete_stack('stack_name')) self.assert_calls() def 
test_delete_stack_exception(self): resolve = 'resolve_outputs=False' self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, resolve=resolve), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve), json={"stack": self.stack}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id), status_code=400, reason="ouch"), ]) with testtools.ExpectedException(shade.OpenStackCloudBadRequest): self.cloud.delete_stack(self.stack_id) self.assert_calls() def test_delete_stack_wait(self): marker_event = fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE') marker_qs = 'marker={e_id}&sort_dir=asc'.format( e_id=marker_event['id']) resolve = 'resolve_outputs=False' self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, resolve=resolve), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve), json={"stack": self.stack}), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs='limit=1&sort_dir=desc'), complete_qs=True, json={"events": [marker_event]}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id)), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs=marker_qs), complete_qs=True, json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='DELETE_COMPLETE'), ]}), dict(method='GET', uri='{endpoint}/stacks/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve), status_code=404), ]) self.assertTrue(self.cloud.delete_stack(self.stack_id, wait=True)) self.assert_calls() def test_delete_stack_wait_failed(self): failed_stack = self.stack.copy() failed_stack['stack_status'] = 'DELETE_FAILED' marker_event = fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE') marker_qs = 'marker={e_id}&sort_dir=asc'.format( e_id=marker_event['id']) resolve = 'resolve_outputs=False' self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, resolve=resolve), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve), json={"stack": self.stack}), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, 
id=self.stack_id, qs='limit=1&sort_dir=desc'), complete_qs=True, json={"events": [marker_event]}), dict(method='DELETE', uri='{endpoint}/stacks/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id)), dict(method='GET', uri='{endpoint}/stacks/{id}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, qs=marker_qs), complete_qs=True, json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='DELETE_COMPLETE'), ]}), dict(method='GET', uri='{endpoint}/stacks/{id}?resolve_outputs=False'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}?{resolve}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name, resolve=resolve), json={"stack": failed_stack}), ]) with testtools.ExpectedException(shade.OpenStackCloudException): self.cloud.delete_stack(self.stack_id, wait=True) self.assert_calls() def test_create_stack(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='POST', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stack": self.stack}, validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'stack_name': self.stack_name, 'tags': self.stack_tag, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60} )), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.create_stack( self.stack_name, tags=self.stack_tag, template_file=test_template.name ) self.assert_calls() def test_create_stack_wait(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='POST', uri='{endpoint}/stacks'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT), json={"stack": self.stack}, validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'stack_name': self.stack_name, 'tags': self.stack_tag, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60} )), dict( method='GET', uri='{endpoint}/stacks/{name}/events?sort_dir=asc'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE', resource_name='name'), ]}), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) 
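        # With wait=True the create polls the event list; the mocked
        # CREATE_COMPLETE event registered above lets the wait loop end
        # before the final stack lookup.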
self.cloud.create_stack( self.stack_name, tags=self.stack_tag, template_file=test_template.name, wait=True) self.assert_calls() def test_update_stack(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='PUT', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60})), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.update_stack( self.stack_name, template_file=test_template.name) self.assert_calls() def test_update_stack_wait(self): marker_event = fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='CREATE_COMPLETE', resource_name='name') marker_qs = 'marker={e_id}&sort_dir=asc'.format( e_id=marker_event['id']) test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.register_uris([ dict( method='GET', uri='{endpoint}/stacks/{name}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name, qs='limit=1&sort_dir=desc'), json={"events": [marker_event]}), dict( method='PUT', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), validate=dict( json={ 'disable_rollback': False, 'environment': {}, 'files': {}, 'parameters': {}, 'template': fakes.FAKE_TEMPLATE_CONTENT, 'timeout_mins': 60})), dict( method='GET', uri='{endpoint}/stacks/{name}/events?{qs}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name, qs=marker_qs), json={"events": [ fakes.make_fake_stack_event( self.stack_id, self.stack_name, status='UPDATE_COMPLETE', resource_name='name'), ]}), dict( method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict( method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) self.cloud.update_stack( self.stack_name, template_file=test_template.name, wait=True) self.assert_calls() def test_get_stack(self): self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": self.stack}), ]) res = self.cloud.get_stack(self.stack_name) self.assertIsNotNone(res) self.assertEqual(self.stack['stack_name'], res['stack_name']) self.assertEqual(self.stack['stack_name'], 
res['name']) self.assertEqual(self.stack['stack_status'], res['stack_status']) self.assertEqual('COMPLETE', res['status']) self.assert_calls() def test_get_stack_in_progress(self): in_progress = self.stack.copy() in_progress['stack_status'] = 'CREATE_IN_PROGRESS' self.register_uris([ dict(method='GET', uri='{endpoint}/stacks/{name}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, name=self.stack_name), status_code=302, headers=dict( location='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name))), dict(method='GET', uri='{endpoint}/stacks/{name}/{id}'.format( endpoint=fakes.ORCHESTRATION_ENDPOINT, id=self.stack_id, name=self.stack_name), json={"stack": in_progress}), ]) res = self.cloud.get_stack(self.stack_name) self.assertIsNotNone(res) self.assertEqual(in_progress['stack_name'], res['stack_name']) self.assertEqual(in_progress['stack_name'], res['name']) self.assertEqual(in_progress['stack_status'], res['stack_status']) self.assertEqual('CREATE', res['action']) self.assertEqual('IN_PROGRESS', res['status']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_shade_operator.py0000666000175000017500000001355013440327640022530 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
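# Minimal sketch (assumed clouds.yaml entry 'mycloud') of the operator
# entry point exercised by this module; illustrative only. Note that
# operator-level calls such as hypervisor listing typically require
# admin credentials.
def _example_operator_usage():  # pragma: no cover
    import shade
    operator = shade.operator_cloud(cloud='mycloud')
    return [h['hypervisor_hostname'] for h in operator.list_hypervisors()]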
from keystoneauth1 import plugin as ksa_plugin from distutils import version as du_version import mock import testtools import os_client_config as occ from os_client_config import cloud_config import shade from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestShadeOperator(base.RequestsMockTestCase): def setUp(self): super(TestShadeOperator, self).setUp() def test_operator_cloud(self): self.assertIsInstance(self.op_cloud, shade.OperatorCloud) def test_get_image_name(self): self.use_glance() image_id = self.getUniqueString() fake_image = fakes.make_fake_image(image_id=image_id) list_return = {'images': [fake_image]} self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=list_return), dict(method='GET', uri='https://image.example.com/v2/images', json=list_return), ]) self.assertEqual('fake_image', self.op_cloud.get_image_name(image_id)) self.assertEqual( 'fake_image', self.op_cloud.get_image_name('fake_image')) self.assert_calls() def test_get_image_id(self): self.use_glance() image_id = self.getUniqueString() fake_image = fakes.make_fake_image(image_id=image_id) list_return = {'images': [fake_image]} self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=list_return), dict(method='GET', uri='https://image.example.com/v2/images', json=list_return), ]) self.assertEqual(image_id, self.op_cloud.get_image_id(image_id)) self.assertEqual( image_id, self.op_cloud.get_image_id('fake_image')) self.assert_calls() @mock.patch.object(cloud_config.CloudConfig, 'get_session') def test_get_session_endpoint_exception(self, get_session_mock): class FakeException(Exception): pass def side_effect(*args, **kwargs): raise FakeException("No service") session_mock = mock.Mock() session_mock.get_endpoint.side_effect = side_effect get_session_mock.return_value = session_mock self.op_cloud.name = 'testcloud' self.op_cloud.region_name = 'testregion' with testtools.ExpectedException( exc.OpenStackCloudException, "Error getting image endpoint on testcloud:testregion:" " No service"): self.op_cloud.get_session_endpoint("image") @mock.patch.object(cloud_config.CloudConfig, 'get_session') def test_get_session_endpoint_unavailable(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = None get_session_mock.return_value = session_mock image_endpoint = self.op_cloud.get_session_endpoint("image") self.assertIsNone(image_endpoint) @mock.patch.object(cloud_config.CloudConfig, 'get_session') def test_get_session_endpoint_identity(self, get_session_mock): session_mock = mock.Mock() get_session_mock.return_value = session_mock self.op_cloud.get_session_endpoint('identity') # occ > 1.26.0 fixes keystoneclient construction. Unfortunately, it # breaks our mocking of what keystoneclient does here. 
Since we're # close to just getting rid of ksc anyway, just put in a version match occ_version = du_version.StrictVersion(occ.__version__) if occ_version > du_version.StrictVersion('1.26.0'): kwargs = dict( interface='public', region_name='RegionOne', service_name=None, service_type='identity') else: kwargs = dict(interface=ksa_plugin.AUTH_INTERFACE) session_mock.get_endpoint.assert_called_with(**kwargs) @mock.patch.object(cloud_config.CloudConfig, 'get_session') def test_has_service_no(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = None get_session_mock.return_value = session_mock self.assertFalse(self.op_cloud.has_service("image")) @mock.patch.object(cloud_config.CloudConfig, 'get_session') def test_has_service_yes(self, get_session_mock): session_mock = mock.Mock() session_mock.get_endpoint.return_value = 'http://fake.url' get_session_mock.return_value = session_mock self.assertTrue(self.op_cloud.has_service("image")) def test_list_hypervisors(self): '''This test verifies that calling list_hypervisors results in a call to nova client.''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-hypervisors', 'detail']), json={'hypervisors': [ fakes.make_fake_hypervisor('1', 'testserver1'), fakes.make_fake_hypervisor('2', 'testserver2'), ]}), ]) r = self.op_cloud.list_hypervisors() self.assertEqual(2, len(r)) self.assertEqual('testserver1', r[0]['hypervisor_hostname']) self.assertEqual('testserver2', r[1]['hypervisor_hostname']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_availability_zones.py0000666000175000017500000000441013440327640023414 0ustar zuulzuul00000000000000# Copyright (c) 2017 Red Hat, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
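# Illustrative sketch (assumed clouds.yaml entry 'mycloud'); by default
# only zones reported as available are returned, and unavailable=True
# widens the result, as the tests below assert. Never run by the tests.
def _example_zone_names():  # pragma: no cover
    import shade
    cloud = shade.openstack_cloud(cloud='mycloud')
    return cloud.list_availability_zone_names(unavailable=True)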
from shade.tests.unit import base from shade.tests import fakes _fake_zone_list = { "availabilityZoneInfo": [ { "hosts": None, "zoneName": "az1", "zoneState": { "available": True } }, { "hosts": None, "zoneName": "nova", "zoneState": { "available": False } } ] } class TestAvailabilityZoneNames(base.RequestsMockTestCase): def test_list_availability_zone_names(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-availability-zone'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=_fake_zone_list), ]) self.assertEqual( ['az1'], self.cloud.list_availability_zone_names()) self.assert_calls() def test_unauthorized_availability_zone_names(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-availability-zone'.format( endpoint=fakes.COMPUTE_ENDPOINT), status_code=403), ]) self.assertEqual( [], self.cloud.list_availability_zone_names()) self.assert_calls() def test_list_all_availability_zone_names(self): self.register_uris([ dict(method='GET', uri='{endpoint}/os-availability-zone'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=_fake_zone_list), ]) self.assertEqual( ['az1', 'nova'], self.cloud.list_availability_zone_names(unavailable=True)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_qos_dscp_marking_rule.py0000666000175000017500000002767613440327640024121 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from shade import exc from shade.tests.unit import base class TestQosDscpMarkingRule(base.RequestsMockTestCase): policy_name = 'qos test policy' policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486' project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9' rule_id = 'ed1a2b05-0ad7-45d7-873f-008b575a02b3' rule_dscp_mark = 32 mock_policy = { 'id': policy_id, 'name': policy_name, 'description': '', 'rules': [], 'project_id': project_id, 'tenant_id': project_id, 'shared': False, 'is_default': False } mock_rule = { 'id': rule_id, 'dscp_mark': rule_dscp_mark, } qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." 
} enabled_neutron_extensions = [qos_extension] def test_get_qos_dscp_marking_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={'dscp_marking_rule': self.mock_rule}) ]) r = self.cloud.get_qos_dscp_marking_rule(self.policy_name, self.rule_id) self.assertDictEqual(self.mock_rule, r) self.assert_calls() def test_get_qos_dscp_marking_rule_no_qos_policy_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertRaises( exc.OpenStackCloudResourceNotFound, self.cloud.get_qos_dscp_marking_rule, self.policy_name, self.rule_id) self.assert_calls() def test_get_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_dscp_marking_rule, self.policy_name, self.rule_id) self.assert_calls() def test_create_qos_dscp_marking_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules']), json={'dscp_marking_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_dscp_marking_rule( self.policy_name, dscp_mark=self.rule_dscp_mark) self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_create_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_dscp_marking_rule, self.policy_name, dscp_mark=16) self.assert_calls() def test_update_qos_dscp_marking_rule(self): new_dscp_mark_value = 16 expected_rule = copy.copy(self.mock_rule) expected_rule['dscp_mark'] = new_dscp_mark_value self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', 
append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={'dscp_marking_rule': self.mock_rule}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={'dscp_marking_rule': expected_rule}, validate=dict( json={'dscp_marking_rule': { 'dscp_mark': new_dscp_mark_value}})) ]) rule = self.cloud.update_qos_dscp_marking_rule( self.policy_id, self.rule_id, dscp_mark=new_dscp_mark_value) self.assertDictEqual(expected_rule, rule) self.assert_calls() def test_update_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.update_qos_dscp_marking_rule, self.policy_id, self.rule_id, dscp_mark=8) self.assert_calls() def test_delete_qos_dscp_marking_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'dscp_marking_rules', '%s.json' % self.rule_id]), json={}) ]) self.assertTrue( self.cloud.delete_qos_dscp_marking_rule( self.policy_name, self.rule_id)) self.assert_calls() def test_delete_qos_dscp_marking_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_qos_dscp_marking_rule, self.policy_name, self.rule_id) self.assert_calls() def test_delete_qos_dscp_marking_rule_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 
'dscp_marking_rules', '%s.json' % self.rule_id]), status_code=404) ]) self.assertFalse( self.cloud.delete_qos_dscp_marking_rule( self.policy_name, self.rule_id)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_role_assignment.py0000666000175000017500000042747113440327640022725 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from shade import exc from shade.tests.unit import base import testtools from testtools import matchers class TestRoleAssignment(base.RequestsMockTestCase): def _build_role_assignment_response(self, role_id, scope_type, scope_id, entity_type, entity_id): self.assertThat(['group', 'user'], matchers.Contains(entity_type)) self.assertThat(['project', 'domain'], matchers.Contains(scope_type)) # NOTE(notmorgan): Links are thrown out by shade, but we construct them # for correctness. link_str = ('https://identity.example.com/identity/v3/{scope_t}s' '/{scopeid}/{entity_t}s/{entityid}/roles/{roleid}') return [{ 'links': {'assignment': link_str.format( scope_t=scope_type, scopeid=scope_id, entity_t=entity_type, entityid=entity_id, roleid=role_id)}, 'role': {'id': role_id}, 'scope': {scope_type: {'id': scope_id}}, entity_type: {'id': entity_id} }] def setUp(self, cloud_config_fixture='clouds.yaml'): super(TestRoleAssignment, self).setUp(cloud_config_fixture) self.role_data = self._get_role_data() self.domain_data = self._get_domain_data() self.user_data = self._get_user_data( domain_id=self.domain_data.domain_id) self.project_data = self._get_project_data( domain_id=self.domain_data.domain_id) self.project_data_v2 = self._get_project_data( project_name=self.project_data.project_name, project_id=self.project_data.project_id, v3=False) self.group_data = self._get_group_data( domain_id=self.domain_data.domain_id) self.user_project_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id) self.group_project_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id) self.user_domain_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id) self.group_domain_assignment = self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id) # Cleanup of instances to ensure garbage collection/no leaking memory # in tests.
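# addCleanup() callbacks run in reverse registration order after each
# test completes, so the delattr calls below explicitly drop every
# fixture object created above rather than waiting on normal teardown.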
self.addCleanup(delattr, self, 'role_data') self.addCleanup(delattr, self, 'user_data') self.addCleanup(delattr, self, 'domain_data') self.addCleanup(delattr, self, 'group_data') self.addCleanup(delattr, self, 'project_data') self.addCleanup(delattr, self, 'project_data_v2') self.addCleanup(delattr, self, 'user_project_assignment') self.addCleanup(delattr, self, 'group_project_assignment') self.addCleanup(delattr, self, 'user_domain_assignment') self.addCleanup(delattr, self, 'group_domain_assignment') def get_mock_url(self, service_type='identity', interface='admin', resource='role_assignments', append=None, base_url_append='v3', qs_elements=None): return super(TestRoleAssignment, self).get_mock_url( service_type, interface, resource, append, base_url_append, qs_elements) def test_grant_role_user_v2(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', status_code=201, uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201) ]) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_user_project_v2(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ 
self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201, ), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201) ]) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data_v2.project_id)) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data_v2.project_id)) self.assertTrue( self.op_cloud.grant_role( 
self.role_data.role_id, user=self.user_data.name, project=self.project_data_v2.project_id)) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_id, user=self.user_data.user_id, project=self.project_data_v2.project_id)) self.assert_calls() def test_grant_role_user_project_v2_exists(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), ]) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data_v2.project_id)) self.assert_calls() def test_grant_role_user_project(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), ]) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_user_project_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': 
[self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), ]) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_id, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_group_project(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % 
self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url( resource='projects', append=[self.project_data.project_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), ]) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, project=self.project_data.project_id)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_group_project_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id)}), ]) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, project=self.project_data.project_id)) self.assert_calls() def test_grant_role_user_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. 
domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. 
domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204), ]) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_id)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_id)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_name)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_grant_role_user_domain_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. 
domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'user.id=%s' % self.user_data.user_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), ]) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_name)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_grant_role_group_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', 
qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id]), status_code=204), ]) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_id)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_id)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name)) self.assertTrue(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_grant_role_group_domain_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={ 'role_assignments': 
self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'role.id=%s' % self.role_data.role_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'group.id=%s' % self.group_data.group_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), ]) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name)) self.assertFalse(self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_revoke_role_user_v2(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', 
uri=self.get_mock_url( base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url( base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url( base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url( base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_user_project_v2(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', 
append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}) ]) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_id, user=self.user_data.name, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_id, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_user_project_v2_exists(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_user_project(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, 
json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), ]) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_user_project_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', 
uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_id, user=self.user_data.user_id, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_group_project(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), ]) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, project=self.project_data.project_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_group_project_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', 
uri=self.get_mock_url(resource='projects'), status_code=200, json={'projects': [ self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, project=self.project_data.project_id)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, project=self.project_data.project_id)) self.assert_calls() def test_revoke_role_user_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. 
domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), ]) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_name)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_revoke_role_user_domain_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. 
domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. 
domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id])), ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_name)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_name)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain=self.domain_data.domain_id)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.user_id, domain=self.domain_data.domain_id)) self.assert_calls() def test_revoke_role_group_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), dict(method='GET', 
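# Fourth and final lookup round: this test covers every combination of
# group name/id and domain name/id.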
uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': []}), ]) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_name)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_id)) self.assertFalse(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_id)) self.assert_calls() def test_revoke_role_group_domain_exists(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), 
status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'group.id=%s' % self.group_data.group_id, 'scope.domain.id=%s' % self.domain_data.domain_id, 'role.id=%s' % self.role_data.role_id]), status_code=200, complete_qs=True, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='domain', scope_id=self.domain_data.domain_id, entity_type='group', entity_id=self.group_data.group_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_id, 'groups', self.group_data.group_id, 'roles', self.role_data.role_id])), ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_name)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_id)) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_id, domain=self.domain_data.domain_id)) self.assert_calls() def test_grant_no_role(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': []}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Role {0} not found'.format(self.role_data.role_name) ): self.op_cloud.grant_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name) self.assert_calls() def test_revoke_no_role(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': []}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Role {0} not found'.format(self.role_data.role_name) ): self.op_cloud.revoke_role( self.role_data.role_name, group=self.group_data.group_name, domain=self.domain_data.domain_name) self.assert_calls() def test_grant_no_user_or_group_specified(self): self.register_uris([ 
dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a user or a group' ): self.op_cloud.grant_role(self.role_data.role_name) self.assert_calls() def test_revoke_no_user_or_group_specified(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a user or a group' ): self.op_cloud.revoke_role(self.role_data.role_name) self.assert_calls() def test_grant_no_user_or_group(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': []}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a user or a group' ): self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name) self.assert_calls() def test_revoke_no_user_or_group(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': []}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a user or a group' ): self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name) self.assert_calls() def test_grant_both_user_and_group(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Specify either a group or a user, not both' ): self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, group=self.group_data.group_name) self.assert_calls() def test_revoke_both_user_and_group(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(resource='groups'), status_code=200, json={'groups': [self.group_data.json_response['group']]}), ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Specify either a group or a user, not both' ): self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, group=self.group_data.group_name) self.assert_calls() def test_grant_both_project_and_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', 
qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % self.domain_data.domain_id)), status_code=200, json={'projects': [self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}), dict(method='PUT', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204) ]) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_revoke_both_project_and_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=[self.domain_data.domain_name]), status_code=200, json=self.domain_data.json_response), dict(method='GET', uri=self.get_mock_url(resource='users', qs_elements=['domain_id=%s' % self.domain_data. domain_id]), complete_qs=True, status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % self.domain_data.domain_id)), status_code=200, json={'projects': [self.project_data.json_response['project']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=[ 'user.id=%s' % self.user_data.user_id, 'scope.project.id=%s' % self.project_data.project_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}), dict(method='DELETE', uri=self.get_mock_url(resource='projects', append=[self.project_data.project_id, 'users', self.user_data.user_id, 'roles', self.role_data.role_id]), status_code=204) ]) self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, domain=self.domain_data.domain_name)) self.assert_calls() def test_grant_no_project_or_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=['user.id=%s' % self.user_data.user_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={'role_assignments': []}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a domain or project' ): self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name) self.assert_calls() def test_revoke_no_project_or_domain(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': 
[self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=['user.id=%s' % self.user_data.user_id, 'role.id=%s' % self.role_data.role_id]), complete_qs=True, status_code=200, json={ 'role_assignments': self._build_role_assignment_response( role_id=self.role_data.role_id, scope_type='project', scope_id=self.project_data.project_id, entity_type='user', entity_id=self.user_data.user_id)}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, 'Must specify either a domain or project' ): self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name) self.assert_calls() def test_grant_bad_domain_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=['baddomain']), status_code=404, text='Could not find domain: baddomain') ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, 'Failed to get domain baddomain' ): self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, domain='baddomain') self.assert_calls() def test_revoke_bad_domain_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(resource='domains', append=['baddomain']), status_code=404, text='Could not find domain: baddomain') ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, 'Failed to get domain baddomain' ): self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, domain='baddomain') self.assert_calls() def test_grant_role_user_project_v2_wait(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), ]) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True)) self.assert_calls() def test_grant_role_user_project_v2_wait_exception(self): self.use_keystone_v2() with testtools.ExpectedException( exc.OpenStackCloudTimeout, 'Timeout waiting for role to be granted' ): self.register_uris([ 
dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[ self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), dict(method='PUT', uri=self.get_mock_url( base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=201), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[ self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), ]) self.assertTrue( self.op_cloud.grant_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True, timeout=0.01)) self.assert_calls(do_count=False) def test_revoke_role_user_project_v2_wait(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': []}), ]) self.assertTrue( self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True)) self.assert_calls(do_count=False) def test_revoke_role_user_project_v2_wait_exception(self): self.use_keystone_v2() self.register_uris([ dict(method='GET', uri=self.get_mock_url(base_url_append='OS-KSADM', resource='roles'), status_code=200, json={'roles': [self.role_data.json_response['role']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='users'), status_code=200, json={'users': [self.user_data.json_response['user']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants'), status_code=200, json={ 'tenants': [ self.project_data_v2.json_response['tenant']]}), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': 
[self.role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles', 'OS-KSADM', self.role_data.role_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(base_url_append=None, resource='tenants', append=[self.project_data_v2.project_id, 'users', self.user_data.user_id, 'roles']), status_code=200, json={'roles': [self.role_data.json_response['role']]}), ]) with testtools.ExpectedException( exc.OpenStackCloudTimeout, 'Timeout waiting for role to be revoked' ): self.assertTrue(self.op_cloud.revoke_role( self.role_data.role_name, user=self.user_data.name, project=self.project_data.project_id, wait=True, timeout=0.01)) self.assert_calls(do_count=False) shade-1.31.0/shade/tests/unit/test_aggregate.py0000666000175000017500000001521513440327640021457 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from shade.tests.unit import base from shade.tests import fakes class TestAggregate(base.RequestsMockTestCase): def setUp(self): super(TestAggregate, self).setUp() self.aggregate_name = self.getUniqueString('aggregate') self.fake_aggregate = fakes.make_fake_aggregate(1, self.aggregate_name) def test_create_aggregate(self): create_aggregate = self.fake_aggregate.copy() del create_aggregate['metadata'] del create_aggregate['hosts'] self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregate': create_aggregate}, validate=dict(json={ 'aggregate': { 'name': self.aggregate_name, 'availability_zone': None, }})), ]) self.op_cloud.create_aggregate(name=self.aggregate_name) self.assert_calls() def test_create_aggregate_with_az(self): availability_zone = 'az1' az_aggregate = fakes.make_fake_aggregate( 1, self.aggregate_name, availability_zone=availability_zone) create_aggregate = az_aggregate.copy() del create_aggregate['metadata'] del create_aggregate['hosts'] self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregate': create_aggregate}, validate=dict(json={ 'aggregate': { 'name': self.aggregate_name, 'availability_zone': availability_zone, }})), ]) self.op_cloud.create_aggregate( name=self.aggregate_name, availability_zone=availability_zone) self.assert_calls() def test_delete_aggregate(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1'])), ]) self.assertTrue(self.op_cloud.delete_aggregate('1')) self.assert_calls() def test_update_aggregate_set_az(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1']), 
json={'aggregate': self.fake_aggregate}, validate=dict( json={ 'aggregate': { 'availability_zone': 'az', }})), ]) self.op_cloud.update_aggregate(1, availability_zone='az') self.assert_calls() def test_update_aggregate_unset_az(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1']), json={'aggregate': self.fake_aggregate}, validate=dict( json={ 'aggregate': { 'availability_zone': None, }})), ]) self.op_cloud.update_aggregate(1, availability_zone=None) self.assert_calls() def test_set_aggregate_metadata(self): metadata = {'key': 'value'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1', 'action']), json={'aggregate': self.fake_aggregate}, validate=dict( json={'set_metadata': {'metadata': metadata}})), ]) self.op_cloud.set_aggregate_metadata('1', metadata) self.assert_calls() def test_add_host_to_aggregate(self): hostname = 'host1' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1', 'action']), json={'aggregate': self.fake_aggregate}, validate=dict( json={'add_host': {'host': hostname}})), ]) self.op_cloud.add_host_to_aggregate('1', hostname) self.assert_calls() def test_remove_host_from_aggregate(self): hostname = 'host1' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates']), json={'aggregates': [self.fake_aggregate]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-aggregates', '1', 'action']), json={'aggregate': self.fake_aggregate}, validate=dict( json={'remove_host': {'host': hostname}})), ]) self.op_cloud.remove_host_from_aggregate('1', hostname) self.assert_calls() shade-1.31.0/shade/tests/unit/test_qos_rule_type.py0000666000175000017500000001301613440327640022420 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from shade import exc from shade.tests.unit import base class TestQosRuleType(base.RequestsMockTestCase): rule_type_name = "bandwidth_limit" qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." 
} qos_rule_type_details_extension = { "updated": "2017-06-22T10:00:00-00:00", "name": "Details of QoS rule types", "links": [], "alias": "qos-rule-type-details", "description": ("Expose details about QoS rule types supported by " "loaded backend drivers") } mock_rule_type_bandwidth_limit = { 'type': 'bandwidth_limit' } mock_rule_type_dscp_marking = { 'type': 'dscp_marking' } mock_rule_types = [ mock_rule_type_bandwidth_limit, mock_rule_type_dscp_marking] mock_rule_type_details = { 'drivers': [{ 'name': 'linuxbridge', 'supported_parameters': [{ 'parameter_values': {'start': 0, 'end': 2147483647}, 'parameter_type': 'range', 'parameter_name': u'max_kbps' }, { 'parameter_values': ['ingress', 'egress'], 'parameter_type': 'choices', 'parameter_name': u'direction' }, { 'parameter_values': {'start': 0, 'end': 2147483647}, 'parameter_type': 'range', 'parameter_name': 'max_burst_kbps' }] }], 'type': rule_type_name } def test_list_qos_rule_types(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'rule-types.json']), json={'rule_types': self.mock_rule_types}) ]) rule_types = self.cloud.list_qos_rule_types() self.assertEqual(self.mock_rule_types, rule_types) self.assert_calls() def test_list_qos_rule_types_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.list_qos_rule_types) self.assert_calls() def test_get_qos_rule_type_details(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [ self.qos_extension, self.qos_rule_type_details_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [ self.qos_extension, self.qos_rule_type_details_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'rule-types', '%s.json' % self.rule_type_name]), json={'rule_type': self.mock_rule_type_details}) ]) self.assertEqual( self.mock_rule_type_details, self.cloud.get_qos_rule_type_details(self.rule_type_name) ) self.assert_calls() def test_get_qos_rule_type_details_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_rule_type_details, self.rule_type_name) self.assert_calls() def test_get_qos_rule_type_details_no_qos_details_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_rule_type_details, self.rule_type_name) self.assert_calls() shade-1.31.0/shade/tests/unit/test_subnet.py0000666000175000017500000003753313440327640021040 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import testtools from shade import exc from shade.tests.unit import base class TestSubnet(base.RequestsMockTestCase): network_name = 'network_name' subnet_name = 'subnet_name' subnet_id = '1f1696eb-7f47-47f6-835c-4889bff88604' subnet_cidr = '192.168.199.0/24' mock_network_rep = { 'id': '881d1bb7-a663-44c0-8f9f-ee2765b74486', 'name': network_name, } mock_subnet_rep = { 'allocation_pools': [{ 'start': u'192.168.199.2', 'end': u'192.168.199.254' }], 'cidr': subnet_cidr, 'created_at': '2017-04-24T20:22:23Z', 'description': '', 'dns_nameservers': [], 'enable_dhcp': False, 'gateway_ip': '192.168.199.1', 'host_routes': [], 'id': subnet_id, 'ip_version': 4, 'ipv6_address_mode': None, 'ipv6_ra_mode': None, 'name': subnet_name, 'network_id': mock_network_rep['id'], 'project_id': '861808a93da0484ea1767967c4df8a23', 'revision_number': 2, 'service_types': [], 'subnetpool_id': None, 'tags': [] } def test_get_subnet(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}) ]) r = self.cloud.get_subnet(self.subnet_name) self.assertIsNotNone(r) self.assertDictEqual(self.mock_subnet_rep, r) self.assert_calls() def test_get_subnet_by_id(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', self.subnet_id]), json={'subnet': self.mock_subnet_rep}) ]) r = self.cloud.get_subnet_by_id(self.subnet_id) self.assertIsNotNone(r) self.assertDictEqual(self.mock_subnet_rep, r) self.assert_calls() def test_create_subnet(self): pool = [{'start': '192.168.199.2', 'end': '192.168.199.254'}] dns = ['8.8.8.8'] routes = [{"destination": "0.0.0.0/0", "nexthop": "123.456.78.9"}] mock_subnet_rep = copy.copy(self.mock_subnet_rep) mock_subnet_rep['allocation_pools'] = pool mock_subnet_rep['dns_nameservers'] = dns mock_subnet_rep['host_routes'] = routes self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnet': mock_subnet_rep}, validate=dict( json={'subnet': { 'cidr': self.subnet_cidr, 'enable_dhcp': False, 'ip_version': 4, 'network_id': self.mock_network_rep['id'], 'allocation_pools': pool, 'dns_nameservers': dns, 'host_routes': routes}})) ]) subnet = self.cloud.create_subnet(self.network_name, self.subnet_cidr, allocation_pools=pool, dns_nameservers=dns, host_routes=routes) self.assertDictEqual(mock_subnet_rep, subnet) self.assert_calls() def test_create_subnet_string_ip_version(self): '''Allow ip_version as a string''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnet': 
self.mock_subnet_rep}, validate=dict( json={'subnet': { 'cidr': self.subnet_cidr, 'enable_dhcp': False, 'ip_version': 4, 'network_id': self.mock_network_rep['id']}})) ]) subnet = self.cloud.create_subnet( self.network_name, self.subnet_cidr, ip_version='4') self.assertDictEqual(self.mock_subnet_rep, subnet) self.assert_calls() def test_create_subnet_bad_ip_version(self): '''String ip_versions must be convertible to int''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}) ]) with testtools.ExpectedException( exc.OpenStackCloudException, "ip_version must be an integer" ): self.cloud.create_subnet( self.network_name, self.subnet_cidr, ip_version='4x') self.assert_calls() def test_create_subnet_without_gateway_ip(self): pool = [{'start': '192.168.199.2', 'end': '192.168.199.254'}] dns = ['8.8.8.8'] mock_subnet_rep = copy.copy(self.mock_subnet_rep) mock_subnet_rep['allocation_pools'] = pool mock_subnet_rep['dns_nameservers'] = dns mock_subnet_rep['gateway_ip'] = None self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnet': mock_subnet_rep}, validate=dict( json={'subnet': { 'cidr': self.subnet_cidr, 'enable_dhcp': False, 'ip_version': 4, 'network_id': self.mock_network_rep['id'], 'allocation_pools': pool, 'gateway_ip': None, 'dns_nameservers': dns}})) ]) subnet = self.cloud.create_subnet(self.network_name, self.subnet_cidr, allocation_pools=pool, dns_nameservers=dns, disable_gateway_ip=True) self.assertDictEqual(mock_subnet_rep, subnet) self.assert_calls() def test_create_subnet_with_gateway_ip(self): pool = [{'start': '192.168.199.8', 'end': '192.168.199.254'}] gateway = '192.168.199.2' dns = ['8.8.8.8'] mock_subnet_rep = copy.copy(self.mock_subnet_rep) mock_subnet_rep['allocation_pools'] = pool mock_subnet_rep['dns_nameservers'] = dns mock_subnet_rep['gateway_ip'] = gateway self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnet': mock_subnet_rep}, validate=dict( json={'subnet': { 'cidr': self.subnet_cidr, 'enable_dhcp': False, 'ip_version': 4, 'network_id': self.mock_network_rep['id'], 'allocation_pools': pool, 'gateway_ip': gateway, 'dns_nameservers': dns}})) ]) subnet = self.cloud.create_subnet(self.network_name, self.subnet_cidr, allocation_pools=pool, dns_nameservers=dns, gateway_ip=gateway) self.assertDictEqual(mock_subnet_rep, subnet) self.assert_calls() def test_create_subnet_conflict_gw_ops(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}) ]) gateway = '192.168.200.3' self.assertRaises(exc.OpenStackCloudException, self.cloud.create_subnet, 'kooky', self.subnet_cidr, gateway_ip=gateway, disable_gateway_ip=True) self.assert_calls() def test_create_subnet_bad_network(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_network_rep]}) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.create_subnet, 'duck',
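# 'duck' matches no mocked network, so the lookup fails before any POST
# is attempted.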
self.subnet_cidr) self.assert_calls() def test_create_subnet_non_unique_network(self): net1 = dict(id='123', name=self.network_name) net2 = dict(id='456', name=self.network_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [net1, net2]}) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.create_subnet, self.network_name, self.subnet_cidr) self.assert_calls() def test_delete_subnet(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % self.subnet_id]), json={}) ]) self.assertTrue(self.cloud.delete_subnet(self.subnet_name)) self.assert_calls() def test_delete_subnet_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}) ]) self.assertFalse(self.cloud.delete_subnet('goofy')) self.assert_calls() def test_delete_subnet_multiple_found(self): subnet1 = dict(id='123', name=self.subnet_name) subnet2 = dict(id='456', name=self.subnet_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [subnet1, subnet2]}) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.delete_subnet, self.subnet_name) self.assert_calls() def test_delete_subnet_multiple_using_id(self): subnet1 = dict(id='123', name=self.subnet_name) subnet2 = dict(id='456', name=self.subnet_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [subnet1, subnet2]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % subnet1['id']]), json={}) ]) self.assertTrue(self.cloud.delete_subnet(subnet1['id'])) self.assert_calls() def test_update_subnet(self): expected_subnet = copy.copy(self.mock_subnet_rep) expected_subnet['name'] = 'goofy' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % self.subnet_id]), json={'subnet': expected_subnet}, validate=dict( json={'subnet': {'name': 'goofy'}})) ]) subnet = self.cloud.update_subnet(self.subnet_id, subnet_name='goofy') self.assertDictEqual(expected_subnet, subnet) self.assert_calls() def test_update_subnet_gateway_ip(self): expected_subnet = copy.copy(self.mock_subnet_rep) gateway = '192.168.199.3' expected_subnet['gateway_ip'] = gateway self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % self.subnet_id]), json={'subnet': expected_subnet}, validate=dict( json={'subnet': {'gateway_ip': gateway}})) ]) subnet = self.cloud.update_subnet(self.subnet_id, gateway_ip=gateway) self.assertDictEqual(expected_subnet, subnet) self.assert_calls() def test_update_subnet_disable_gateway_ip(self): expected_subnet = copy.copy(self.mock_subnet_rep) expected_subnet['gateway_ip'] = None self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 
'public', append=['v2.0', 'subnets.json']), json={'subnets': [self.mock_subnet_rep]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets', '%s.json' % self.subnet_id]), json={'subnet': expected_subnet}, validate=dict( json={'subnet': {'gateway_ip': None}})) ]) subnet = self.cloud.update_subnet(self.subnet_id, disable_gateway_ip=True) self.assertDictEqual(expected_subnet, subnet) self.assert_calls() def test_update_subnet_conflict_gw_ops(self): self.assertRaises(exc.OpenStackCloudException, self.cloud.update_subnet, self.subnet_id, gateway_ip="192.168.199.3", disable_gateway_ip=True) shade-1.31.0/shade/tests/unit/test_cluster_templates.py0000666000175000017500000002316013440327640023266 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import munch import shade import testtools from shade.tests.unit import base cluster_template_obj = munch.Munch( apiserver_port=12345, cluster_distro='fake-distro', coe='fake-coe', created_at='fake-date', dns_nameserver='8.8.8.8', docker_volume_size=1, external_network_id='public', fixed_network=None, flavor_id='fake-flavor', https_proxy=None, human_id=None, image_id='fake-image', insecure_registry='https://192.168.0.10', keypair_id='fake-key', labels={}, links={}, master_flavor_id=None, name='fake-cluster-template', network_driver='fake-driver', no_proxy=None, public=False, registry_enabled=False, server_type='vm', tls_disabled=False, updated_at=None, uuid='fake-uuid', volume_driver=None, ) class TestClusterTemplates(base.RequestsMockTestCase): def test_list_cluster_templates_without_detail(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()]))]) cluster_templates_list = self.cloud.list_cluster_templates() self.assertEqual( cluster_templates_list[0], self.cloud._normalize_cluster_template(cluster_template_obj)) self.assert_calls() def test_list_cluster_templates_with_detail(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()]))]) cluster_templates_list = self.cloud.list_cluster_templates(detail=True) self.assertEqual( cluster_templates_list[0], self.cloud._normalize_cluster_template(cluster_template_obj)) self.assert_calls() def test_search_cluster_templates_by_name(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()]))]) cluster_templates = self.cloud.search_cluster_templates( name_or_id='fake-cluster-template') self.assertEqual(1, len(cluster_templates)) self.assertEqual('fake-uuid', 
cluster_templates[0]['uuid']) self.assert_calls() def test_search_cluster_templates_not_found(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()]))]) cluster_templates = self.cloud.search_cluster_templates( name_or_id='non-existent') self.assertEqual(0, len(cluster_templates)) self.assert_calls() def test_get_cluster_template(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()]))]) r = self.cloud.get_cluster_template('fake-cluster-template') self.assertIsNotNone(r) self.assertDictEqual( r, self.cloud._normalize_cluster_template(cluster_template_obj)) self.assert_calls() def test_get_cluster_template_not_found(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[]))]) r = self.cloud.get_cluster_template('doesNotExist') self.assertIsNone(r) self.assert_calls() def test_create_cluster_template(self): self.register_uris([ dict( method='POST', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='POST', uri='https://container-infra.example.com/v1/baymodels', json=dict(baymodels=[cluster_template_obj.toDict()]), validate=dict(json={ 'coe': 'fake-coe', 'image_id': 'fake-image', 'keypair_id': 'fake-key', 'name': 'fake-cluster-template'}),)]) self.cloud.create_cluster_template( name=cluster_template_obj.name, image_id=cluster_template_obj.image_id, keypair_id=cluster_template_obj.keypair_id, coe=cluster_template_obj.coe) self.assert_calls() def test_create_cluster_template_exception(self): self.register_uris([ dict( method='POST', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='POST', uri='https://container-infra.example.com/v1/baymodels', status_code=403)]) # TODO(mordred) requests here doesn't give us a great story # for matching the old error message text. Investigate plumbing # an error message into the adapter call so that we can give a # more informative error. Also, the test was originally catching # OpenStackCloudException - but for some reason testtools will not # match the more specific HTTPError, even though it's a subclass # of OpenStackCloudException.
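        # Given the 403 registered on the POST to /baymodels above, the
        # call below should surface as an OpenStackCloudHTTPError; per the
        # note above we assert only on the exception class, not its text.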
with testtools.ExpectedException(shade.OpenStackCloudHTTPError): self.cloud.create_cluster_template('fake-cluster-template') self.assert_calls() def test_delete_cluster_template(self): uri = 'https://container-infra.example.com/v1/baymodels/fake-uuid' self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()])), dict( method='DELETE', uri=uri), ]) self.cloud.delete_cluster_template('fake-uuid') self.assert_calls() def test_update_cluster_template(self): uri = 'https://container-infra.example.com/v1/baymodels/fake-uuid' self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', status_code=404), dict( method='GET', uri='https://container-infra.example.com/v1/baymodels/detail', json=dict(baymodels=[cluster_template_obj.toDict()])), dict( method='PATCH', uri=uri, status_code=200, validate=dict( json=[{ u'op': u'replace', u'path': u'/name', u'value': u'new-cluster-template' }] )), dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', # This json value is not meaningful to the test - it just has # to be valid. json=dict(baymodels=[cluster_template_obj.toDict()])), ]) new_name = 'new-cluster-template' self.cloud.update_cluster_template( 'fake-uuid', 'replace', name=new_name) self.assert_calls() def test_get_coe_cluster_template(self): self.register_uris([ dict( method='GET', uri='https://container-infra.example.com/v1/clustertemplates', json=dict(clustertemplates=[cluster_template_obj.toDict()]))]) r = self.cloud.get_coe_cluster_template('fake-cluster-template') self.assertIsNotNone(r) self.assertDictEqual( r, self.cloud._normalize_cluster_template(cluster_template_obj)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_qos_minimum_bandwidth_rule.py0000666000175000017500000003015513440327640025141 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from shade import exc from shade.tests.unit import base class TestQosMinimumBandwidthRule(base.RequestsMockTestCase): policy_name = 'qos test policy' policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486' project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9' rule_id = 'ed1a2b05-0ad7-45d7-873f-008b575a02b3' rule_min_kbps = 1000 mock_policy = { 'id': policy_id, 'name': policy_name, 'description': '', 'rules': [], 'project_id': project_id, 'tenant_id': project_id, 'shared': False, 'is_default': False } mock_rule = { 'id': rule_id, 'min_kbps': rule_min_kbps, 'direction': 'egress' } qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." 
} enabled_neutron_extensions = [qos_extension] def test_get_qos_minimum_bandwidth_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={'minimum_bandwidth_rule': self.mock_rule}) ]) r = self.cloud.get_qos_minimum_bandwidth_rule(self.policy_name, self.rule_id) self.assertDictEqual(self.mock_rule, r) self.assert_calls() def test_get_qos_minimum_bandwidth_rule_no_qos_policy_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertRaises( exc.OpenStackCloudResourceNotFound, self.cloud.get_qos_minimum_bandwidth_rule, self.policy_name, self.rule_id) self.assert_calls() def test_get_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_minimum_bandwidth_rule, self.policy_name, self.rule_id) self.assert_calls() def test_create_qos_minimum_bandwidth_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules']), json={'minimum_bandwidth_rule': self.mock_rule}) ]) rule = self.cloud.create_qos_minimum_bandwidth_rule( self.policy_name, min_kbps=self.rule_min_kbps) self.assertDictEqual(self.mock_rule, rule) self.assert_calls() def test_create_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_minimum_bandwidth_rule, self.policy_name, min_kbps=100) self.assert_calls() def test_update_qos_minimum_bandwidth_rule(self): expected_rule = copy.copy(self.mock_rule) expected_rule['min_kbps'] = self.rule_min_kbps + 100 self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', 
uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={'minimum_bandwidth_rule': self.mock_rule}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={'minimum_bandwidth_rule': expected_rule}, validate=dict( json={'minimum_bandwidth_rule': { 'min_kbps': self.rule_min_kbps + 100}})) ]) rule = self.cloud.update_qos_minimum_bandwidth_rule( self.policy_id, self.rule_id, min_kbps=self.rule_min_kbps + 100) self.assertDictEqual(expected_rule, rule) self.assert_calls() def test_update_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.update_qos_minimum_bandwidth_rule, self.policy_id, self.rule_id, min_kbps=2000) self.assert_calls() def test_delete_qos_minimum_bandwidth_rule(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), json={}) ]) self.assertTrue( self.cloud.delete_qos_minimum_bandwidth_rule( self.policy_name, self.rule_id)) self.assert_calls() def test_delete_qos_minimum_bandwidth_rule_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_qos_minimum_bandwidth_rule, self.policy_name, self.rule_id) self.assert_calls() def test_delete_qos_minimum_bandwidth_rule_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), 
dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', self.policy_id, 'minimum_bandwidth_rules', '%s.json' % self.rule_id]), status_code=404) ]) self.assertFalse( self.cloud.delete_qos_minimum_bandwidth_rule( self.policy_name, self.rule_id)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_magnum_services.py0000666000175000017500000000244313440327640022717 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from shade.tests.unit import base magnum_service_obj = dict( binary='fake-service', created_at='2015-08-27T09:49:58-05:00', disabled_reason=None, host='fake-host', human_id=None, id=1, report_count=1, state='up', updated_at=None, ) class TestMagnumServices(base.RequestsMockTestCase): def test_list_magnum_services(self): self.register_uris([dict( method='GET', uri='https://container-infra.example.com/v1/mservices', json=dict(mservices=[magnum_service_obj]))]) mservices_list = self.op_cloud.list_magnum_services() self.assertEqual( mservices_list[0], self.cloud._normalize_magnum_service(magnum_service_obj)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_floating_ip_pool.py0000666000175000017500000000547413440327640023063 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
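# Note (added for clarity): floating IP pools are a nova extension, so the
# tests below first register a GET of {endpoint}/extensions advertising
# 'os-floating-ip-pools' before mocking the os-floating-ip-pools listing
# itself; shade checks that extension list before calling the pools API.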
""" test_floating_ip_pool ---------------------------------- Test floating IP pool resource (managed by nova) """ from shade import OpenStackCloudException from shade.tests.unit import base from shade.tests import fakes class TestFloatingIPPool(base.RequestsMockTestCase): pools = [{'name': u'public'}] def test_list_floating_ip_pools(self): self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'extensions': [{ u'alias': u'os-floating-ip-pools', u'updated': u'2014-12-03T00:00:00Z', u'name': u'FloatingIpPools', u'links': [], u'namespace': u'http://docs.openstack.org/compute/ext/fake_xml', u'description': u'Floating IPs support.'}]}), dict(method='GET', uri='{endpoint}/os-floating-ip-pools'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={"floating_ip_pools": [{"name": "public"}]}) ]) floating_ip_pools = self.cloud.list_floating_ip_pools() self.assertItemsEqual(floating_ip_pools, self.pools) self.assert_calls() def test_list_floating_ip_pools_exception(self): self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'extensions': [{ u'alias': u'os-floating-ip-pools', u'updated': u'2014-12-03T00:00:00Z', u'name': u'FloatingIpPools', u'links': [], u'namespace': u'http://docs.openstack.org/compute/ext/fake_xml', u'description': u'Floating IPs support.'}]}), dict(method='GET', uri='{endpoint}/os-floating-ip-pools'.format( endpoint=fakes.COMPUTE_ENDPOINT), status_code=404)]) self.assertRaises( OpenStackCloudException, self.cloud.list_floating_ip_pools) self.assert_calls() shade-1.31.0/shade/tests/unit/test_update_server.py0000666000175000017500000000624313440327640022402 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_update_server ---------------------------------- Tests for the `update_server` command. """ import uuid from shade.exc import OpenStackCloudException from shade.tests import fakes from shade.tests.unit import base class TestUpdateServer(base.RequestsMockTestCase): def setUp(self): super(TestUpdateServer, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.updated_server_name = self.getUniqueString('name2') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_update_server_with_update_exception(self): """ Test that an exception in the update raises an exception in update_server. 
""" self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id]), status_code=400, validate=dict( json={'server': {'name': self.updated_server_name}})), ]) self.assertRaises( OpenStackCloudException, self.cloud.update_server, self.server_name, name=self.updated_server_name) self.assert_calls() def test_update_server_name(self): """ Test that update_server updates the name without raising any exception """ fake_update_server = fakes.make_fake_server( self.server_id, self.updated_server_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.server_id]), json={'server': fake_update_server}, validate=dict( json={'server': {'name': self.updated_server_name}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertEqual( self.updated_server_name, self.cloud.update_server( self.server_name, name=self.updated_server_name)['name']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_network.py0000666000175000017500000003323313440327640021222 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import copy import testtools import shade from shade.tests.unit import base class TestNetwork(base.RequestsMockTestCase): mock_new_network_rep = { 'provider:physical_network': None, 'ipv6_address_scope': None, 'revision_number': 3, 'port_security_enabled': True, 'provider:network_type': 'local', 'id': '881d1bb7-a663-44c0-8f9f-ee2765b74486', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': [], 'provider:segmentation_id': None, 'ipv4_address_scope': None, 'shared': False, 'project_id': '861808a93da0484ea1767967c4df8a23', 'status': 'ACTIVE', 'subnets': [], 'description': '', 'tags': [], 'updated_at': '2017-04-22T19:22:53Z', 'is_default': False, 'qos_policy_id': None, 'name': 'netname', 'admin_state_up': True, 'tenant_id': '861808a93da0484ea1767967c4df8a23', 'created_at': '2017-04-22T19:22:53Z', 'mtu': 0 } network_availability_zone_extension = { "alias": "network_availability_zone", "updated": "2015-01-01T10:00:00-00:00", "description": "Availability zone support for router.", "links": [], "name": "Network Availability Zone" } enabled_neutron_extensions = [network_availability_zone_extension] def test_list_networks(self): net1 = {'id': '1', 'name': 'net1'} net2 = {'id': '2', 'name': 'net2'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [net1, net2]}) ]) nets = self.cloud.list_networks() self.assertEqual([net1, net2], nets) self.assert_calls() def test_list_networks_filtered(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json'], qs_elements=["name=test"]), json={'networks': []}) ]) self.cloud.list_networks(filters={'name': 'test'}) self.assert_calls() def test_create_network(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': self.mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname'}})) ]) network = self.cloud.create_network("netname") self.assertEqual(self.mock_new_network_rep, network) self.assert_calls() def test_create_network_specific_tenant(self): project_id = "project_id_value" mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep['project_id'] = project_id self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'tenant_id': project_id}})) ]) network = self.cloud.create_network("netname", project_id=project_id) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_external(self): mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep['router:external'] = True self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'router:external': True}})) ]) network = self.cloud.create_network("netname", external=True) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_provider(self): provider_opts = {'physical_network': 'mynet', 'network_type': 'vlan', 'segmentation_id': 'vlan1'} new_network_provider_opts = { 'provider:physical_network': 'mynet', 'provider:network_type': 'vlan', 
'provider:segmentation_id': 'vlan1' } mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep.update(new_network_provider_opts) expected_send_params = { 'admin_state_up': True, 'name': 'netname' } expected_send_params.update(new_network_provider_opts) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': expected_send_params})) ]) network = self.cloud.create_network("netname", provider=provider_opts) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_with_availability_zone_hints(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': self.mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'availability_zone_hints': ['nova']}})) ]) network = self.cloud.create_network("netname", availability_zone_hints=['nova']) self.assertEqual(self.mock_new_network_rep, network) self.assert_calls() def test_create_network_provider_ignored_value(self): provider_opts = {'physical_network': 'mynet', 'network_type': 'vlan', 'segmentation_id': 'vlan1', 'should_not_be_passed': 1} new_network_provider_opts = { 'provider:physical_network': 'mynet', 'provider:network_type': 'vlan', 'provider:segmentation_id': 'vlan1' } mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep.update(new_network_provider_opts) expected_send_params = { 'admin_state_up': True, 'name': 'netname' } expected_send_params.update(new_network_provider_opts) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': expected_send_params})) ]) network = self.cloud.create_network("netname", provider=provider_opts) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_wrong_availability_zone_hints_type(self): azh_opts = "invalid" with testtools.ExpectedException( shade.OpenStackCloudException, "Parameter 'availability_zone_hints' must be a list" ): self.cloud.create_network("netname", availability_zone_hints=azh_opts) def test_create_network_provider_wrong_type(self): provider_opts = "invalid" with testtools.ExpectedException( shade.OpenStackCloudException, "Parameter 'provider' must be a dict" ): self.cloud.create_network("netname", provider=provider_opts) def test_create_network_port_security_disabled(self): port_security_state = False mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep['port_security_enabled'] = port_security_state self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'port_security_enabled': port_security_state}})) ]) network = self.cloud.create_network( "netname", port_security_enabled=port_security_state ) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_with_mtu(self): mtu_size = 1500 mock_new_network_rep = copy.copy(self.mock_new_network_rep) mock_new_network_rep['mtu'] = mtu_size 
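        # The POST body is expected to carry the mtu value through
        # unchanged; the two tests that follow cover the client-side
        # validation (mtu_size must be an integer greater than 67).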
self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'network': mock_new_network_rep}, validate=dict( json={'network': { 'admin_state_up': True, 'name': 'netname', 'mtu': mtu_size}})) ]) network = self.cloud.create_network("netname", mtu_size=mtu_size ) self.assertEqual(mock_new_network_rep, network) self.assert_calls() def test_create_network_with_wrong_mtu_size(self): with testtools.ExpectedException( shade.OpenStackCloudException, "Parameter 'mtu_size' must be greater than 67." ): self.cloud.create_network("netname", mtu_size=42) def test_create_network_with_wrong_mtu_type(self): with testtools.ExpectedException( shade.OpenStackCloudException, "Parameter 'mtu_size' must be an integer." ): self.cloud.create_network("netname", mtu_size="fourty_two") def test_delete_network(self): network_id = "test-net-id" network_name = "network" network = {'id': network_id, 'name': network_name} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks', "%s.json" % network_id]), json={}) ]) self.assertTrue(self.cloud.delete_network(network_name)) self.assert_calls() def test_delete_network_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.assertFalse(self.cloud.delete_network('test-net')) self.assert_calls() def test_delete_network_exception(self): network_id = "test-net-id" network_name = "network" network = {'id': network_id, 'name': network_name} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks', "%s.json" % network_id]), status_code=503) ]) self.assertRaises(shade.OpenStackCloudException, self.cloud.delete_network, network_name) self.assert_calls() def test_get_network_by_id(self): network_id = "test-net-id" network_name = "network" network = {'id': network_id, 'name': network_name} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks', "%s" % network_id]), json={'network': network}) ]) self.assertTrue(self.cloud.get_network_by_id(network_id)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_identity_roles.py0000666000175000017500000002776113440327640022577 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
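# Illustrative sketch (added for clarity; hypothetical helper, not part of
# the original suite): the filtered role-assignment test below expects the
# keystone v3 filters to be flattened into query-string elements.
def _example_assignment_qs(user_id, domain_id):
    """Build the query elements test_list_role_assignments_filters mocks."""
    return ['scope.domain.id=%s' % domain_id,
            'user.id=%s' % user_id,
            'effective=True']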
import testtools import shade from shade.tests.unit import base from testtools import matchers RAW_ROLE_ASSIGNMENTS = [ { "links": {"assignment": "http://example"}, "role": {"id": "123456"}, "scope": {"domain": {"id": "161718"}}, "user": {"id": "313233"} }, { "links": {"assignment": "http://example"}, "group": {"id": "101112"}, "role": {"id": "123456"}, "scope": {"project": {"id": "456789"}} } ] class TestIdentityRoles(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource='roles', append=None, base_url_append='v3', qs_elements=None): return super(TestIdentityRoles, self).get_mock_url( service_type, interface, resource, append, base_url_append, qs_elements) def test_list_roles(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) self.op_cloud.list_roles() self.assert_calls() def test_get_role_by_name(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) role = self.op_cloud.get_role(role_data.role_name) self.assertIsNotNone(role) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assert_calls() def test_get_role_by_id(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) role = self.op_cloud.get_role(role_data.role_id) self.assertIsNotNone(role) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assert_calls() def test_create_role(self): role_data = self._get_role_data() self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=role_data.json_response, validate=dict(json=role_data.json_request)) ]) role = self.op_cloud.create_role(role_data.role_name) self.assertIsNotNone(role) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assert_calls() def test_update_role(self): role_data = self._get_role_data() req = {'role_id': role_data.role_id, 'role': {'name': role_data.role_name}} self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}), dict(method='PATCH', uri=self.get_mock_url(), status_code=200, json=role_data.json_response, validate=dict(json=req)) ]) role = self.op_cloud.update_role(role_data.role_id, role_data.role_name) self.assertIsNotNone(role) self.assertThat(role.name, matchers.Equals(role_data.role_name)) self.assertThat(role.id, matchers.Equals(role_data.role_id)) self.assert_calls() def test_delete_role_by_id(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}), dict(method='DELETE', uri=self.get_mock_url(append=[role_data.role_id]), status_code=204) ]) role = self.op_cloud.delete_role(role_data.role_id) self.assertThat(role, matchers.Equals(True)) self.assert_calls() def test_delete_role_by_name(self): role_data = self._get_role_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'roles': [role_data.json_response['role']]}), dict(method='DELETE', 
uri=self.get_mock_url(append=[role_data.role_id]), status_code=204) ]) role = self.op_cloud.delete_role(role_data.role_name) self.assertThat(role, matchers.Equals(True)) self.assert_calls() def test_list_role_assignments(self): domain_data = self._get_domain_data() user_data = self._get_user_data(domain_id=domain_data.domain_id) group_data = self._get_group_data(domain_id=domain_data.domain_id) project_data = self._get_project_data(domain_id=domain_data.domain_id) role_data = self._get_role_data() response = [ {'links': 'https://example.com', 'role': {'id': role_data.role_id}, 'scope': {'domain': {'id': domain_data.domain_id}}, 'user': {'id': user_data.user_id}}, {'links': 'https://example.com', 'role': {'id': role_data.role_id}, 'scope': {'project': {'id': project_data.project_id}}, 'group': {'id': group_data.group_id}}, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='role_assignments'), status_code=200, json={'role_assignments': response}, complete_qs=True) ]) ret = self.op_cloud.list_role_assignments() self.assertThat(len(ret), matchers.Equals(2)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assertThat(ret[0].id, matchers.Equals(role_data.role_id)) self.assertThat(ret[0].domain, matchers.Equals(domain_data.domain_id)) self.assertThat(ret[1].group, matchers.Equals(group_data.group_id)) self.assertThat(ret[1].id, matchers.Equals(role_data.role_id)) self.assertThat(ret[1].project, matchers.Equals(project_data.project_id)) def test_list_role_assignments_filters(self): domain_data = self._get_domain_data() user_data = self._get_user_data(domain_id=domain_data.domain_id) role_data = self._get_role_data() response = [ {'links': 'https://example.com', 'role': {'id': role_data.role_id}, 'scope': {'domain': {'id': domain_data.domain_id}}, 'user': {'id': user_data.user_id}} ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='role_assignments', qs_elements=['scope.domain.id=%s' % domain_data.domain_id, 'user.id=%s' % user_data.user_id, 'effective=True']), status_code=200, json={'role_assignments': response}, complete_qs=True) ]) params = dict(user=user_data.user_id, domain=domain_data.domain_id, effective=True) ret = self.op_cloud.list_role_assignments(filters=params) self.assertThat(len(ret), matchers.Equals(1)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assertThat(ret[0].id, matchers.Equals(role_data.role_id)) self.assertThat(ret[0].domain, matchers.Equals(domain_data.domain_id)) def test_list_role_assignments_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='role_assignments'), status_code=403) ]) with testtools.ExpectedException( shade.exc.OpenStackCloudHTTPError, "Failed to list role assignments" ): self.op_cloud.list_role_assignments() self.assert_calls() def test_list_role_assignments_keystone_v2(self): self.use_keystone_v2() role_data = self._get_role_data() user_data = self._get_user_data() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='tenants', append=[project_data.project_id, 'users', user_data.user_id, 'roles'], base_url_append=None), status_code=200, json={'roles': [role_data.json_response['role']]}) ]) ret = self.op_cloud.list_role_assignments( filters={ 'user': user_data.user_id, 'project': project_data.project_id}) self.assertThat(len(ret), matchers.Equals(1)) self.assertThat(ret[0].project, matchers.Equals(project_data.project_id)) self.assertThat(ret[0].id, 
matchers.Equals(role_data.role_id)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assert_calls() def test_list_role_assignments_keystone_v2_with_role(self): self.use_keystone_v2() roles_data = [self._get_role_data() for r in range(0, 2)] user_data = self._get_user_data() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='tenants', append=[project_data.project_id, 'users', user_data.user_id, 'roles'], base_url_append=None), status_code=200, json={'roles': [r.json_response['role'] for r in roles_data]}) ]) ret = self.op_cloud.list_role_assignments( filters={ 'role': roles_data[0].role_id, 'user': user_data.user_id, 'project': project_data.project_id}) self.assertThat(len(ret), matchers.Equals(1)) self.assertThat(ret[0].project, matchers.Equals(project_data.project_id)) self.assertThat(ret[0].id, matchers.Equals(roles_data[0].role_id)) self.assertThat(ret[0].user, matchers.Equals(user_data.user_id)) self.assert_calls() def test_list_role_assignments_exception_v2(self): self.use_keystone_v2() with testtools.ExpectedException( shade.OpenStackCloudException, "Must provide project and user for keystone v2" ): self.op_cloud.list_role_assignments() self.assert_calls() def test_list_role_assignments_exception_v2_no_project(self): self.use_keystone_v2() with testtools.ExpectedException( shade.OpenStackCloudException, "Must provide project and user for keystone v2" ): self.op_cloud.list_role_assignments(filters={'user': '12345'}) self.assert_calls() shade-1.31.0/shade/tests/unit/test_volume_access.py0000666000175000017500000002043713440327640022363 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
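# Illustrative sketch (added for clarity; hypothetical helper, not part of
# the original suite): granting or revoking a project's access to a volume
# type goes through cinder's 'action' endpoint with a one-key body, which
# is exactly what the add/remove tests below validate.
def _example_access_action(project_id, grant=True):
    """Build the action body the tests expect cinder to receive."""
    key = 'addProjectAccess' if grant else 'removeProjectAccess'
    return {key: {'project': project_id}}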
import testtools import shade from shade.tests.unit import base class TestVolumeAccess(base.RequestsMockTestCase): def test_list_volume_types(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]})]) self.assertTrue(self.cloud.list_volume_types()) self.assert_calls() def test_get_volume_type(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]})]) volume_type_got = self.cloud.get_volume_type(volume_type['name']) self.assertEqual(volume_type_got.id, volume_type['id']) def test_get_volume_type_access(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) volume_type_access = [ dict(volume_type_id='voltype01', name='name', project_id='prj01'), dict(volume_type_id='voltype01', name='name', project_id='prj02') ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': volume_type_access})]) self.assertEqual( len(self.op_cloud.get_volume_type_access(volume_type['name'])), 2) self.assert_calls() def test_remove_volume_type_access(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) project_001 = dict(volume_type_id='voltype01', name='name', project_id='prj01') project_002 = dict(volume_type_id='voltype01', name='name', project_id='prj02') volume_type_access = [project_001, project_002] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': volume_type_access}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'action']), json={'removeProjectAccess': { 'project': project_001['project_id']}}, validate=dict( json={'removeProjectAccess': { 'project': project_001['project_id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': [project_001]})]) self.assertEqual( len(self.op_cloud.get_volume_type_access( volume_type['name'])), 2) self.op_cloud.remove_volume_type_access( volume_type['name'], project_001['project_id']) self.assertEqual( len(self.op_cloud.get_volume_type_access(volume_type['name'])), 1) self.assert_calls() def test_add_volume_type_access(self): volume_type = dict( id='voltype01', description='volume type description', 
name='name', is_public=False) project_001 = dict(volume_type_id='voltype01', name='name', project_id='prj01') project_002 = dict(volume_type_id='voltype01', name='name', project_id='prj02') volume_type_access = [project_001, project_002] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'action']), json={'addProjectAccess': { 'project': project_002['project_id']}}, validate=dict( json={'addProjectAccess': { 'project': project_002['project_id']}})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types', volume_type['id'], 'os-volume-type-access']), json={'volume_type_access': volume_type_access})]) self.op_cloud.add_volume_type_access( volume_type['name'], project_002['project_id']) self.assertEqual( len(self.op_cloud.get_volume_type_access(volume_type['name'])), 2) self.assert_calls() def test_add_volume_type_access_missing(self): volume_type = dict( id='voltype01', description='volume type description', name='name', is_public=False) project_001 = dict(volume_type_id='voltype01', name='name', project_id='prj01') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['types'], qs_elements=['is_public=None']), json={'volume_types': [volume_type]})]) with testtools.ExpectedException(shade.OpenStackCloudException, "VolumeType not found: MISSING"): self.op_cloud.add_volume_type_access( "MISSING", project_001['project_id']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_object.py0000666000175000017500000010722013440327640020775 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
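# Illustrative sketch (added for clarity; hypothetical helper, not part of
# the original suite): when uploading a static large object, the final PUT
# carries a JSON manifest listing each segment, as asserted in
# TestObjectUploads.test_create_static_large_object below. (Dynamic large
# objects instead use an 'x-object-manifest' header and no JSON body.)
def _example_slo_manifest(container, name, segment_sizes):
    """Build a manifest shaped like the one the SLO test asserts on."""
    return [
        {'path': '/{container}/{name}/{index:0>6}'.format(
            container=container, name=name, index=index),
         'size_bytes': size,
         'etag': 'etag{index}'.format(index=index)}
        for index, size in enumerate(segment_sizes)
    ]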
import tempfile import testtools import shade import shade.openstackcloud from shade import exc from shade.tests.unit import base class BaseTestObject(base.RequestsMockTestCase): def setUp(self): super(BaseTestObject, self).setUp() self.container = self.getUniqueString() self.object = self.getUniqueString() self.endpoint = self.cloud._object_store_client.get_endpoint() self.container_endpoint = '{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container) self.object_endpoint = '{endpoint}/{object}'.format( endpoint=self.container_endpoint, object=self.object) class TestObject(BaseTestObject): def test_create_container(self): """Test creating a (private) container""" self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, status_code=404), dict(method='PUT', uri=self.container_endpoint, status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri=self.container_endpoint, headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}) ]) self.cloud.create_container(self.container) self.assert_calls() def test_create_container_public(self): """Test creating a public container""" self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, status_code=404), dict(method='PUT', uri=self.container_endpoint, status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='POST', uri=self.container_endpoint, status_code=201, validate=dict( headers={ 'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS[ 'public']})), dict(method='HEAD', uri=self.container_endpoint, headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}) ]) self.cloud.create_container(self.container, public=True) self.assert_calls() def test_create_container_exists(self): """Test creating a container that exists.""" self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}) ]) container = self.cloud.create_container(self.container) self.assert_calls() self.assertIsNotNone(container) def test_delete_container(self): self.register_uris([ dict(method='DELETE', uri=self.container_endpoint)]) self.assertTrue(self.cloud.delete_container(self.container)) self.assert_calls() def test_delete_container_404(self): """No exception when deleting a container that does not exist""" self.register_uris([ dict(method='DELETE', uri=self.container_endpoint, status_code=404)]) self.assertFalse(self.cloud.delete_container(self.container)) self.assert_calls() def test_delete_container_error(self): """Non-404 swift error re-raised as OSCE""" # 
409 happens if the container is not empty self.register_uris([ dict(method='DELETE', uri=self.container_endpoint, status_code=409)]) self.assertRaises( shade.OpenStackCloudException, self.cloud.delete_container, self.container) self.assert_calls() def test_update_container(self): headers = { 'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS['public']} self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=204, validate=dict(headers=headers))]) self.cloud.update_container(self.container, headers) self.assert_calls() def test_update_container_error(self): """Swift error re-raised as OSCE""" # This test is of questionable value - the swift API docs do not # declare error codes (other than 404 for the container) for this # method, and I cannot make a synthetic failure to validate a real # error code. So we're really just testing the shade adapter error # raising logic here, rather than anything specific to swift. self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=409)]) self.assertRaises( shade.OpenStackCloudException, self.cloud.update_container, self.container, dict(foo='bar')) self.assert_calls() def test_set_container_access_public(self): self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=204, validate=dict( headers={ 'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS[ 'public']}))]) self.cloud.set_container_access(self.container, 'public') self.assert_calls() def test_set_container_access_private(self): self.register_uris([ dict(method='POST', uri=self.container_endpoint, status_code=204, validate=dict( headers={ 'x-container-read': shade.openstackcloud.OBJECT_CONTAINER_ACLS[ 'private']}))]) self.cloud.set_container_access(self.container, 'private') self.assert_calls() def test_set_container_access_invalid(self): self.assertRaises( shade.OpenStackCloudException, self.cloud.set_container_access, self.container, 'invalid') def test_get_container_access(self): self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, headers={ 'x-container-read': str(shade.openstackcloud.OBJECT_CONTAINER_ACLS[ 'public'])})]) access = self.cloud.get_container_access(self.container) self.assertEqual('public', access) def test_get_container_invalid(self): self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, headers={'x-container-read': 'invalid'})]) with testtools.ExpectedException( exc.OpenStackCloudException, "Could not determine container access for ACL: invalid" ): self.cloud.get_container_access(self.container) def test_get_container_access_not_found(self): self.register_uris([ dict(method='HEAD', uri=self.container_endpoint, status_code=404)]) with testtools.ExpectedException( exc.OpenStackCloudException, "Container not found: %s" % self.container ): self.cloud.get_container_access(self.container) def test_list_containers(self): endpoint = '{endpoint}/?format=json'.format( endpoint=self.endpoint) containers = [ {u'count': 0, u'bytes': 0, u'name': self.container}] self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, json=containers)]) ret = self.cloud.list_containers() self.assert_calls() self.assertEqual(containers, ret) def test_list_containers_exception(self): endpoint = '{endpoint}/?format=json'.format( endpoint=self.endpoint) self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, status_code=416)]) self.assertRaises( exc.OpenStackCloudException, self.cloud.list_containers) self.assert_calls() def test_list_objects(self): 
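        # Listing objects is a single GET against the container with
        # format=json; the parsed body should be returned unchanged.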
endpoint = '{endpoint}?format=json'.format( endpoint=self.container_endpoint) objects = [{ u'bytes': 20304400896, u'last_modified': u'2016-12-15T13:34:13.650090', u'hash': u'daaf9ed2106d09bba96cf193d866445e', u'name': self.object, u'content_type': u'application/octet-stream'}] self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, json=objects)]) ret = self.cloud.list_objects(self.container) self.assert_calls() self.assertEqual(objects, ret) def test_list_objects_exception(self): endpoint = '{endpoint}?format=json'.format( endpoint=self.container_endpoint) self.register_uris([dict(method='GET', uri=endpoint, complete_qs=True, status_code=416)]) self.assertRaises( exc.OpenStackCloudException, self.cloud.list_objects, self.container) self.assert_calls() def test_delete_object(self): self.register_uris([ dict(method='HEAD', uri=self.object_endpoint, headers={'X-Object-Meta': 'foo'}), dict(method='DELETE', uri=self.object_endpoint, status_code=204)]) self.assertTrue(self.cloud.delete_object(self.container, self.object)) self.assert_calls() def test_delete_object_not_found(self): self.register_uris([dict(method='HEAD', uri=self.object_endpoint, status_code=404)]) self.assertFalse(self.cloud.delete_object(self.container, self.object)) self.assert_calls() def test_get_object(self): headers = { 'Content-Length': '20304400896', 'Content-Type': 'application/octet-stream', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Thu, 15 Dec 2016 13:34:14 GMT', 'Etag': '"b5c454b44fbd5344793e3fb7e3850768"', 'X-Timestamp': '1481808853.65009', 'X-Trans-Id': 'tx68c2a2278f0c469bb6de1-005857ed80dfw1', 'Date': 'Mon, 19 Dec 2016 14:24:00 GMT', 'X-Static-Large-Object': 'True', 'X-Object-Meta-Mtime': '1481513709.168512', } response_headers = {k.lower(): v for k, v in headers.items()} text = 'test body' self.register_uris([ dict(method='GET', uri=self.object_endpoint, headers={ 'Content-Length': '20304400896', 'Content-Type': 'application/octet-stream', 'Accept-Ranges': 'bytes', 'Last-Modified': 'Thu, 15 Dec 2016 13:34:14 GMT', 'Etag': '"b5c454b44fbd5344793e3fb7e3850768"', 'X-Timestamp': '1481808853.65009', 'X-Trans-Id': 'tx68c2a2278f0c469bb6de1-005857ed80dfw1', 'Date': 'Mon, 19 Dec 2016 14:24:00 GMT', 'X-Static-Large-Object': 'True', 'X-Object-Meta-Mtime': '1481513709.168512', }, text='test body')]) resp = self.cloud.get_object(self.container, self.object) self.assert_calls() self.assertEqual((response_headers, text), resp) def test_get_object_not_found(self): self.register_uris([dict(method='GET', uri=self.object_endpoint, status_code=404)]) self.assertIsNone(self.cloud.get_object(self.container, self.object)) self.assert_calls() def test_get_object_exception(self): self.register_uris([dict(method='GET', uri=self.object_endpoint, status_code=416)]) self.assertRaises( shade.OpenStackCloudException, self.cloud.get_object, self.container, self.object) self.assert_calls() def test_get_object_segment_size_below_min(self): # Register directly because we make multiple calls. The number # of calls we make isn't interesting - what we do with the return # values is. Don't run assert_calls for the same reason.
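        # In effect, get_object_segment_size clamps the requested size to
        # the [min_segment_size, max_file_size] range advertised by the
        # /info endpoint; the assertions below probe both ends.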
self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': 1000}, slo={'min_segment_size': 500}), headers={'Content-Type': 'application/json'})]) self.assertEqual(500, self.cloud.get_object_segment_size(400)) self.assertEqual(900, self.cloud.get_object_segment_size(900)) self.assertEqual(1000, self.cloud.get_object_segment_size(1000)) self.assertEqual(1000, self.cloud.get_object_segment_size(1100)) def test_get_object_segment_size_http_404(self): self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', status_code=404, reason='Not Found')]) self.assertEqual(shade.openstackcloud.DEFAULT_OBJECT_SEGMENT_SIZE, self.cloud.get_object_segment_size(None)) self.assert_calls() def test_get_object_segment_size_http_412(self): self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', status_code=412, reason='Precondition failed')]) self.assertEqual(shade.openstackcloud.DEFAULT_OBJECT_SEGMENT_SIZE, self.cloud.get_object_segment_size(None)) self.assert_calls() class TestObjectUploads(BaseTestObject): def setUp(self): super(TestObjectUploads, self).setUp() self.content = self.getUniqueString().encode('latin-1') self.object_file = tempfile.NamedTemporaryFile(delete=False) self.object_file.write(self.content) self.object_file.close() (self.md5, self.sha256) = self.cloud._get_file_hashes( self.object_file.name) self.endpoint = self.cloud._object_store_client.get_endpoint() def test_create_object(self): self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': 1000}, slo={'min_segment_size': 500})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( headers={ 'x-object-meta-x-shade-md5': self.md5, 'x-object-meta-x-shade-sha256': self.sha256, })) ]) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name) self.assert_calls() def test_create_dynamic_large_object(self): max_file_size = 2 min_file_size = 1 uris_to_mock = [ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, 
headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404) ] uris_to_mock.extend( [dict(method='PUT', uri='{endpoint}/{container}/{object}/{index:0>6}'.format( endpoint=self.endpoint, container=self.container, object=self.object, index=index), status_code=201) for index, offset in enumerate( range(0, len(self.content), max_file_size))] ) uris_to_mock.append( dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( headers={ 'x-object-manifest': '{container}/{object}'.format( container=self.container, object=self.object), 'x-object-meta-x-shade-md5': self.md5, 'x-object-meta-x-shade-sha256': self.sha256, }))) self.register_uris(uris_to_mock) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name, use_slo=False) # After call 6, the order becomes indeterminate because of the thread pool self.assert_calls(stop_after=6) for key, value in self.calls[-1]['headers'].items(): self.assertEqual( value, self.adapter.request_history[-1].headers[key], 'header mismatch in manifest call') def test_create_static_large_object(self): max_file_size = 25 min_file_size = 1 uris_to_mock = [ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404) ] uris_to_mock.extend([ dict(method='PUT', uri='{endpoint}/{container}/{object}/{index:0>6}'.format( endpoint=self.endpoint, container=self.container, object=self.object, index=index), status_code=201, headers=dict(Etag='etag{index}'.format(index=index))) for index, offset in enumerate( range(0, len(self.content), max_file_size)) ]) uris_to_mock.append( dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( params={
'multipart-manifest': 'put' }, headers={ 'x-object-meta-x-shade-md5': self.md5, 'x-object-meta-x-shade-sha256': self.sha256, }))) self.register_uris(uris_to_mock) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name, use_slo=True) # After call 6, the order becomes indeterminate because of the thread pool self.assert_calls(stop_after=6) for key, value in self.calls[-1]['headers'].items(): self.assertEqual( value, self.adapter.request_history[-1].headers[key], 'header mismatch in manifest call') base_object = '/{container}/{object}'.format( container=self.container, object=self.object) self.assertEqual([ { 'path': "{base_object}/000000".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag0', }, { 'path': "{base_object}/000001".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag1', }, { 'path': "{base_object}/000002".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag2', }, { 'path': "{base_object}/000003".format( base_object=base_object), 'size_bytes': 5, 'etag': 'etag3', }, ], self.adapter.request_history[-1].json()) def test_object_segment_retry_failure(self): max_file_size = 25 min_file_size = 1 self.register_uris([ dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}/000000'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000001'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000002'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000003'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=501), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_object, container=self.container, name=self.object, filename=self.object_file.name, use_slo=True) # After call 6, the order becomes indeterminate because of the thread pool self.assert_calls(stop_after=6) def test_object_segment_retries(self): max_file_size = 25 min_file_size = 1 self.register_uris([ dict(method='GET',
uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': max_file_size}, slo={'min_segment_size': min_file_size})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container, ), status_code=201, headers={ 'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8', }), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=self.endpoint, container=self.container), headers={ 'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', 'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}/000000'.format( endpoint=self.endpoint, container=self.container, object=self.object), headers={'etag': 'etag0'}, status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000001'.format( endpoint=self.endpoint, container=self.container, object=self.object), headers={'etag': 'etag1'}, status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000002'.format( endpoint=self.endpoint, container=self.container, object=self.object), headers={'etag': 'etag2'}, status_code=201), dict(method='PUT', uri='{endpoint}/{container}/{object}/000003'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=501), dict(method='PUT', uri='{endpoint}/{container}/{object}/000003'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, headers={'etag': 'etag3'}), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=self.endpoint, container=self.container, object=self.object), status_code=201, validate=dict( params={ 'multipart-manifest': 'put' }, headers={ 'x-object-meta-x-shade-md5': self.md5, 'x-object-meta-x-shade-sha256': self.sha256, })) ]) self.cloud.create_object( container=self.container, name=self.object, filename=self.object_file.name, use_slo=True) # After call 6, the order becomes indeterminate because of the thread pool self.assert_calls(stop_after=6) for key, value in self.calls[-1]['headers'].items(): self.assertEqual( value, self.adapter.request_history[-1].headers[key], 'header mismatch in manifest call') base_object = '/{container}/{object}'.format( container=self.container, object=self.object) self.assertEqual([ { 'path': "{base_object}/000000".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag0', }, { 'path': "{base_object}/000001".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag1', }, { 'path': "{base_object}/000002".format( base_object=base_object), 'size_bytes': 25, 'etag': 'etag2', }, { 'path': "{base_object}/000003".format( base_object=base_object), 'size_bytes': 1, 'etag': 'etag3', }, ], self.adapter.request_history[-1].json()) shade-1.31.0/shade/tests/unit/test_qos_policy.py0000666000175000017500000003167213440327640021713 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # All Rights Reserved.
# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from shade import exc from shade.tests.unit import base class TestQosPolicy(base.RequestsMockTestCase): policy_name = 'qos test policy' policy_id = '881d1bb7-a663-44c0-8f9f-ee2765b74486' project_id = 'c88fc89f-5121-4a4c-87fd-496b5af864e9' mock_policy = { 'id': policy_id, 'name': policy_name, 'description': '', 'rules': [], 'project_id': project_id, 'tenant_id': project_id, 'shared': False, 'is_default': False } qos_extension = { "updated": "2015-06-08T10:00:00-00:00", "name": "Quality of Service", "links": [], "alias": "qos", "description": "The Quality of Service extension." } qos_default_extension = { "updated": "2017-041-06T10:00:00-00:00", "name": "QoS default policy", "links": [], "alias": "qos-default", "description": "Expose the QoS default policy per project" } enabled_neutron_extensions = [qos_extension, qos_default_extension] def test_get_qos_policy(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}) ]) r = self.cloud.get_qos_policy(self.policy_name) self.assertIsNotNone(r) self.assertDictEqual(self.mock_policy, r) self.assert_calls() def test_get_qos_policy_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.get_qos_policy, self.policy_name) self.assert_calls() def test_create_qos_policy(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policy': self.mock_policy}) ]) policy = self.cloud.create_qos_policy( name=self.policy_name, project_id=self.project_id) self.assertDictEqual(self.mock_policy, policy) self.assert_calls() def test_create_qos_policy_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_qos_policy, name=self.policy_name) self.assert_calls() def test_create_qos_policy_no_qos_default_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policy': self.mock_policy}, 
validate=dict( json={'policy': { 'name': self.policy_name, 'project_id': self.project_id}})) ]) policy = self.cloud.create_qos_policy( name=self.policy_name, project_id=self.project_id, default=True) self.assertDictEqual(self.mock_policy, policy) self.assert_calls() def test_delete_qos_policy(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', '%s.json' % self.policy_id]), json={}) ]) self.assertTrue(self.cloud.delete_qos_policy(self.policy_name)) self.assert_calls() def test_delete_qos_policy_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_qos_policy, self.policy_name) self.assert_calls() def test_delete_qos_policy_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': []}) ]) self.assertFalse(self.cloud.delete_qos_policy('goofy')) self.assert_calls() def test_delete_qos_policy_multiple_found(self): policy1 = dict(id='123', name=self.policy_name) policy2 = dict(id='456', name=self.policy_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [policy1, policy2]}) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.delete_qos_policy, self.policy_name) self.assert_calls() def test_delete_qos_policy_multiple_using_id(self): policy1 = self.mock_policy policy2 = dict(id='456', name=self.policy_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [policy1, policy2]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', '%s.json' % self.policy_id]), json={}) ]) self.assertTrue(self.cloud.delete_qos_policy(policy1['id'])) self.assert_calls() def test_update_qos_policy(self): expected_policy = copy.copy(self.mock_policy) 
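# copy.copy() keeps the class-level mock_policy pristine for the other
# tests; only the name is expected to differ after the update below.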
expected_policy['name'] = 'goofy' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': self.enabled_neutron_extensions}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', '%s.json' % self.policy_id]), json={'policy': expected_policy}, validate=dict( json={'policy': {'name': 'goofy'}})) ]) policy = self.cloud.update_qos_policy( self.policy_id, name='goofy') self.assertDictEqual(expected_policy, policy) self.assert_calls() def test_update_qos_policy_no_qos_extension(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.update_qos_policy, self.policy_id, name="goofy") self.assert_calls() def test_update_qos_policy_no_qos_default_extension(self): expected_policy = copy.copy(self.mock_policy) expected_policy['name'] = 'goofy' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json={'extensions': [self.qos_extension]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies.json']), json={'policies': [self.mock_policy]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'qos', 'policies', '%s.json' % self.policy_id]), json={'policy': expected_policy}, validate=dict( json={'policy': {'name': "goofy"}})) ]) policy = self.cloud.update_qos_policy( self.policy_id, name='goofy', default=True) self.assertDictEqual(expected_policy, policy) self.assert_calls() shade-1.31.0/shade/tests/unit/test_baremetal_node.py0000666000175000017500000020003013440327640022461 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_baremetal_node ---------------------------------- Tests for baremetal node related operations """ import uuid from testscenarios import load_tests_apply_scenarios as load_tests # noqa from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestBaremetalNode(base.IronicTestCase): def setUp(self): super(TestBaremetalNode, self).setUp() self.fake_baremetal_node = fakes.make_fake_machine( self.name, self.uuid) # TODO(TheJulia): Some tests below have fake ports, # since they are required in some processes. 
Let's refactor # them at some point to use self.fake_baremetal_port. self.fake_baremetal_port = fakes.make_fake_port( '00:01:02:03:04:05', node_id=self.uuid) def test_list_machines(self): fake_baremetal_two = fakes.make_fake_machine('two', str(uuid.uuid4())) self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='nodes'), json={'nodes': [self.fake_baremetal_node, fake_baremetal_two]}), ]) machines = self.op_cloud.list_machines() self.assertEqual(2, len(machines)) self.assertEqual(self.fake_baremetal_node, machines[0]) self.assert_calls() def test_get_machine(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) machine = self.op_cloud.get_machine(self.fake_baremetal_node['uuid']) self.assertEqual(machine['uuid'], self.fake_baremetal_node['uuid']) self.assert_calls() def test_get_machine_by_mac(self): mac_address = '00:01:02:03:04:05' url_address = 'detail?address=%s' % mac_address node_uuid = self.fake_baremetal_node['uuid'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='ports', append=[url_address]), json={'ports': [{'address': mac_address, 'node_uuid': node_uuid}]}), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) machine = self.op_cloud.get_machine_by_mac(mac_address) self.assertEqual(machine['uuid'], self.fake_baremetal_node['uuid']) self.assert_calls() def test_validate_node(self): # NOTE(TheJulia): These are the only interfaces # that are validated, and both must be true for an # exception not to be raised. # This should be fixed at some point, as some interfaces # are important in some cases and should be validated, # such as storage.
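# Conceptually the validation check reduces to something like this
# (a sketch of the semantics the two validate_node tests below rely
# on, not shade's exact code):
#
#     if not (validation['deploy']['result'] and
#             validation['power']['result']):
#         raise OpenStackCloudException(...)
#
# which is why the failing 'foo' interface below does not raise.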
validate_return = { 'deploy': { 'result': True, }, 'power': { 'result': True, }, 'foo': { 'result': False, }} self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'validate']), json=validate_return), ]) validate_resp = self.op_cloud.validate_node( self.fake_baremetal_node['uuid']) self.assertDictEqual(validate_return, validate_resp) self.assert_calls() def test_validate_node_raises_exception(self): validate_return = { 'deploy': { 'result': False, 'reason': 'error!', }, 'power': { 'result': False, 'reason': 'meow!', }, 'foo': { 'result': True }} self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'validate']), json=validate_return), ]) self.assertRaisesRegex( exc.OpenStackCloudException, '^ironic node .* failed to validate.*', self.op_cloud.validate_node, self.fake_baremetal_node['uuid']) self.assert_calls() def test_patch_machine(self): test_patch = [{ 'op': 'remove', 'path': '/instance_info'}] self.fake_baremetal_node['instance_info'] = {} self.register_uris([ dict(method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch)), ]) self.op_cloud.patch_machine(self.fake_baremetal_node['uuid'], test_patch) self.assert_calls() def test_set_node_instance_info(self): test_patch = [{ 'op': 'add', 'path': '/foo', 'value': 'bar'}] self.register_uris([ dict(method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch)), ]) self.op_cloud.set_node_instance_info( self.fake_baremetal_node['uuid'], test_patch) self.assert_calls() def test_purge_node_instance_info(self): test_patch = [{ 'op': 'remove', 'path': '/instance_info'}] self.fake_baremetal_node['instance_info'] = {} self.register_uris([ dict(method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch)), ]) self.op_cloud.purge_node_instance_info( self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_fail_active(self): self.fake_baremetal_node['provision_state'] = 'active' self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.inspect_machine, self.fake_baremetal_node['uuid'], wait=True, timeout=1) self.assert_calls() def test_inspect_machine_failed(self): inspecting_node = self.fake_baremetal_node.copy() self.fake_baremetal_node['provision_state'] = 'inspect failed' self.fake_baremetal_node['last_error'] = 'kaboom!' 
inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node) ]) self.op_cloud.inspect_machine(self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_manageable(self): self.fake_baremetal_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), ]) self.op_cloud.inspect_machine(self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_available(self): available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) self.op_cloud.inspect_machine(self.fake_baremetal_node['uuid']) self.assert_calls() def test_inspect_machine_available_wait(self): available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='GET', uri=self.get_mock_url( 
resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) self.op_cloud.inspect_machine(self.fake_baremetal_node['uuid'], wait=True, timeout=1) self.assert_calls() def test_inspect_machine_wait(self): self.fake_baremetal_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.op_cloud.inspect_machine(self.fake_baremetal_node['uuid'], wait=True, timeout=1) self.assert_calls() def test_inspect_machine_inspect_failed(self): self.fake_baremetal_node['provision_state'] = 'manageable' inspecting_node = self.fake_baremetal_node.copy() inspecting_node['provision_state'] = 'inspecting' inspect_fail_node = self.fake_baremetal_node.copy() inspect_fail_node['provision_state'] = 'inspect failed' inspect_fail_node['last_error'] = 'Earth Imploded' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'inspect'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspecting_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=inspect_fail_node), ]) self.assertRaises(exc.OpenStackCloudException, self.op_cloud.inspect_machine, self.fake_baremetal_node['uuid'], wait=True, timeout=1) self.assert_calls() def test_set_machine_maintenace_state(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'maintenance']), validate=dict(json={'reason': 'no reason'})), ]) self.op_cloud.set_machine_maintenance_state( self.fake_baremetal_node['uuid'], True, reason='no reason') self.assert_calls() def test_set_machine_maintenace_state_false(self): self.register_uris([ dict( method='DELETE', uri=self.get_mock_url( resource='nodes', 
append=[self.fake_baremetal_node['uuid'], 'maintenance'])), ]) self.op_cloud.set_machine_maintenance_state( self.fake_baremetal_node['uuid'], False) self.assert_calls() def test_remove_machine_from_maintenance(self): self.register_uris([ dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'maintenance'])), ]) self.op_cloud.remove_machine_from_maintenance( self.fake_baremetal_node['uuid']) self.assert_calls() def test_set_machine_power_on(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), ]) return_value = self.op_cloud.set_machine_power_on( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_on_with_retires(self): # NOTE(TheJulia): This logic ends up testing power on/off and reboot # as they all utilize the same helper method. self.register_uris([ dict( method='PUT', status_code=503, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), dict( method='PUT', status_code=409, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power on'})), ]) return_value = self.op_cloud.set_machine_power_on( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_off(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'power off'})), ]) return_value = self.op_cloud.set_machine_power_off( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_reboot(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), validate=dict(json={'target': 'rebooting'})), ]) return_value = self.op_cloud.set_machine_power_reboot( self.fake_baremetal_node['uuid']) self.assertIsNone(return_value) self.assert_calls() def test_set_machine_power_reboot_failure(self): self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'power']), status_code=400, json={'error': 'invalid'}, validate=dict(json={'target': 'rebooting'})), ]) self.assertRaises(exc.OpenStackCloudException, self.op_cloud.set_machine_power_reboot, self.fake_baremetal_node['uuid']) self.assert_calls() def test_node_set_provision_state(self): deploy_node = self.fake_baremetal_node.copy() deploy_node['provision_state'] = 'deploying' active_node = self.fake_baremetal_node.copy() active_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.op_cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active',
configdrive='http://host/file') self.assert_calls() def test_node_set_provision_state_with_retries(self): deploy_node = self.fake_baremetal_node.copy() deploy_node['provision_state'] = 'deploying' active_node = self.fake_baremetal_node.copy() active_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', status_code=409, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict( method='PUT', status_code=503, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.op_cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', configdrive='http://host/file') self.assert_calls() def test_node_set_provision_state_wait_timeout(self): deploy_node = self.fake_baremetal_node.copy() deploy_node['provision_state'] = 'deploying' active_node = self.fake_baremetal_node.copy() active_node['provision_state'] = 'active' self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=deploy_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=active_node), ]) return_value = self.op_cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', wait=True) self.assertEqual(active_node, return_value) self.assert_calls() def test_node_set_provision_state_wait_timeout_fails(self): # Intentionally time out. 
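# The node below is pinned in 'deploy wait', so the 0.001 second
# timeout expires while the wait loop is still polling and
# node_set_provision_state raises instead of returning.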
self.fake_baremetal_node['provision_state'] = 'deploy wait' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.node_set_provision_state, self.fake_baremetal_node['uuid'], 'active', wait=True, timeout=0.001) self.assert_calls() def test_node_set_provision_state_wait_success(self): self.fake_baremetal_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) return_value = self.op_cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'active', wait=True) self.assertEqual(self.fake_baremetal_node, return_value) self.assert_calls() def test_node_set_provision_state_wait_failure_cases(self): self.fake_baremetal_node['provision_state'] = 'foo failed' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.node_set_provision_state, self.fake_baremetal_node['uuid'], 'active', wait=True, timeout=300) self.assert_calls() def test_node_set_provision_state_wait_provide(self): self.fake_baremetal_node['provision_state'] = 'manageable' available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) return_value = self.op_cloud.node_set_provision_state( self.fake_baremetal_node['uuid'], 'provide', wait=True) self.assertEqual(available_node, return_value) self.assert_calls() def test_node_set_provision_state_bad_request(self): self.fake_baremetal_node['provision_state'] = 'enroll' self.register_uris([ dict( method='PUT', status_code=400, json={'error': "{\"faultstring\": \"invalid state\"}"}, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision'])), ]) self.assertRaisesRegex( exc.OpenStackCloudException, '^Baremetal .* to dummy.*/states/provision invalid state$', self.op_cloud.node_set_provision_state, self.fake_baremetal_node['uuid'], 'dummy') self.assert_calls() def test_node_set_provision_state_bad_request_bad_json(self): self.fake_baremetal_node['provision_state'] = 'enroll' self.register_uris([ dict( method='PUT', status_code=400, json={'error': 'invalid json'}, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision'])), ]) 
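# Unlike the previous test, the error body here is not a JSON-encoded
# faultstring, so no detail can be extracted and the regex below
# matches a message without the trailing fault text.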
self.assertRaisesRegex( exc.OpenStackCloudException, '^Baremetal .* to dummy.*/states/provision$', self.op_cloud.node_set_provision_state, self.fake_baremetal_node['uuid'], 'dummy') self.assert_calls() def test_wait_for_baremetal_node_lock_locked(self): self.fake_baremetal_node['reservation'] = 'conductor0' unlocked_node = self.fake_baremetal_node.copy() unlocked_node['reservation'] = None self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=unlocked_node), ]) self.assertIsNone( self.op_cloud.wait_for_baremetal_node_lock( self.fake_baremetal_node, timeout=1)) self.assert_calls() def test_wait_for_baremetal_node_lock_not_locked(self): self.fake_baremetal_node['reservation'] = None self.assertIsNone( self.op_cloud.wait_for_baremetal_node_lock( self.fake_baremetal_node, timeout=1)) self.assertEqual(0, len(self.adapter.request_history)) def test_wait_for_baremetal_node_lock_timeout(self): self.fake_baremetal_node['reservation'] = 'conductor0' self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.wait_for_baremetal_node_lock, self.fake_baremetal_node, timeout=0.001) self.assert_calls() def test_activate_node(self): self.fake_baremetal_node['provision_state'] = 'active' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'active', 'configdrive': 'http://host/file'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) return_value = self.op_cloud.activate_node( self.fake_baremetal_node['uuid'], configdrive='http://host/file', wait=True) self.assertIsNone(return_value) self.assert_calls() def test_deactivate_node(self): self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'deleted'})), dict(method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) return_value = self.op_cloud.deactivate_node( self.fake_baremetal_node['uuid'], wait=True) self.assertIsNone(return_value) self.assert_calls() def test_register_machine(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] # TODO(TheJulia): There is a lot of duplication # in testing creation. Surely this should be a helper # or something. We should fix this.
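# A de-duplicating helper might look something like this (purely
# hypothetical, sketched to illustrate the TODO above; not part of
# shade):
#
#     def _fake_node_post_body(node):
#         return {'chassis_uuid': None, 'driver': None,
#                 'driver_info': None, 'properties': None,
#                 'name': node['name'], 'uuid': node['uuid']}
#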
node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'available' if 'provision_state' in node_to_post: node_to_post.pop('provision_state') self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), ]) return_value = self.op_cloud.register_machine(nics, **node_to_post) self.assertDictEqual(self.fake_baremetal_node, return_value) self.assert_calls() # TODO(TheJulia): We need to de-duplicate these tests. # Possibly a dedicated class, although we should do it # then as we may find differences that need to be # accounted for in newer microversions. def test_register_machine_enroll(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), validate=dict(json=node_to_post), json=self.fake_baremetal_node), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) # NOTE: When we migrate to a newer microversion, this test # may require revision. It was written for microversion # 1.13 (most likely), which accidentally got reverted to 1.6 at one # point while the code was being refactored soon after the # change landed. Presently, with the lock at 1.6, # this code is never used in the current code path.
return_value = self.op_cloud.register_machine(nics, **node_to_post) self.assertDictEqual(available_node, return_value) self.assert_calls() def test_register_machine_enroll_wait(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' manageable_node = self.fake_baremetal_node.copy() manageable_node['provision_state'] = 'manageable' available_node = self.fake_baremetal_node.copy() available_node['provision_state'] = 'available' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), validate=dict(json=node_to_post), json=self.fake_baremetal_node), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=manageable_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'provide'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=available_node), ]) return_value = self.op_cloud.register_machine(nics, wait=True, **node_to_post) self.assertDictEqual(available_node, return_value) self.assert_calls() def test_register_machine_enroll_failure(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' failed_node = self.fake_baremetal_node.copy() failed_node['reservation'] = 'conductor0' failed_node['provision_state'] = 'verifying' failed_node['last_error'] = 'kaboom!'
self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=failed_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=failed_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.register_machine, nics, **node_to_post) self.assert_calls() def test_register_machine_enroll_timeout(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' busy_node = self.fake_baremetal_node.copy() busy_node['reservation'] = 'conductor0' busy_node['provision_state'] = 'verifying' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=busy_node), ]) # NOTE(TheJulia): This test shortcircuits the timeout loop # such that it executes only once. The very last returned # state to the API is essentially a busy state that we # want to block on until it has cleared. 
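# With both timeout and lock_timeout at 0.001 seconds, the lock wait
# gives up while 'reservation' is still 'conductor0', so
# register_machine raises rather than polling until the lock clears.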
self.assertRaises( exc.OpenStackCloudException, self.op_cloud.register_machine, nics, timeout=0.001, lock_timeout=0.001, **node_to_post) self.assert_calls() def test_register_machine_enroll_timeout_wait(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'enroll' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), validate=dict(json={'address': mac_address, 'node_uuid': node_uuid}), json=self.fake_baremetal_port), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='PUT', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid'], 'states', 'provision']), validate=dict(json={'target': 'manage'})), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.register_machine, nics, wait=True, timeout=0.001, **node_to_post) self.assert_calls() def test_register_machine_port_create_failed(self): mac_address = '00:01:02:03:04:05' nics = [{'mac': mac_address}] node_uuid = self.fake_baremetal_node['uuid'] node_to_post = { 'chassis_uuid': None, 'driver': None, 'driver_info': None, 'name': self.fake_baremetal_node['name'], 'properties': None, 'uuid': node_uuid} self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='POST', uri=self.get_mock_url( resource='nodes'), json=self.fake_baremetal_node, validate=dict(json=node_to_post)), dict( method='POST', uri=self.get_mock_url( resource='ports'), status_code=400, json={'error': 'invalid'}, validate=dict(json={'address': mac_address, 'node_uuid': node_uuid})), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), ]) self.assertRaises(exc.OpenStackCloudException, self.op_cloud.register_machine, nics, **node_to_post) self.assert_calls() def test_unregister_machine(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] port_uuid = self.fake_baremetal_port['uuid'] # NOTE(TheJulia): The two values below should be the same. 
port_node_uuid = self.fake_baremetal_port['node_uuid'] port_url_address = 'detail?address=%s' % mac_address self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='ports', append=[port_url_address]), json={'ports': [{'address': mac_address, 'node_uuid': port_node_uuid, 'uuid': port_uuid}]}), dict( method='DELETE', uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), ]) self.op_cloud.unregister_machine(nics, self.fake_baremetal_node['uuid']) self.assert_calls() def test_unregister_machine_timeout(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] port_uuid = self.fake_baremetal_port['uuid'] port_node_uuid = self.fake_baremetal_port['node_uuid'] port_url_address = 'detail?address=%s' % mac_address self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='ports', append=[port_url_address]), json={'ports': [{'address': mac_address, 'node_uuid': port_node_uuid, 'uuid': port_uuid}]}), dict( method='DELETE', uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.unregister_machine, nics, self.fake_baremetal_node['uuid'], wait=True, timeout=0.001) self.assert_calls() def test_unregister_machine_locked_timeout(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] self.fake_baremetal_node['provision_state'] = 'available' self.fake_baremetal_node['reservation'] = 'conductor99' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) self.assertRaises( exc.OpenStackCloudException, self.op_cloud.unregister_machine, nics, self.fake_baremetal_node['uuid'], timeout=0.001) self.assert_calls() def test_unregister_machine_retries(self): mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] port_uuid = self.fake_baremetal_port['uuid'] # NOTE(TheJulia): The two values below should be the same. 
port_node_uuid = self.fake_baremetal_port['node_uuid'] port_url_address = 'detail?address=%s' % mac_address self.fake_baremetal_node['provision_state'] = 'available' self.register_uris([ dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), dict( method='GET', uri=self.get_mock_url( resource='ports', append=[port_url_address]), json={'ports': [{'address': mac_address, 'node_uuid': port_node_uuid, 'uuid': port_uuid}]}), dict( method='DELETE', status_code=503, uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', status_code=409, uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='ports', append=[self.fake_baremetal_port['uuid']])), dict( method='DELETE', status_code=409, uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), dict( method='DELETE', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']])), ]) self.op_cloud.unregister_machine(nics, self.fake_baremetal_node['uuid']) self.assert_calls() def test_unregister_machine_unavailable(self): # This is a list of invalid states that the method # should fail on. invalid_states = ['active', 'cleaning', 'clean wait', 'clean failed'] mac_address = self.fake_baremetal_port['address'] nics = [{'mac': mac_address}] url_list = [] for state in invalid_states: self.fake_baremetal_node['provision_state'] = state url_list.append( dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node)) self.register_uris(url_list) for state in invalid_states: self.assertRaises( exc.OpenStackCloudException, self.op_cloud.unregister_machine, nics, self.fake_baremetal_node['uuid']) self.assert_calls() def test_update_machine_patch_no_action(self): self.register_uris([dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ]) # NOTE(TheJulia): This is just testing mechanics. update_dict = self.op_cloud.update_machine( self.fake_baremetal_node['uuid']) self.assertIsNone(update_dict['changes']) self.assertDictEqual(self.fake_baremetal_node, update_dict['node']) self.assert_calls() class TestUpdateMachinePatch(base.IronicTestCase): # NOTE(TheJulia): As appears, and mordred describes, # this class utilizes black magic, which ultimately # results in additional test runs being executed with # the scenario name appended. Useful for lots of # variables that need to be tested. def setUp(self): super(TestUpdateMachinePatch, self).setUp() self.fake_baremetal_node = fakes.make_fake_machine( self.name, self.uuid) def test_update_machine_patch(self): # The model has evolved over time, create the field if # we don't already have it. 
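        # Each generated scenario exercises update_machine() for a single
        # field. When 'changed' is True the client is expected to send a
        # JSON-patch replace for that field, e.g. (illustrative):
        #
        #   [{'op': 'replace', 'path': '/driver', 'value': 'meow'}]
        #
        # When 'changed' is False, no PATCH call should be made at all.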
if self.field_name not in self.fake_baremetal_node: self.fake_baremetal_node[self.field_name] = None value_to_send = self.fake_baremetal_node[self.field_name] if self.changed: value_to_send = 'meow' uris = [dict( method='GET', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node), ] if self.changed: test_patch = [{ 'op': 'replace', 'path': '/' + self.field_name, 'value': 'meow'}] uris.append( dict( method='PATCH', uri=self.get_mock_url( resource='nodes', append=[self.fake_baremetal_node['uuid']]), json=self.fake_baremetal_node, validate=dict(json=test_patch))) self.register_uris(uris) call_args = {self.field_name: value_to_send} update_dict = self.op_cloud.update_machine( self.fake_baremetal_node['uuid'], **call_args) if not self.changed: self.assertIsNone(update_dict['changes']) self.assertDictEqual(self.fake_baremetal_node, update_dict['node']) self.assert_calls() scenarios = [ ('chassis_uuid', dict(field_name='chassis_uuid', changed=False)), ('chassis_uuid_changed', dict(field_name='chassis_uuid', changed=True)), ('driver', dict(field_name='driver', changed=False)), ('driver_changed', dict(field_name='driver', changed=True)), ('driver_info', dict(field_name='driver_info', changed=False)), ('driver_info_changed', dict(field_name='driver_info', changed=True)), ('instance_info', dict(field_name='instance_info', changed=False)), ('instance_info_changed', dict(field_name='instance_info', changed=True)), ('instance_uuid', dict(field_name='instance_uuid', changed=False)), ('instance_uuid_changed', dict(field_name='instance_uuid', changed=True)), ('name', dict(field_name='name', changed=False)), ('name_changed', dict(field_name='name', changed=True)), ('properties', dict(field_name='properties', changed=False)), ('properties_changed', dict(field_name='properties', changed=True)) ] shade-1.31.0/shade/tests/unit/test_domain_params.py0000666000175000017500000000552513440327640022346 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
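# These tests pin the behavior of OpenStackCloud._get_identity_params(),
# which shapes the identity portion of a request. Roughly, per the
# assertions below (a sketch, not an exhaustive contract):
#
#   Keystone v3:  {'default_project_id': <project id>, 'domain_id': <id>}
#   Keystone v2:  {'tenant_id': <project id>}   # no domain key at all
#
# Passing domain_id=None under v3 raises OpenStackCloudException, since v3
# project lookups are domain-scoped.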
from shade import exc from shade.tests.unit import base class TestDomainParams(base.RequestsMockTestCase): def test_identity_params_v3(self): project_data = self._get_project_data(v3=True) self.register_uris([ dict(method='GET', uri='https://identity.example.com/v3/projects', json=dict(projects=[project_data.json_response['project']])) ]) ret = self.cloud._get_identity_params( domain_id='5678', project=project_data.project_name) self.assertIn('default_project_id', ret) self.assertEqual(ret['default_project_id'], project_data.project_id) self.assertIn('domain_id', ret) self.assertEqual(ret['domain_id'], '5678') self.assert_calls() def test_identity_params_v3_no_domain(self): project_data = self._get_project_data(v3=True) self.assertRaises( exc.OpenStackCloudException, self.cloud._get_identity_params, domain_id=None, project=project_data.project_name) self.assert_calls() def test_identity_params_v2(self): self.use_keystone_v2() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri='https://identity.example.com/v2.0/tenants', json=dict(tenants=[project_data.json_response['tenant']])) ]) ret = self.cloud._get_identity_params( domain_id='foo', project=project_data.project_name) self.assertIn('tenant_id', ret) self.assertEqual(ret['tenant_id'], project_data.project_id) self.assertNotIn('domain', ret) self.assert_calls() def test_identity_params_v2_no_domain(self): self.use_keystone_v2() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri='https://identity.example.com/v2.0/tenants', json=dict(tenants=[project_data.json_response['tenant']])) ]) ret = self.cloud._get_identity_params( domain_id=None, project=project_data.project_name) self.assertIn('tenant_id', ret) self.assertEqual(ret['tenant_id'], project_data.project_id) self.assertNotIn('domain', ret) self.assert_calls() shade-1.31.0/shade/tests/unit/test_create_server.py0000666000175000017500000011467313440327640022372 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_create_server ---------------------------------- Tests for the `create_server` command. """ import base64 import uuid import mock import shade from shade import exc from shade import meta from shade.tests import fakes from shade.tests.unit import base class TestCreateServer(base.RequestsMockTestCase): def test_create_server_with_get_exception(self): """ Test that a bad status code when attempting to get the server instance raises an exception in create_server. 
""" build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), status_code=404), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', {'id': 'image-id'}, {'id': 'flavor-id'}) self.assert_calls() def test_create_server_with_server_error(self): """ Test that a server error before we return or begin waiting for the server instance spawn raises an exception in create_server. """ build_server = fakes.make_fake_server('1234', '', 'BUILD') error_server = fakes.make_fake_server('1234', '', 'ERROR') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': error_server}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', {'id': 'image-id'}, {'id': 'flavor-id'}) self.assert_calls() def test_create_server_wait_server_error(self): """ Test that a server error while waiting for the server to spawn raises an exception in create_server. """ build_server = fakes.make_fake_server('1234', '', 'BUILD') error_server = fakes.make_fake_server('1234', '', 'ERROR') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [build_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [error_server]}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', dict(id='image-id'), dict(id='flavor-id'), wait=True) self.assert_calls() def test_create_server_with_timeout(self): """ Test that a timeout while waiting for the server to spawn raises an exception in create_server. 
""" fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.create_server, 'server-name', dict(id='image-id'), dict(id='flavor-id'), wait=True, timeout=0.01) # We poll at the end, so we don't know real counts self.assert_calls(do_count=False) def test_create_server_no_wait(self): """ Test that create_server with no wait and no exception in the create call returns the server instance. """ fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) normalized = self.cloud._expand_server( self.cloud._normalize_server(fake_server), False, False) self.assertEqual( normalized, self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'))) self.assert_calls() def test_create_server_config_drive(self): """ Test that config_drive gets passed in properly """ fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'config_drive': True, u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) normalized = self.cloud._expand_server( self.cloud._normalize_server(fake_server), False, False) self.assertEqual( normalized, self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), config_drive=True)) self.assert_calls() def test_create_server_config_drive_none(self): """ Test that config_drive gets not passed in properly """ fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) normalized = self.cloud._expand_server( 
self.cloud._normalize_server(fake_server), False, False) self.assertEqual( normalized, self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), config_drive=None)) self.assert_calls() def test_create_server_with_admin_pass_no_wait(self): """ Test that a server with an admin_pass passed returns the password """ admin_pass = self.getUniqueString('password') fake_server = fakes.make_fake_server('1234', '', 'BUILD') fake_create_server = fakes.make_fake_server( '1234', '', 'BUILD', admin_pass=admin_pass) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_create_server}, validate=dict( json={'server': { u'adminPass': admin_pass, u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) self.assertEqual( self.cloud._normalize_server(fake_create_server)['adminPass'], self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), admin_pass=admin_pass)['adminPass']) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, "wait_for_server") def test_create_server_with_admin_pass_wait(self, mock_wait): """ Test that a server with an admin_pass passed returns the password """ admin_pass = self.getUniqueString('password') fake_server = fakes.make_fake_server('1234', '', 'BUILD') fake_server_with_pass = fakes.make_fake_server( '1234', '', 'BUILD', admin_pass=admin_pass) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server_with_pass}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'adminPass': admin_pass, u'name': u'server-name'}})), ]) # The wait returns non-password server mock_wait.return_value = self.cloud._normalize_server(fake_server) server = self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), admin_pass=admin_pass, wait=True) # Assert that we did wait self.assertTrue(mock_wait.called) # Even with the wait, we should still get back a passworded server self.assertEqual( server['adminPass'], self.cloud._normalize_server(fake_server_with_pass)['adminPass'] ) self.assert_calls() def test_create_server_user_data_base64(self): """ Test that a server passed user-data sends it base64 encoded. 
""" user_data = self.getUniqueString('user_data') user_data_b64 = base64.b64encode( user_data.encode('utf-8')).decode('utf-8') fake_server = fakes.make_fake_server('1234', '', 'BUILD') fake_server['user_data'] = user_data self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'user_data': user_data_b64, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': fake_server}), ]) self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), userdata=user_data, wait=False) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, "get_active_server") @mock.patch.object(shade.OpenStackCloud, "get_server") def test_wait_for_server(self, mock_get_server, mock_get_active_server): """ Test that waiting for a server returns the server instance when its status changes to "ACTIVE". """ # TODO(mordred) Rework this to not mock methods building_server = {'id': 'fake_server_id', 'status': 'BUILDING'} active_server = {'id': 'fake_server_id', 'status': 'ACTIVE'} mock_get_server.side_effect = iter([building_server, active_server]) mock_get_active_server.side_effect = iter([ building_server, active_server]) server = self.cloud.wait_for_server(building_server) self.assertEqual(2, mock_get_server.call_count) mock_get_server.assert_has_calls([ mock.call(building_server['id']), mock.call(active_server['id']), ]) self.assertEqual(2, mock_get_active_server.call_count) mock_get_active_server.assert_has_calls([ mock.call(server=building_server, reuse=True, auto_ip=True, ips=None, ip_pool=None, wait=True, timeout=mock.ANY, nat_destination=None), mock.call(server=active_server, reuse=True, auto_ip=True, ips=None, ip_pool=None, wait=True, timeout=mock.ANY, nat_destination=None), ]) self.assertEqual('ACTIVE', server['status']) @mock.patch.object(shade.OpenStackCloud, 'wait_for_server') def test_create_server_wait(self, mock_wait): """ Test that create_server with a wait actually does the wait. """ # TODO(mordred) Make this a full proper response fake_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': fake_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), wait=True), mock_wait.assert_called_once_with( fake_server, auto_ip=True, ips=None, ip_pool=None, reuse=True, timeout=180, nat_destination=None, ) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'add_ips_to_server') @mock.patch('time.sleep') def test_create_server_no_addresses( self, mock_sleep, mock_add_ips_to_server): """ Test that create_server with a wait throws an exception if the server doesn't have addresses. 
""" build_server = fakes.make_fake_server('1234', '', 'BUILD') fake_server = fakes.make_fake_server( '1234', '', 'ACTIVE', addresses={}) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [build_server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=['device_id=1234']), json={'ports': []}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) mock_add_ips_to_server.return_value = fake_server self.cloud._SERVER_AGE = 0 self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', {'id': 'image-id'}, {'id': 'flavor-id'}, wait=True) self.assert_calls() def test_create_server_network_with_no_nics(self): """ Verify that if 'network' is supplied, and 'nics' is not, that we attempt to get the network for the server. """ build_server = fakes.make_fake_server('1234', '', 'BUILD') network = { 'id': 'network-id', 'name': 'network-name' } self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{u'uuid': u'network-id'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), network='network-name') self.assert_calls() def test_create_server_network_with_empty_nics(self): """ Verify that if 'network' is supplied, along with an empty 'nics' list, it's treated the same as if 'nics' were not included. 
""" network = { 'id': 'network-id', 'name': 'network-name' } build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{u'uuid': u'network-id'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), network='network-name', nics=[]) self.assert_calls() def test_create_server_network_fixed_ip(self): """ Verify that if 'fixed_ip' is supplied in nics, we pass it to networks appropriately. """ network = { 'id': 'network-id', 'name': 'network-name' } fixed_ip = '10.0.0.1' build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{'fixed_ip': fixed_ip}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), nics=[{'fixed_ip': fixed_ip}]) self.assert_calls() def test_create_server_network_v4_fixed_ip(self): """ Verify that if 'v4-fixed-ip' is supplied in nics, we pass it to networks appropriately. """ network = { 'id': 'network-id', 'name': 'network-name' } fixed_ip = '10.0.0.1' build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{'fixed_ip': fixed_ip}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), nics=[{'fixed_ip': fixed_ip}]) self.assert_calls() def test_create_server_network_v6_fixed_ip(self): """ Verify that if 'v6-fixed-ip' is supplied in nics, we pass it to networks appropriately. 
""" network = { 'id': 'network-id', 'name': 'network-name' } # Note - it doesn't actually have to be a v6 address - it's just # an alias. fixed_ip = 'fe80::28da:5fff:fe57:13ed' build_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': u'flavor-id', u'imageRef': u'image-id', u'max_count': 1, u'min_count': 1, u'networks': [{'fixed_ip': fixed_ip}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': build_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), ]) self.cloud.create_server( 'server-name', dict(id='image-id'), dict(id='flavor-id'), nics=[{'fixed_ip': fixed_ip}]) self.assert_calls() def test_create_server_network_fixed_ip_conflicts(self): """ Verify that if 'fixed_ip' and 'v4-fixed-ip' are both supplied in nics, we throw an exception. """ # Note - it doesn't actually have to be a v6 address - it's just # an alias. self.use_nothing() fixed_ip = '10.0.0.1' self.assertRaises( exc.OpenStackCloudException, self.cloud.create_server, 'server-name', dict(id='image-id'), dict(id='flavor-id'), nics=[{ 'fixed_ip': fixed_ip, 'v4-fixed-ip': fixed_ip }]) self.assert_calls() def test_create_server_get_flavor_image(self): self.use_glance() image_id = str(uuid.uuid4()) fake_image_dict = fakes.make_fake_image(image_id=image_id) fake_image_search_return = {'images': [fake_image_dict]} build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=fake_image_search_return), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['flavors', 'detail'], qs_elements=['is_public=None']), json={'flavors': fakes.FAKE_FLAVOR_LIST}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': fakes.FLAVOR_ID, u'imageRef': image_id, u'max_count': 1, u'min_count': 1, u'networks': [{u'uuid': u'some-network'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.cloud.create_server( 'server-name', image_id, 'vanilla', nics=[{'net-id': 'some-network'}], wait=False) self.assert_calls() def test_create_server_nics_port_id(self): '''Verify port-id in nics input turns into port in REST.''' build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') image_id = uuid.uuid4().hex port_id = uuid.uuid4().hex self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': fakes.FLAVOR_ID, u'imageRef': image_id, u'max_count': 1, u'min_count': 1, u'networks': [{u'port': port_id}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), 
json={'server': active_server}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), ]) self.cloud.create_server( 'server-name', dict(id=image_id), dict(id=fakes.FLAVOR_ID), nics=[{'port-id': port_id}], wait=False) self.assert_calls() def test_create_boot_attach_volume(self): build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') vol = {'id': 'volume001', 'status': 'available', 'name': '', 'attachments': []} volume = meta.obj_to_munch(fakes.FakeVolume(**vol)) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-volumes_boot']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': 'flavor-id', u'imageRef': 'image-id', u'max_count': 1, u'min_count': 1, u'block_device_mapping_v2': [ { u'boot_index': 0, u'delete_on_termination': True, u'destination_type': u'local', u'source_type': u'image', u'uuid': u'image-id' }, { u'boot_index': u'-1', u'delete_on_termination': False, u'destination_type': u'volume', u'source_type': u'volume', u'uuid': u'volume001' } ], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), ]) self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), boot_from_volume=False, volumes=[volume], wait=False) self.assert_calls() def test_create_boot_from_volume_image_terminate(self): build_server = fakes.make_fake_server('1234', '', 'BUILD') active_server = fakes.make_fake_server('1234', '', 'BUILD') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': []}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-volumes_boot']), json={'server': build_server}, validate=dict( json={'server': { u'flavorRef': 'flavor-id', u'imageRef': '', u'max_count': 1, u'min_count': 1, u'block_device_mapping_v2': [{ u'boot_index': u'0', u'delete_on_termination': True, u'destination_type': u'volume', u'source_type': u'image', u'uuid': u'image-id', u'volume_size': u'1'}], u'name': u'server-name'}})), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234']), json={'server': active_server}), ]) self.cloud.create_server( name='server-name', image=dict(id='image-id'), flavor=dict(id='flavor-id'), boot_from_volume=True, terminate_volume=True, volume_size=1, wait=False) self.assert_calls() shade-1.31.0/shade/tests/unit/test_usage.py0000666000175000017500000000505613440327640020637 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
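# A sketch of the API exercised here (times are illustrative):
#
#   start = end = datetime.datetime.now()
#   cloud.get_compute_usage(project_id, start, end)
#
# The start/end values are serialized with isoformat() into the query
# string of GET /os-simple-tenant-usage/<project_id>, and the canned
# "tenant_usage" document below is what Nova would hand back.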
import datetime import uuid from shade.tests.unit import base class TestUsage(base.RequestsMockTestCase): def test_get_usage(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] start = end = datetime.datetime.now() self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-simple-tenant-usage', project.project_id], qs_elements=[ 'start={now}'.format(now=start.isoformat()), 'end={now}'.format(now=end.isoformat()), ]), json={"tenant_usage": { "server_usages": [ { "ended_at": None, "flavor": "m1.tiny", "hours": 1.0, "instance_id": uuid.uuid4().hex, "local_gb": 1, "memory_mb": 512, "name": "instance-2", "started_at": "2012-10-08T20:10:44.541277", "state": "active", "tenant_id": "6f70656e737461636b20342065766572", "uptime": 3600, "vcpus": 1 } ], "start": "2012-10-08T20:10:44.587336", "stop": "2012-10-08T21:10:44.587336", "tenant_id": "6f70656e737461636b20342065766572", "total_hours": 1.0, "total_local_gb_usage": 1.0, "total_memory_mb_usage": 512.0, "total_vcpus_usage": 1.0 }}) ]) self.op_cloud.get_compute_usage(project.project_id, start, end) self.assert_calls() shade-1.31.0/shade/tests/unit/base.py0000666000175000017500000006636713440327640017422 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import time import uuid import fixtures import mock import os import os_client_config as occ from requests import structures from requests_mock.contrib import fixture as rm_fixture from six.moves import urllib import tempfile import shade.openstackcloud from shade.tests import base _ProjectData = collections.namedtuple( 'ProjectData', 'project_id, project_name, enabled, domain_id, description, ' 'json_response, json_request') _UserData = collections.namedtuple( 'UserData', 'user_id, password, name, email, description, domain_id, enabled, ' 'json_response, json_request') _GroupData = collections.namedtuple( 'GroupData', 'group_id, group_name, domain_id, description, json_response, ' 'json_request') _DomainData = collections.namedtuple( 'DomainData', 'domain_id, domain_name, description, json_response, ' 'json_request') _ServiceData = collections.namedtuple( 'Servicedata', 'service_id, service_name, service_type, description, enabled, ' 'json_response_v3, json_response_v2, json_request') _EndpointDataV3 = collections.namedtuple( 'EndpointData', 'endpoint_id, service_id, interface, region, url, enabled, ' 'json_response, json_request') _EndpointDataV2 = collections.namedtuple( 'EndpointData', 'endpoint_id, service_id, region, public_url, internal_url, ' 'admin_url, v3_endpoint_list, json_response, ' 'json_request') # NOTE(notmorgan): Shade does not support domain-specific roles # This should eventually be fixed if it becomes a main-stream feature. 
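# Each *Data namedtuple in this module pairs generated identifiers with the
# matching JSON request and response bodies, so a test can register a mock
# URI and validate the request against the very same data, e.g.
# (illustrative):
#
#   role = self._get_role_data(role_name='member')
#   self.register_uris([dict(
#       method='POST',
#       uri=self.get_mock_url(service_type='identity', resource='roles',
#                             base_url_append='v3'),
#       json=role.json_response,
#       validate=dict(json=role.json_request))])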
_RoleData = collections.namedtuple( 'RoleData', 'role_id, role_name, json_response, json_request') class BaseTestCase(base.TestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): """Run before each test method to initialize test environment.""" super(BaseTestCase, self).setUp() # Sleeps are for real testing, but unit tests shouldn't need them realsleep = time.sleep def _nosleep(seconds): return realsleep(seconds * 0.0001) self.sleep_fixture = self.useFixture(fixtures.MonkeyPatch( 'time.sleep', _nosleep)) self.fixtures_directory = 'shade/tests/unit/fixtures' # Isolate os-client-config from test environment config = tempfile.NamedTemporaryFile(delete=False) cloud_path = '%s/clouds/%s' % (self.fixtures_directory, cloud_config_fixture) with open(cloud_path, 'rb') as f: content = f.read() config.write(content) config.close() vendor = tempfile.NamedTemporaryFile(delete=False) vendor.write(b'{}') vendor.close() # set record mode depending on environment record_mode = os.environ.get('BETAMAX_RECORD_FIXTURES', False) if record_mode: self.record_fixtures = 'new_episodes' else: self.record_fixtures = None test_cloud = os.environ.get('SHADE_OS_CLOUD', '_test_cloud_') self.config = occ.OpenStackConfig( config_files=[config.name], vendor_files=[vendor.name], secure_files=['non-existant']) self.cloud_config = self.config.get_one_cloud( cloud=test_cloud, validate=False) self.cloud = shade.OpenStackCloud( cloud_config=self.cloud_config) self.strict_cloud = shade.OpenStackCloud( cloud_config=self.cloud_config, strict=True) self.op_cloud = shade.OperatorCloud( cloud_config=self.cloud_config) class TestCase(BaseTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(TestCase, self).setUp(cloud_config_fixture=cloud_config_fixture) self.session_fixture = self.useFixture(fixtures.MonkeyPatch( 'os_client_config.cloud_config.CloudConfig.get_session', mock.Mock())) class RequestsMockTestCase(BaseTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(RequestsMockTestCase, self).setUp( cloud_config_fixture=cloud_config_fixture) # FIXME(notmorgan): Convert the uri_registry, discovery.json, and # use of keystone_v3/v2 to a proper fixtures.Fixture. For now this # is acceptable, but eventually this should become it's own fixture # that encapsulates the registry, registering the URIs, and # assert_calls (and calling assert_calls every test case that uses # it on cleanup). Subclassing here could be 100% eliminated in the # future allowing any class to simply # self.useFixture(shade.RequestsMockFixture) and get all the benefits. # NOTE(notmorgan): use an ordered dict here to ensure we preserve the # order in which items are added to the uri_registry. This makes # the behavior more consistent when dealing with ensuring the # requests_mock uri/query_string matchers are ordered and parse the # request in the correct orders. 
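        # In practice, two registrations for the same method and URI are
        # collapsed into one requests_mock matcher whose responses play
        # back in registration order, e.g. (illustrative):
        #
        #   self.register_uris([
        #       dict(method='GET', uri=url, status_code=503),
        #       dict(method='GET', uri=url, json={'done': True}),
        #   ])
        #
        # would make the first GET fail and the second succeed.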
self._uri_registry = collections.OrderedDict() self.discovery_json = os.path.join( self.fixtures_directory, 'discovery.json') self.use_keystone_v3() self.__register_uris_called = False def get_mock_url(self, service_type, interface='public', resource=None, append=None, base_url_append=None, qs_elements=None): endpoint_url = self.cloud.endpoint_for( service_type=service_type, interface=interface) # Strip trailing slashes, so as not to produce double-slashes below if endpoint_url.endswith('/'): endpoint_url = endpoint_url[:-1] to_join = [endpoint_url] qs = '' if base_url_append: to_join.append(base_url_append) if resource: to_join.append(resource) to_join.extend(append or []) if qs_elements is not None: qs = '?%s' % '&'.join(qs_elements) return '%(uri)s%(qs)s' % {'uri': '/'.join(to_join), 'qs': qs} def mock_for_keystone_projects(self, project=None, v3=True, list_get=False, id_get=False, project_list=None, project_count=None): if project: assert not (project_list or project_count) elif project_list: assert not (project or project_count) elif project_count: assert not (project or project_list) else: raise Exception('Must specify a project, project_list, ' 'or project_count') assert list_get or id_get base_url_append = 'v3' if v3 else None if project: project_list = [project] elif project_count: # Generate multiple projects project_list = [self._get_project_data(v3=v3) for c in range(0, project_count)] uri_mock_list = [] if list_get: uri_mock_list.append( dict(method='GET', uri=self.get_mock_url( service_type='identity', interface='admin', resource='projects', base_url_append=base_url_append), status_code=200, json={'projects': [p.json_response['project'] for p in project_list]}) ) if id_get: for p in project_list: uri_mock_list.append( dict(method='GET', uri=self.get_mock_url( service_type='identity', interface='admin', resource='projects', append=[p.project_id], base_url_append=base_url_append), status_code=200, json=p.json_response) ) self.__do_register_uris(uri_mock_list) return project_list def _get_project_data(self, project_name=None, enabled=None, domain_id=None, description=None, v3=True, project_id=None): project_name = project_name or self.getUniqueString('projectName') project_id = uuid.UUID(project_id or uuid.uuid4().hex).hex response = {'id': project_id, 'name': project_name} request = {'name': project_name} domain_id = (domain_id or uuid.uuid4().hex) if v3 else None if domain_id: request['domain_id'] = domain_id response['domain_id'] = domain_id if enabled is not None: enabled = bool(enabled) response['enabled'] = enabled request['enabled'] = enabled response.setdefault('enabled', True) request.setdefault('enabled', True) if description: response['description'] = description request['description'] = description request.setdefault('description', None) if v3: project_key = 'project' else: project_key = 'tenant' return _ProjectData(project_id, project_name, enabled, domain_id, description, {project_key: response}, {project_key: request}) def _get_group_data(self, name=None, domain_id=None, description=None): group_id = uuid.uuid4().hex name = name or self.getUniqueString('groupname') domain_id = uuid.UUID(domain_id or uuid.uuid4().hex).hex response = {'id': group_id, 'name': name, 'domain_id': domain_id} request = {'name': name, 'domain_id': domain_id} if description is not None: response['description'] = description request['description'] = description return _GroupData(group_id, name, domain_id, description, {'group': response}, {'group': request}) def _get_user_data(self, 
name=None, password=None, **kwargs): name = name or self.getUniqueString('username') password = password or self.getUniqueString('user_password') user_id = uuid.uuid4().hex response = {'name': name, 'id': user_id} request = {'name': name, 'password': password} if kwargs.get('domain_id'): kwargs['domain_id'] = uuid.UUID(kwargs['domain_id']).hex response['domain_id'] = kwargs.pop('domain_id') request['domain_id'] = response['domain_id'] response['email'] = kwargs.pop('email', None) request['email'] = response['email'] response['enabled'] = kwargs.pop('enabled', True) request['enabled'] = response['enabled'] response['description'] = kwargs.pop('description', None) if response['description']: request['description'] = response['description'] self.assertIs(0, len(kwargs), message='extra key-word args received ' 'on _get_user_data') return _UserData(user_id, password, name, response['email'], response['description'], response.get('domain_id'), response.get('enabled'), {'user': response}, {'user': request}) def _get_domain_data(self, domain_name=None, description=None, enabled=None): domain_id = uuid.uuid4().hex domain_name = domain_name or self.getUniqueString('domainName') response = {'id': domain_id, 'name': domain_name} request = {'name': domain_name} if enabled is not None: request['enabled'] = bool(enabled) response['enabled'] = bool(enabled) if description: response['description'] = description request['description'] = description response.setdefault('enabled', True) return _DomainData(domain_id, domain_name, description, {'domain': response}, {'domain': request}) def _get_service_data(self, type=None, name=None, description=None, enabled=True): service_id = uuid.uuid4().hex name = name or uuid.uuid4().hex type = type or uuid.uuid4().hex response = {'id': service_id, 'name': name, 'type': type, 'enabled': enabled} if description is not None: response['description'] = description request = response.copy() request.pop('id') return _ServiceData(service_id, name, type, description, enabled, {'service': response}, {'OS-KSADM:service': response}, request) def _get_endpoint_v3_data(self, service_id=None, region=None, url=None, interface=None, enabled=True): endpoint_id = uuid.uuid4().hex service_id = service_id or uuid.uuid4().hex region = region or uuid.uuid4().hex url = url or 'https://example.com/' interface = interface or uuid.uuid4().hex response = {'id': endpoint_id, 'service_id': service_id, 'region': region, 'interface': interface, 'url': url, 'enabled': enabled} request = response.copy() request.pop('id') response['region_id'] = response['region'] return _EndpointDataV3(endpoint_id, service_id, interface, region, url, enabled, {'endpoint': response}, {'endpoint': request}) def _get_endpoint_v2_data(self, service_id=None, region=None, public_url=None, admin_url=None, internal_url=None): endpoint_id = uuid.uuid4().hex service_id = service_id or uuid.uuid4().hex region = region or uuid.uuid4().hex response = {'id': endpoint_id, 'service_id': service_id, 'region': region} v3_endpoints = {} request = response.copy() request.pop('id') if admin_url: response['adminURL'] = admin_url v3_endpoints['admin'] = self._get_endpoint_v3_data( service_id, region, public_url, interface='admin') if internal_url: response['internalURL'] = internal_url v3_endpoints['internal'] = self._get_endpoint_v3_data( service_id, region, internal_url, interface='internal') if public_url: response['publicURL'] = public_url v3_endpoints['public'] = self._get_endpoint_v3_data( service_id, region, public_url, 
interface='public') request = response.copy() request.pop('id') for u in ('publicURL', 'internalURL', 'adminURL'): if request.get(u): request[u.lower()] = request.pop(u) return _EndpointDataV2(endpoint_id, service_id, region, public_url, internal_url, admin_url, v3_endpoints, {'endpoint': response}, {'endpoint': request}) def _get_role_data(self, role_name=None): role_id = uuid.uuid4().hex role_name = role_name or uuid.uuid4().hex request = {'name': role_name} response = request.copy() response['id'] = role_id return _RoleData(role_id, role_name, {'role': response}, {'role': request}) def use_nothing(self): self.calls = [] self._uri_registry.clear() def use_keystone_v3(self, catalog='catalog-v3.json'): self.adapter = self.useFixture(rm_fixture.Fixture()) self.calls = [] self._uri_registry.clear() self.__do_register_uris([ dict(method='GET', uri='https://identity.example.com/', text=open(self.discovery_json, 'r').read()), dict(method='POST', uri='https://identity.example.com/v3/auth/tokens', headers={ 'X-Subject-Token': self.getUniqueString('KeystoneToken')}, text=open(os.path.join( self.fixtures_directory, catalog), 'r').read() ), ]) self._make_test_cloud(identity_api_version='3') def use_keystone_v2(self): self.adapter = self.useFixture(rm_fixture.Fixture()) self.calls = [] self._uri_registry.clear() self.__do_register_uris([ dict(method='GET', uri='https://identity.example.com/', text=open(self.discovery_json, 'r').read()), dict(method='POST', uri='https://identity.example.com/v2.0/tokens', text=open(os.path.join( self.fixtures_directory, 'catalog-v2.json'), 'r').read() ), ]) self._make_test_cloud(cloud_name='_test_cloud_v2_', identity_api_version='2.0') def _make_test_cloud(self, cloud_name='_test_cloud_', **kwargs): test_cloud = os.environ.get('SHADE_OS_CLOUD', cloud_name) self.cloud_config = self.config.get_one_cloud( cloud=test_cloud, validate=True, **kwargs) self.cloud = shade.OpenStackCloud( cloud_config=self.cloud_config) self.strict_cloud = shade.OpenStackCloud( cloud_config=self.cloud_config, strict=True) self.op_cloud = shade.OperatorCloud( cloud_config=self.cloud_config) def get_glance_discovery_mock_dict( self, image_version_json='image-version.json', image_discovery_url='https://image.example.com/'): discovery_fixture = os.path.join( self.fixtures_directory, image_version_json) return dict(method='GET', uri=image_discovery_url, status_code=300, text=open(discovery_fixture, 'r').read()) def get_designate_discovery_mock_dict(self): discovery_fixture = os.path.join( self.fixtures_directory, "dns.json") return dict(method='GET', uri="https://dns.example.com/", text=open(discovery_fixture, 'r').read()) def get_ironic_discovery_mock_dict(self): discovery_fixture = os.path.join( self.fixtures_directory, "baremetal.json") return dict(method='GET', uri="https://bare-metal.example.com/", text=open(discovery_fixture, 'r').read()) def use_glance( self, image_version_json='image-version.json', image_discovery_url='https://image.example.com/'): # NOTE(notmorgan): This method is only meant to be used in "setUp" # where the ordering of the url being registered is tightly controlled # if the functionality of .use_glance is meant to be used during an # actual test case, use .get_glance_discovery_mock and apply to the # right location in the mock_uris when calling .register_uris self.__do_register_uris([ self.get_glance_discovery_mock_dict( image_version_json, image_discovery_url)]) def use_designate(self): # NOTE(slaweq): This method is only meant to be used in "setUp" # where the ordering of 
the url being registered is tightly controlled # if the functionality of .use_designate is meant to be used during an # actual test case, use .get_designate_discovery_mock and apply to the # right location in the mock_uris when calling .register_uris self.__do_register_uris([ self.get_designate_discovery_mock_dict()]) def use_ironic(self): # NOTE(TheJulia): This method is only meant to be used in "setUp" # where the ordering of the url being registered is tightly controlled # if the functionality of .use_ironic is meant to be used during an # actual test case, use .get_ironic_discovery_mock and apply to the # right location in the mock_uris when calling .register_uris self.__do_register_uris([ self.get_ironic_discovery_mock_dict()]) def register_uris(self, uri_mock_list=None): """Mock a list of URIs and responses via requests mock. This method may be called only once per test-case to avoid odd and difficult to debug interactions. Discovery and Auth request mocking happens separately from this method. :param uri_mock_list: List of dictionaries that template out what is passed to requests_mock fixture's `register_uri`. Format is: {'method': , 'uri': , ... } Common keys to pass in the dictionary: * json: the json response (dict) * status_code: the HTTP status (int) * validate: The request body (dict) to validate with assert_calls all key-word arguments that are valid to send to requests_mock are supported. This list should be in the order in which calls are made. When `assert_calls` is executed, order here will be validated. Duplicate URIs and Methods are allowed and will be collapsed into a single matcher. Each response will be returned in order as the URI+Method is hit. :type uri_mock_list: list :return: None """ assert not self.__register_uris_called self.__do_register_uris(uri_mock_list or []) self.__register_uris_called = True def __do_register_uris(self, uri_mock_list=None): for to_mock in uri_mock_list: kw_params = {k: to_mock.pop(k) for k in ('request_headers', 'complete_qs', '_real_http') if k in to_mock} method = to_mock.pop('method') uri = to_mock.pop('uri') # NOTE(notmorgan): make sure the delimiter is non-url-safe, in this # case "|" is used so that the split can be a bit easier on # maintainers of this code. key = '{method}|{uri}|{params}'.format( method=method, uri=uri, params=kw_params) validate = to_mock.pop('validate', {}) valid_keys = set(['json', 'headers', 'params']) invalid_keys = set(validate.keys()) - valid_keys if invalid_keys: raise TypeError( "Invalid values passed to validate: {keys}".format( keys=invalid_keys)) headers = structures.CaseInsensitiveDict(to_mock.pop('headers', {})) if 'content-type' not in headers: headers[u'content-type'] = 'application/json' to_mock['headers'] = headers self.calls += [ dict( method=method, url=uri, **validate) ] self._uri_registry.setdefault( key, {'response_list': [], 'kw_params': kw_params}) if self._uri_registry[key]['kw_params'] != kw_params: raise AssertionError( 'PROGRAMMING ERROR: key-word-params ' 'should be part of the uri_key and cannot change, ' 'it will affect the matcher in requests_mock. 
' '%(old)r != %(new)r' % {'old': self._uri_registry[key]['kw_params'], 'new': kw_params}) self._uri_registry[key]['response_list'].append(to_mock) for mocked, params in self._uri_registry.items(): mock_method, mock_uri, _ignored = mocked.split('|', 2) self.adapter.register_uri( mock_method, mock_uri, params['response_list'], **params['kw_params']) def assert_calls(self, stop_after=None, do_count=True): for (x, (call, history)) in enumerate( zip(self.calls, self.adapter.request_history)): if stop_after and x > stop_after: break call_uri_parts = urllib.parse.urlparse(call['url']) history_uri_parts = urllib.parse.urlparse(history.url) self.assertEqual( (call['method'], call_uri_parts.scheme, call_uri_parts.netloc, call_uri_parts.path, call_uri_parts.params, urllib.parse.parse_qs(call_uri_parts.query)), (history.method, history_uri_parts.scheme, history_uri_parts.netloc, history_uri_parts.path, history_uri_parts.params, urllib.parse.parse_qs(history_uri_parts.query)), ('REST mismatch on call %(index)d. Expected %(call)r. ' 'Got %(history)r). ' 'NOTE: query string order differences wont cause mismatch' % { 'index': x, 'call': '{method} {url}'.format(method=call['method'], url=call['url']), 'history': '{method} {url}'.format( method=history.method, url=history.url)}) ) if 'json' in call: self.assertEqual( call['json'], history.json(), 'json content mismatch in call {index}'.format(index=x)) # headers in a call isn't exhaustive - it's checking to make sure # a specific header or headers are there, not that they are the # only headers if 'headers' in call: for key, value in call['headers'].items(): self.assertEqual( value, history.headers[key], 'header mismatch in call {index}'.format(index=x)) if do_count: self.assertEqual( len(self.calls), len(self.adapter.request_history)) class IronicTestCase(RequestsMockTestCase): def setUp(self): super(IronicTestCase, self).setUp() self.use_ironic() self.uuid = str(uuid.uuid4()) self.name = self.getUniqueString('name') def get_mock_url(self, resource=None, append=None, qs_elements=None): return super(IronicTestCase, self).get_mock_url( service_type='baremetal', interface='public', resource=resource, append=append, base_url_append='v1', qs_elements=qs_elements) shade-1.31.0/shade/tests/unit/test_delete_volume_snapshot.py0000666000175000017500000000752213440327640024303 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_delete_volume_snapshot ---------------------------------- Tests for the `delete_volume_snapshot` command. """ from shade import exc from shade import meta from shade.tests import fakes from shade.tests.unit import base class TestDeleteVolumeSnapshot(base.RequestsMockTestCase): def test_delete_volume_snapshot(self): """ Test that delete_volume_snapshot without a wait returns True instance when the volume snapshot deletes. 
""" fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', 'detail']), json={'snapshots': [fake_snapshot_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', fake_snapshot_dict['id']]))]) self.assertTrue( self.cloud.delete_volume_snapshot(name_or_id='1234', wait=False)) self.assert_calls() def test_delete_volume_snapshot_with_error(self): """ Test that a exception while deleting a volume snapshot will cause an OpenStackCloudException. """ fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', 'detail']), json={'snapshots': [fake_snapshot_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', fake_snapshot_dict['id']]), status_code=404)]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_volume_snapshot, name_or_id='1234') self.assert_calls() def test_delete_volume_snapshot_with_timeout(self): """ Test that a timeout while waiting for the volume snapshot to delete raises an exception in delete_volume_snapshot. """ fake_snapshot = fakes.FakeVolumeSnapshot('1234', 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', 'detail']), json={'snapshots': [fake_snapshot_dict]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots', fake_snapshot_dict['id']]))]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.delete_volume_snapshot, name_or_id='1234', wait=True, timeout=0.01) self.assert_calls(do_count=False) shade-1.31.0/shade/tests/unit/test__adapter.py0000666000175000017500000000303313440327640021303 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from testscenarios import load_tests_apply_scenarios as load_tests # noqa from shade import _adapter from shade.tests.unit import base class TestExtractName(base.TestCase): scenarios = [ ('slash_servers_bare', dict(url='/servers', parts=['servers'])), ('slash_servers_arg', dict(url='/servers/1', parts=['servers'])), ('servers_bare', dict(url='servers', parts=['servers'])), ('servers_arg', dict(url='servers/1', parts=['servers'])), ('networks_bare', dict(url='/v2.0/networks', parts=['networks'])), ('networks_arg', dict(url='/v2.0/networks/1', parts=['networks'])), ('tokens', dict(url='/v3/tokens', parts=['tokens'])), ('discovery', dict(url='/', parts=['discovery'])), ('secgroups', dict( url='/servers/1/os-security-groups', parts=['servers', 'os-security-groups'])), ] def test_extract_name(self): results = _adapter.extract_name(self.url) self.assertEqual(self.parts, results) shade-1.31.0/shade/tests/unit/test_caching.py0000666000175000017500000005747613440327640021144 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import concurrent import time import testtools from testscenarios import load_tests_apply_scenarios as load_tests # noqa import shade.openstackcloud from shade import exc from shade import meta from shade.tests import fakes from shade.tests.unit import base # Mock out the gettext function so that the task schema can be copypasta def _(msg): return msg _TASK_PROPERTIES = { "id": { "description": _("An identifier for the task"), "pattern": _('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}' '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'), "type": "string" }, "type": { "description": _("The type of task represented by this content"), "enum": [ "import", ], "type": "string" }, "status": { "description": _("The current status of this task"), "enum": [ "pending", "processing", "success", "failure" ], "type": "string" }, "input": { "description": _("The parameters required by task, JSON blob"), "type": ["null", "object"], }, "result": { "description": _("The result of current task, JSON blob"), "type": ["null", "object"], }, "owner": { "description": _("An identifier for the owner of this task"), "type": "string" }, "message": { "description": _("Human-readable informative message only included" " when appropriate (usually on failure)"), "type": "string", }, "expires_at": { "description": _("Datetime when this resource would be" " subject to removal"), "type": ["null", "string"] }, "created_at": { "description": _("Datetime when this resource was created"), "type": "string" }, "updated_at": { "description": _("Datetime when this resource was updated"), "type": "string" }, 'self': {'type': 'string'}, 'schema': {'type': 'string'} } _TASK_SCHEMA = dict( name='Task', properties=_TASK_PROPERTIES, additionalProperties=False, ) class TestMemoryCache(base.RequestsMockTestCase): def setUp(self): super(TestMemoryCache, self).setUp( cloud_config_fixture='clouds_cache.yaml') def _image_dict(self, fake_image): return self.cloud._normalize_image(meta.obj_to_munch(fake_image)) def 
_munch_images(self, fake_image): return self.cloud._normalize_images([fake_image]) def test_openstack_cloud(self): self.assertIsInstance(self.cloud, shade.OpenStackCloud) def test_list_projects_v3(self): project_one = self._get_project_data() project_two = self._get_project_data() project_list = [project_one, project_two] first_response = {'projects': [project_one.json_response['project']]} second_response = {'projects': [p.json_response['project'] for p in project_list]} mock_uri = self.get_mock_url( service_type='identity', interface='admin', resource='projects', base_url_append='v3') self.register_uris([ dict(method='GET', uri=mock_uri, status_code=200, json=first_response), dict(method='GET', uri=mock_uri, status_code=200, json=second_response)]) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['projects'])), self.cloud.list_projects()) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['projects'])), self.cloud.list_projects()) # invalidate the list_projects cache self.cloud.list_projects.invalidate(self.cloud) # ensure the new values are now retrieved self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(second_response['projects'])), self.cloud.list_projects()) self.assert_calls() def test_list_projects_v2(self): self.use_keystone_v2() project_one = self._get_project_data(v3=False) project_two = self._get_project_data(v3=False) project_list = [project_one, project_two] first_response = {'tenants': [project_one.json_response['tenant']]} second_response = {'tenants': [p.json_response['tenant'] for p in project_list]} mock_uri = self.get_mock_url( service_type='identity', interface='admin', resource='tenants') self.register_uris([ dict(method='GET', uri=mock_uri, status_code=200, json=first_response), dict(method='GET', uri=mock_uri, status_code=200, json=second_response)]) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['tenants'])), self.cloud.list_projects()) self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(first_response['tenants'])), self.cloud.list_projects()) # invalidate the list_projects cache self.cloud.list_projects.invalidate(self.cloud) # ensure the new values are now retrieved self.assertEqual( self.cloud._normalize_projects( meta.obj_list_to_munch(second_response['tenants'])), self.cloud.list_projects()) self.assert_calls() def test_list_servers_no_herd(self): self.cloud._SERVER_AGE = 2 fake_server = fakes.make_fake_server('1234', 'name') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), ]) with concurrent.futures.ThreadPoolExecutor(16) as pool: for i in range(16): pool.submit(lambda: self.cloud.list_servers(bare=True)) # It's possible to race-condition 16 threads all in the # single initial lock without a tiny sleep time.sleep(0.001) self.assert_calls() def test_list_volumes(self): fake_volume = fakes.FakeVolume('volume1', 'available', 'Volume 1 Display Name') fake_volume_dict = meta.obj_to_munch(fake_volume) fake_volume2 = fakes.FakeVolume('volume2', 'available', 'Volume 2 Display Name') fake_volume2_dict = meta.obj_to_munch(fake_volume2) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': 
[fake_volume_dict, fake_volume2_dict]})]) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict)], self.cloud.list_volumes()) # this call should hit the cache self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict)], self.cloud.list_volumes()) self.cloud.list_volumes.invalidate(self.cloud) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict), self.cloud._normalize_volume(fake_volume2_dict)], self.cloud.list_volumes()) self.assert_calls() def test_list_volumes_creating_invalidates(self): fake_volume = fakes.FakeVolume('volume1', 'creating', 'Volume 1 Display Name') fake_volume_dict = meta.obj_to_munch(fake_volume) fake_volume2 = fakes.FakeVolume('volume2', 'available', 'Volume 2 Display Name') fake_volume2_dict = meta.obj_to_munch(fake_volume2) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volume_dict, fake_volume2_dict]})]) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict)], self.cloud.list_volumes()) self.assertEqual( [self.cloud._normalize_volume(fake_volume_dict), self.cloud._normalize_volume(fake_volume2_dict)], self.cloud.list_volumes()) self.assert_calls() def test_create_volume_invalidates(self): fake_volb4 = meta.obj_to_munch( fakes.FakeVolume('volume1', 'available', '')) _id = '12345' fake_vol_creating = meta.obj_to_munch( fakes.FakeVolume(_id, 'creating', '')) fake_vol_avail = meta.obj_to_munch( fakes.FakeVolume(_id, 'available', '')) def now_deleting(request, context): fake_vol_avail['status'] = 'deleting' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes']), json={'volume': fake_vol_creating}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4, fake_vol_creating]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4, fake_vol_avail]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4, fake_vol_avail]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', _id]), json=now_deleting), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['volumes', 'detail']), json={'volumes': [fake_volb4]})]) self.assertEqual( [self.cloud._normalize_volume(fake_volb4)], self.cloud.list_volumes()) volume = dict(display_name='junk_vol', size=1, display_description='test junk volume') self.cloud.create_volume(wait=True, timeout=None, **volume) # If cache was not invalidated, we would not see our own volume here # because the first volume was available and thus would already be # cached. 
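        # (Roughly, the pattern under test: list_volumes() is memoized, and
        # create_volume()/delete_volume() are expected to call
        # self.list_volumes.invalidate(self) so that the next list_volumes()
        # re-fetches from the API rather than serving the stale cached list.
        # This comment is a sketch of the intent, not the implementation.)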
self.assertEqual( [self.cloud._normalize_volume(fake_volb4), self.cloud._normalize_volume(fake_vol_avail)], self.cloud.list_volumes()) self.cloud.delete_volume(_id) # And now delete and check same thing since list is cached as all # available self.assertEqual( [self.cloud._normalize_volume(fake_volb4)], self.cloud.list_volumes()) self.assert_calls() def test_list_users(self): user_data = self._get_user_data(email='test@example.com') self.register_uris([ dict(method='GET', uri=self.get_mock_url( service_type='identity', interface='admin', resource='users', base_url_append='v3'), status_code=200, json={'users': [user_data.json_response['user']]})]) users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual(user_data.user_id, users[0]['id']) self.assertEqual(user_data.name, users[0]['name']) self.assertEqual(user_data.email, users[0]['email']) self.assert_calls() def test_modify_user_invalidates_cache(self): self.use_keystone_v2() user_data = self._get_user_data(email='test@example.com') new_resp = {'user': user_data.json_response['user'].copy()} new_resp['user']['email'] = 'Nope@Nope.Nope' new_req = {'user': {'email': new_resp['user']['email']}} mock_users_url = self.get_mock_url( service_type='identity', interface='admin', resource='users') mock_user_resource_url = self.get_mock_url( service_type='identity', interface='admin', resource='users', append=[user_data.user_id]) empty_user_list_resp = {'users': []} users_list_resp = {'users': [user_data.json_response['user']]} updated_users_list_resp = {'users': [new_resp['user']]} # Password is None in the original create below user_data.json_request['user']['password'] = None uris_to_mock = [ # Initial User List is Empty dict(method='GET', uri=mock_users_url, status_code=200, json=empty_user_list_resp), # POST to create the user # GET to get the user data after POST dict(method='POST', uri=mock_users_url, status_code=200, json=user_data.json_response, validate=dict(json=user_data.json_request)), # List Users Call dict(method='GET', uri=mock_users_url, status_code=200, json=users_list_resp), # List users to get ID for update # Get user using user_id from list # Update user # Get updated user dict(method='GET', uri=mock_users_url, status_code=200, json=users_list_resp), dict(method='PUT', uri=mock_user_resource_url, status_code=200, json=new_resp, validate=dict(json=new_req)), # List Users Call dict(method='GET', uri=mock_users_url, status_code=200, json=updated_users_list_resp), # List User to get ID for delete # Get user using user_id from list # delete user dict(method='GET', uri=mock_users_url, status_code=200, json=updated_users_list_resp), dict(method='GET', uri=mock_user_resource_url, status_code=200, json=new_resp), dict(method='DELETE', uri=mock_user_resource_url, status_code=204), # List Users Call (empty post delete) dict(method='GET', uri=mock_users_url, status_code=200, json=empty_user_list_resp) ] self.register_uris(uris_to_mock) # first cache an empty list self.assertEqual([], self.cloud.list_users()) # now add one created = self.cloud.create_user(name=user_data.name, email=user_data.email) self.assertEqual(user_data.user_id, created['id']) self.assertEqual(user_data.name, created['name']) self.assertEqual(user_data.email, created['email']) # Cache should have been invalidated users = self.cloud.list_users() self.assertEqual(user_data.user_id, users[0]['id']) self.assertEqual(user_data.name, users[0]['name']) self.assertEqual(user_data.email, users[0]['email']) # Update and check to see if it is updated updated 
= self.cloud.update_user(user_data.user_id, email=new_resp['user']['email']) self.assertEqual(user_data.user_id, updated.id) self.assertEqual(user_data.name, updated.name) self.assertEqual(new_resp['user']['email'], updated.email) users = self.cloud.list_users() self.assertEqual(1, len(users)) self.assertEqual(user_data.user_id, users[0]['id']) self.assertEqual(user_data.name, users[0]['name']) self.assertEqual(new_resp['user']['email'], users[0]['email']) # Now delete and ensure it disappears self.cloud.delete_user(user_data.user_id) self.assertEqual([], self.cloud.list_users()) self.assert_calls() def test_list_flavors(self): mock_uri = '{endpoint}/flavors/detail?is_public=None'.format( endpoint=fakes.COMPUTE_ENDPOINT) uris_to_mock = [ dict(method='GET', uri=mock_uri, json={'flavors': []}), dict(method='GET', uri=mock_uri, json={'flavors': fakes.FAKE_FLAVOR_LIST}) ] uris_to_mock.extend([ dict(method='GET', uri='{endpoint}/flavors/{id}/os-extra_specs'.format( endpoint=fakes.COMPUTE_ENDPOINT, id=flavor['id']), json={'extra_specs': {}}) for flavor in fakes.FAKE_FLAVOR_LIST]) self.register_uris(uris_to_mock) self.assertEqual([], self.cloud.list_flavors()) self.assertEqual([], self.cloud.list_flavors()) fake_flavor_dicts = self.cloud._normalize_flavors( fakes.FAKE_FLAVOR_LIST) self.cloud.list_flavors.invalidate(self.cloud) self.assertEqual(fake_flavor_dicts, self.cloud.list_flavors()) self.assert_calls() def test_list_images(self): self.use_glance() fake_image = fakes.make_fake_image(image_id='42') self.register_uris([ dict(method='GET', uri=self.get_mock_url('image', 'public', append=['v2', 'images']), json={'images': []}), dict(method='GET', uri=self.get_mock_url('image', 'public', append=['v2', 'images']), json={'images': [fake_image]}), ]) self.assertEqual([], self.cloud.list_images()) self.assertEqual([], self.cloud.list_images()) self.cloud.list_images.invalidate(self.cloud) self.assertEqual( self._munch_images(fake_image), self.cloud.list_images()) self.assert_calls() def test_list_images_caches_deleted_status(self): self.use_glance() deleted_image_id = self.getUniqueString() deleted_image = fakes.make_fake_image( image_id=deleted_image_id, status='deleted') active_image_id = self.getUniqueString() active_image = fakes.make_fake_image(image_id=active_image_id) list_return = {'images': [active_image, deleted_image]} self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=list_return), ]) self.assertEqual( [self.cloud._normalize_image(active_image)], self.cloud.list_images()) self.assertEqual( [self.cloud._normalize_image(active_image)], self.cloud.list_images()) # We should only have one call self.assert_calls() def test_cache_no_cloud_name(self): self.use_glance() self.cloud.name = None fi = fakes.make_fake_image(image_id=self.getUniqueString()) fi2 = fakes.make_fake_image(image_id=self.getUniqueString()) self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [fi]}), dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [fi, fi2]}), ]) self.assertEqual( self._munch_images(fi), self.cloud.list_images()) # Now test that the list was cached self.assertEqual( self._munch_images(fi), self.cloud.list_images()) # Invalidation too self.cloud.list_images.invalidate(self.cloud) self.assertEqual( [ self.cloud._normalize_image(fi), self.cloud._normalize_image(fi2) ], self.cloud.list_images()) class TestCacheIgnoresQueuedStatus(base.RequestsMockTestCase): scenarios = [ ('queued', 
dict(status='queued')), ('saving', dict(status='saving')), ('pending_delete', dict(status='pending_delete')), ] def setUp(self): super(TestCacheIgnoresQueuedStatus, self).setUp( cloud_config_fixture='clouds_cache.yaml') self.use_glance() active_image_id = self.getUniqueString() self.active_image = fakes.make_fake_image( image_id=active_image_id, status=self.status) self.active_list_return = {'images': [self.active_image]} steady_image_id = self.getUniqueString() self.steady_image = fakes.make_fake_image(image_id=steady_image_id) self.steady_list_return = { 'images': [self.active_image, self.steady_image]} def test_list_images_ignores_pending_status(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=self.active_list_return), dict(method='GET', uri='https://image.example.com/v2/images', json=self.steady_list_return), ]) self.assertEqual( [self.cloud._normalize_image(self.active_image)], self.cloud.list_images()) # Should expect steady_image to appear if active wasn't cached self.assertEqual( [ self.cloud._normalize_image(self.active_image), self.cloud._normalize_image(self.steady_image) ], self.cloud.list_images()) class TestCacheSteadyStatus(base.RequestsMockTestCase): scenarios = [ ('active', dict(status='active')), ('killed', dict(status='killed')), ] def setUp(self): super(TestCacheSteadyStatus, self).setUp( cloud_config_fixture='clouds_cache.yaml') self.use_glance() active_image_id = self.getUniqueString() self.active_image = fakes.make_fake_image( image_id=active_image_id, status=self.status) self.active_list_return = {'images': [self.active_image]} def test_list_images_caches_steady_status(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=self.active_list_return), ]) self.assertEqual( [self.cloud._normalize_image(self.active_image)], self.cloud.list_images()) self.assertEqual( [self.cloud._normalize_image(self.active_image)], self.cloud.list_images()) # We should only have one call self.assert_calls() class TestBogusAuth(base.RequestsMockTestCase): def setUp(self): super(TestBogusAuth, self).setUp( cloud_config_fixture='clouds_cache.yaml') def test_get_auth_bogus(self): with testtools.ExpectedException(exc.OpenStackCloudException): shade.openstack_cloud( cloud='_bogus_test_', config=self.config) shade-1.31.0/shade/tests/unit/test_server_delete_metadata.py0000666000175000017500000000503313440327640024216 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_server_delete_metadata ---------------------------------- Tests for the `delete_server_metadata` command. 
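These tests resolve the server by listing servers/detail and then issue
DELETE /servers/{id}/metadata/{key} against requests-mock; a mocked 404 is
expected to surface as OpenStackCloudURINotFound.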
""" import uuid from shade.exc import OpenStackCloudURINotFound from shade.tests import fakes from shade.tests.unit import base class TestServerDeleteMetadata(base.RequestsMockTestCase): def setUp(self): super(TestServerDeleteMetadata, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_server_delete_metadata_with_exception(self): """ Test that a missing metadata throws an exception. """ self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata', 'key']), status_code=404), ]) self.assertRaises( OpenStackCloudURINotFound, self.cloud.delete_server_metadata, self.server_name, ['key']) self.assert_calls() def test_server_delete_metadata(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata', 'key']), status_code=200), ]) self.cloud.delete_server_metadata(self.server_id, ['key']) self.assert_calls() shade-1.31.0/shade/tests/unit/test_quotas.py0000666000175000017500000002206213440327640021043 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from shade import exc from shade.tests.unit import base fake_quota_set = { "cores": 20, "fixed_ips": -1, "floating_ips": 10, "injected_file_content_bytes": 10240, "injected_file_path_bytes": 255, "injected_files": 5, "instances": 10, "key_pairs": 100, "metadata_items": 128, "ram": 51200, "security_group_rules": 20, "security_groups": 45, "server_groups": 10, "server_group_members": 10 } class TestQuotas(base.RequestsMockTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(TestQuotas, self).setUp( cloud_config_fixture=cloud_config_fixture) def test_update_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['os-quota-sets', project.project_id]), json={'quota_set': fake_quota_set}, validate=dict( json={ 'quota_set': { 'cores': 1, 'force': True }})), ]) self.op_cloud.set_compute_quotas(project.project_id, cores=1) self.assert_calls() def test_update_quotas_bad_request(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'compute', 'public', append=['os-quota-sets', project.project_id]), status_code=400), ]) self.assertRaises(exc.OpenStackCloudException, self.op_cloud.set_compute_quotas, project.project_id) self.assert_calls() def test_get_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-quota-sets', project.project_id]), json={'quota_set': fake_quota_set}), ]) self.op_cloud.get_compute_quotas(project.project_id) self.assert_calls() def test_delete_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-quota-sets', project.project_id])), ]) self.op_cloud.delete_compute_quotas(project.project_id) self.assert_calls() def test_cinder_update_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'volumev2', 'public', append=['os-quota-sets', project.project_id]), json=dict(quota_set={'volumes': 1}), validate=dict( json={'quota_set': { 'volumes': 1, 'tenant_id': project.project_id}}))]) self.op_cloud.set_volume_quotas(project.project_id, volumes=1) self.assert_calls() def test_cinder_get_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['os-quota-sets', project.project_id]), json=dict(quota_set={'snapshots': 10, 'volumes': 20}))]) self.op_cloud.get_volume_quotas(project.project_id) self.assert_calls() def test_cinder_delete_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['os-quota-sets', project.project_id]))]) self.op_cloud.delete_volume_quotas(project.project_id) self.assert_calls() def test_neutron_update_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s.json' % project.project_id]), json={}, validate=dict( json={'quota': {'network': 1}})) ]) 
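        # Note for readers: unlike the nova and cinder quota URLs above,
        # neutron's take the project id with a '.json' suffix
        # ('v2.0/quotas/<project_id>.json') and key the body 'quota' rather
        # than 'quota_set', as the validate dict above shows.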
self.op_cloud.set_network_quotas(project.project_id, network=1) self.assert_calls() def test_neutron_get_quotas(self): quota = { 'subnet': 100, 'network': 100, 'floatingip': 50, 'subnetpool': -1, 'security_group_rule': 100, 'security_group': 10, 'router': 10, 'rbac_policy': 10, 'port': 500 } project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s.json' % project.project_id]), json={'quota': quota}) ]) received_quota = self.op_cloud.get_network_quotas(project.project_id) self.assertDictEqual(quota, received_quota) self.assert_calls() def test_neutron_get_quotas_details(self): quota_details = { 'subnet': { 'limit': 100, 'used': 7, 'reserved': 0}, 'network': { 'limit': 100, 'used': 6, 'reserved': 0}, 'floatingip': { 'limit': 50, 'used': 0, 'reserved': 0}, 'subnetpool': { 'limit': -1, 'used': 2, 'reserved': 0}, 'security_group_rule': { 'limit': 100, 'used': 4, 'reserved': 0}, 'security_group': { 'limit': 10, 'used': 1, 'reserved': 0}, 'router': { 'limit': 10, 'used': 2, 'reserved': 0}, 'rbac_policy': { 'limit': 10, 'used': 2, 'reserved': 0}, 'port': { 'limit': 500, 'used': 7, 'reserved': 0} } project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s/details.json' % project.project_id]), json={'quota': quota_details}) ]) received_quota_details = self.op_cloud.get_network_quotas( project.project_id, details=True) self.assertDictEqual(quota_details, received_quota_details) self.assert_calls() def test_neutron_delete_quotas(self): project = self.mock_for_keystone_projects(project_count=1, list_get=True)[0] self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'quotas', '%s.json' % project.project_id]), json={}) ]) self.op_cloud.delete_network_quotas(project.project_id) self.assert_calls() shade-1.31.0/shade/tests/unit/test_create_volume_snapshot.py0000666000175000017500000001251513440327640024302 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_create_volume_snapshot ---------------------------------- Tests for the `create_volume_snapshot` command. """ from shade import exc from shade import meta from shade.tests import fakes from shade.tests.unit import base class TestCreateVolumeSnapshot(base.RequestsMockTestCase): def test_create_volume_snapshot_wait(self): """ Test that create_volume_snapshot with a wait returns the volume snapshot when its status changes to "available". 
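The mocked sequence below is a POST to /snapshots that returns a 'creating'
snapshot, followed by GET polls that return 'creating' and then 'available'.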
""" snapshot_id = '5678' volume_id = '1234' build_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'creating', 'foo', 'derpysnapshot') build_snapshot_dict = meta.obj_to_munch(build_snapshot) fake_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'available', 'foo', 'derpysnapshot') fake_snapshot_dict = meta.obj_to_munch(fake_snapshot) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots']), json={'snapshot': build_snapshot_dict}, validate=dict(json={ 'snapshot': {'force': False, 'volume_id': '1234'}})), dict(method='GET', uri=self.get_mock_url('volumev2', 'public', append=['snapshots', snapshot_id]), json={'snapshot': build_snapshot_dict}), dict(method='GET', uri=self.get_mock_url('volumev2', 'public', append=['snapshots', snapshot_id]), json={'snapshot': fake_snapshot_dict})]) self.assertEqual( self.cloud._normalize_volume(fake_snapshot_dict), self.cloud.create_volume_snapshot(volume_id=volume_id, wait=True) ) self.assert_calls() def test_create_volume_snapshot_with_timeout(self): """ Test that a timeout while waiting for the volume snapshot to create raises an exception in create_volume_snapshot. """ snapshot_id = '5678' volume_id = '1234' build_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'creating', 'foo', 'derpysnapshot') build_snapshot_dict = meta.obj_to_munch(build_snapshot) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots']), json={'snapshot': build_snapshot_dict}, validate=dict(json={ 'snapshot': {'force': False, 'volume_id': '1234'}})), dict(method='GET', uri=self.get_mock_url('volumev2', 'public', append=['snapshots', snapshot_id]), json={'snapshot': build_snapshot_dict})]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud.create_volume_snapshot, volume_id=volume_id, wait=True, timeout=0.01) self.assert_calls(do_count=False) def test_create_volume_snapshot_with_error(self): """ Test that a error status while waiting for the volume snapshot to create raises an exception in create_volume_snapshot. """ snapshot_id = '5678' volume_id = '1234' build_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'creating', 'bar', 'derpysnapshot') build_snapshot_dict = meta.obj_to_munch(build_snapshot) error_snapshot = fakes.FakeVolumeSnapshot(snapshot_id, 'error', 'blah', 'derpysnapshot') error_snapshot_dict = meta.obj_to_munch(error_snapshot) self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['snapshots']), json={'snapshot': build_snapshot_dict}, validate=dict(json={ 'snapshot': {'force': False, 'volume_id': '1234'}})), dict(method='GET', uri=self.get_mock_url('volumev2', 'public', append=['snapshots', snapshot_id]), json={'snapshot': build_snapshot_dict}), dict(method='GET', uri=self.get_mock_url('volumev2', 'public', append=['snapshots', snapshot_id]), json={'snapshot': error_snapshot_dict})]) self.assertRaises( exc.OpenStackCloudException, self.cloud.create_volume_snapshot, volume_id=volume_id, wait=True, timeout=5) self.assert_calls() shade-1.31.0/shade/tests/unit/test_task_manager.py0000666000175000017500000000602013440327640022157 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import concurrent.futures import mock from shade import task_manager from shade.tests.unit import base class TestException(Exception): pass class TaskTest(task_manager.Task): def main(self, client): raise TestException("This is a test exception") class TaskTestGenerator(task_manager.Task): def main(self, client): yield 1 class TaskTestInt(task_manager.Task): def main(self, client): return int(1) class TaskTestFloat(task_manager.Task): def main(self, client): return float(2.0) class TaskTestStr(task_manager.Task): def main(self, client): return "test" class TaskTestBool(task_manager.Task): def main(self, client): return True class TaskTestSet(task_manager.Task): def main(self, client): return set([1, 2]) class TaskTestAsync(task_manager.Task): def __init__(self): super(task_manager.Task, self).__init__() self.run_async = True def main(self, client): pass class TestTaskManager(base.RequestsMockTestCase): def setUp(self): super(TestTaskManager, self).setUp() self.manager = task_manager.TaskManager(name='test', client=self) def test_wait_re_raise(self): """Test that exceptions thrown in a Task are reraised correctly This test is aimed at six.reraise(), called in Task::wait(). Specifically, we test that we get the same behaviour with all the configured interpreters (e.g. py27, py34, pypy, ...) """ self.assertRaises(TestException, self.manager.submit_task, TaskTest()) def test_dont_munchify_int(self): ret = self.manager.submit_task(TaskTestInt()) self.assertIsInstance(ret, int) def test_dont_munchify_float(self): ret = self.manager.submit_task(TaskTestFloat()) self.assertIsInstance(ret, float) def test_dont_munchify_str(self): ret = self.manager.submit_task(TaskTestStr()) self.assertIsInstance(ret, str) def test_dont_munchify_bool(self): ret = self.manager.submit_task(TaskTestBool()) self.assertIsInstance(ret, bool) def test_dont_munchify_set(self): ret = self.manager.submit_task(TaskTestSet()) self.assertIsInstance(ret, set) @mock.patch.object(concurrent.futures.ThreadPoolExecutor, 'submit') def test_async(self, mock_submit): self.manager.submit_task(TaskTestAsync()) self.assertTrue(mock_submit.called) shade-1.31.0/shade/tests/unit/test_server_set_metadata.py0000666000175000017500000000506613440327640023553 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_server_set_metadata ---------------------------------- Tests for the `set_server_metadata` command. 
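Each test resolves the server via servers/detail and then POSTs
{'metadata': {'meta': 'data'}} to /servers/{id}/metadata; a mocked 400 is
expected to surface as OpenStackCloudBadRequest.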
""" import uuid from shade.exc import OpenStackCloudBadRequest from shade.tests import fakes from shade.tests.unit import base class TestServerSetMetadata(base.RequestsMockTestCase): def setUp(self): super(TestServerSetMetadata, self).setUp() self.server_id = str(uuid.uuid4()) self.server_name = self.getUniqueString('name') self.fake_server = fakes.make_fake_server( self.server_id, self.server_name) def test_server_set_metadata_with_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata']), validate=dict(json={'metadata': {'meta': 'data'}}), json={}, status_code=400), ]) self.assertRaises( OpenStackCloudBadRequest, self.cloud.set_server_metadata, self.server_name, {'meta': 'data'}) self.assert_calls() def test_server_set_metadata(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [self.fake_server]}), dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['servers', self.fake_server['id'], 'metadata']), validate=dict(json={'metadata': {'meta': 'data'}}), status_code=200), ]) self.cloud.set_server_metadata(self.server_id, {'meta': 'data'}) self.assert_calls() shade-1.31.0/shade/tests/unit/test_delete_server.py0000666000175000017500000002367713440327640022374 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_delete_server ---------------------------------- Tests for the `delete_server` command. 
""" import uuid from shade import exc as shade_exc from shade.tests import fakes from shade.tests.unit import base class TestDeleteServer(base.RequestsMockTestCase): def test_delete_server(self): """ Test that server delete is called when wait=False """ server = fakes.make_fake_server('1234', 'daffy', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), ]) self.assertTrue(self.cloud.delete_server('daffy', wait=False)) self.assert_calls() def test_delete_server_already_gone(self): """ Test that we return immediately when server is already gone """ self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertFalse(self.cloud.delete_server('tweety', wait=False)) self.assert_calls() def test_delete_server_already_gone_wait(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertFalse(self.cloud.delete_server('speedy', wait=True)) self.assert_calls() def test_delete_server_wait_for_deleted(self): """ Test that delete_server waits for the server to be gone """ server = fakes.make_fake_server('9999', 'wily', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '9999'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server('wily', wait=True)) self.assert_calls() def test_delete_server_fails(self): """ Test that delete_server raises non-404 exceptions """ server = fakes.make_fake_server('1212', 'speedy', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1212']), status_code=400), ]) self.assertRaises( shade_exc.OpenStackCloudException, self.cloud.delete_server, 'speedy', wait=False) self.assert_calls() def test_delete_server_no_cinder(self): """ Test that deleting server works when cinder is not available """ orig_has_service = self.cloud.has_service def fake_has_service(service_type): if service_type == 'volume': return False return orig_has_service(service_type) self.cloud.has_service = fake_has_service server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), ]) self.assertTrue(self.cloud.delete_server('porky', wait=False)) self.assert_calls() def test_delete_server_delete_ips(self): """ Test that deleting server and fips works """ server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') fip_id = uuid.uuid4().hex self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', 
uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json'], qs_elements=['floating_ip_address=172.24.5.5']), complete_qs=True, json={'floatingips': [{ 'router_id': 'd23abc8d-2991-4a55-ba98-2aaea84cc72f', 'tenant_id': '4969c491a3c74ee4af974e6d800c62de', 'floating_network_id': '376da547-b977-4cfe-9cba7', 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.5.5', 'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'id': fip_id, 'status': 'ACTIVE'}]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips', '{fip_id}.json'.format(fip_id=fip_id)])), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), complete_qs=True, json={'floatingips': []}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server( 'porky', wait=True, delete_ips=True)) self.assert_calls() def test_delete_server_delete_ips_bad_neutron(self): """ Test that deleting server with a borked neutron doesn't bork """ server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json'], qs_elements=['floating_ip_address=172.24.5.5']), complete_qs=True, status_code=404), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server( 'porky', wait=True, delete_ips=True)) self.assert_calls() def test_delete_server_delete_fips_nova(self): """ Test that delete_ips deletes floating IPs when nova is the floating IP source """ self.cloud._floating_ip_source = 'nova' server = fakes.make_fake_server('1234', 'porky', 'ACTIVE') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server]}), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-floating-ips']), json={'floating_ips': [ { 'fixed_ip': None, 'id': 1, 'instance_id': None, 'ip': '172.24.5.5', 'pool': 'nova' }]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-floating-ips', '1'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-floating-ips']), json={'floating_ips': []}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['servers', '1234'])), dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) self.assertTrue(self.cloud.delete_server( 'porky', wait=True, delete_ips=True)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_image.py0000666000175000017500000011450413440327640020614 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # TODO(mordred) There are mocks of the image_client in here that are not # using requests_mock. Eradicate them. import operator import tempfile import uuid import six import shade import shade.openstackcloud from shade import exc from shade import meta from shade.tests import fakes from shade.tests.unit import base CINDER_URL = 'https://volume.example.com/v2/1c36b64c840a42cd9e9b931a369337f0' class BaseTestImage(base.RequestsMockTestCase): def setUp(self): super(BaseTestImage, self).setUp() self.image_id = str(uuid.uuid4()) self.imagefile = tempfile.NamedTemporaryFile(delete=False) self.imagefile.write(b'\0') self.imagefile.close() self.fake_image_dict = fakes.make_fake_image(image_id=self.image_id) self.fake_search_return = {'images': [self.fake_image_dict]} self.output = uuid.uuid4().bytes self.image_name = self.getUniqueString('image') self.container_name = self.getUniqueString('container') class TestImage(BaseTestImage): def setUp(self): super(TestImage, self).setUp() self.use_glance() def test_config_v1(self): self.cloud.cloud_config.config['image_api_version'] = '1' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v1/', self.cloud._image_client.get_endpoint()) self.assertEqual( '1', self.cloud_config.get_api_version('image')) def test_config_v2(self): self.cloud.cloud_config.config['image_api_version'] = '2' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. 
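        # In other words (illustrative of the assertion below): even though
        # the catalog entry is unversioned, image_api_version='2' should make
        # get_endpoint() return the versioned
        # 'https://image.example.com/v2/' URL.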
self.assertEqual( 'https://image.example.com/v2/', self.cloud._image_client.get_endpoint()) self.assertEqual( '2', self.cloud_config.get_api_version('image')) def test_download_image_no_output(self): self.assertRaises(exc.OpenStackCloudException, self.cloud.download_image, 'fake_image') def test_download_image_two_outputs(self): fake_fd = six.BytesIO() self.assertRaises(exc.OpenStackCloudException, self.cloud.download_image, 'fake_image', output_path='fake_path', output_file=fake_fd) def test_download_image_no_images_found(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=dict(images=[]))]) self.assertRaises(exc.OpenStackCloudResourceNotFound, self.cloud.download_image, 'fake_image', output_path='fake_path') self.assert_calls() def _register_image_mocks(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=self.fake_search_return), dict(method='GET', uri='https://image.example.com/v2/images/{id}/file'.format( id=self.image_id), content=self.output, headers={'Content-Type': 'application/octet-stream'}) ]) def test_download_image_with_fd(self): self._register_image_mocks() output_file = six.BytesIO() self.cloud.download_image('fake_image', output_file=output_file) output_file.seek(0) self.assertEqual(output_file.read(), self.output) self.assert_calls() def test_download_image_with_path(self): self._register_image_mocks() output_file = tempfile.NamedTemporaryFile() self.cloud.download_image('fake_image', output_path=output_file.name) output_file.seek(0) self.assertEqual(output_file.read(), self.output) self.assert_calls() def test_empty_list_images(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}) ]) self.assertEqual([], self.cloud.list_images()) self.assert_calls() def test_list_images(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_list_images_show_all(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images?member_status=all', json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([self.fake_image_dict]), self.cloud.list_images(show_all=True)) self.assert_calls() def test_list_images_show_all_deleted(self): deleted_image = self.fake_image_dict.copy() deleted_image['status'] = 'deleted' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images?member_status=all', json={'images': [self.fake_image_dict, deleted_image]}) ]) self.assertEqual( self.cloud._normalize_images([ self.fake_image_dict, deleted_image]), self.cloud.list_images(show_all=True)) self.assert_calls() def test_list_images_no_filter_deleted(self): deleted_image = self.fake_image_dict.copy() deleted_image['status'] = 'deleted' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [self.fake_image_dict, deleted_image]}) ]) self.assertEqual( self.cloud._normalize_images([ self.fake_image_dict, deleted_image]), self.cloud.list_images(filter_deleted=False)) self.assert_calls() def test_list_images_filter_deleted(self): deleted_image = self.fake_image_dict.copy() deleted_image['status'] = 'deleted' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [self.fake_image_dict, deleted_image]}) ]) self.assertEqual( 
self.cloud._normalize_images([self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_list_images_string_properties(self): image_dict = self.fake_image_dict.copy() image_dict['properties'] = 'list,of,properties' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [image_dict]}), ]) images = self.cloud.list_images() self.assertEqual( self.cloud._normalize_images([image_dict]), images) self.assertEqual( images[0]['properties']['properties'], 'list,of,properties') self.assert_calls() def test_list_images_paginated(self): marker = str(uuid.uuid4()) self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [self.fake_image_dict], 'next': '/v2/images?marker={marker}'.format( marker=marker)}), dict(method='GET', uri=('https://image.example.com/v2/images?' 'marker={marker}'.format(marker=marker)), json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([ self.fake_image_dict, self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_create_image_put_v2(self): self.cloud.image_api_use_tasks = False self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='POST', uri='https://image.example.com/v2/images', json=self.fake_image_dict, validate=dict( json={u'container_format': u'bare', u'disk_format': u'qcow2', u'name': u'fake_image', u'owner_specified.shade.md5': fakes.NO_MD5, u'owner_specified.shade.object': u'images/fake_image', # noqa u'owner_specified.shade.sha256': fakes.NO_SHA256, u'visibility': u'private'}) ), dict(method='PUT', uri='https://image.example.com/v2/images/{id}/file'.format( id=self.image_id), request_headers={'Content-Type': 'application/octet-stream'}), dict(method='GET', uri='https://image.example.com/v2/images', json=self.fake_search_return) ]) self.cloud.create_image( 'fake_image', self.imagefile.name, wait=True, timeout=1, is_public=False) self.assert_calls() self.assertEqual(self.adapter.request_history[5].text.read(), b'\x00') def test_create_image_task(self): self.cloud.image_api_use_tasks = True endpoint = self.cloud._object_store_client.get_endpoint() task_id = str(uuid.uuid4()) args = dict( id=task_id, status='success', type='import', result={ 'image_id': self.image_id, }, ) image_no_checksums = self.fake_image_dict.copy() del(image_no_checksums['owner_specified.shade.md5']) del(image_no_checksums['owner_specified.shade.sha256']) del(image_no_checksums['owner_specified.shade.object']) self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='GET', uri='https://object-store.example.com/info', json=dict( swift={'max_file_size': 1000}, slo={'min_segment_size': 500})), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=endpoint, container=self.container_name), status_code=404), dict(method='PUT', uri='{endpoint}/{container}'.format( endpoint=endpoint, container=self.container_name), status_code=201, headers={'Date': 'Fri, 16 Dec 2016 18:21:20 GMT', 'Content-Length': '0', 'Content-Type': 'text/html; charset=UTF-8'}), dict(method='HEAD', uri='{endpoint}/{container}'.format( endpoint=endpoint, container=self.container_name), headers={'Content-Length': '0', 'X-Container-Object-Count': '0', 'Accept-Ranges': 'bytes', 'X-Storage-Policy': 'Policy-0', 'Date': 'Fri, 16 Dec 2016 18:29:05 GMT', 'X-Timestamp': '1481912480.41664', 'X-Trans-Id': 'tx60ec128d9dbf44b9add68-0058543271dfw1', # noqa 
'X-Container-Bytes-Used': '0', 'Content-Type': 'text/plain; charset=utf-8'}), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name), status_code=404), dict(method='PUT', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name), status_code=201, validate=dict( headers={'x-object-meta-x-shade-md5': fakes.NO_MD5, 'x-object-meta-x-shade-sha256': fakes.NO_SHA256}) ), dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='POST', uri='https://image.example.com/v2/tasks', json=args, validate=dict( json=dict( type='import', input={ 'import_from': '{container}/{object}'.format( container=self.container_name, object=self.image_name), 'image_properties': {'name': self.image_name}})) ), dict(method='GET', uri='https://image.example.com/v2/tasks/{id}'.format( id=task_id), status_code=503, text='Random error'), dict(method='GET', uri='https://image.example.com/v2/tasks/{id}'.format( id=task_id), json=args), dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [image_no_checksums]}), dict(method='PATCH', uri='https://image.example.com/v2/images/{id}'.format( id=self.image_id), validate=dict( json=sorted([{u'op': u'add', u'value': '{container}/{object}'.format( container=self.container_name, object=self.image_name), u'path': u'/owner_specified.shade.object'}, {u'op': u'add', u'value': fakes.NO_MD5, u'path': u'/owner_specified.shade.md5'}, {u'op': u'add', u'value': fakes.NO_SHA256, u'path': u'/owner_specified.shade.sha256'}], key=operator.itemgetter('value')), headers={ 'Content-Type': 'application/openstack-images-v2.1-json-patch'}) ), dict(method='HEAD', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name), headers={ 'X-Timestamp': '1429036140.50253', 'X-Trans-Id': 'txbbb825960a3243b49a36f-005a0dadaedfw1', 'Content-Length': '1290170880', 'Last-Modified': 'Tue, 14 Apr 2015 18:29:01 GMT', 'X-Object-Meta-X-Shade-Sha256': fakes.NO_SHA256, 'X-Object-Meta-X-Shade-Md5': fakes.NO_MD5, 'Date': 'Thu, 16 Nov 2017 15:24:30 GMT', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/octet-stream', 'Etag': fakes.NO_MD5}), dict(method='DELETE', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name)), dict(method='GET', uri='https://image.example.com/v2/images', json=self.fake_search_return) ]) self.cloud.create_image( self.image_name, self.imagefile.name, wait=True, timeout=1, is_public=False, container=self.container_name) self.assert_calls() def test_delete_autocreated_no_tasks(self): self.use_nothing() self.cloud.image_api_use_tasks = False deleted = self.cloud.delete_autocreated_image_objects( container=self.container_name) self.assertFalse(deleted) self.assert_calls() def test_delete_autocreated_image_objects(self): self.use_keystone_v3() self.cloud.image_api_use_tasks = True endpoint = self.cloud._object_store_client.get_endpoint() other_image = self.getUniqueString('no-delete') self.register_uris([ dict(method='GET', uri=self.get_mock_url( service_type='object-store', resource=self.container_name, qs_elements=['format=json']), json=[{ 'content_type': 'application/octet-stream', 'bytes': 1437258240, 'hash': '249219347276c331b87bf1ac2152d9af', 'last_modified': '2015-02-16T17:50:05.289600', 'name': other_image, }, { 'content_type': 'application/octet-stream', 'bytes': 
1290170880, 'hash': fakes.NO_MD5, 'last_modified': '2015-04-14T18:29:00.502530', 'name': self.image_name, }]), dict(method='HEAD', uri=self.get_mock_url( service_type='object-store', resource=self.container_name, append=[other_image]), headers={ 'X-Timestamp': '1429036140.50253', 'X-Trans-Id': 'txbbb825960a3243b49a36f-005a0dadaedfw1', 'Content-Length': '1290170880', 'Last-Modified': 'Tue, 14 Apr 2015 18:29:01 GMT', 'X-Object-Meta-X-Shade-Sha256': 'does not matter', 'X-Object-Meta-X-Shade-Md5': 'does not matter', 'Date': 'Thu, 16 Nov 2017 15:24:30 GMT', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/octet-stream', 'Etag': '249219347276c331b87bf1ac2152d9af', }), dict(method='HEAD', uri=self.get_mock_url( service_type='object-store', resource=self.container_name, append=[self.image_name]), headers={ 'X-Timestamp': '1429036140.50253', 'X-Trans-Id': 'txbbb825960a3243b49a36f-005a0dadaedfw1', 'Content-Length': '1290170880', 'Last-Modified': 'Tue, 14 Apr 2015 18:29:01 GMT', 'X-Object-Meta-X-Shade-Sha256': fakes.NO_SHA256, 'X-Object-Meta-X-Shade-Md5': fakes.NO_MD5, 'Date': 'Thu, 16 Nov 2017 15:24:30 GMT', 'Accept-Ranges': 'bytes', 'Content-Type': 'application/octet-stream', shade.openstackcloud.OBJECT_AUTOCREATE_KEY: 'true', 'Etag': fakes.NO_MD5}), dict(method='DELETE', uri='{endpoint}/{container}/{object}'.format( endpoint=endpoint, container=self.container_name, object=self.image_name)), ]) deleted = self.cloud.delete_autocreated_image_objects( container=self.container_name) self.assertTrue(deleted) self.assert_calls() def _image_dict(self, fake_image): return self.cloud._normalize_image(meta.obj_to_munch(fake_image)) def _munch_images(self, fake_image): return self.cloud._normalize_images([fake_image]) def _call_create_image(self, name, **kwargs): imagefile = tempfile.NamedTemporaryFile(delete=False) imagefile.write(b'\0') imagefile.close() self.cloud.create_image( name, imagefile.name, wait=True, timeout=1, is_public=False, **kwargs) def test_create_image_put_v1(self): self.cloud.cloud_config.config['image_api_version'] = '1' args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': 'qcow2', 'properties': { 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'is_public': False}} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' self.register_uris([ dict(method='GET', uri='https://image.example.com/v1/images/detail', json={'images': []}), dict(method='POST', uri='https://image.example.com/v1/images', json={'image': ret}, validate=dict(json=args)), dict(method='PUT', uri='https://image.example.com/v1/images/{id}'.format( id=self.image_id), json={'image': ret}, validate=dict(headers={ 'x-image-meta-checksum': fakes.NO_MD5, 'x-glance-registry-purge-props': 'false' })), dict(method='GET', uri='https://image.example.com/v1/images/detail', json={'images': [ret]}), ]) self._call_create_image(self.image_name) self.assertEqual(self._munch_images(ret), self.cloud.list_images()) def test_create_image_put_v1_bad_delete(self): self.cloud.cloud_config.config['image_api_version'] = '1' args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': 'qcow2', 'properties': { 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'is_public': False}} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' 
self.register_uris([ dict(method='GET', uri='https://image.example.com/v1/images/detail', json={'images': []}), dict(method='POST', uri='https://image.example.com/v1/images', json={'image': ret}, validate=dict(json=args)), dict(method='PUT', uri='https://image.example.com/v1/images/{id}'.format( id=self.image_id), status_code=400, validate=dict(headers={ 'x-image-meta-checksum': fakes.NO_MD5, 'x-glance-registry-purge-props': 'false' })), dict(method='DELETE', uri='https://image.example.com/v1/images/{id}'.format( id=self.image_id), json={'images': [ret]}), ]) self.assertRaises( exc.OpenStackCloudHTTPError, self._call_create_image, self.image_name) self.assert_calls() def test_update_image_no_patch(self): self.cloud.image_api_use_tasks = False args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': 'qcow2', 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'visibility': 'private'} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' self.cloud.update_image_properties( image=self._image_dict(ret), **{'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name)}) self.assert_calls() def test_create_image_put_v2_bad_delete(self): self.cloud.image_api_use_tasks = False args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': 'qcow2', 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'visibility': 'private'} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='POST', uri='https://image.example.com/v2/images', json=ret, validate=dict(json=args)), dict(method='PUT', uri='https://image.example.com/v2/images/{id}/file'.format( id=self.image_id), status_code=400, validate=dict( headers={ 'Content-Type': 'application/octet-stream', }, )), dict(method='DELETE', uri='https://image.example.com/v2/images/{id}'.format( id=self.image_id)), ]) self.assertRaises( exc.OpenStackCloudHTTPError, self._call_create_image, self.image_name) self.assert_calls() def test_create_image_put_bad_int(self): self.cloud.image_api_use_tasks = False self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), ]) self.assertRaises( exc.OpenStackCloudException, self._call_create_image, self.image_name, min_disk='fish', min_ram=0) self.assert_calls() def test_create_image_put_user_int(self): self.cloud.image_api_use_tasks = False args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'int_v': '12345', 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='POST', uri='https://image.example.com/v2/images', json=ret, validate=dict(json=args)), dict(method='PUT', uri='https://image.example.com/v2/images/{id}/file'.format( id=self.image_id), validate=dict( headers={ 'Content-Type': 'application/octet-stream', }, )), dict(method='GET', 
uri='https://image.example.com/v2/images', json={'images': [ret]}), ]) self._call_create_image( self.image_name, min_disk='0', min_ram=0, int_v=12345) self.assert_calls() def test_create_image_put_meta_int(self): self.cloud.image_api_use_tasks = False args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'int_v': 12345, 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='POST', uri='https://image.example.com/v2/images', json=ret, validate=dict(json=args)), dict(method='PUT', uri='https://image.example.com/v2/images/{id}/file'.format( id=self.image_id), validate=dict( headers={ 'Content-Type': 'application/octet-stream', }, )), dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [ret]}), ]) self._call_create_image( self.image_name, min_disk='0', min_ram=0, meta={'int_v': 12345}) self.assert_calls() def test_create_image_put_protected(self): self.cloud.image_api_use_tasks = False args = {'name': self.image_name, 'container_format': 'bare', 'disk_format': u'qcow2', 'owner_specified.shade.md5': fakes.NO_MD5, 'owner_specified.shade.sha256': fakes.NO_SHA256, 'owner_specified.shade.object': 'images/{name}'.format( name=self.image_name), 'int_v': '12345', 'protected': False, 'visibility': 'private', 'min_disk': 0, 'min_ram': 0} ret = args.copy() ret['id'] = self.image_id ret['status'] = 'success' self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}), dict(method='POST', uri='https://image.example.com/v2/images', json=ret, validate=dict(json=args)), dict(method='PUT', uri='https://image.example.com/v2/images/{id}/file'.format( id=self.image_id), validate=dict( headers={ 'Content-Type': 'application/octet-stream', }, )), dict(method='GET', uri='https://image.example.com/v2/images', json={'images': [ret]}), ]) self._call_create_image( self.image_name, min_disk='0', min_ram=0, properties={'int_v': 12345}, protected=False) self.assert_calls() def test_get_image_by_id(self): self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images/{id}'.format( id=self.image_id), json=self.fake_image_dict) ]) self.assertEqual( self.cloud._normalize_image(self.fake_image_dict), self.cloud.get_image_by_id(self.image_id)) self.assert_calls() class TestImageSuburl(BaseTestImage): def setUp(self): super(TestImageSuburl, self).setUp() self.use_keystone_v3(catalog='catalog-v3-suburl.json') self.use_glance( image_version_json='image-version-suburl.json', image_discovery_url='https://example.com/image') def test_list_images(self): self.register_uris([ dict(method='GET', uri='https://example.com/image/v2/images', json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() def test_list_images_paginated(self): marker = str(uuid.uuid4()) self.register_uris([ dict(method='GET', uri='https://example.com/image/v2/images', json={'images': [self.fake_image_dict], 'next': '/v2/images?marker={marker}'.format( marker=marker)}), dict(method='GET', uri=('https://example.com/image/v2/images?' 
'marker={marker}'.format(marker=marker)), json=self.fake_search_return) ]) self.assertEqual( self.cloud._normalize_images([ self.fake_image_dict, self.fake_image_dict]), self.cloud.list_images()) self.assert_calls() class TestImageV1Only(base.RequestsMockTestCase): def setUp(self): super(TestImageV1Only, self).setUp() self.use_glance(image_version_json='image-version-v1.json') def test_config_v1(self): self.cloud.cloud_config.config['image_api_version'] = '1' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v1/', self.cloud._image_client.get_endpoint()) self.assertTrue(self.cloud._is_client_version('image', 1)) def test_config_v2(self): self.cloud.cloud_config.config['image_api_version'] = '2' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v1/', self.cloud._image_client.get_endpoint()) self.assertFalse(self.cloud._is_client_version('image', 2)) class TestImageV2Only(base.RequestsMockTestCase): def setUp(self): super(TestImageV2Only, self).setUp() self.use_glance(image_version_json='image-version-v2.json') def test_config_v1(self): self.cloud.cloud_config.config['image_api_version'] = '1' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. self.assertEqual( 'https://image.example.com/v2/', self.cloud._image_client.get_endpoint()) self.assertTrue(self.cloud._is_client_version('image', 2)) def test_config_v2(self): self.cloud.cloud_config.config['image_api_version'] = '2' # We override the scheme of the endpoint with the scheme of the service # because glance has a bug where it doesn't return https properly. 
self.assertEqual( 'https://image.example.com/v2/', self.cloud._image_client.get_endpoint()) self.assertTrue(self.cloud._is_client_version('image', 2)) class TestImageVolume(BaseTestImage): def test_create_image_volume(self): volume_id = 'some-volume' self.register_uris([ dict(method='POST', uri='{endpoint}/volumes/{id}/action'.format( endpoint=CINDER_URL, id=volume_id), json={'os-volume_upload_image': {'image_id': self.image_id}}, validate=dict(json={ u'os-volume_upload_image': { u'container_format': u'bare', u'disk_format': u'qcow2', u'force': False, u'image_name': u'fake_image'}}) ), # NOTE(notmorgan): Glance discovery happens here, insert the # glance discovery mock at this point, DO NOT use the # .use_glance() method, that is intended only for use in # .setUp self.get_glance_discovery_mock_dict(), dict(method='GET', uri='https://image.example.com/v2/images', json=self.fake_search_return) ]) self.cloud.create_image( 'fake_image', self.imagefile.name, wait=True, timeout=1, volume={'id': volume_id}) self.assert_calls() def test_create_image_volume_duplicate(self): volume_id = 'some-volume' self.register_uris([ dict(method='POST', uri='{endpoint}/volumes/{id}/action'.format( endpoint=CINDER_URL, id=volume_id), json={'os-volume_upload_image': {'image_id': self.image_id}}, validate=dict(json={ u'os-volume_upload_image': { u'container_format': u'bare', u'disk_format': u'qcow2', u'force': True, u'image_name': u'fake_image'}}) ), # NOTE(notmorgan): Glance discovery happens here, insert the # glance discovery mock at this point, DO NOT use the # .use_glance() method, that is intended only for use in # .setUp self.get_glance_discovery_mock_dict(), dict(method='GET', uri='https://image.example.com/v2/images', json=self.fake_search_return) ]) self.cloud.create_image( 'fake_image', self.imagefile.name, wait=True, timeout=1, volume={'id': volume_id}, allow_duplicates=True) self.assert_calls() class TestImageBrokenDiscovery(base.RequestsMockTestCase): def setUp(self): super(TestImageBrokenDiscovery, self).setUp() self.use_glance(image_version_json='image-version-broken.json') def test_url_fix(self): # image-version-broken.json has both http urls and localhost as the # host. This is testing that what is discovered is https, because # that's what's in the catalog, and image.example.com for the same # reason. self.register_uris([ dict(method='GET', uri='https://image.example.com/v2/images', json={'images': []}) ]) self.assertEqual([], self.cloud.list_images()) self.assertEqual( self.cloud._image_client.get_endpoint(), 'https://image.example.com/v2/') self.assert_calls() shade-1.31.0/shade/tests/unit/test__utils.py0000666000175000017500000003471713440327640021040 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
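# ---------------------------------------------------------------------------
# Illustration (added commentary, not part of the original shade source): the
# _filter_list tests below rely on name_or_id matching that accepts an exact
# name first and only then falls back to shell-style globbing. The sketch
# below approximates that behaviour with the standard library alone;
# `_demo_filter` is a hypothetical helper, not shade's actual implementation.
import fnmatch


def _demo_filter(data, name_or_id):
    # Exact equality wins, so names containing glob metacharacters such as
    # 'pluto[2017-01-10]' can still be looked up literally; otherwise fall
    # back to fnmatch-style globbing ('pluto*' matches 'pluto-2').
    return [d for d in data
            if d.get('name') == name_or_id
            or fnmatch.fnmatch(d.get('name', ''), name_or_id)]


assert _demo_filter(
    [{'name': 'pluto'}, {'name': 'pluto-2'}], 'pluto*') == [
        {'name': 'pluto'}, {'name': 'pluto-2'}]
assert _demo_filter(
    [{'name': 'pluto[2017-01-10]'}], 'pluto[2017-01-10]') == [
        {'name': 'pluto[2017-01-10]'}]
# ---------------------------------------------------------------------------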
import random import string import tempfile from uuid import uuid4 import mock import testtools from shade import _utils from shade import exc from shade.tests.unit import base RANGE_DATA = [ dict(id=1, key1=1, key2=5), dict(id=2, key1=1, key2=20), dict(id=3, key1=2, key2=10), dict(id=4, key1=2, key2=30), dict(id=5, key1=3, key2=40), dict(id=6, key1=3, key2=40), ] class TestUtils(base.TestCase): def test__filter_list_name_or_id(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto') data = [el1, el2] ret = _utils._filter_list(data, 'donald', None) self.assertEqual([el1], ret) def test__filter_list_name_or_id_special(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto[2017-01-10]') data = [el1, el2] ret = _utils._filter_list(data, 'pluto[2017-01-10]', None) self.assertEqual([el2], ret) def test__filter_list_name_or_id_partial_bad(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto[2017-01-10]') data = [el1, el2] ret = _utils._filter_list(data, 'pluto[2017-01]', None) self.assertEqual([], ret) def test__filter_list_name_or_id_partial_glob(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto[2017-01-10]') data = [el1, el2] ret = _utils._filter_list(data, 'pluto*', None) self.assertEqual([el2], ret) def test__filter_list_name_or_id_non_glob_glob(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto[2017-01-10]') data = [el1, el2] ret = _utils._filter_list(data, 'pluto', None) self.assertEqual([], ret) def test__filter_list_name_or_id_glob(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto') el3 = dict(id=200, name='pluto-2') data = [el1, el2, el3] ret = _utils._filter_list(data, 'pluto*', None) self.assertEqual([el2, el3], ret) def test__filter_list_name_or_id_glob_not_found(self): el1 = dict(id=100, name='donald') el2 = dict(id=200, name='pluto') el3 = dict(id=200, name='pluto-2') data = [el1, el2, el3] ret = _utils._filter_list(data, 'q*', None) self.assertEqual([], ret) def test__filter_list_unicode(self): el1 = dict(id=100, name=u'中文', last='duck', other=dict(category='duck', financial=dict(status='poor'))) el2 = dict(id=200, name=u'中文', last='trump', other=dict(category='human', financial=dict(status='rich'))) el3 = dict(id=300, name='donald', last='ronald mac', other=dict(category='clown', financial=dict(status='rich'))) data = [el1, el2, el3] ret = _utils._filter_list( data, u'中文', {'other': { 'financial': {'status': 'rich'} }}) self.assertEqual([el2], ret) def test__filter_list_filter(self): el1 = dict(id=100, name='donald', other='duck') el2 = dict(id=200, name='donald', other='trump') data = [el1, el2] ret = _utils._filter_list(data, 'donald', {'other': 'duck'}) self.assertEqual([el1], ret) def test__filter_list_filter_jmespath(self): el1 = dict(id=100, name='donald', other='duck') el2 = dict(id=200, name='donald', other='trump') data = [el1, el2] ret = _utils._filter_list(data, 'donald', "[?other == `duck`]") self.assertEqual([el1], ret) def test__filter_list_dict1(self): el1 = dict(id=100, name='donald', last='duck', other=dict(category='duck')) el2 = dict(id=200, name='donald', last='trump', other=dict(category='human')) el3 = dict(id=300, name='donald', last='ronald mac', other=dict(category='clown')) data = [el1, el2, el3] ret = _utils._filter_list( data, 'donald', {'other': {'category': 'clown'}}) self.assertEqual([el3], ret) def test__filter_list_dict2(self): el1 = dict(id=100, name='donald', last='duck', other=dict(category='duck', 
financial=dict(status='poor'))) el2 = dict(id=200, name='donald', last='trump', other=dict(category='human', financial=dict(status='rich'))) el3 = dict(id=300, name='donald', last='ronald mac', other=dict(category='clown', financial=dict(status='rich'))) data = [el1, el2, el3] ret = _utils._filter_list( data, 'donald', {'other': { 'financial': {'status': 'rich'} }}) self.assertEqual([el2, el3], ret) def test_safe_dict_min_ints(self): """Test integer comparison""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_strs(self): """Test integer as strings comparison""" data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_None(self): """Test None values""" data = [{'f1': 3}, {'f1': None}, {'f1': 1}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_key_missing(self): """Test missing key for an entry still works""" data = [{'f1': 3}, {'x': 2}, {'f1': 1}] retval = _utils.safe_dict_min('f1', data) self.assertEqual(1, retval) def test_safe_dict_min_key_not_found(self): """Test key not found in any elements returns None""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_min('doesnotexist', data) self.assertIsNone(retval) def test_safe_dict_min_not_int(self): """Test non-integer key value raises OSCE""" data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}] with testtools.ExpectedException( exc.OpenStackCloudException, "Search for minimum value failed. " "Value for f1 is not an integer: aaa" ): _utils.safe_dict_min('f1', data) def test_safe_dict_max_ints(self): """Test integer comparison""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_strs(self): """Test integer as strings comparison""" data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_None(self): """Test None values""" data = [{'f1': 3}, {'f1': None}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_key_missing(self): """Test missing key for an entry still works""" data = [{'f1': 3}, {'x': 2}, {'f1': 1}] retval = _utils.safe_dict_max('f1', data) self.assertEqual(3, retval) def test_safe_dict_max_key_not_found(self): """Test key not found in any elements returns None""" data = [{'f1': 3}, {'f1': 2}, {'f1': 1}] retval = _utils.safe_dict_max('doesnotexist', data) self.assertIsNone(retval) def test_safe_dict_max_not_int(self): """Test non-integer key value raises OSCE""" data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}] with testtools.ExpectedException( exc.OpenStackCloudException, "Search for maximum value failed. 
" "Value for f1 is not an integer: aaa" ): _utils.safe_dict_max('f1', data) def test_parse_range_None(self): self.assertIsNone(_utils.parse_range(None)) def test_parse_range_invalid(self): self.assertIsNone(_utils.parse_range("1024") self.assertIsInstance(retval, tuple) self.assertEqual(">", retval[0]) self.assertEqual(1024, retval[1]) def test_parse_range_le(self): retval = _utils.parse_range("<=1024") self.assertIsInstance(retval, tuple) self.assertEqual("<=", retval[0]) self.assertEqual(1024, retval[1]) def test_parse_range_ge(self): retval = _utils.parse_range(">=1024") self.assertIsInstance(retval, tuple) self.assertEqual(">=", retval[0]) self.assertEqual(1024, retval[1]) def test_range_filter_min(self): retval = _utils.range_filter(RANGE_DATA, "key1", "min") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[:2], retval) def test_range_filter_max(self): retval = _utils.range_filter(RANGE_DATA, "key1", "max") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[-2:], retval) def test_range_filter_range(self): retval = _utils.range_filter(RANGE_DATA, "key1", "<3") self.assertIsInstance(retval, list) self.assertEqual(4, len(retval)) self.assertEqual(RANGE_DATA[:4], retval) def test_range_filter_exact(self): retval = _utils.range_filter(RANGE_DATA, "key1", "2") self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual(RANGE_DATA[2:4], retval) def test_range_filter_invalid_int(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Invalid range value: <1A0" ): _utils.range_filter(RANGE_DATA, "key1", "<1A0") def test_range_filter_invalid_op(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Invalid range value: <>100" ): _utils.range_filter(RANGE_DATA, "key1", "<>100") def test_file_segment(self): file_size = 4200 content = ''.join(random.SystemRandom().choice( string.ascii_uppercase + string.digits) for _ in range(file_size)).encode('latin-1') self.imagefile = tempfile.NamedTemporaryFile(delete=False) self.imagefile.write(content) self.imagefile.close() segments = self.cloud._get_file_segments( endpoint='test_container/test_image', filename=self.imagefile.name, file_size=file_size, segment_size=1000) self.assertEqual(len(segments), 5) segment_content = b'' for (index, (name, segment)) in enumerate(segments.items()): self.assertEqual( 'test_container/test_image/{index:0>6}'.format(index=index), name) segment_content += segment.read() self.assertEqual(content, segment_content) def test_get_entity_pass_object(self): obj = mock.Mock(id=uuid4().hex) self.cloud.use_direct_get = True self.assertEqual(obj, _utils._get_entity(self.cloud, '', obj, {})) def test_get_entity_pass_dict(self): d = dict(id=uuid4().hex) self.cloud.use_direct_get = True self.assertEqual(d, _utils._get_entity(self.cloud, '', d, {})) def test_get_entity_no_use_direct_get(self): # test we are defaulting to the search_ methods # if the use_direct_get flag is set to False(default). uuid = uuid4().hex resource = 'network' func = 'search_%ss' % resource filters = {} with mock.patch.object(self.cloud, func) as search: _utils._get_entity(self.cloud, resource, uuid, filters) search.assert_called_once_with(uuid, filters) def test_get_entity_no_uuid_like(self): # test we are defaulting to the search_ methods # if the name_or_id param is a name(string) but not a uuid. 
self.cloud.use_direct_get = True name = 'name_no_uuid' resource = 'network' func = 'search_%ss' % resource filters = {} with mock.patch.object(self.cloud, func) as search: _utils._get_entity(self.cloud, resource, name, filters) search.assert_called_once_with(name, filters) def test_get_entity_pass_uuid(self): uuid = uuid4().hex self.cloud.use_direct_get = True resources = ['flavor', 'image', 'volume', 'network', 'subnet', 'port', 'floating_ip', 'security_group'] for r in resources: f = 'get_%s_by_id' % r with mock.patch.object(self.cloud, f) as get: _utils._get_entity(self.cloud, r, uuid, {}) get.assert_called_once_with(uuid) def test_get_entity_pass_search_methods(self): self.cloud.use_direct_get = True resources = ['flavor', 'image', 'volume', 'network', 'subnet', 'port', 'floating_ip', 'security_group'] filters = {} name = 'name_no_uuid' for r in resources: f = 'search_%ss' % r with mock.patch.object(self.cloud, f) as search: _utils._get_entity(self.cloud, r, name, {}) search.assert_called_once_with(name, filters) def test_get_entity_get_and_search(self): resources = ['flavor', 'image', 'volume', 'network', 'subnet', 'port', 'floating_ip', 'security_group'] for r in resources: self.assertTrue(hasattr(self.cloud, 'get_%s_by_id' % r)) self.assertTrue(hasattr(self.cloud, 'search_%ss' % r)) shade-1.31.0/shade/tests/unit/test_normalize.py0000666000175000017500000012017613440327640021534 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
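# ---------------------------------------------------------------------------
# Illustration (added commentary, not part of the original shade source): the
# normalization tests below all follow one idea -- vendor-specific keys on a
# raw resource dict are collected under a 'properties' key, and in "strict"
# mode only the normalized keys survive at the top level. The function below
# is a hypothetical approximation of that behaviour for flavors, not shade's
# real _normalize_flavor.
_DEMO_KNOWN_FLAVOR_KEYS = ('id', 'name', 'ram', 'vcpus', 'disk', 'ephemeral',
                           'swap', 'is_public', 'rxtx_factor')


def _demo_normalize_flavor(raw, strict=True):
    # Known keys stay at the top level; everything else becomes a property.
    normalized = {k: raw[k] for k in _DEMO_KNOWN_FLAVOR_KEYS if k in raw}
    normalized['properties'] = {
        k: v for k, v in raw.items() if k not in _DEMO_KNOWN_FLAVOR_KEYS}
    if not strict:
        # Non-strict mode also keeps the raw keys for backward compatibility.
        normalized.update(raw)
    return normalized


assert _demo_normalize_flavor(
    {'id': 'f1', 'ram': 8192, 'custom:key': 'x'})['properties'] == {
        'custom:key': 'x'}
# ---------------------------------------------------------------------------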
import mock from shade.tests.unit import base RAW_SERVER_DICT = { 'HUMAN_ID': True, 'NAME_ATTR': 'name', 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-AZ:availability_zone': u'ca-ymq-2', 'OS-EXT-STS:power_state': 1, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': u'active', 'OS-SRV-USG:launched_at': u'2015-08-01T19:52:02.000000', 'OS-SRV-USG:terminated_at': None, 'accessIPv4': u'', 'accessIPv6': u'', 'addresses': { u'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'2604:e100:1:0:f816:3eff:fe9f:463e', u'version': 6 }, { u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'162.253.54.192', u'version': 4}]}, 'config_drive': u'True', 'created': u'2015-08-01T19:52:16Z', 'flavor': { u'id': u'bbcb7eb5-5c8d-498f-9d7e-307c575d3566', u'links': [{ u'href': u'https://compute-ca-ymq-1.vexxhost.net/db9/flavors/bbc', u'rel': u'bookmark'}]}, 'hostId': u'bd37', 'human_id': u'mordred-irc', 'id': u'811c5197-dba7-4d3a-a3f6-68ca5328b9a7', 'image': { u'id': u'69c99b45-cd53-49de-afdc-f24789eb8f83', u'links': [{ u'href': u'https://compute-ca-ymq-1.vexxhost.net/db9/images/69c', u'rel': u'bookmark'}]}, 'key_name': u'mordred', 'links': [{ u'href': u'https://compute-ca-ymq-1.vexxhost.net/v2/db9/servers/811', u'rel': u'self' }, { u'href': u'https://compute-ca-ymq-1.vexxhost.net/db9/servers/811', u'rel': u'bookmark'}], 'metadata': {u'group': u'irc', u'groups': u'irc,enabled'}, 'name': u'mordred-irc', 'networks': {u'public': [u'2604:e100:1:0:f816:3eff:fe9f:463e', u'162.253.54.192']}, 'os-extended-volumes:volumes_attached': [], 'progress': 0, 'request_ids': [], 'security_groups': [{u'name': u'default'}], 'status': u'ACTIVE', 'tenant_id': u'db92b20496ae4fbda850a689ea9d563f', 'updated': u'2016-10-15T15:49:29Z', 'user_id': u'e9b21dc437d149858faee0898fb08e92'} RAW_GLANCE_IMAGE_DICT = { u'auto_disk_config': u'False', u'checksum': u'774f48af604ab1ec319093234c5c0019', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'container_format': u'ovf', u'created_at': u'2015-02-15T22:58:45Z', u'disk_format': u'vhd', u'file': u'/v2/images/f2868d7c-63e1-4974-a64d-8670a86df21e/file', u'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', u'image_type': u'import', u'min_disk': 20, u'min_ram': 0, u'name': u'Test Monty Ubuntu', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'owner': u'610275', u'protected': False, u'schema': u'/v2/schemas/image', u'size': 323004185, u'status': u'active', u'tags': [], u'updated_at': u'2015-02-15T23:04:34Z', u'user_id': u'156284', u'virtual_size': None, u'visibility': u'private', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'} RAW_NOVA_IMAGE_DICT = { 'HUMAN_ID': True, 'NAME_ATTR': 'name', 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-IMG-SIZE:size': 323004185, 'created': u'2015-02-15T22:58:45Z', 'human_id': u'test-monty-ubuntu', 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'links': [{ u'href': u'https://example.com/v2/610275/images/f2868d7c', u'rel': u'self' }, { u'href': u'https://example.com/610275/images/f2868d7c', u'rel': u'bookmark' }, { u'href': u'https://example.com/images/f2868d7c', u'rel': u'alternate', u'type': u'application/vnd.openstack.image'}], 'metadata': { u'auto_disk_config': u'False', 
u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'minDisk': 20, 'minRam': 0, 'name': u'Test Monty Ubuntu', 'progress': 100, 'request_ids': [], 'status': u'ACTIVE', 'updated': u'2015-02-15T23:04:34Z'} RAW_FLAVOR_DICT = { 'HUMAN_ID': True, 'NAME_ATTR': 'name', 'OS-FLV-EXT-DATA:ephemeral': 80, 'OS-FLV-WITH-EXT-SPECS:extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'disk': 40, 'ephemeral': 80, 'human_id': u'8-gb-performance', 'id': u'performance1-8', 'is_public': 'N/A', 'links': [{ u'href': u'https://example.com/v2/610275/flavors/performance1-8', u'rel': u'self' }, { u'href': u'https://example.com/610275/flavors/performance1-8', u'rel': u'bookmark'}], 'name': u'8 GB Performance', 'ram': 8192, 'request_ids': [], 'rxtx_factor': 1600.0, 'swap': u'', 'vcpus': 8} class TestUtils(base.RequestsMockTestCase): def test_normalize_flavors(self): raw_flavor = RAW_FLAVOR_DICT.copy() expected = { 'OS-FLV-EXT-DATA:ephemeral': 80, 'OS-FLV-WITH-EXT-SPECS:extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'disk': 40, 'ephemeral': 80, 'extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'id': u'performance1-8', 'is_disabled': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'name': u'8 GB Performance', 'properties': { 'OS-FLV-EXT-DATA:ephemeral': 80, 'OS-FLV-WITH-EXT-SPECS:extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}}, 'ram': 8192, 'rxtx_factor': 1600.0, 'swap': 0, 'vcpus': 8} retval = self.cloud._normalize_flavor(raw_flavor) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_flavors_strict(self): raw_flavor = RAW_FLAVOR_DICT.copy() expected = { 'disk': 40, 'ephemeral': 80, 'extra_specs': { u'class': u'performance1', u'disk_io_index': u'40', u'number_of_data_disks': u'1', u'policy_class': u'performance_flavor', u'resize_policy_class': u'performance_flavor'}, 'id': u'performance1-8', 'is_disabled': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'name': u'8 GB Performance', 'properties': {}, 'ram': 8192, 'rxtx_factor': 1600.0, 'swap': 0, 'vcpus': 8} retval = self.strict_cloud._normalize_flavor(raw_flavor) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_nova_images(self): raw_image = RAW_NOVA_IMAGE_DICT.copy() expected = { u'auto_disk_config': u'False', 
u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'checksum': None, 'container_format': None, 'created': u'2015-02-15T22:58:45Z', 'created_at': '2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': None, 'file': None, 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'metadata': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'progress': 100}, 'minDisk': 20, 'minRam': 0, 'min_disk': 20, 'min_ram': 0, 'name': u'Test Monty Ubuntu', 'owner': None, 'progress': 100, 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'progress': 100}, 'protected': False, 'size': 323004185, 'status': u'active', 'tags': [], 'updated': u'2015-02-15T23:04:34Z', 'updated_at': u'2015-02-15T23:04:34Z', 'virtual_size': 0, 'visibility': 'private'} retval = self.cloud._normalize_image(raw_image) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_nova_images_strict(self): raw_image = RAW_NOVA_IMAGE_DICT.copy() expected = { 'checksum': None, 'container_format': None, 'created_at': '2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': None, 'file': None, 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'min_disk': 20, 'min_ram': 0, 'name': u'Test Monty Ubuntu', 'owner': None, 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', 
u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False', 'OS-DCF:diskConfig': u'MANUAL', 'progress': 100}, 'size': 323004185, 'status': u'active', 'tags': [], 'updated_at': u'2015-02-15T23:04:34Z', 'virtual_size': 0, 'visibility': 'private'} retval = self.strict_cloud._normalize_image(raw_image) self.assertEqual(sorted(expected.keys()), sorted(retval.keys())) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_glance_images(self): raw_image = RAW_GLANCE_IMAGE_DICT.copy() expected = { u'auto_disk_config': u'False', 'checksum': u'774f48af604ab1ec319093234c5c0019', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', 'container_format': u'ovf', 'created': u'2015-02-15T22:58:45Z', 'created_at': u'2015-02-15T22:58:45Z', 'direct_url': None, 'disk_format': u'vhd', 'file': u'/v2/images/f2868d7c-63e1-4974-a64d-8670a86df21e/file', 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', u'image_type': u'import', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'610275', 'name': None}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'metadata': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'schema': u'/v2/schemas/image', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'minDisk': 20, 'min_disk': 20, 'minRam': 0, 'min_ram': 0, 'name': u'Test Monty Ubuntu', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', 'owner': u'610275', 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'schema': u'/v2/schemas/image', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'protected': False, u'schema': u'/v2/schemas/image', 'size': 323004185, 'status': u'active', 'tags': [], 'updated': u'2015-02-15T23:04:34Z', 'updated_at': u'2015-02-15T23:04:34Z', u'user_id': u'156284', 'virtual_size': 0, 'visibility': u'private', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'} retval = self.cloud._normalize_image(raw_image) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_glance_images_strict(self): raw_image = RAW_GLANCE_IMAGE_DICT.copy() expected = { 'checksum': u'774f48af604ab1ec319093234c5c0019', 'container_format': u'ovf', 'created_at': u'2015-02-15T22:58:45Z', 
'direct_url': None, 'disk_format': u'vhd', 'file': u'/v2/images/f2868d7c-63e1-4974-a64d-8670a86df21e/file', 'id': u'f2868d7c-63e1-4974-a64d-8670a86df21e', 'is_protected': False, 'is_public': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'610275', 'name': None}, 'region_name': u'RegionOne', 'zone': None}, 'locations': [], 'min_disk': 20, 'min_ram': 0, 'name': u'Test Monty Ubuntu', 'owner': u'610275', 'properties': { u'auto_disk_config': u'False', u'com.rackspace__1__build_core': u'1', u'com.rackspace__1__build_managed': u'1', u'com.rackspace__1__build_rackconnect': u'1', u'com.rackspace__1__options': u'0', u'com.rackspace__1__source': u'import', u'com.rackspace__1__visible_core': u'1', u'com.rackspace__1__visible_managed': u'1', u'com.rackspace__1__visible_rackconnect': u'1', u'image_type': u'import', u'org.openstack__1__architecture': u'x64', u'os_type': u'linux', u'schema': u'/v2/schemas/image', u'user_id': u'156284', u'vm_mode': u'hvm', u'xenapi_use_agent': u'False'}, 'size': 323004185, 'status': u'active', 'tags': [], 'updated_at': u'2015-02-15T23:04:34Z', 'virtual_size': 0, 'visibility': 'private'} retval = self.strict_cloud._normalize_image(raw_image) self.assertEqual(sorted(expected.keys()), sorted(retval.keys())) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_servers_strict(self): raw_server = RAW_SERVER_DICT.copy() expected = { 'accessIPv4': u'', 'accessIPv6': u'', 'addresses': { u'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'2604:e100:1:0:f816:3eff:fe9f:463e', u'version': 6 }, { u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'162.253.54.192', u'version': 4}]}, 'adminPass': None, 'created': u'2015-08-01T19:52:16Z', 'created_at': u'2015-08-01T19:52:16Z', 'disk_config': u'MANUAL', 'flavor': {u'id': u'bbcb7eb5-5c8d-498f-9d7e-307c575d3566'}, 'has_config_drive': True, 'host_id': u'bd37', 'id': u'811c5197-dba7-4d3a-a3f6-68ca5328b9a7', 'image': {u'id': u'69c99b45-cd53-49de-afdc-f24789eb8f83'}, 'interface_ip': u'', 'key_name': u'mordred', 'launched_at': u'2015-08-01T19:52:02.000000', 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'db92b20496ae4fbda850a689ea9d563f', 'name': None}, 'region_name': u'RegionOne', 'zone': u'ca-ymq-2'}, 'metadata': {u'group': u'irc', u'groups': u'irc,enabled'}, 'name': u'mordred-irc', 'networks': { u'public': [ u'2604:e100:1:0:f816:3eff:fe9f:463e', u'162.253.54.192']}, 'power_state': 1, 'private_v4': None, 'progress': 0, 'properties': {}, 'public_v4': None, 'public_v6': None, 'security_groups': [{u'name': u'default'}], 'status': u'ACTIVE', 'task_state': None, 'terminated_at': None, 'updated': u'2016-10-15T15:49:29Z', 'user_id': u'e9b21dc437d149858faee0898fb08e92', 'vm_state': u'active', 'volumes': []} retval = self.strict_cloud._normalize_server(raw_server) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_servers_normal(self): raw_server = RAW_SERVER_DICT.copy() expected = { 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-AZ:availability_zone': u'ca-ymq-2', 'OS-EXT-STS:power_state': 1, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': u'active', 'OS-SRV-USG:launched_at': u'2015-08-01T19:52:02.000000', 'OS-SRV-USG:terminated_at': None, 'accessIPv4': u'', 'accessIPv6': u'', 'addresses': { u'public': [{ u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': 
u'2604:e100:1:0:f816:3eff:fe9f:463e', u'version': 6 }, { u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9f:46:3e', u'OS-EXT-IPS:type': u'fixed', u'addr': u'162.253.54.192', u'version': 4}]}, 'adminPass': None, 'az': u'ca-ymq-2', 'cloud': '_test_cloud_', 'config_drive': u'True', 'created': u'2015-08-01T19:52:16Z', 'created_at': u'2015-08-01T19:52:16Z', 'disk_config': u'MANUAL', 'flavor': {u'id': u'bbcb7eb5-5c8d-498f-9d7e-307c575d3566'}, 'has_config_drive': True, 'hostId': u'bd37', 'host_id': u'bd37', 'id': u'811c5197-dba7-4d3a-a3f6-68ca5328b9a7', 'image': {u'id': u'69c99b45-cd53-49de-afdc-f24789eb8f83'}, 'interface_ip': '', 'key_name': u'mordred', 'launched_at': u'2015-08-01T19:52:02.000000', 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': u'db92b20496ae4fbda850a689ea9d563f', 'name': None}, 'region_name': u'RegionOne', 'zone': u'ca-ymq-2'}, 'metadata': {u'group': u'irc', u'groups': u'irc,enabled'}, 'name': u'mordred-irc', 'networks': { u'public': [ u'2604:e100:1:0:f816:3eff:fe9f:463e', u'162.253.54.192']}, 'os-extended-volumes:volumes_attached': [], 'power_state': 1, 'private_v4': None, 'progress': 0, 'project_id': u'db92b20496ae4fbda850a689ea9d563f', 'properties': { 'OS-DCF:diskConfig': u'MANUAL', 'OS-EXT-AZ:availability_zone': u'ca-ymq-2', 'OS-EXT-STS:power_state': 1, 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': u'active', 'OS-SRV-USG:launched_at': u'2015-08-01T19:52:02.000000', 'OS-SRV-USG:terminated_at': None, 'os-extended-volumes:volumes_attached': []}, 'public_v4': None, 'public_v6': None, 'region': u'RegionOne', 'security_groups': [{u'name': u'default'}], 'status': u'ACTIVE', 'task_state': None, 'tenant_id': u'db92b20496ae4fbda850a689ea9d563f', 'terminated_at': None, 'updated': u'2016-10-15T15:49:29Z', 'user_id': u'e9b21dc437d149858faee0898fb08e92', 'vm_state': u'active', 'volumes': []} retval = self.cloud._normalize_server(raw_server) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_secgroups_strict(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group', rules=[ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) expected = dict( id='abc123', name='nova_secgroup', description='A Nova security group', properties={}, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_'), security_group_rules=[ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123', properties={}, remote_group_id=None, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_')) ] ) retval = self.strict_cloud._normalize_secgroup(nova_secgroup) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_secgroups(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group', rules=[ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) expected = dict( id='abc123', name='nova_secgroup', description='A Nova security group', tenant_id='', project_id='', properties={}, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), 
cloud='_test_cloud_'), security_group_rules=[ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123', properties={}, tenant_id='', project_id='', remote_group_id=None, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_')) ] ) retval = self.cloud._normalize_secgroup(nova_secgroup) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_secgroups_negone_port(self): nova_secgroup = dict( id='abc123', name='nova_secgroup', description='A Nova security group with -1 ports', rules=[ dict(id='123', from_port=-1, to_port=-1, ip_protocol='icmp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] ) retval = self.cloud._normalize_secgroup(nova_secgroup) self.assertIsNone(retval['security_group_rules'][0]['port_range_min']) self.assertIsNone(retval['security_group_rules'][0]['port_range_max']) self.assert_calls() def test_normalize_secgroup_rules(self): nova_rules = [ dict(id='123', from_port=80, to_port=81, ip_protocol='tcp', ip_range={'cidr': '0.0.0.0/0'}, parent_group_id='xyz123') ] expected = [ dict(id='123', direction='ingress', ethertype='IPv4', port_range_min=80, port_range_max=81, protocol='tcp', remote_ip_prefix='0.0.0.0/0', security_group_id='xyz123', tenant_id='', project_id='', remote_group_id=None, properties={}, location=dict( region_name='RegionOne', zone=None, project=dict( domain_name='default', id=mock.ANY, domain_id=None, name='admin'), cloud='_test_cloud_')) ] retval = self.cloud._normalize_secgroup_rules(nova_rules) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_volumes_v1(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', display_name='test', display_description='description', bootable=u'false', # unicode type multiattach='true', # str type status='in-use', created_at='2015-08-27T09:49:58-05:00', ) expected = { 'attachments': [], 'availability_zone': None, 'bootable': False, 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': vol['display_description'], 'display_description': vol['display_description'], 'display_name': vol['display_name'], 'encrypted': False, 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'metadata': {}, 'migration_status': None, 'multiattach': True, 'name': vol['display_name'], 'properties': {}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.cloud._normalize_volume(vol) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_volumes_v2(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', name='test', description='description', bootable=False, multiattach=True, status='in-use', created_at='2015-08-27T09:49:58-05:00', availability_zone='my-zone', ) vol['os-vol-tenant-attr:tenant_id'] = 'my-project' expected = { 'attachments': [], 'availability_zone': vol['availability_zone'], 'bootable': False, 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': 
vol['description'], 'display_description': vol['description'], 'display_name': vol['name'], 'encrypted': False, 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': vol['os-vol-tenant-attr:tenant_id'], 'name': None}, 'region_name': u'RegionOne', 'zone': vol['availability_zone']}, 'metadata': {}, 'migration_status': None, 'multiattach': True, 'name': vol['name'], 'os-vol-tenant-attr:tenant_id': vol[ 'os-vol-tenant-attr:tenant_id'], 'properties': { 'os-vol-tenant-attr:tenant_id': vol[ 'os-vol-tenant-attr:tenant_id']}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.cloud._normalize_volume(vol) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_volumes_v1_strict(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', display_name='test', display_description='description', bootable=u'false', # unicode type multiattach='true', # str type status='in-use', created_at='2015-08-27T09:49:58-05:00', ) expected = { 'attachments': [], 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': vol['display_description'], 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': 'default', 'id': mock.ANY, 'name': 'admin'}, 'region_name': u'RegionOne', 'zone': None}, 'metadata': {}, 'migration_status': None, 'name': vol['display_name'], 'properties': {}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.strict_cloud._normalize_volume(vol) self.assertEqual(expected, retval) self.assert_calls() def test_normalize_volumes_v2_strict(self): vol = dict( id='55db9e89-9cb4-4202-af88-d8c4a174998e', name='test', description='description', bootable=False, multiattach=True, status='in-use', created_at='2015-08-27T09:49:58-05:00', availability_zone='my-zone', ) vol['os-vol-tenant-attr:tenant_id'] = 'my-project' expected = { 'attachments': [], 'can_multiattach': True, 'consistencygroup_id': None, 'created_at': vol['created_at'], 'description': vol['description'], 'host': None, 'id': '55db9e89-9cb4-4202-af88-d8c4a174998e', 'is_bootable': False, 'is_encrypted': False, 'location': { 'cloud': '_test_cloud_', 'project': { 'domain_id': None, 'domain_name': None, 'id': vol['os-vol-tenant-attr:tenant_id'], 'name': None}, 'region_name': u'RegionOne', 'zone': vol['availability_zone']}, 'metadata': {}, 'migration_status': None, 'name': vol['name'], 'properties': {}, 'replication_driver': None, 'replication_extended_status': None, 'replication_status': None, 'size': 0, 'snapshot_id': None, 'source_volume_id': None, 'status': vol['status'], 'updated_at': None, 'volume_type': None, } retval = self.strict_cloud._normalize_volume(vol) self.assertEqual(expected, retval) self.assert_calls() shade-1.31.0/shade/tests/unit/test_recordset.py0000666000175000017500000001573013440327640021525 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance 
with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import testtools import shade from shade.tests.unit import base zone = { 'id': '1', 'name': 'example.net.', 'type': 'PRIMARY', 'email': 'test@example.net', 'description': 'Example zone', 'ttl': 3600, } recordset = { 'name': 'www.example.net.', 'type': 'A', 'description': 'Example zone', 'ttl': 3600, 'records': ['192.168.1.1'] } recordset_zone = '1' new_recordset = copy.copy(recordset) new_recordset['id'] = '1' new_recordset['zone'] = recordset_zone class TestRecordset(base.RequestsMockTestCase): def setUp(self): super(TestRecordset, self).setUp() self.use_designate() def test_create_recordset(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [zone], "links": {}, "metadata": { 'total_count': 1}}), dict(method='POST', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', zone['id'], 'recordsets']), json=new_recordset, validate=dict(json=recordset)), ]) rs = self.cloud.create_recordset( zone=recordset_zone, name=recordset['name'], recordset_type=recordset['type'], records=recordset['records'], description=recordset['description'], ttl=recordset['ttl']) self.assertEqual(new_recordset, rs) self.assert_calls() def test_create_recordset_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [zone], "links": {}, "metadata": { 'total_count': 1}}), dict(method='POST', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', zone['id'], 'recordsets']), status_code=500, validate=dict(json={ 'name': 'www2.example.net.', 'records': ['192.168.1.2'], 'type': 'A'})), ]) with testtools.ExpectedException( shade.exc.OpenStackCloudHTTPError, "Error creating recordset www2.example.net." 
): self.cloud.create_recordset('1', 'www2.example.net.', 'a', ['192.168.1.2']) self.assert_calls() def test_update_recordset(self): new_ttl = 7200 expected_recordset = { 'name': recordset['name'], 'records': recordset['records'], 'type': recordset['type'] } self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [zone], "links": {}, "metadata": { 'total_count': 1}}), dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', zone['id'], 'recordsets', new_recordset['id']]), json=new_recordset), dict(method='PUT', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', zone['id'], 'recordsets', new_recordset['id']]), json=expected_recordset, validate=dict(json={'ttl': new_ttl})) ]) updated_rs = self.cloud.update_recordset('1', '1', ttl=new_ttl) self.assertEqual(expected_recordset, updated_rs) self.assert_calls() def test_delete_recordset(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [zone], "links": {}, "metadata": { 'total_count': 1}}), dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', zone['id'], 'recordsets', new_recordset['id']]), json=new_recordset), dict(method='DELETE', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', zone['id'], 'recordsets', new_recordset['id']]), json={}) ]) self.assertTrue(self.cloud.delete_recordset('1', '1')) self.assert_calls() def test_get_recordset_by_id(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', '1', 'recordsets', '1']), json=new_recordset), ]) recordset = self.cloud.get_recordset('1', '1') self.assertEqual(recordset['id'], '1') self.assert_calls() def test_get_recordset_by_name(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', '1', 'recordsets', new_recordset['name']]), json=new_recordset), ]) recordset = self.cloud.get_recordset('1', new_recordset['name']) self.assertEqual(new_recordset['name'], recordset['name']) self.assert_calls() def test_get_recordset_not_found_returns_false(self): recordset_name = "www.nonexistingrecord.net." self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', '1', 'recordsets', recordset_name]), json=[]) ]) recordset = self.cloud.get_recordset('1', recordset_name) self.assertFalse(recordset) self.assert_calls() shade-1.31.0/shade/tests/unit/test_server_group.py0000666000175000017500000000434013440327640022250 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import uuid from shade.tests.unit import base from shade.tests import fakes class TestServerGroup(base.RequestsMockTestCase): def setUp(self): super(TestServerGroup, self).setUp() self.group_id = uuid.uuid4().hex self.group_name = self.getUniqueString('server-group') self.policies = ['affinity'] self.fake_group = fakes.make_fake_server_group( self.group_id, self.group_name, self.policies) def test_create_server_group(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'compute', 'public', append=['os-server-groups']), json={'server_group': self.fake_group}, validate=dict( json={'server_group': { 'name': self.group_name, 'policies': self.policies, }})), ]) self.cloud.create_server_group(name=self.group_name, policies=self.policies) self.assert_calls() def test_delete_server_group(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['os-server-groups']), json={'server_groups': [self.fake_group]}), dict(method='DELETE', uri=self.get_mock_url( 'compute', 'public', append=['os-server-groups', self.group_id]), json={'server_groups': [self.fake_group]}), ]) self.assertTrue(self.cloud.delete_server_group(self.group_name)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_floating_ip_nova.py0000666000175000017500000002554413440327640023055 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" test_floating_ip_nova ---------------------------------- Tests Floating IP resource methods for nova-network """ from shade.tests import fakes from shade.tests.unit import base def get_fake_has_service(has_service): def fake_has_service(s): if s == 'network': return False return has_service(s) return fake_has_service class TestFloatingIP(base.RequestsMockTestCase): mock_floating_ip_list_rep = [ { 'fixed_ip': None, 'id': 1, 'instance_id': None, 'ip': '203.0.113.1', 'pool': 'nova' }, { 'fixed_ip': None, 'id': 2, 'instance_id': None, 'ip': '203.0.113.2', 'pool': 'nova' }, { 'fixed_ip': '192.0.2.3', 'id': 29, 'instance_id': 'myself', 'ip': '198.51.100.29', 'pool': 'black_hole' } ] mock_floating_ip_pools = [ {'id': 'pool1_id', 'name': 'nova'}, {'id': 'pool2_id', 'name': 'pool2'}] def assertAreInstances(self, elements, elem_type): for e in elements: self.assertIsInstance(e, elem_type) def setUp(self): super(TestFloatingIP, self).setUp() self.fake_server = fakes.make_fake_server( 'server-id', '', 'ACTIVE', addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': '192.0.2.129', u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42'}]}) self.cloud.has_service = get_fake_has_service(self.cloud.has_service) def test_list_floating_ips(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ips = self.cloud.list_floating_ips() self.assertIsInstance(floating_ips, list) self.assertEqual(3, len(floating_ips)) self.assertAreInstances(floating_ips, dict) self.assert_calls() def test_list_floating_ips_with_filters(self): self.assertRaisesRegex( ValueError, "Nova-network don't support server-side", self.cloud.list_floating_ips, filters={'Foo': 42} ) def test_search_floating_ips(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ips = self.cloud.search_floating_ips( filters={'attached': False}) self.assertIsInstance(floating_ips, list) self.assertEqual(2, len(floating_ips)) self.assertAreInstances(floating_ips, dict) self.assert_calls() def test_get_floating_ip(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ip = self.cloud.get_floating_ip(id='29') self.assertIsInstance(floating_ip, dict) self.assertEqual('198.51.100.29', floating_ip['floating_ip_address']) self.assert_calls() def test_get_floating_ip_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), ]) floating_ip = self.cloud.get_floating_ip(id='666') self.assertIsNone(floating_ip) self.assert_calls() def test_get_floating_ip_by_id(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips', '1']), json={'floating_ip': self.mock_floating_ip_list_rep[0]}), ]) floating_ip = self.cloud.get_floating_ip_by_id(id='1') self.assertIsInstance(floating_ip, dict) self.assertEqual('203.0.113.1', floating_ip['floating_ip_address']) self.assert_calls() def test_create_floating_ip(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ip': self.mock_floating_ip_list_rep[1]}, validate=dict( json={'pool': 'nova'})), dict(method='GET', 
uri=self.get_mock_url( 'compute', append=['os-floating-ips', '2']), json={'floating_ip': self.mock_floating_ip_list_rep[1]}), ]) self.cloud.create_floating_ip(network='nova') self.assert_calls() def test_available_floating_ip_existing(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep[:1]}), ]) ip = self.cloud.available_floating_ip(network='nova') self.assertEqual(self.mock_floating_ip_list_rep[0]['ip'], ip['floating_ip_address']) self.assert_calls() def test_available_floating_ip_new(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': []}), dict(method='POST', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ip': self.mock_floating_ip_list_rep[0]}, validate=dict( json={'pool': 'nova'})), dict(method='GET', uri=self.get_mock_url( 'compute', append=['os-floating-ips', '1']), json={'floating_ip': self.mock_floating_ip_list_rep[0]}), ]) ip = self.cloud.available_floating_ip(network='nova') self.assertEqual(self.mock_floating_ip_list_rep[0]['ip'], ip['floating_ip_address']) self.assert_calls() def test_delete_floating_ip_existing(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', append=['os-floating-ips', 'a-wild-id-appears'])), dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': []}), ]) ret = self.cloud.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertTrue(ret) self.assert_calls() def test_delete_floating_ip_not_found(self): self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'compute', append=['os-floating-ips', 'a-wild-id-appears']), status_code=404), ]) ret = self.cloud.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertFalse(ret) self.assert_calls() def test_attach_ip_to_server(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='POST', uri=self.get_mock_url( 'compute', append=['servers', self.fake_server['id'], 'action']), validate=dict( json={ "addFloatingIp": { "address": "203.0.113.1", "fixed_address": "192.0.2.129", }})), ]) self.cloud._attach_ip_to_server( server=self.fake_server, floating_ip=self.cloud._normalize_floating_ip( self.mock_floating_ip_list_rep[0]), fixed_address='192.0.2.129') self.assert_calls() def test_detach_ip_from_server(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='POST', uri=self.get_mock_url( 'compute', append=['servers', self.fake_server['id'], 'action']), validate=dict( json={ "removeFloatingIp": { "address": "203.0.113.1", }})), ]) self.cloud.detach_ip_from_server( server_id='server-id', floating_ip_id=1) self.assert_calls() def test_add_ip_from_pool(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='GET', uri=self.get_mock_url('compute', append=['os-floating-ips']), json={'floating_ips': self.mock_floating_ip_list_rep}), dict(method='POST', uri=self.get_mock_url( 'compute', append=['servers', self.fake_server['id'], 'action']), validate=dict( json={ "addFloatingIp": { "address": "203.0.113.1", "fixed_address": "192.0.2.129", }})), ]) 
server = self.cloud._add_ip_from_pool( server=self.fake_server, network='nova', fixed_address='192.0.2.129') self.assertEqual(server, self.fake_server) self.assert_calls() def test_cleanup_floating_ips(self): # This should not call anything because it's unsafe on nova. self.cloud.delete_unattached_floating_ips() shade-1.31.0/shade/tests/unit/test_meta.py0000666000175000017500000012035513440327640020461 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import shade from shade import meta from shade.tests import fakes from shade.tests.unit import base PRIVATE_V4 = '198.51.100.3' PUBLIC_V4 = '192.0.2.99' PUBLIC_V6 = '2001:0db8:face:0da0:face::0b00:1c' # rfc3849 class FakeCloud(object): region_name = 'test-region' name = 'test-name' private = False force_ipv4 = False service_val = True _unused = "useless" _local_ipv6 = True def get_flavor_name(self, id): return 'test-flavor-name' def get_image_name(self, id): return 'test-image-name' def get_volumes(self, server): return [] def has_service(self, service_name): return self.service_val def use_internal_network(self): return True def use_external_network(self): return True def get_internal_networks(self): return [] def get_external_networks(self): return [] def get_internal_ipv4_networks(self): return [] def get_external_ipv4_networks(self): return [] def get_internal_ipv6_networks(self): return [] def get_external_ipv6_networks(self): return [] def list_server_security_groups(self, server): return [] def get_default_network(self): return None standard_fake_server = fakes.make_fake_server( server_id='test-id-0', name='test-id-0', status='ACTIVE', addresses={'private': [{'OS-EXT-IPS:type': 'fixed', 'addr': PRIVATE_V4, 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'addr': PUBLIC_V4, 'version': 4}]}, flavor={'id': '101'}, image={'id': '471c2475-da2f-47ac-aba5-cb4aa3d546f5'}, ) standard_fake_server['metadata'] = {'group': 'test-group'} SUBNETS_WITH_NAT = [ { u'name': u'', u'enable_dhcp': True, u'network_id': u'5ef0358f-9403-4f7b-9151-376ca112abf7', u'tenant_id': u'29c79f394b2946f1a0f8446d715dc301', u'dns_nameservers': [], u'ipv6_ra_mode': None, u'allocation_pools': [ { u'start': u'10.10.10.2', u'end': u'10.10.10.254' } ], u'gateway_ip': u'10.10.10.1', u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.10.10.0/24', u'id': u'14025a85-436e-4418-b0ee-f5b12a50f9b4' }, ] OSIC_NETWORKS = [ { u'admin_state_up': True, u'id': u'7004a83a-13d3-4dcd-8cf5-52af1ace4cae', u'mtu': 0, u'name': u'GATEWAY_NET', u'router:external': True, u'shared': True, u'status': u'ACTIVE', u'subnets': [u'cf785ee0-6cc9-4712-be3d-0bf6c86cf455'], u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' }, { u'admin_state_up': True, u'id': u'405abfcc-77dc-49b2-a271-139619ac9b26', u'mtu': 0, u'name': u'openstackjenkins-network1', u'router:external': False, u'shared': False, u'status': u'ACTIVE', u'subnets': [u'a47910bc-f649-45db-98ec-e2421c413f4e'], u'tenant_id': 
u'7e9c4d5842b3451d94417bd0af03a0f4' }, { u'admin_state_up': True, u'id': u'54753d2c-0a58-4928-9b32-084c59dd20a6', u'mtu': 0, u'name': u'GATEWAY_NET_V6', u'router:external': True, u'shared': True, u'status': u'ACTIVE', u'subnets': [u'9c21d704-a8b9-409a-b56d-501cb518d380', u'7cb0ce07-64c3-4a3d-92d3-6f11419b45b9'], u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' } ] OSIC_SUBNETS = [ { u'allocation_pools': [{ u'end': u'172.99.106.254', u'start': u'172.99.106.5'}], u'cidr': u'172.99.106.0/24', u'dns_nameservers': [u'69.20.0.164', u'69.20.0.196'], u'enable_dhcp': True, u'gateway_ip': u'172.99.106.1', u'host_routes': [], u'id': u'cf785ee0-6cc9-4712-be3d-0bf6c86cf455', u'ip_version': 4, u'ipv6_address_mode': None, u'ipv6_ra_mode': None, u'name': u'GATEWAY_NET', u'network_id': u'7004a83a-13d3-4dcd-8cf5-52af1ace4cae', u'subnetpool_id': None, u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' }, { u'allocation_pools': [{ u'end': u'10.0.1.254', u'start': u'10.0.1.2'}], u'cidr': u'10.0.1.0/24', u'dns_nameservers': [u'8.8.8.8', u'8.8.4.4'], u'enable_dhcp': True, u'gateway_ip': u'10.0.1.1', u'host_routes': [], u'id': u'a47910bc-f649-45db-98ec-e2421c413f4e', u'ip_version': 4, u'ipv6_address_mode': None, u'ipv6_ra_mode': None, u'name': u'openstackjenkins-subnet1', u'network_id': u'405abfcc-77dc-49b2-a271-139619ac9b26', u'subnetpool_id': None, u'tenant_id': u'7e9c4d5842b3451d94417bd0af03a0f4' }, { u'allocation_pools': [{ u'end': u'10.255.255.254', u'start': u'10.0.0.2'}], u'cidr': u'10.0.0.0/8', u'dns_nameservers': [u'8.8.8.8', u'8.8.4.4'], u'enable_dhcp': True, u'gateway_ip': u'10.0.0.1', u'host_routes': [], u'id': u'9c21d704-a8b9-409a-b56d-501cb518d380', u'ip_version': 4, u'ipv6_address_mode': None, u'ipv6_ra_mode': None, u'name': u'GATEWAY_SUBNET_V6V4', u'network_id': u'54753d2c-0a58-4928-9b32-084c59dd20a6', u'subnetpool_id': None, u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' }, { u'allocation_pools': [{ u'end': u'2001:4800:1ae1:18:ffff:ffff:ffff:ffff', u'start': u'2001:4800:1ae1:18::2'}], u'cidr': u'2001:4800:1ae1:18::/64', u'dns_nameservers': [u'2001:4860:4860::8888'], u'enable_dhcp': True, u'gateway_ip': u'2001:4800:1ae1:18::1', u'host_routes': [], u'id': u'7cb0ce07-64c3-4a3d-92d3-6f11419b45b9', u'ip_version': 6, u'ipv6_address_mode': u'dhcpv6-stateless', u'ipv6_ra_mode': None, u'name': u'GATEWAY_SUBNET_V6V6', u'network_id': u'54753d2c-0a58-4928-9b32-084c59dd20a6', u'subnetpool_id': None, u'tenant_id': u'7a1ca9f7cc4e4b13ac0ed2957f1e8c32' } ] class TestMeta(base.RequestsMockTestCase): def test_find_nova_addresses_key_name(self): # Note 198.51.100.0/24 is TEST-NET-2 from rfc5737 addrs = {'public': [{'addr': '198.51.100.1', 'version': 4}], 'private': [{'addr': '192.0.2.5', 'version': 4}]} self.assertEqual( ['198.51.100.1'], meta.find_nova_addresses(addrs, key_name='public')) self.assertEqual([], meta.find_nova_addresses(addrs, key_name='foo')) def test_find_nova_addresses_ext_tag(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses(addrs, ext_tag='fixed')) self.assertEqual([], meta.find_nova_addresses(addrs, ext_tag='foo')) def test_find_nova_addresses_key_name_and_ext_tag(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed')) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='public', ext_tag='foo')) self.assertEqual([], 
meta.find_nova_addresses( addrs, key_name='bar', ext_tag='fixed')) def test_find_nova_addresses_all(self): addrs = {'public': [{'OS-EXT-IPS:type': 'fixed', 'addr': '198.51.100.2', 'version': 4}]} self.assertEqual( ['198.51.100.2'], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed', version=4)) self.assertEqual([], meta.find_nova_addresses( addrs, key_name='public', ext_tag='fixed', version=6)) def test_find_nova_addresses_floating_first(self): # Note 198.51.100.0/24 is TEST-NET-2 from rfc5737 addrs = { 'private': [{ 'addr': '192.0.2.5', 'version': 4, 'OS-EXT-IPS:type': 'fixed'}], 'public': [{ 'addr': '198.51.100.1', 'version': 4, 'OS-EXT-IPS:type': 'floating'}]} self.assertEqual( ['198.51.100.1', '192.0.2.5'], meta.find_nova_addresses(addrs)) def test_get_server_ip(self): srv = meta.obj_to_munch(standard_fake_server) self.assertEqual( PRIVATE_V4, meta.get_server_ip(srv, ext_tag='fixed')) self.assertEqual( PUBLIC_V4, meta.get_server_ip(srv, ext_tag='floating')) def test_get_server_private_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net-name'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'private': [{'OS-EXT-IPS:type': 'fixed', 'addr': PRIVATE_V4, 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'addr': PUBLIC_V4, 'version': 4}]} ) self.assertEqual( PRIVATE_V4, meta.get_server_private_ip(srv, self.cloud)) self.assert_calls() def test_get_server_multiple_private_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) shared_mac = '11:22:33:44:55:66' distinct_mac = '66:55:44:33:22:11' srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': distinct_mac, 'addr': '10.0.0.100', 'version': 4}, {'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': shared_mac, 'addr': '10.0.0.101', 'version': 4}], 'public': [{'OS-EXT-IPS:type': 'floating', 'OS-EXT-IPS-MAC:mac_addr': shared_mac, 'addr': PUBLIC_V4, 'version': 4}]} ) self.assertEqual( '10.0.0.101', meta.get_server_private_ip(srv, self.cloud)) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') def test_get_server_private_ip_devstack( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes, mock_has_service): mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] mock_has_service.return_value = True self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/ports.json?' 
'device_id=test-id'), json={'ports': [{ 'id': 'test_port_id', 'mac_address': 'fa:16:3e:ae:7d:42', 'device_id': 'test-id'}]} ), dict(method='GET', uri=('https://network.example.com/v2.0/' 'floatingips.json?port_id=test_port_id'), json={'floatingips': []}), dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ {'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False }, {'id': 'private', 'name': 'private'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': PRIVATE_V4, u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42' }]} )) self.assertEqual(PRIVATE_V4, srv['private_v4']) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') def test_get_server_private_ip_no_fip( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud._floating_ip_source = None mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ {'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False, }, {'id': 'private', 'name': 'private'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': PRIVATE_V4, u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42' }]} )) self.assertEqual(PRIVATE_V4, srv['private_v4']) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_no_fips( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud._floating_ip_source = None mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ { 'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False, }, { 'id': 'private', 'name': 'private'}]} ), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = 
self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'addr': PRIVATE_V4, u'version': 4, }]} )) self.assertEqual(PRIVATE_V4, srv['private_v4']) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'has_service') @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_missing_fips( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes, mock_has_service): mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] mock_has_service.return_value = True self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/ports.json?' 'device_id=test-id'), json={'ports': [{ 'id': 'test_port_id', 'mac_address': 'fa:16:3e:ae:7d:42', 'device_id': 'test-id'}]} ), dict(method='GET', uri=('https://network.example.com/v2.0/floatingips.json' '?port_id=test_port_id'), json={'floatingips': [{ 'id': 'floating-ip-id', 'port_id': 'test_port_id', 'fixed_ip_address': PRIVATE_V4, 'floating_ip_address': PUBLIC_V4, }]}), dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [ { 'id': 'test_pnztt_net', 'name': 'test_pnztt_net', 'router:external': False, }, { 'id': 'private', 'name': 'private', } ]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={u'test_pnztt_net': [{ u'addr': PRIVATE_V4, u'version': 4, 'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:ae:7d:42', }]} )) self.assertEqual(PUBLIC_V4, srv['public_v4']) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_rackspace_v6( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud.cloud_config.config['has_network'] = False self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } )) self.assertEqual("10.223.160.141", srv['private_v4']) self.assertEqual("104.130.246.91", srv['public_v4']) 
self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['public_v6']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['interface_ip']) self.assert_calls() @mock.patch.object(shade.OpenStackCloud, 'get_volumes') @mock.patch.object(shade.OpenStackCloud, 'get_image_name') @mock.patch.object(shade.OpenStackCloud, 'get_flavor_name') def test_get_server_cloud_osic_split( self, mock_get_flavor_name, mock_get_image_name, mock_get_volumes): self.cloud._floating_ip_source = None self.cloud.force_ipv4 = False self.cloud._local_ipv6 = True self.cloud._external_ipv4_names = ['GATEWAY_NET'] self.cloud._external_ipv6_names = ['GATEWAY_NET_V6'] self.cloud._internal_ipv4_names = ['GATEWAY_NET_V6'] self.cloud._internal_ipv6_names = [] mock_get_image_name.return_value = 'cirros-0.3.4-x86_64-uec' mock_get_flavor_name.return_value = 'm1.tiny' mock_get_volumes.return_value = [] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': OSIC_NETWORKS}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': OSIC_SUBNETS}), dict(method='GET', uri='{endpoint}/servers/test-id/os-security-groups'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={'security_groups': []}) ]) srv = self.cloud.get_openstack_vars(fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', flavor={u'id': u'1'}, image={ 'name': u'cirros-0.3.4-x86_64-uec', u'id': u'f93d000b-7c29-4489-b375-3641a1758fe1'}, addresses={ 'private': [{ 'addr': "10.223.160.141", 'version': 4 }], 'public': [{ 'addr': "104.130.246.91", 'version': 4 }, { 'addr': "2001:4800:7819:103:be76:4eff:fe05:8525", 'version': 6 }] } )) self.assertEqual("10.223.160.141", srv['private_v4']) self.assertEqual("104.130.246.91", srv['public_v4']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['public_v6']) self.assertEqual( "2001:4800:7819:103:be76:4eff:fe05:8525", srv['interface_ip']) self.assert_calls() def test_get_server_external_ipv4_neutron(self): # Testing Clouds with Neutron self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net', 'router:external': True }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PUBLIC_V4, 'version': 4}]}, ) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) self.assert_calls() def test_get_server_external_provider_ipv4_neutron(self): # Testing Clouds with Neutron self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net', 'provider:network_type': 'vlan', 'provider:physical_network': 'vlan', }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PUBLIC_V4, 'version': 4}]}, ) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) self.assert_calls() def test_get_server_internal_provider_ipv4_neutron(self): # Testing Clouds with Neutron self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 
'name': 'test-net', 'router:external': False, 'provider:network_type': 'vxlan', 'provider:physical_network': None, }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PRIVATE_V4, 'version': 4}]}, ) self.assertIsNone( meta.get_server_external_ipv4(cloud=self.cloud, server=srv)) int_ip = meta.get_server_private_ip(cloud=self.cloud, server=srv) self.assertEqual(PRIVATE_V4, int_ip) self.assert_calls() def test_get_server_external_none_ipv4_neutron(self): # Testing Clouds with Neutron self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [{ 'id': 'test-net-id', 'name': 'test-net', 'router:external': False, }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': SUBNETS_WITH_NAT}) ]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{ 'addr': PUBLIC_V4, 'version': 4}]}, ) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertIsNone(ip) self.assert_calls() def test_get_server_external_ipv4_neutron_accessIPv4(self): srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE') srv['accessIPv4'] = PUBLIC_V4 ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) def test_get_server_external_ipv4_neutron_accessIPv6(self): srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE') srv['accessIPv6'] = PUBLIC_V6 ip = meta.get_server_external_ipv6(server=srv) self.assertEqual(PUBLIC_V6, ip) def test_get_server_external_ipv4_neutron_exception(self): # Testing Clouds with a non working Neutron self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', status_code=404)]) srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'public': [{'addr': PUBLIC_V4, 'version': 4}]} ) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) self.assert_calls() def test_get_server_external_ipv4_nova_public(self): # Testing Clouds w/o Neutron and a network named public self.cloud.cloud_config.config['has_network'] = False srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'public': [{'addr': PUBLIC_V4, 'version': 4}]}) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertEqual(PUBLIC_V4, ip) def test_get_server_external_ipv4_nova_none(self): # Testing Clouds w/o Neutron or a globally routable IP self.cloud.cloud_config.config['has_network'] = False srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={'test-net': [{'addr': PRIVATE_V4}]}) ip = meta.get_server_external_ipv4(cloud=self.cloud, server=srv) self.assertIsNone(ip) def test_get_server_external_ipv6(self): srv = fakes.make_fake_server( server_id='test-id', name='test-name', status='ACTIVE', addresses={ 'test-net': [ {'addr': PUBLIC_V4, 'version': 4}, {'addr': PUBLIC_V6, 'version': 6} ] } ) ip = meta.get_server_external_ipv6(srv) self.assertEqual(PUBLIC_V6, ip) def test_get_groups_from_server(self): server_vars = {'flavor': 'test-flavor', 'image': 'test-image', 'az': 'test-az'} self.assertEqual( ['test-name', 'test-region', 'test-name_test-region', 'test-group', 'instance-test-id-0', 
'meta-group_test-group', 'test-az', 'test-region_test-az', 'test-name_test-region_test-az'], meta.get_groups_from_server( FakeCloud(), meta.obj_to_munch(standard_fake_server), server_vars ) ) def test_obj_list_to_munch(self): """Test conversion of a list of objects to a list of dictonaries""" class obj0(object): value = 0 class obj1(object): value = 1 list = [obj0, obj1] new_list = meta.obj_list_to_munch(list) self.assertEqual(new_list[0]['value'], 0) self.assertEqual(new_list[1]['value'], 1) @mock.patch.object(FakeCloud, 'list_server_security_groups') def test_get_security_groups(self, mock_list_server_security_groups): '''This test verifies that calling get_hostvars_froms_server ultimately calls list_server_security_groups, and that the return value from list_server_security_groups ends up in server['security_groups'].''' mock_list_server_security_groups.return_value = [ {'name': 'testgroup', 'id': '1'}] server = meta.obj_to_munch(standard_fake_server) hostvars = meta.get_hostvars_from_server(FakeCloud(), server) mock_list_server_security_groups.assert_called_once_with(server) self.assertEqual('testgroup', hostvars['security_groups'][0]['name']) @mock.patch.object(shade.meta, 'get_server_external_ipv6') @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_basic_hostvars( self, mock_get_server_external_ipv4, mock_get_server_external_ipv6): mock_get_server_external_ipv4.return_value = PUBLIC_V4 mock_get_server_external_ipv6.return_value = PUBLIC_V6 hostvars = meta.get_hostvars_from_server( FakeCloud(), self.cloud._normalize_server( meta.obj_to_munch(standard_fake_server))) self.assertNotIn('links', hostvars) self.assertEqual(PRIVATE_V4, hostvars['private_v4']) self.assertEqual(PUBLIC_V4, hostvars['public_v4']) self.assertEqual(PUBLIC_V6, hostvars['public_v6']) self.assertEqual(PUBLIC_V6, hostvars['interface_ip']) self.assertEqual('RegionOne', hostvars['region']) self.assertEqual('_test_cloud_', hostvars['cloud']) self.assertIn('location', hostvars) self.assertEqual('_test_cloud_', hostvars['location']['cloud']) self.assertEqual('RegionOne', hostvars['location']['region_name']) self.assertEqual('admin', hostvars['location']['project']['name']) self.assertEqual("test-image-name", hostvars['image']['name']) self.assertEqual( standard_fake_server['image']['id'], hostvars['image']['id']) self.assertNotIn('links', hostvars['image']) self.assertEqual( standard_fake_server['flavor']['id'], hostvars['flavor']['id']) self.assertEqual("test-flavor-name", hostvars['flavor']['name']) self.assertNotIn('links', hostvars['flavor']) # test having volumes # test volume exception self.assertEqual([], hostvars['volumes']) @mock.patch.object(shade.meta, 'get_server_external_ipv6') @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_ipv4_hostvars( self, mock_get_server_external_ipv4, mock_get_server_external_ipv6): mock_get_server_external_ipv4.return_value = PUBLIC_V4 mock_get_server_external_ipv6.return_value = PUBLIC_V6 fake_cloud = FakeCloud() fake_cloud.force_ipv4 = True hostvars = meta.get_hostvars_from_server( fake_cloud, meta.obj_to_munch(standard_fake_server)) self.assertEqual(PUBLIC_V4, hostvars['interface_ip']) @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_private_interface_ip(self, mock_get_server_external_ipv4): mock_get_server_external_ipv4.return_value = PUBLIC_V4 cloud = FakeCloud() cloud.private = True hostvars = meta.get_hostvars_from_server( cloud, meta.obj_to_munch(standard_fake_server)) self.assertEqual(PRIVATE_V4, 
hostvars['interface_ip']) @mock.patch.object(shade.meta, 'get_server_external_ipv4') def test_image_string(self, mock_get_server_external_ipv4): mock_get_server_external_ipv4.return_value = PUBLIC_V4 server = standard_fake_server server['image'] = 'fake-image-id' hostvars = meta.get_hostvars_from_server( FakeCloud(), meta.obj_to_munch(server)) self.assertEqual('fake-image-id', hostvars['image']['id']) def test_az(self): server = standard_fake_server server['OS-EXT-AZ:availability_zone'] = 'az1' hostvars = self.cloud._normalize_server(meta.obj_to_munch(server)) self.assertEqual('az1', hostvars['az']) def test_current_location(self): self.assertEqual({ 'cloud': '_test_cloud_', 'project': { 'id': mock.ANY, 'name': 'admin', 'domain_id': None, 'domain_name': 'default' }, 'region_name': u'RegionOne', 'zone': None}, self.cloud.current_location) def test_current_project(self): self.assertEqual({ 'id': mock.ANY, 'name': 'admin', 'domain_id': None, 'domain_name': 'default'}, self.cloud.current_project) def test_has_volume(self): mock_cloud = mock.MagicMock() fake_volume = fakes.FakeVolume( id='volume1', status='available', name='Volume 1 Display Name', attachments=[{'device': '/dev/sda0'}]) fake_volume_dict = meta.obj_to_munch(fake_volume) mock_cloud.get_volumes.return_value = [fake_volume_dict] hostvars = meta.get_hostvars_from_server( mock_cloud, meta.obj_to_munch(standard_fake_server)) self.assertEqual('volume1', hostvars['volumes'][0]['id']) self.assertEqual('/dev/sda0', hostvars['volumes'][0]['device']) def test_has_no_volume_service(self): fake_cloud = FakeCloud() fake_cloud.service_val = False hostvars = meta.get_hostvars_from_server( fake_cloud, meta.obj_to_munch(standard_fake_server)) self.assertEqual([], hostvars['volumes']) def test_unknown_volume_exception(self): mock_cloud = mock.MagicMock() class FakeException(Exception): pass def side_effect(*args): raise FakeException("No Volumes") mock_cloud.get_volumes.side_effect = side_effect self.assertRaises( FakeException, meta.get_hostvars_from_server, mock_cloud, meta.obj_to_munch(standard_fake_server)) def test_obj_to_munch(self): cloud = FakeCloud() cloud.subcloud = FakeCloud() cloud_dict = meta.obj_to_munch(cloud) self.assertEqual(FakeCloud.name, cloud_dict['name']) self.assertNotIn('_unused', cloud_dict) self.assertNotIn('get_flavor_name', cloud_dict) self.assertNotIn('subcloud', cloud_dict) self.assertTrue(hasattr(cloud_dict, 'name')) self.assertEqual(cloud_dict.name, cloud_dict['name']) def test_obj_to_munch_subclass(self): class FakeObjDict(dict): additional = 1 obj = FakeObjDict(foo='bar') obj_dict = meta.obj_to_munch(obj) self.assertIn('additional', obj_dict) self.assertIn('foo', obj_dict) self.assertEqual(obj_dict['additional'], 1) self.assertEqual(obj_dict['foo'], 'bar') shade-1.31.0/shade/tests/unit/test_groups.py0000666000175000017500000000753013440327640021051 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
from shade.tests.unit import base class TestGroups(base.RequestsMockTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(TestGroups, self).setUp( cloud_config_fixture=cloud_config_fixture) self.addCleanup(self.assert_calls) def get_mock_url(self, service_type='identity', interface='admin', resource='groups', append=None, base_url_append='v3'): return super(TestGroups, self).get_mock_url( service_type='identity', interface='admin', resource=resource, append=append, base_url_append=base_url_append) def test_list_groups(self): group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}) ]) self.op_cloud.list_groups() def test_get_group(self): group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}), ]) self.op_cloud.get_group(group_data.group_id) def test_delete_group(self): group_data = self._get_group_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='DELETE', uri=self.get_mock_url(append=[group_data.group_id]), status_code=204), ]) self.assertTrue(self.op_cloud.delete_group(group_data.group_id)) def test_create_group(self): domain_data = self._get_domain_data() group_data = self._get_group_data(domain_id=domain_data.domain_id) self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='domains', append=[domain_data.domain_id]), status_code=200, json=domain_data.json_response), dict(method='POST', uri=self.get_mock_url(), status_code=200, json=group_data.json_response, validate=dict(json=group_data.json_request)) ]) self.op_cloud.create_group( name=group_data.group_name, description=group_data.description, domain=group_data.domain_id) def test_update_group(self): group_data = self._get_group_data() # Domain ID is not sent group_data.json_request['group'].pop('domain_id') self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'groups': [group_data.json_response['group']]}), dict(method='PATCH', uri=self.get_mock_url(append=[group_data.group_id]), status_code=200, json=group_data.json_response, validate=dict(json=group_data.json_request)) ]) self.op_cloud.update_group(group_data.group_id, group_data.group_name, group_data.description) shade-1.31.0/shade/tests/unit/test_services.py0000666000175000017500000002676213440327640021365 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_cloud_services ---------------------------------- Tests Keystone services commands. 
""" from shade import OpenStackCloudException from shade.exc import OpenStackCloudUnavailableFeature from shade.tests.unit import base from testtools import matchers class CloudServices(base.RequestsMockTestCase): def setUp(self, cloud_config_fixture='clouds.yaml'): super(CloudServices, self).setUp(cloud_config_fixture) def get_mock_url(self, service_type='identity', interface='admin', resource='services', append=None, base_url_append='v3'): return super(CloudServices, self).get_mock_url( service_type, interface, resource, append, base_url_append) def test_create_service_v2(self): self.use_keystone_v2() service_data = self._get_service_data(name='a service', type='network', description='A test service') reference_req = service_data.json_request.copy() reference_req.pop('enabled') self.register_uris([ dict(method='POST', uri=self.get_mock_url(base_url_append='OS-KSADM'), status_code=200, json=service_data.json_response_v2, validate=dict(json={'OS-KSADM:service': reference_req})) ]) service = self.op_cloud.create_service( name=service_data.service_name, service_type=service_data.service_type, description=service_data.description) self.assertThat(service.name, matchers.Equals(service_data.service_name)) self.assertThat(service.id, matchers.Equals(service_data.service_id)) self.assertThat(service.description, matchers.Equals(service_data.description)) self.assertThat(service.type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_create_service_v3(self): service_data = self._get_service_data(name='a service', type='network', description='A test service') self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=service_data.json_response_v3, validate=dict(json={'service': service_data.json_request})) ]) service = self.op_cloud.create_service( name=service_data.service_name, service_type=service_data.service_type, description=service_data.description) self.assertThat(service.name, matchers.Equals(service_data.service_name)) self.assertThat(service.id, matchers.Equals(service_data.service_id)) self.assertThat(service.description, matchers.Equals(service_data.description)) self.assertThat(service.type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_update_service_v2(self): self.use_keystone_v2() # NOTE(SamYaple): Update service only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.op_cloud.update_service, 'service_id', name='new name') def test_update_service_v3(self): service_data = self._get_service_data(name='a service', type='network', description='A test service') request = service_data.json_request.copy() request['enabled'] = False resp = service_data.json_response_v3.copy() resp['enabled'] = False request.pop('description') request.pop('name') request.pop('type') self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [resp['service']]}), dict(method='PATCH', uri=self.get_mock_url(append=[service_data.service_id]), status_code=200, json=resp, validate=dict(json={'service': request})) ]) service = self.op_cloud.update_service(service_data.service_id, enabled=False) self.assertThat(service.name, matchers.Equals(service_data.service_name)) self.assertThat(service.id, matchers.Equals(service_data.service_id)) self.assertThat(service.description, matchers.Equals(service_data.description)) self.assertThat(service.type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_list_services(self): service_data = 
self._get_service_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [service_data.json_response_v3['service']]}) ]) services = self.op_cloud.list_services() self.assertThat(len(services), matchers.Equals(1)) self.assertThat(services[0].id, matchers.Equals(service_data.service_id)) self.assertThat(services[0].name, matchers.Equals(service_data.service_name)) self.assertThat(services[0].type, matchers.Equals(service_data.service_type)) self.assert_calls() def test_get_service(self): service_data = self._get_service_data() service2_data = self._get_service_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=400), ]) # Search by id service = self.op_cloud.get_service(name_or_id=service_data.service_id) self.assertThat(service.id, matchers.Equals(service_data.service_id)) # Search by name service = self.op_cloud.get_service( name_or_id=service_data.service_name) # test we are getting exactly 1 element self.assertThat(service.id, matchers.Equals(service_data.service_id)) # Not found service = self.op_cloud.get_service(name_or_id='INVALID SERVICE') self.assertIs(None, service) # Multiple matches # test we are getting an Exception self.assertRaises(OpenStackCloudException, self.op_cloud.get_service, name_or_id=None, filters={'type': 'type2'}) self.assert_calls() def test_search_services(self): service_data = self._get_service_data() service2_data = self._get_service_data(type=service_data.service_type) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service'], service2_data.json_response_v3['service']]}), ]) # Search by id services = self.op_cloud.search_services( name_or_id=service_data.service_id) # test we are getting exactly 1 element self.assertThat(len(services), matchers.Equals(1)) self.assertThat(services[0].id, matchers.Equals(service_data.service_id)) # Search by name services = self.op_cloud.search_services( name_or_id=service_data.service_name) # test we are getting exactly 1 element self.assertThat(len(services), matchers.Equals(1)) self.assertThat(services[0].name, matchers.Equals(service_data.service_name)) # Not found services = self.op_cloud.search_services(name_or_id='!INVALID!') self.assertThat(len(services), matchers.Equals(0)) # Multiple matches services = self.op_cloud.search_services( filters={'type': service_data.service_type}) # test we are getting exactly 2 elements self.assertThat(len(services), 
matchers.Equals(2)) self.assertThat(services[0].id, matchers.Equals(service_data.service_id)) self.assertThat(services[1].id, matchers.Equals(service2_data.service_id)) self.assert_calls() def test_delete_service(self): service_data = self._get_service_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service']]}), dict(method='DELETE', uri=self.get_mock_url(append=[service_data.service_id]), status_code=204), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'services': [ service_data.json_response_v3['service']]}), dict(method='DELETE', uri=self.get_mock_url(append=[service_data.service_id]), status_code=204) ]) # Delete by name self.op_cloud.delete_service(name_or_id=service_data.service_name) # Delete by id self.op_cloud.delete_service(service_data.service_id) self.assert_calls() shade-1.31.0/shade/tests/unit/test_endpoints.py0000666000175000017500000003727513440327640021546 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_cloud_endpoints ---------------------------------- Tests Keystone endpoints commands. """ import uuid from shade.exc import OpenStackCloudException from shade.exc import OpenStackCloudUnavailableFeature from shade.tests.unit import base from testtools import matchers class TestCloudEndpoints(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource='endpoints', append=None, base_url_append='v3'): return super(TestCloudEndpoints, self).get_mock_url( service_type, interface, resource, append, base_url_append) def _dummy_url(self): return 'https://%s.example.com/' % uuid.uuid4().hex def test_create_endpoint_v2(self): self.use_keystone_v2() service_data = self._get_service_data() endpoint_data = self._get_endpoint_v2_data( service_data.service_id, public_url=self._dummy_url(), internal_url=self._dummy_url(), admin_url=self._dummy_url()) other_endpoint_data = self._get_endpoint_v2_data( service_data.service_id, region=endpoint_data.region, public_url=endpoint_data.public_url) # correct the keys self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource='services', base_url_append='OS-KSADM'), status_code=200, json={'OS-KSADM:services': [ service_data.json_response_v2['OS-KSADM:service']]}), dict(method='POST', uri=self.get_mock_url(base_url_append=None), status_code=200, json=endpoint_data.json_response, validate=dict(json=endpoint_data.json_request)), dict(method='GET', uri=self.get_mock_url( resource='services', base_url_append='OS-KSADM'), status_code=200, json={'OS-KSADM:services': [ service_data.json_response_v2['OS-KSADM:service']]}), # NOTE(notmorgan): There is a stupid happening here, we do two # gets on the services for some insane reason (read: keystoneclient # is bad and should feel bad). 
dict(method='GET', uri=self.get_mock_url( resource='services', base_url_append='OS-KSADM'), status_code=200, json={'OS-KSADM:services': [ service_data.json_response_v2['OS-KSADM:service']]}), dict(method='POST', uri=self.get_mock_url(base_url_append=None), status_code=200, json=other_endpoint_data.json_response, validate=dict(json=other_endpoint_data.json_request)) ]) endpoints = self.op_cloud.create_endpoint( service_name_or_id=service_data.service_id, region=endpoint_data.region, public_url=endpoint_data.public_url, internal_url=endpoint_data.internal_url, admin_url=endpoint_data.admin_url ) self.assertThat(endpoints[0].id, matchers.Equals(endpoint_data.endpoint_id)) self.assertThat(endpoints[0].region, matchers.Equals(endpoint_data.region)) self.assertThat(endpoints[0].publicURL, matchers.Equals(endpoint_data.public_url)) self.assertThat(endpoints[0].internalURL, matchers.Equals(endpoint_data.internal_url)) self.assertThat(endpoints[0].adminURL, matchers.Equals(endpoint_data.admin_url)) # test v3 semantics on v2.0 endpoint self.assertRaises(OpenStackCloudException, self.op_cloud.create_endpoint, service_name_or_id='service1', interface='mock_admin_url', url='admin') endpoints_3on2 = self.op_cloud.create_endpoint( service_name_or_id=service_data.service_id, region=endpoint_data.region, interface='public', url=endpoint_data.public_url ) # test keys and values are correct self.assertThat( endpoints_3on2[0].region, matchers.Equals(other_endpoint_data.region)) self.assertThat( endpoints_3on2[0].publicURL, matchers.Equals(other_endpoint_data.public_url)) self.assertThat(endpoints_3on2[0].get('internalURL'), matchers.Equals(None)) self.assertThat(endpoints_3on2[0].get('adminURL'), matchers.Equals(None)) self.assert_calls() def test_create_endpoint_v3(self): service_data = self._get_service_data() public_endpoint_data = self._get_endpoint_v3_data( service_id=service_data.service_id, interface='public', url=self._dummy_url()) public_endpoint_data_disabled = self._get_endpoint_v3_data( service_id=service_data.service_id, interface='public', url=self._dummy_url(), enabled=False) admin_endpoint_data = self._get_endpoint_v3_data( service_id=service_data.service_id, interface='admin', url=self._dummy_url(), region=public_endpoint_data.region) internal_endpoint_data = self._get_endpoint_v3_data( service_id=service_data.service_id, interface='internal', url=self._dummy_url(), region=public_endpoint_data.region) self.register_uris([ dict(method='GET', uri=self.get_mock_url(resource='services'), status_code=200, json={'services': [ service_data.json_response_v3['service']]}), dict(method='POST', uri=self.get_mock_url(), status_code=200, json=public_endpoint_data_disabled.json_response, validate=dict( json=public_endpoint_data_disabled.json_request)), dict(method='GET', uri=self.get_mock_url(resource='services'), status_code=200, json={'services': [ service_data.json_response_v3['service']]}), dict(method='POST', uri=self.get_mock_url(), status_code=200, json=public_endpoint_data.json_response, validate=dict(json=public_endpoint_data.json_request)), dict(method='POST', uri=self.get_mock_url(), status_code=200, json=internal_endpoint_data.json_response, validate=dict(json=internal_endpoint_data.json_request)), dict(method='POST', uri=self.get_mock_url(), status_code=200, json=admin_endpoint_data.json_response, validate=dict(json=admin_endpoint_data.json_request)), ]) endpoints = self.op_cloud.create_endpoint( service_name_or_id=service_data.service_id, region=public_endpoint_data_disabled.region, 
url=public_endpoint_data_disabled.url, interface=public_endpoint_data_disabled.interface, enabled=False) # Test endpoint values self.assertThat( endpoints[0].id, matchers.Equals(public_endpoint_data_disabled.endpoint_id)) self.assertThat(endpoints[0].url, matchers.Equals(public_endpoint_data_disabled.url)) self.assertThat( endpoints[0].interface, matchers.Equals(public_endpoint_data_disabled.interface)) self.assertThat( endpoints[0].region, matchers.Equals(public_endpoint_data_disabled.region)) self.assertThat( endpoints[0].region_id, matchers.Equals(public_endpoint_data_disabled.region)) self.assertThat(endpoints[0].enabled, matchers.Equals(public_endpoint_data_disabled.enabled)) endpoints_2on3 = self.op_cloud.create_endpoint( service_name_or_id=service_data.service_id, region=public_endpoint_data.region, public_url=public_endpoint_data.url, internal_url=internal_endpoint_data.url, admin_url=admin_endpoint_data.url) # Three endpoints should be returned, public, internal, and admin self.assertThat(len(endpoints_2on3), matchers.Equals(3)) # test keys and values are correct for each endpoint created for result, reference in zip( endpoints_2on3, [public_endpoint_data, internal_endpoint_data, admin_endpoint_data] ): self.assertThat(result.id, matchers.Equals(reference.endpoint_id)) self.assertThat(result.url, matchers.Equals(reference.url)) self.assertThat(result.interface, matchers.Equals(reference.interface)) self.assertThat(result.region, matchers.Equals(reference.region)) self.assertThat(result.enabled, matchers.Equals(reference.enabled)) self.assert_calls() def test_update_endpoint_v2(self): self.use_keystone_v2() self.assertRaises(OpenStackCloudUnavailableFeature, self.op_cloud.update_endpoint, 'endpoint_id') def test_update_endpoint_v3(self): service_data = self._get_service_data() dummy_url = self._dummy_url() endpoint_data = self._get_endpoint_v3_data( service_id=service_data.service_id, interface='admin', enabled=False) reference_request = endpoint_data.json_request.copy() reference_request['endpoint']['url'] = dummy_url self.register_uris([ dict(method='PATCH', uri=self.get_mock_url(append=[endpoint_data.endpoint_id]), status_code=200, json=endpoint_data.json_response, validate=dict(json=reference_request)) ]) endpoint = self.op_cloud.update_endpoint( endpoint_data.endpoint_id, service_name_or_id=service_data.service_id, region=endpoint_data.region, url=dummy_url, interface=endpoint_data.interface, enabled=False ) # test keys and values are correct self.assertThat(endpoint.id, matchers.Equals(endpoint_data.endpoint_id)) self.assertThat(endpoint.service_id, matchers.Equals(service_data.service_id)) self.assertThat(endpoint.url, matchers.Equals(endpoint_data.url)) self.assertThat(endpoint.interface, matchers.Equals(endpoint_data.interface)) self.assert_calls() def test_list_endpoints(self): endpoints_data = [self._get_endpoint_v3_data() for e in range(1, 10)] self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'endpoints': [e.json_response['endpoint'] for e in endpoints_data]}) ]) endpoints = self.op_cloud.list_endpoints() # test we are getting exactly len(self.mock_endpoints) elements self.assertThat(len(endpoints), matchers.Equals(len(endpoints_data))) # test keys and values are correct for i, ep in enumerate(endpoints_data): self.assertThat(endpoints[i].id, matchers.Equals(ep.endpoint_id)) self.assertThat(endpoints[i].service_id, matchers.Equals(ep.service_id)) self.assertThat(endpoints[i].url, matchers.Equals(ep.url)) 
self.assertThat(endpoints[i].interface, matchers.Equals(ep.interface)) self.assert_calls() def test_search_endpoints(self): endpoints_data = [self._get_endpoint_v3_data(region='region1') for e in range(0, 2)] endpoints_data.extend([self._get_endpoint_v3_data() for e in range(1, 8)]) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'endpoints': [e.json_response['endpoint'] for e in endpoints_data]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'endpoints': [e.json_response['endpoint'] for e in endpoints_data]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'endpoints': [e.json_response['endpoint'] for e in endpoints_data]}), dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'endpoints': [e.json_response['endpoint'] for e in endpoints_data]}) ]) # Search by id endpoints = self.op_cloud.search_endpoints( id=endpoints_data[-1].endpoint_id) # # test we are getting exactly 1 element self.assertEqual(1, len(endpoints)) self.assertThat(endpoints[0].id, matchers.Equals(endpoints_data[-1].endpoint_id)) self.assertThat(endpoints[0].service_id, matchers.Equals(endpoints_data[-1].service_id)) self.assertThat(endpoints[0].url, matchers.Equals(endpoints_data[-1].url)) self.assertThat(endpoints[0].interface, matchers.Equals(endpoints_data[-1].interface)) # Not found endpoints = self.op_cloud.search_endpoints(id='!invalid!') self.assertEqual(0, len(endpoints)) # Multiple matches endpoints = self.op_cloud.search_endpoints( filters={'region_id': 'region1'}) # # test we are getting exactly 2 elements self.assertEqual(2, len(endpoints)) # test we are getting the correct response for region/region_id compat endpoints = self.op_cloud.search_endpoints( filters={'region': 'region1'}) # # test we are getting exactly 2 elements, this is v3 self.assertEqual(2, len(endpoints)) self.assert_calls() def test_delete_endpoint(self): endpoint_data = self._get_endpoint_v3_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'endpoints': [ endpoint_data.json_response['endpoint']]}), dict(method='DELETE', uri=self.get_mock_url(append=[endpoint_data.endpoint_id]), status_code=204) ]) # Delete by id self.op_cloud.delete_endpoint(id=endpoint_data.endpoint_id) self.assert_calls() shade-1.31.0/shade/tests/unit/test_shade.py0000666000175000017500000005661613440327640020627 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
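# --- Editor's note: illustrative sketch, not part of the original
# module. Every test below follows the same arrange/act/assert shape
# that base.RequestsMockTestCase provides: register canned HTTP
# responses, invoke the shade API, then verify every registered URI
# was actually hit via assert_calls(). A minimal standalone version of
# that pattern, assuming only the third-party requests-mock library
# (which the base class wraps); the URL and function name here are
# invented for illustration:

import requests
import requests_mock


def _sketch_register_then_assert():
    with requests_mock.Mocker() as mocker:
        # Arrange: a canned response, analogous to register_uris([...]).
        mocker.get('https://compute.example.com/servers/detail',
                   json={'servers': []})
        # Act: the code under test would issue this HTTP call.
        resp = requests.get('https://compute.example.com/servers/detail')
        # Assert: payload and call history, analogous to assert_calls().
        assert resp.json() == {'servers': []}
        assert mocker.call_count == 1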
import mock import uuid import testtools import shade from shade import _utils from shade import exc from shade.tests import fakes from shade.tests.unit import base RANGE_DATA = [ dict(id=1, key1=1, key2=5), dict(id=2, key1=1, key2=20), dict(id=3, key1=2, key2=10), dict(id=4, key1=2, key2=30), dict(id=5, key1=3, key2=40), dict(id=6, key1=3, key2=40), ] class TestShade(base.RequestsMockTestCase): def setUp(self): # These tests are not testing neutron, they're testing # rebuilding servers, but we do several network calls in service # of a NORMAL rebuild to find the default_network. Putting # in all of the neutron mocks for that will make the tests harder # to read. SO - we're going to mock neutron into the off position # and then turn it back on in the few tests that specifically do. # Maybe we should reorg these into two classes - one with neutron # mocked out - and one with it not mocked out. super(TestShade, self).setUp() self.has_neutron = False def fake_has_service(*args, **kwargs): return self.has_neutron self.cloud.has_service = fake_has_service def test_openstack_cloud(self): self.assertIsInstance(self.cloud, shade.OpenStackCloud) @mock.patch.object(shade.OpenStackCloud, 'search_images') def test_get_images(self, mock_search): image1 = dict(id='123', name='mickey') mock_search.return_value = [image1] r = self.cloud.get_image('mickey') self.assertIsNotNone(r) self.assertDictEqual(image1, r) @mock.patch.object(shade.OpenStackCloud, 'search_images') def test_get_image_not_found(self, mock_search): mock_search.return_value = [] r = self.cloud.get_image('doesNotExist') self.assertIsNone(r) def test_get_server(self): server1 = fakes.make_fake_server('123', 'mickey') server2 = fakes.make_fake_server('345', 'mouse') self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [server1, server2]}), ]) r = self.cloud.get_server('mickey') self.assertIsNotNone(r) self.assertEqual(server1['name'], r['name']) self.assert_calls() def test_get_server_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': []}), ]) r = self.cloud.get_server('doesNotExist') self.assertIsNone(r) self.assert_calls() def test_list_servers_exception(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), status_code=400) ]) self.assertRaises(exc.OpenStackCloudException, self.cloud.list_servers) self.assert_calls() def test__neutron_exceptions_resource_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), status_code=404) ]) self.assertRaises(exc.OpenStackCloudResourceNotFound, self.cloud.list_networks) self.assert_calls() def test__neutron_exceptions_url_not_found(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), status_code=404) ]) self.assertRaises(exc.OpenStackCloudURINotFound, self.cloud.list_networks) self.assert_calls() def test_list_servers(self): server_id = str(uuid.uuid4()) server_name = self.getUniqueString('name') fake_server = fakes.make_fake_server(server_id, server_name) self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), ]) r = self.cloud.list_servers() self.assertEqual(1, len(r)) self.assertEqual(server_name, r[0]['name'])
self.assert_calls() def test_list_server_private_ip(self): self.has_neutron = True fake_server = { "OS-EXT-STS:task_state": None, "addresses": { "private": [ { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:a3:07", "version": 4, "addr": "10.4.0.13", "OS-EXT-IPS:type": "fixed" }, { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:a3:07", "version": 4, "addr": "89.40.216.229", "OS-EXT-IPS:type": "floating" }]}, "links": [ { "href": "http://example.com/images/95e4c4", "rel": "self" }, { "href": "http://example.com/images/95e4c4", "rel": "bookmark" } ], "image": { "id": "95e4c449-8abf-486e-97d9-dc3f82417d2d", "links": [ { "href": "http://example.com/images/95e4c4", "rel": "bookmark" } ] }, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2018-03-01T02:44:50.000000", "flavor": { "id": "3bd99062-2fe8-4eac-93f0-9200cc0f97ae", "links": [ { "href": "http://example.com/flavors/95e4c4", "rel": "bookmark" } ] }, "id": "97fe35e9-756a-41a2-960a-1d057d2c9ee4", "security_groups": [{"name": "default"}], "user_id": "c17534835f8f42bf98fc367e0bf35e09", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", "metadata": {}, "status": "ACTIVE", "updated": "2018-03-01T02:44:51Z", "hostId": "", "OS-SRV-USG:terminated_at": None, "key_name": None, "name": "mttest", "created": "2018-03-01T02:44:46Z", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "os-extended-volumes:volumes_attached": [], "config_drive": "" } fake_networks = {"networks": [ { "status": "ACTIVE", "router:external": True, "availability_zone_hints": [], "availability_zones": ["nova"], "description": None, "subnets": [ "df3e17fa-a4b2-47ae-9015-bc93eb076ba2", "6b0c3dc9-b0b8-4d87-976a-7f2ebf13e7ec", "fc541f48-fc7f-48c0-a063-18de6ee7bdd7" ], "shared": False, "tenant_id": "a564613210ee43708b8a7fc6274ebd63", "tags": [], "ipv6_address_scope": "9f03124f-89af-483a-b6fd-10f08079db4d", "mtu": 1550, "is_default": False, "admin_state_up": True, "revision_number": 0, "ipv4_address_scope": None, "port_security_enabled": True, "project_id": "a564613210ee43708b8a7fc6274ebd63", "id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", "name": "ext-net" }, { "status": "ACTIVE", "router:external": False, "availability_zone_hints": [], "availability_zones": ["nova"], "description": "", "subnets": ["f0ad1df5-53ee-473f-b86b-3604ea5591e9"], "shared": False, "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26Z", "tags": [], "ipv6_address_scope": None, "updated_at": "2016-10-22T13:46:26Z", "admin_state_up": True, "mtu": 1500, "revision_number": 0, "ipv4_address_scope": None, "port_security_enabled": True, "project_id": "65222a4d09ea4c68934fa1028c77f394", "id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "name": "private" }]} fake_subnets = { "subnets": [ { "service_types": [], "description": "", "enable_dhcp": True, "tags": [], "network_id": "827c6bb6-492f-4168-9577-f3a131eb29e8", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2017-06-12T13:23:57Z", "dns_nameservers": [], "updated_at": "2017-06-12T13:23:57Z", "gateway_ip": "10.24.4.1", "ipv6_ra_mode": None, "allocation_pools": [ { "start": "10.24.4.2", "end": "10.24.4.254" }], "host_routes": [], "revision_number": 0, "ip_version": 4, "ipv6_address_mode": None, "cidr": "10.24.4.0/24", "project_id": "65222a4d09ea4c68934fa1028c77f394", "id": "3f0642d9-4644-4dff-af25-bcf64f739698", "subnetpool_id": None, "name": "foo_subnet" }, { "service_types": [], "description": "", "enable_dhcp": True, "tags": [], 
"network_id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26Z", "dns_nameservers": ["89.36.90.101", "89.36.90.102"], "updated_at": "2016-10-22T13:46:26Z", "gateway_ip": "10.4.0.1", "ipv6_ra_mode": None, "allocation_pools": [ { "start": "10.4.0.2", "end": "10.4.0.200" }], "host_routes": [], "revision_number": 0, "ip_version": 4, "ipv6_address_mode": None, "cidr": "10.4.0.0/24", "project_id": "65222a4d09ea4c68934fa1028c77f394", "id": "f0ad1df5-53ee-473f-b86b-3604ea5591e9", "subnetpool_id": None, "name": "private-subnet-ipv4" }]} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail']), json={'servers': [fake_server]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json=fake_networks), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json=fake_subnets) ]) r = self.cloud.get_server('97fe35e9-756a-41a2-960a-1d057d2c9ee4') self.assertEqual('10.4.0.13', r['private_v4']) self.assert_calls() def test_list_servers_all_projects(self): '''This test verifies that when list_servers is called with `all_projects=True` that it passes `all_tenants=True` to nova.''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail'], qs_elements=['all_tenants=True']), complete_qs=True, json={'servers': []}), ]) self.cloud.list_servers(all_projects=True) self.assert_calls() def test_list_servers_filters(self): '''This test verifies that when list_servers is called with `filters` dict that it passes it to nova.''' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'compute', 'public', append=['servers', 'detail'], qs_elements=[ 'deleted=True', 'changes-since=2014-12-03T00:00:00Z' ]), complete_qs=True, json={'servers': []}), ]) self.cloud.list_servers(filters={ 'deleted': True, 'changes-since': '2014-12-03T00:00:00Z' }) self.assert_calls() def test_iterate_timeout_bad_wait(self): with testtools.ExpectedException( exc.OpenStackCloudException, "Wait value must be an int or float value."): for count in _utils._iterate_timeout( 1, "test_iterate_timeout_bad_wait", wait="timeishard"): pass @mock.patch('time.sleep') def test_iterate_timeout_str_wait(self, mock_sleep): iter = _utils._iterate_timeout( 10, "test_iterate_timeout_str_wait", wait="1.6") next(iter) next(iter) mock_sleep.assert_called_with(1.6) @mock.patch('time.sleep') def test_iterate_timeout_int_wait(self, mock_sleep): iter = _utils._iterate_timeout( 10, "test_iterate_timeout_int_wait", wait=1) next(iter) next(iter) mock_sleep.assert_called_with(1.0) @mock.patch('time.sleep') def test_iterate_timeout_timeout(self, mock_sleep): message = "timeout test" with testtools.ExpectedException( exc.OpenStackCloudTimeout, message): for count in _utils._iterate_timeout(0.1, message, wait=1): pass mock_sleep.assert_called_with(1.0) def test__nova_extensions(self): body = [ { "updated": "2014-12-03T00:00:00Z", "name": "Multinic", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "NMN", "description": "Multiple network support." }, { "updated": "2014-12-03T00:00:00Z", "name": "DiskConfig", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "OS-DCF", "description": "Disk Management Extension." 
}, ] self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=dict(extensions=body)) ]) extensions = self.cloud._nova_extensions() self.assertEqual(set(['NMN', 'OS-DCF']), extensions) self.assert_calls() def test__nova_extensions_fails(self): self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), status_code=404), ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, "Error fetching extension list for nova" ): self.cloud._nova_extensions() self.assert_calls() def test__has_nova_extension(self): body = [ { "updated": "2014-12-03T00:00:00Z", "name": "Multinic", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "NMN", "description": "Multiple network support." }, { "updated": "2014-12-03T00:00:00Z", "name": "DiskConfig", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "OS-DCF", "description": "Disk Management Extension." }, ] self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=dict(extensions=body)) ]) self.assertTrue(self.cloud._has_nova_extension('NMN')) self.assert_calls() def test__has_nova_extension_missing(self): body = [ { "updated": "2014-12-03T00:00:00Z", "name": "Multinic", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "NMN", "description": "Multiple network support." }, { "updated": "2014-12-03T00:00:00Z", "name": "DiskConfig", "links": [], "namespace": "http://openstack.org/compute/ext/fake_xml", "alias": "OS-DCF", "description": "Disk Management Extension." }, ] self.register_uris([ dict(method='GET', uri='{endpoint}/extensions'.format( endpoint=fakes.COMPUTE_ENDPOINT), json=dict(extensions=body)) ]) self.assertFalse(self.cloud._has_nova_extension('invalid')) self.assert_calls() def test__neutron_extensions(self): body = [ { "updated": "2014-06-1T10:00:00-00:00", "name": "Distributed Virtual Router", "links": [], "alias": "dvr", "description": "Enables configuration of Distributed Virtual Routers." }, { "updated": "2013-07-23T10:00:00-00:00", "name": "Allowed Address Pairs", "links": [], "alias": "allowed-address-pairs", "description": "Provides allowed address pairs" }, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json=dict(extensions=body)) ]) extensions = self.cloud._neutron_extensions() self.assertEqual(set(['dvr', 'allowed-address-pairs']), extensions) self.assert_calls() def test__neutron_extensions_fails(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), status_code=404) ]) with testtools.ExpectedException( exc.OpenStackCloudURINotFound, "Error fetching extension list for neutron" ): self.cloud._neutron_extensions() self.assert_calls() def test__has_neutron_extension(self): body = [ { "updated": "2014-06-1T10:00:00-00:00", "name": "Distributed Virtual Router", "links": [], "alias": "dvr", "description": "Enables configuration of Distributed Virtual Routers." 
}, { "updated": "2013-07-23T10:00:00-00:00", "name": "Allowed Address Pairs", "links": [], "alias": "allowed-address-pairs", "description": "Provides allowed address pairs" }, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json=dict(extensions=body)) ]) self.assertTrue(self.cloud._has_neutron_extension('dvr')) self.assert_calls() def test__has_neutron_extension_missing(self): body = [ { "updated": "2014-06-1T10:00:00-00:00", "name": "Distributed Virtual Router", "links": [], "alias": "dvr", "description": "Enables configuration of Distributed Virtual Routers." }, { "updated": "2013-07-23T10:00:00-00:00", "name": "Allowed Address Pairs", "links": [], "alias": "allowed-address-pairs", "description": "Provides allowed address pairs" }, ] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'extensions.json']), json=dict(extensions=body)) ]) self.assertFalse(self.cloud._has_neutron_extension('invalid')) self.assert_calls() def test_range_search(self): filters = {"key1": "min", "key2": "20"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(1, len(retval)) self.assertEqual([RANGE_DATA[1]], retval) def test_range_search_2(self): filters = {"key1": "<=2", "key2": ">10"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(2, len(retval)) self.assertEqual([RANGE_DATA[1], RANGE_DATA[3]], retval) def test_range_search_3(self): filters = {"key1": "2", "key2": "min"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(0, len(retval)) def test_range_search_4(self): filters = {"key1": "max", "key2": "min"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(0, len(retval)) def test_range_search_5(self): filters = {"key1": "min", "key2": "min"} retval = self.cloud.range_search(RANGE_DATA, filters) self.assertIsInstance(retval, list) self.assertEqual(1, len(retval)) self.assertEqual([RANGE_DATA[0]], retval) shade-1.31.0/shade/tests/unit/test_project.py0000666000175000017500000002562713440327640021207 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
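# --- Editor's note: illustrative sketch, not part of the original
# module. The get_mock_url override in TestProject below encodes the
# keystone version split these tests exercise: v3 serves the resource
# as /v3/projects while v2.0 serves it as /tenants. A tiny standalone
# restatement of that mapping (the helper name is invented for
# illustration):

def _identity_resource_path(v3=True):
    # Mirrors the default resource/base_url_append choice that
    # TestProject.get_mock_url makes when no resource is passed in.
    return 'v3/projects' if v3 else 'tenants'


assert _identity_resource_path(v3=True) == 'v3/projects'
assert _identity_resource_path(v3=False) == 'tenants'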
import testtools from testtools import matchers import shade import shade._utils from shade.tests.unit import base class TestProject(base.RequestsMockTestCase): def get_mock_url(self, service_type='identity', interface='admin', resource=None, append=None, base_url_append=None, v3=True): if v3 and resource is None: resource = 'projects' elif not v3 and resource is None: resource = 'tenants' if base_url_append is None and v3: base_url_append = 'v3' return super(TestProject, self).get_mock_url( service_type=service_type, interface=interface, resource=resource, append=append, base_url_append=base_url_append) def test_create_project_v2(self): self.use_keystone_v2() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='POST', uri=self.get_mock_url(v3=False), status_code=200, json=project_data.json_response, validate=dict(json=project_data.json_request)) ]) project = self.op_cloud.create_project( name=project_data.project_name, description=project_data.description) self.assertThat(project.id, matchers.Equals(project_data.project_id)) self.assertThat( project.name, matchers.Equals(project_data.project_name)) self.assert_calls() def test_create_project_v3(self,): project_data = self._get_project_data( description=self.getUniqueString('projectDesc')) reference_req = project_data.json_request.copy() reference_req['project']['enabled'] = True self.register_uris([ dict(method='POST', uri=self.get_mock_url(), status_code=200, json=project_data.json_response, validate=dict(json=reference_req)) ]) project = self.op_cloud.create_project( name=project_data.project_name, description=project_data.description, domain_id=project_data.domain_id) self.assertThat(project.id, matchers.Equals(project_data.project_id)) self.assertThat( project.name, matchers.Equals(project_data.project_name)) self.assertThat( project.description, matchers.Equals(project_data.description)) self.assertThat( project.domain_id, matchers.Equals(project_data.domain_id)) self.assert_calls() def test_create_project_v3_no_domain(self): with testtools.ExpectedException( shade.OpenStackCloudException, "User or project creation requires an explicit" " domain_id argument." ): self.op_cloud.create_project(name='foo', description='bar') def test_delete_project_v2(self): self.use_keystone_v2() project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url(v3=False), status_code=200, json={'tenants': [project_data.json_response['tenant']]}), dict(method='DELETE', uri=self.get_mock_url( v3=False, append=[project_data.project_id]), status_code=204) ]) self.op_cloud.delete_project(project_data.project_id) self.assert_calls() def test_delete_project_v3(self): project_data = self._get_project_data(v3=False) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'projects': [project_data.json_response['tenant']]}), dict(method='DELETE', uri=self.get_mock_url(append=[project_data.project_id]), status_code=204) ]) self.op_cloud.delete_project(project_data.project_id) self.assert_calls() def test_update_project_not_found(self): project_data = self._get_project_data() self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'projects': []}) ]) # NOTE(notmorgan): This test (and shade) does not represent a case # where the project is in the project list but a 404 is raised when # the PATCH is issued. 
# This is a bug in shade and should be fixed, # shade will raise an attribute error instead of the proper # project not found exception. with testtools.ExpectedException( shade.OpenStackCloudException, "Project %s not found." % project_data.project_id ): self.op_cloud.update_project(project_data.project_id) self.assert_calls() def test_update_project_v2(self): self.use_keystone_v2() project_data = self._get_project_data( v3=False, description=self.getUniqueString('projectDesc')) # remove elements that are not updated in this test. project_data.json_request['tenant'].pop('name') project_data.json_request['tenant'].pop('enabled') self.register_uris([ dict(method='GET', uri=self.get_mock_url(v3=False), status_code=200, json={'tenants': [project_data.json_response['tenant']]}), dict(method='POST', uri=self.get_mock_url( v3=False, append=[project_data.project_id]), status_code=200, json=project_data.json_response, validate=dict(json=project_data.json_request)) ]) project = self.op_cloud.update_project( project_data.project_id, description=project_data.description) self.assertThat(project.id, matchers.Equals(project_data.project_id)) self.assertThat( project.name, matchers.Equals(project_data.project_name)) self.assertThat( project.description, matchers.Equals(project_data.description)) self.assert_calls() def test_update_project_v3(self): project_data = self._get_project_data( description=self.getUniqueString('projectDesc')) reference_req = project_data.json_request.copy() # Remove elements not actually sent in the update reference_req['project'].pop('domain_id') reference_req['project'].pop('name') reference_req['project'].pop('enabled') self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % project_data.domain_id)), status_code=200, json={'projects': [project_data.json_response['project']]}), dict(method='PATCH', uri=self.get_mock_url(append=[project_data.project_id]), status_code=200, json=project_data.json_response, validate=dict(json=reference_req)) ]) project = self.op_cloud.update_project( project_data.project_id, description=project_data.description, domain_id=project_data.domain_id) self.assertThat(project.id, matchers.Equals(project_data.project_id)) self.assertThat( project.name, matchers.Equals(project_data.project_name)) self.assertThat( project.description, matchers.Equals(project_data.description)) self.assert_calls() def test_list_projects_v3(self): project_data = self._get_project_data( description=self.getUniqueString('projectDesc')) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % project_data.domain_id)), status_code=200, json={'projects': [project_data.json_response['project']]}) ]) projects = self.op_cloud.list_projects(project_data.domain_id) self.assertThat(len(projects), matchers.Equals(1)) self.assertThat( projects[0].id, matchers.Equals(project_data.project_id)) self.assert_calls() def test_list_projects_v3_kwarg(self): project_data = self._get_project_data( description=self.getUniqueString('projectDesc')) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % project_data.domain_id)), status_code=200, json={'projects': [project_data.json_response['project']]}) ]) projects = self.op_cloud.list_projects( domain_id=project_data.domain_id) self.assertThat(len(projects), matchers.Equals(1)) self.assertThat( projects[0].id, matchers.Equals(project_data.project_id)) self.assert_calls() def test_list_projects_search_compat(self):
project_data = self._get_project_data( description=self.getUniqueString('projectDesc')) self.register_uris([ dict(method='GET', uri=self.get_mock_url(), status_code=200, json={'projects': [project_data.json_response['project']]}) ]) projects = self.op_cloud.search_projects(project_data.project_id) self.assertThat(len(projects), matchers.Equals(1)) self.assertThat( projects[0].id, matchers.Equals(project_data.project_id)) self.assert_calls() def test_list_projects_search_compat_v3(self): project_data = self._get_project_data( description=self.getUniqueString('projectDesc')) self.register_uris([ dict(method='GET', uri=self.get_mock_url( resource=('projects?domain_id=%s' % project_data.domain_id)), status_code=200, json={'projects': [project_data.json_response['project']]}) ]) projects = self.op_cloud.search_projects( domain_id=project_data.domain_id) self.assertThat(len(projects), matchers.Equals(1)) self.assertThat( projects[0].id, matchers.Equals(project_data.project_id)) self.assert_calls() shade-1.31.0/shade/tests/unit/test_floating_ip_neutron.py0000666000175000017500000012021513440327640023573 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip_neutron ---------------------------------- Tests Floating IP resource methods for Neutron """ import copy import datetime import munch from shade import exc from shade.tests import fakes from shade.tests.unit import base class TestFloatingIP(base.RequestsMockTestCase): mock_floating_ip_list_rep = { 'floatingips': [ { 'router_id': 'd23abc8d-2991-4a55-ba98-2aaea84cc72f', 'tenant_id': '4969c491a3c74ee4af974e6d800c62de', 'floating_network_id': '376da547-b977-4cfe-9cba-275c80debf57', 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda7', 'status': 'ACTIVE' }, { 'router_id': None, 'tenant_id': '4969c491a3c74ee4af974e6d800c62de', 'floating_network_id': '376da547-b977-4cfe-9cba-275c80debf57', 'fixed_ip_address': None, 'floating_ip_address': '203.0.113.30', 'port_id': None, 'id': '61cea855-49cb-4846-997d-801b70c71bdd', 'status': 'DOWN' } ] } mock_floating_ip_new_rep = { 'floatingip': { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': None, 'router_id': None, 'status': 'ACTIVE', 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } } mock_floating_ip_port_rep = { 'floatingip': { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'router_id': None, 'status': 'ACTIVE', 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } } mock_get_network_rep = { 'status': 'ACTIVE', 'subnets': [ '54d6f61d-db07-451c-9ab3-b9609b6b6f0b' ], 'name': 'my-network', 'provider:physical_network': None, 
'admin_state_up': True, 'tenant_id': '4fd44f30292945e481c7b8a0c8908869', 'provider:network_type': 'local', 'router:external': True, 'shared': True, 'id': 'my-network-id', 'provider:segmentation_id': None } mock_search_ports_rep = [ { 'status': 'ACTIVE', 'binding:host_id': 'devstack', 'name': 'first-port', 'created_at': datetime.datetime.now().isoformat(), 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': '70c1db1f-b701-45bd-96e0-a313ee3430b3', 'tenant_id': '', 'extra_dhcp_opts': [], 'binding:vif_details': { 'port_filter': True, 'ovs_hybrid_plug': True }, 'binding:vif_type': 'ovs', 'device_owner': 'compute:None', 'mac_address': 'fa:16:3e:58:42:ed', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [ { 'subnet_id': '008ba151-0b8c-4a67-98b5-0d2b87666062', 'ip_address': u'172.24.4.2' } ], 'id': 'ce705c24-c1ef-408a-bda3-7bbd946164ac', 'security_groups': [], 'device_id': 'server-id' } ] def assertAreInstances(self, elements, elem_type): for e in elements: self.assertIsInstance(e, elem_type) def setUp(self): super(TestFloatingIP, self).setUp() self.fake_server = fakes.make_fake_server( 'server-id', '', 'ACTIVE', addresses={u'test_pnztt_net': [{ u'OS-EXT-IPS:type': u'fixed', u'addr': '192.0.2.129', u'version': 4, u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:ae:7d:42'}]}) self.floating_ip = self.cloud._normalize_floating_ips( self.mock_floating_ip_list_rep['floatingips'])[0] def test_float_no_status(self): floating_ips = [ { 'fixed_ip_address': '10.0.0.4', 'floating_ip_address': '172.24.4.229', 'floating_network_id': 'my-network-id', 'id': '2f245a7b-796b-4f26-9cf9-9e82d248fda8', 'port_id': None, 'router_id': None, 'tenant_id': '4969c491a3c74ee4af974e6d800c62df' } ] normalized = self.cloud._normalize_floating_ips(floating_ips) self.assertEqual('UNKNOWN', normalized[0]['status']) def test_list_floating_ips(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_list_rep)]) floating_ips = self.cloud.list_floating_ips() self.assertIsInstance(floating_ips, list) self.assertAreInstances(floating_ips, dict) self.assertEqual(2, len(floating_ips)) self.assert_calls() def test_list_floating_ips_with_filters(self): self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/floatingips.json?' 
'Foo=42'), json={'floatingips': []})]) self.cloud.list_floating_ips(filters={'Foo': 42}) self.assert_calls() def test_search_floating_ips(self): self.register_uris([ dict(method='GET', uri=('https://network.example.com/v2.0/floatingips.json'), json=self.mock_floating_ip_list_rep)]) floating_ips = self.cloud.search_floating_ips( filters={'attached': False}) self.assertIsInstance(floating_ips, list) self.assertAreInstances(floating_ips, dict) self.assertEqual(1, len(floating_ips)) self.assert_calls() def test_get_floating_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_list_rep)]) floating_ip = self.cloud.get_floating_ip( id='2f245a7b-796b-4f26-9cf9-9e82d248fda7') self.assertIsInstance(floating_ip, dict) self.assertEqual('172.24.4.229', floating_ip['floating_ip_address']) self.assertEqual( self.mock_floating_ip_list_rep['floatingips'][0]['tenant_id'], floating_ip['project_id'] ) self.assertEqual( self.mock_floating_ip_list_rep['floatingips'][0]['tenant_id'], floating_ip['tenant_id'] ) self.assertIn('location', floating_ip) self.assert_calls() def test_get_floating_ip_not_found(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_list_rep)]) floating_ip = self.cloud.get_floating_ip(id='non-existent') self.assertIsNone(floating_ip) self.assert_calls() def test_get_floating_ip_by_id(self): fid = self.mock_floating_ip_new_rep['floatingip']['id'] self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/floatingips/' '{id}'.format(id=fid), json=self.mock_floating_ip_new_rep)]) floating_ip = self.cloud.get_floating_ip_by_id(id=fid) self.assertIsInstance(floating_ip, dict) self.assertEqual('172.24.4.229', floating_ip['floating_ip_address']) self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['tenant_id'], floating_ip['project_id'] ) self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['tenant_id'], floating_ip['tenant_id'] ) self.assertIn('location', floating_ip) self.assert_calls() def test_create_floating_ip(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_new_rep, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id'}})) ]) ip = self.cloud.create_floating_ip(network='my-network') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) self.assert_calls() def test_create_floating_ip_port_bad_response(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_new_rep, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id', 'port_id': u'ce705c24-c1ef-408a-bda3-7bbd946164ab'}})) ]) # Fails because we requested a port and the returned FIP has no port self.assertRaises( exc.OpenStackCloudException, self.cloud.create_floating_ip, network='my-network', port='ce705c24-c1ef-408a-bda3-7bbd946164ab') self.assert_calls() def test_create_floating_ip_port(self): self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), 
dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json=self.mock_floating_ip_port_rep, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id', 'port_id': u'ce705c24-c1ef-408a-bda3-7bbd946164ac'}})) ]) ip = self.cloud.create_floating_ip( network='my-network', port='ce705c24-c1ef-408a-bda3-7bbd946164ac') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) self.assert_calls() def test_neutron_available_floating_ips(self): """ Test without specifying a network name. """ fips_mock_uri = 'https://network.example.com/v2.0/floatingips.json' self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}), dict(method='GET', uri=fips_mock_uri, json={'floatingips': []}), dict(method='POST', uri=fips_mock_uri, json=self.mock_floating_ip_new_rep, validate=dict(json={ 'floatingip': { 'floating_network_id': self.mock_get_network_rep['id'] }})) ]) # Test if first network is selected if no network is given self.cloud._neutron_available_floating_ips() self.assert_calls() def test_neutron_available_floating_ips_network(self): """ Test with specifying a network name. """ fips_mock_uri = 'https://network.example.com/v2.0/floatingips.json' self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}), dict(method='GET', uri=fips_mock_uri, json={'floatingips': []}), dict(method='POST', uri=fips_mock_uri, json=self.mock_floating_ip_new_rep, validate=dict(json={ 'floatingip': { 'floating_network_id': self.mock_get_network_rep['id'] }})) ]) # Test if first network is selected if no network is given self.cloud._neutron_available_floating_ips( network=self.mock_get_network_rep['name'] ) self.assert_calls() def test_neutron_available_floating_ips_invalid_network(self): """ Test with an invalid network name. 
""" self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud._neutron_available_floating_ips, network='INVALID') self.assert_calls() def test_auto_ip_pool_no_reuse(self): # payloads taken from citycloud self.register_uris([ dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={"networks": [{ "status": "ACTIVE", "subnets": [ "df3e17fa-a4b2-47ae-9015-bc93eb076ba2", "6b0c3dc9-b0b8-4d87-976a-7f2ebf13e7ec", "fc541f48-fc7f-48c0-a063-18de6ee7bdd7"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "ext-net", "admin_state_up": True, "tenant_id": "a564613210ee43708b8a7fc6274ebd63", "tags": [], "ipv6_address_scope": "9f03124f-89af-483a-b6fd-10f08079db4d", # noqa "mtu": 0, "is_default": False, "router:external": True, "ipv4_address_scope": None, "shared": False, "id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", "description": None }, { "status": "ACTIVE", "subnets": ["f0ad1df5-53ee-473f-b86b-3604ea5591e9"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "private", "admin_state_up": True, "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26", "tags": [], "updated_at": "2016-10-22T13:46:26", "ipv6_address_scope": None, "router:external": False, "ipv4_address_scope": None, "shared": False, "mtu": 1450, "id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "description": "" }]}), dict(method='GET', uri='https://network.example.com/v2.0/ports.json' '?device_id=f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7', json={"ports": [{ "status": "ACTIVE", "created_at": "2017-02-06T20:59:45", "description": "", "allowed_address_pairs": [], "admin_state_up": True, "network_id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "dns_name": None, "extra_dhcp_opts": [], "mac_address": "fa:16:3e:e8:7f:03", "updated_at": "2017-02-06T20:59:49", "name": "", "device_owner": "compute:None", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "binding:vnic_type": "normal", "fixed_ips": [{ "subnet_id": "f0ad1df5-53ee-473f-b86b-3604ea5591e9", "ip_address": "10.4.0.16"}], "id": "a767944e-057a-47d1-a669-824a21b8fb7b", "security_groups": [ "9fb5ba44-5c46-4357-8e60-8b55526cab54"], "device_id": "f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7", }]}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json={"floatingip": { "router_id": "9de9c787-8f89-4a53-8468-a5533d6d7fd1", "status": "DOWN", "description": "", "dns_domain": "", "floating_network_id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", # noqa "fixed_ip_address": "10.4.0.16", "floating_ip_address": "89.40.216.153", "port_id": "a767944e-057a-47d1-a669-824a21b8fb7b", "id": "e69179dc-a904-4c9a-a4c9-891e2ecb984c", "dns_name": "", "tenant_id": "65222a4d09ea4c68934fa1028c77f394" }}, validate=dict(json={"floatingip": { "floating_network_id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", # noqa "fixed_ip_address": "10.4.0.16", "port_id": "a767944e-057a-47d1-a669-824a21b8fb7b", }})), dict(method='GET', uri='{endpoint}/servers/detail'.format( endpoint=fakes.COMPUTE_ENDPOINT), json={"servers": [{ "status": "ACTIVE", "updated": "2017-02-06T20:59:49Z", "addresses": { "private": [{ "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e8:7f:03", "version": 4, "addr": "10.4.0.16", "OS-EXT-IPS:type": "fixed" }, { "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e8:7f:03", "version": 4, "addr": "89.40.216.153", 
"OS-EXT-IPS:type": "floating" }]}, "key_name": None, "image": {"id": "95e4c449-8abf-486e-97d9-dc3f82417d2d"}, "OS-EXT-STS:task_state": None, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2017-02-06T20:59:48.000000", "flavor": {"id": "2186bd79-a05e-4953-9dde-ddefb63c88d4"}, "id": "f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7", "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": None, "OS-EXT-AZ:availability_zone": "nova", "user_id": "c17534835f8f42bf98fc367e0bf35e09", "name": "testmt", "created": "2017-02-06T20:59:44Z", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "metadata": {} }]}), dict(method='GET', uri='https://network.example.com/v2.0/networks.json', json={"networks": [{ "status": "ACTIVE", "subnets": [ "df3e17fa-a4b2-47ae-9015-bc93eb076ba2", "6b0c3dc9-b0b8-4d87-976a-7f2ebf13e7ec", "fc541f48-fc7f-48c0-a063-18de6ee7bdd7"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "ext-net", "admin_state_up": True, "tenant_id": "a564613210ee43708b8a7fc6274ebd63", "tags": [], "ipv6_address_scope": "9f03124f-89af-483a-b6fd-10f08079db4d", # noqa "mtu": 0, "is_default": False, "router:external": True, "ipv4_address_scope": None, "shared": False, "id": "0232c17f-2096-49bc-b205-d3dcd9a30ebf", "description": None }, { "status": "ACTIVE", "subnets": ["f0ad1df5-53ee-473f-b86b-3604ea5591e9"], "availability_zone_hints": [], "availability_zones": ["nova"], "name": "private", "admin_state_up": True, "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26", "tags": [], "updated_at": "2016-10-22T13:46:26", "ipv6_address_scope": None, "router:external": False, "ipv4_address_scope": None, "shared": False, "mtu": 1450, "id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "description": "" }]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={"subnets": [{ "description": "", "enable_dhcp": True, "network_id": "2c9adcb5-c123-4c5a-a2ba-1ad4c4e1481f", "tenant_id": "65222a4d09ea4c68934fa1028c77f394", "created_at": "2016-10-22T13:46:26", "dns_nameservers": [ "89.36.90.101", "89.36.90.102"], "updated_at": "2016-10-22T13:46:26", "gateway_ip": "10.4.0.1", "ipv6_ra_mode": None, "allocation_pools": [{ "start": "10.4.0.2", "end": "10.4.0.200"}], "host_routes": [], "ip_version": 4, "ipv6_address_mode": None, "cidr": "10.4.0.0/24", "id": "f0ad1df5-53ee-473f-b86b-3604ea5591e9", "subnetpool_id": None, "name": "private-subnet-ipv4", }]})]) self.cloud.add_ips_to_server( munch.Munch( id='f80e3ad0-e13e-41d4-8e9c-be79bccdb8f7', addresses={ "private": [{ "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e8:7f:03", "version": 4, "addr": "10.4.0.16", "OS-EXT-IPS:type": "fixed" }]}), ip_pool='ext-net', reuse=False) self.assert_calls() def test_available_floating_ip_new(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': []}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id'}}), 
json=self.mock_floating_ip_new_rep) ]) ip = self.cloud.available_floating_ip(network='my-network') self.assertEqual( self.mock_floating_ip_new_rep['floatingip']['floating_ip_address'], ip['floating_ip_address']) self.assert_calls() def test_delete_floating_ip_existing(self): fip_id = '2f245a7b-796b-4f26-9cf9-9e82d248fda7' fake_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'ACTIVE', } self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': []}), ]) self.assertTrue( self.cloud.delete_floating_ip(floating_ip_id=fip_id, retry=2)) self.assert_calls() def test_delete_floating_ip_existing_down(self): fip_id = '2f245a7b-796b-4f26-9cf9-9e82d248fda7' fake_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'ACTIVE', } down_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'DOWN', } self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [down_fip]}), ]) self.assertTrue( self.cloud.delete_floating_ip(floating_ip_id=fip_id, retry=2)) self.assert_calls() def test_delete_floating_ip_existing_no_delete(self): fip_id = '2f245a7b-796b-4f26-9cf9-9e82d248fda7' fake_fip = { 'id': fip_id, 'floating_ip_address': '172.99.106.167', 'status': 'ACTIVE', } self.register_uris([ dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format(fip_id)]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fake_fip]}), ]) self.assertRaises( exc.OpenStackCloudException, self.cloud.delete_floating_ip, floating_ip_id=fip_id, retry=2) self.assert_calls() def test_delete_floating_ip_not_found(self): self.register_uris([ dict(method='DELETE', 
uri=('https://network.example.com/v2.0/floatingips/' 'a-wild-id-appears.json'), status_code=404)]) ret = self.cloud.delete_floating_ip( floating_ip_id='a-wild-id-appears') self.assertFalse(ret) self.assert_calls() def test_attach_ip_to_server(self): fip = self.mock_floating_ip_list_rep['floatingips'][0] device_id = self.fake_server['id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=["device_id={0}".format(device_id)]), json={'ports': self.mock_search_ports_rep}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( fip['id'])]), json={'floatingip': fip}, validate=dict( json={'floatingip': { 'port_id': self.mock_search_ports_rep[0]['id'], 'fixed_ip_address': self.mock_search_ports_rep[0][ 'fixed_ips'][0]['ip_address']}})), ]) self.cloud._attach_ip_to_server( server=self.fake_server, floating_ip=self.floating_ip) self.assert_calls() def test_add_ip_refresh_timeout(self): device_id = self.fake_server['id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri='https://network.example.com/v2.0/subnets.json', json={'subnets': []}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=["device_id={0}".format(device_id)]), json={'ports': self.mock_search_ports_rep}), dict(method='POST', uri='https://network.example.com/v2.0/floatingips.json', json={'floatingip': self.floating_ip}, validate=dict( json={'floatingip': { 'floating_network_id': 'my-network-id', 'fixed_ip_address': self.mock_search_ports_rep[0][ 'fixed_ips'][0]['ip_address'], 'port_id': self.mock_search_ports_rep[0]['id']}})), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [self.floating_ip]}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( self.floating_ip['id'])]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': []}), ]) self.assertRaises( exc.OpenStackCloudTimeout, self.cloud._add_auto_ip, server=self.fake_server, wait=True, timeout=0.01, reuse=False) self.assert_calls() def test_detach_ip_from_server(self): fip = self.mock_floating_ip_new_rep['floatingip'] attached_fip = copy.copy(fip) attached_fip['port_id'] = 'server-port-id' self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [attached_fip]}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( fip['id'])]), json={'floatingip': fip}, validate=dict( json={'floatingip': {'port_id': None}})) ]) self.cloud.detach_ip_from_server( server_id='server-id', floating_ip_id=fip['id']) self.assert_calls() def test_add_ip_from_pool(self): network = self.mock_get_network_rep fip = self.mock_floating_ip_new_rep['floatingip'] fixed_ip = self.mock_search_ports_rep[0]['fixed_ips'][0]['ip_address'] port_id = self.mock_search_ports_rep[0]['id'] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [network]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), 
dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [fip]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingip': fip}, validate=dict( json={'floatingip': { 'floating_network_id': network['id']}})), dict(method="GET", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=[ "device_id={0}".format(self.fake_server['id'])]), json={'ports': self.mock_search_ports_rep}), dict(method='PUT', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( fip['id'])]), json={'floatingip': fip}, validate=dict( json={'floatingip': { 'fixed_ip_address': fixed_ip, 'port_id': port_id}})), ]) server = self.cloud._add_ip_from_pool( server=self.fake_server, network=network['id'], fixed_address=fixed_ip) self.assertEqual(server, self.fake_server) self.assert_calls() def test_cleanup_floating_ips(self): floating_ips = [{ "id": "this-is-a-floating-ip-id", "fixed_ip_address": None, "internal_network": None, "floating_ip_address": "203.0.113.29", "network": "this-is-a-net-or-pool-id", "port_id": None, "status": "ACTIVE" }, { "id": "this-is-an-attached-floating-ip-id", "fixed_ip_address": None, "internal_network": None, "floating_ip_address": "203.0.113.29", "network": "this-is-a-net-or-pool-id", "attached": True, "port_id": "this-is-id-of-port-with-fip", "status": "ACTIVE" }] self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': floating_ips}), dict(method='DELETE', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips/{0}.json'.format( floating_ips[0]['id'])]), json={}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingips': [floating_ips[1]]}), ]) self.cloud.delete_unattached_floating_ips() self.assert_calls() def test_create_floating_ip_no_port(self): server_port = { "id": "port-id", "device_id": "some-server", 'created_at': datetime.datetime.now().isoformat(), 'fixed_ips': [ { 'subnet_id': 'subnet-id', 'ip_address': '172.24.4.2' } ], } floating_ip = { "id": "floating-ip-id", "port_id": None } self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'networks.json']), json={'networks': [self.mock_get_network_rep]}), dict(method='GET', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'subnets.json']), json={'subnets': []}), dict(method="GET", uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'ports.json'], qs_elements=['device_id=some-server']), json={'ports': [server_port]}), dict(method='POST', uri=self.get_mock_url( 'network', 'public', append=['v2.0', 'floatingips.json']), json={'floatingip': floating_ip}) ]) self.assertRaises( exc.OpenStackCloudException, self.cloud._neutron_create_floating_ip, server=dict(id='some-server')) self.assert_calls() shade-1.31.0/shade/tests/unit/test_volume_backups.py0000666000175000017500000001175513440327640022555 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from shade import meta from shade.tests.unit import base class TestVolumeBackups(base.RequestsMockTestCase): def test_search_volume_backups(self): name = 'Volume1' vol1 = {'name': name, 'availability_zone': 'az1'} vol2 = {'name': name, 'availability_zone': 'az1'} vol3 = {'name': 'Volume2', 'availability_zone': 'az2'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": [vol1, vol2, vol3]})]) result = self.cloud.search_volume_backups( name, {'availability_zone': 'az1'}) self.assertEqual(len(result), 2) self.assertEqual( meta.obj_list_to_munch([vol1, vol2]), result) self.assert_calls() def test_get_volume_backup(self): name = 'Volume1' vol1 = {'name': name, 'availability_zone': 'az1'} vol2 = {'name': name, 'availability_zone': 'az2'} vol3 = {'name': 'Volume2', 'availability_zone': 'az1'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": [vol1, vol2, vol3]})]) result = self.cloud.get_volume_backup( name, {'availability_zone': 'az1'}) result = meta.obj_to_munch(result) self.assertEqual( meta.obj_to_munch(vol1), result) self.assert_calls() def test_list_volume_backups(self): backup = {'id': '6ff16bdf-44d5-4bf9-b0f3-687549c76414', 'status': 'available'} search_opts = {'status': 'available'} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail'], qs_elements=['='.join(i) for i in search_opts.items()]), json={"backups": [backup]})]) result = self.cloud.list_volume_backups(True, search_opts) self.assertEqual(len(result), 1) self.assertEqual( meta.obj_list_to_munch([backup]), result) self.assert_calls() def test_delete_volume_backup_wait(self): backup_id = '6ff16bdf-44d5-4bf9-b0f3-687549c76414' backup = {'id': backup_id} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": [backup]}), dict(method='DELETE', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', backup_id])), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": [backup]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": []})]) self.cloud.delete_volume_backup(backup_id, False, True, 1) self.assert_calls() def test_delete_volume_backup_force(self): backup_id = '6ff16bdf-44d5-4bf9-b0f3-687549c76414' backup = {'id': backup_id} self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": [backup]}), dict(method='POST', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', backup_id, 'action']), json={'os-force_delete': {}}, validate=dict(json={u'os-force_delete': None})), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": [backup]}), dict(method='GET', uri=self.get_mock_url( 'volumev2', 'public', append=['backups', 'detail']), json={"backups": []}) ]) 
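# The positional arguments below map to force=True, wait=True, timeout=1; the force path issues the os-force_delete action POST mocked above and then polls the backup list until the backup disappears.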
self.cloud.delete_volume_backup(backup_id, True, True, 1) self.assert_calls() shade-1.31.0/shade/tests/unit/test_zone.py0000666000175000017500000001210413440327640020476 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import testtools import shade from shade.tests.unit import base zone_dict = { 'name': 'example.net.', 'type': 'PRIMARY', 'email': 'test@example.net', 'description': 'Example zone', 'ttl': 3600, } new_zone_dict = copy.copy(zone_dict) new_zone_dict['id'] = '1' class TestZone(base.RequestsMockTestCase): def setUp(self): super(TestZone, self).setUp() self.use_designate() def test_create_zone(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json=new_zone_dict, validate=dict( json=zone_dict)) ]) z = self.cloud.create_zone( name=zone_dict['name'], zone_type=zone_dict['type'], email=zone_dict['email'], description=zone_dict['description'], ttl=zone_dict['ttl'], masters=None) self.assertEqual(new_zone_dict, z) self.assert_calls() def test_create_zone_exception(self): self.register_uris([ dict(method='POST', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), status_code=500) ]) with testtools.ExpectedException( shade.exc.OpenStackCloudHTTPError, "Unable to create zone example.net." 
): self.cloud.create_zone('example.net.') self.assert_calls() def test_update_zone(self): new_ttl = 7200 updated_zone = copy.copy(new_zone_dict) updated_zone['ttl'] = new_ttl self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [new_zone_dict], "links": {}, "metadata": { 'total_count': 1}}), dict(method='PATCH', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', '1']), json=updated_zone, validate=dict( json={"ttl": new_ttl})) ]) z = self.cloud.update_zone('1', ttl=new_ttl) self.assertEqual(updated_zone, z) self.assert_calls() def test_delete_zone(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [new_zone_dict], "links": {}, "metadata": { 'total_count': 1}}), dict(method='DELETE', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones', '1']), json=new_zone_dict) ]) self.assertTrue(self.cloud.delete_zone('1')) self.assert_calls() def test_get_zone_by_id(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [new_zone_dict], "links": {}, "metadata": { 'total_count': 1}}) ]) zone = self.cloud.get_zone('1') self.assertEqual(zone['id'], '1') self.assert_calls() def test_get_zone_by_name(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [new_zone_dict], "links": {}, "metadata": { 'total_count': 1}}) ]) zone = self.cloud.get_zone('example.net.') self.assertEqual(zone['name'], 'example.net.') self.assert_calls() def test_get_zone_not_found_returns_false(self): self.register_uris([ dict(method='GET', uri=self.get_mock_url( 'dns', 'public', append=['v2', 'zones']), json={ "zones": [], "links": {}, "metadata": { 'total_count': 1}}) ]) zone = self.cloud.get_zone('nonexistingzone.net.') self.assertFalse(zone) self.assert_calls() shade-1.31.0/shade/tests/base.py0000666000175000017500000000740013440327640016422 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import fixtures import logging import munch import pprint from six import StringIO import testtools import testtools.content _TRUE_VALUES = ('true', '1', 'yes') class TestCase(testtools.TestCase): """Test case base class for all tests.""" # A way to adjust slow test classes TIMEOUT_SCALING_FACTOR = 1.0 def setUp(self): """Run before each test method to initialize test environment.""" super(TestCase, self).setUp() test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0) try: test_timeout = int(int(test_timeout) * self.TIMEOUT_SCALING_FACTOR) except ValueError: # If timeout value is invalid do not set a timeout.
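# (With the conversion inside the try block above, a non-numeric OS_TEST_TIMEOUT value lands here instead of raising out of setUp.)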
test_timeout = 0 if test_timeout > 0: self.useFixture(fixtures.Timeout(test_timeout, gentle=True)) self.useFixture(fixtures.NestedTempfile()) self.useFixture(fixtures.TempHomeDir()) if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES: stdout = self.useFixture(fixtures.StringStream('stdout')).stream self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout)) if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES: stderr = self.useFixture(fixtures.StringStream('stderr')).stream self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr)) self._log_stream = StringIO() if os.environ.get('OS_ALWAYS_LOG') in _TRUE_VALUES: self.addCleanup(self.printLogs) else: self.addOnException(self.attachLogs) handler = logging.StreamHandler(self._log_stream) formatter = logging.Formatter('%(asctime)s %(name)-32s %(message)s') handler.setFormatter(formatter) logger = logging.getLogger('shade') logger.setLevel(logging.DEBUG) logger.addHandler(handler) # Enable HTTP level tracing logger = logging.getLogger('keystoneauth') logger.setLevel(logging.DEBUG) logger.addHandler(handler) logger.propagate = False def assertEqual(self, first, second, *args, **kwargs): '''Munch aware wrapper''' if isinstance(first, munch.Munch): first = first.toDict() if isinstance(second, munch.Munch): second = second.toDict() return super(TestCase, self).assertEqual( first, second, *args, **kwargs) def printLogs(self, *args): self._log_stream.seek(0) print(self._log_stream.read()) def attachLogs(self, *args): def reader(): self._log_stream.seek(0) while True: x = self._log_stream.read(4096) if not x: break yield x.encode('utf8') content = testtools.content.content_from_reader( reader, testtools.content_type.UTF8_TEXT, False) self.addDetail('logging', content) def add_info_on_exception(self, name, text): def add_content(unused): self.addDetail(name, testtools.content.text_content( pprint.pformat(text))) self.addOnException(add_content) shade-1.31.0/shade/tests/functional/0000775000175000017500000000000013440330010017256 5ustar zuulzuul00000000000000shade-1.31.0/shade/tests/functional/test_inventory.py0000666000175000017500000000714713440327640022756 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_inventory ---------------------------------- Functional tests for `shade` inventory methods. """ from shade import inventory from shade.tests.functional import base from shade.tests.functional.util import pick_flavor class TestInventory(base.BaseFunctionalTestCase): def setUp(self): super(TestInventory, self).setUp() # This needs to use an admin account, otherwise a public IP # is not allocated from devstack. 
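# OpenStackInventory accepts a named cloud from the test environment's clouds.yaml; 'devstack-admin' is the operator account the functional jobs configure.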
self.inventory = inventory.OpenStackInventory(cloud='devstack-admin') self.server_name = self.getUniqueString('inventory') self.flavor = pick_flavor( self.user_cloud.list_flavors(get_extra=False)) if self.flavor is None: self.assertTrue(False, 'no sensible flavor available') self.image = self.pick_image() self.addCleanup(self._cleanup_server) server = self.operator_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True, auto_ip=True) self.server_id = server['id'] def _cleanup_server(self): self.user_cloud.delete_server(self.server_id, wait=True) def _test_host_content(self, host): self.assertEqual(host['image']['id'], self.image.id) self.assertNotIn('links', host['image']) self.assertEqual(host['flavor']['id'], self.flavor.id) self.assertNotIn('links', host['flavor']) self.assertNotIn('links', host) self.assertIsInstance(host['volumes'], list) self.assertIsInstance(host['metadata'], dict) self.assertIn('interface_ip', host) def _test_expanded_host_content(self, host): self.assertEqual(host['image']['name'], self.image.name) self.assertEqual(host['flavor']['name'], self.flavor.name) def test_get_host(self): host = self.inventory.get_host(self.server_id) self.assertIsNotNone(host) self.assertEqual(host['name'], self.server_name) self._test_host_content(host) self._test_expanded_host_content(host) host_found = False for host in self.inventory.list_hosts(): if host['id'] == self.server_id: host_found = True self._test_host_content(host) self.assertTrue(host_found) def test_get_host_no_detail(self): host = self.inventory.get_host(self.server_id, expand=False) self.assertIsNotNone(host) self.assertEqual(host['name'], self.server_name) self.assertEqual(host['image']['id'], self.image.id) self.assertNotIn('links', host['image']) self.assertNotIn('name', host['image']) self.assertEqual(host['flavor']['id'], self.flavor.id) self.assertNotIn('links', host['flavor']) self.assertNotIn('name', host['flavor']) host_found = False for host in self.inventory.list_hosts(expand=False): if host['id'] == self.server_id: host_found = True self._test_host_content(host) self.assertTrue(host_found) shade-1.31.0/shade/tests/functional/test_keypairs.py0000666000175000017500000000456413440327640022540 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
""" test_keypairs ---------------------------------- Functional tests for `shade` keypairs methods """ from shade.tests import fakes from shade.tests.functional import base class TestKeypairs(base.BaseFunctionalTestCase): def test_create_and_delete(self): '''Test creating and deleting keypairs functionality''' name = self.getUniqueString('keypair') self.addCleanup(self.user_cloud.delete_keypair, name) keypair = self.user_cloud.create_keypair(name=name) self.assertEqual(keypair['name'], name) self.assertIsNotNone(keypair['public_key']) self.assertIsNotNone(keypair['private_key']) self.assertIsNotNone(keypair['fingerprint']) self.assertEqual(keypair['type'], 'ssh') keypairs = self.user_cloud.list_keypairs() self.assertIn(name, [k['name'] for k in keypairs]) self.user_cloud.delete_keypair(name) keypairs = self.user_cloud.list_keypairs() self.assertNotIn(name, [k['name'] for k in keypairs]) def test_create_and_delete_with_key(self): '''Test creating and deleting keypairs functionality''' name = self.getUniqueString('keypair') self.addCleanup(self.user_cloud.delete_keypair, name) keypair = self.user_cloud.create_keypair( name=name, public_key=fakes.FAKE_PUBLIC_KEY) self.assertEqual(keypair['name'], name) self.assertIsNotNone(keypair['public_key']) self.assertIsNone(keypair['private_key']) self.assertIsNotNone(keypair['fingerprint']) self.assertEqual(keypair['type'], 'ssh') keypairs = self.user_cloud.list_keypairs() self.assertIn(name, [k['name'] for k in keypairs]) self.user_cloud.delete_keypair(name) keypairs = self.user_cloud.list_keypairs() self.assertNotIn(name, [k['name'] for k in keypairs]) shade-1.31.0/shade/tests/functional/test_compute.py0000666000175000017500000005142113440327640022367 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_compute ---------------------------------- Functional tests for `shade` compute methods. """ from fixtures import TimeoutException import six from shade import exc from shade.tests.functional import base from shade.tests.functional.util import pick_flavor from shade import _utils class TestCompute(base.BaseFunctionalTestCase): def setUp(self): # OS_TEST_TIMEOUT is 60 sec by default # but on a bad day, test_attach_detach_volume can take more time. self.TIMEOUT_SCALING_FACTOR = 1.5 super(TestCompute, self).setUp() self.flavor = pick_flavor( self.user_cloud.list_flavors(get_extra=False)) if self.flavor is None: self.assertFalse('no sensible flavor available') self.image = self.pick_image() self.server_name = self.getUniqueString() def _cleanup_servers_and_volumes(self, server_name): """Delete the named server and any attached volumes. Adding separate cleanup calls for servers and volumes can be tricky since they need to be done in the proper order. And sometimes deleting a server can start the process of deleting a volume if it is booted from that volume. This encapsulates that logic. 
""" server = self.user_cloud.get_server(server_name) if not server: return volumes = self.user_cloud.get_volumes(server) try: self.user_cloud.delete_server(server.name, wait=True) for volume in volumes: if volume.status != 'deleting': self.user_cloud.delete_volume(volume.id, wait=True) except (exc.OpenStackCloudTimeout, TimeoutException): # Ups, some timeout occured during process of deletion server # or volumes, so now we will try to call delete each of them # once again and we will try to live with it self.user_cloud.delete_server(server.name) for volume in volumes: self.operator_cloud.delete_volume( volume.id, wait=False, force=True) def test_create_and_delete_server(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_create_and_delete_server_auto_ip_delete_ips(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, auto_ip=True, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server( self.server_name, wait=True, delete_ips=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_attach_detach_volume(self): self.skipTest('Volume functional tests temporarily disabled') server_name = self.getUniqueString() self.addCleanup(self._cleanup_servers_and_volumes, server_name) server = self.user_cloud.create_server( name=server_name, image=self.image, flavor=self.flavor, wait=True) volume = self.user_cloud.create_volume(1) vol_attachment = self.user_cloud.attach_volume(server, volume) for key in ('device', 'serverId', 'volumeId'): self.assertIn(key, vol_attachment) self.assertTrue(vol_attachment[key]) # assert string is not empty self.assertIsNone(self.user_cloud.detach_volume(server, volume)) def test_create_and_delete_server_with_config_drive(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, config_drive=True, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertTrue(server['has_config_drive']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_create_and_delete_server_with_config_drive_none(self): # check that we're not sending invalid values for config_drive # if it's passed in explicitly as None - which nodepool does if it's # not set in the config self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, config_drive=None, wait=True) 
self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertFalse(server['has_config_drive']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server( self.server_name, wait=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_list_all_servers(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) # We're going to get servers from other tests, but that's ok, as long # as we get the server we created with the demo user. found_server = False for s in self.operator_cloud.list_servers(all_projects=True): if s.name == server.name: found_server = True self.assertTrue(found_server) def test_list_all_servers_bad_permissions(self): # Normal users are not allowed to pass all_projects=True self.assertRaises( exc.OpenStackCloudException, self.user_cloud.list_servers, all_projects=True) def test_create_server_image_flavor_dict(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image={'id': self.image.id}, flavor={'id': self.flavor.id}, wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertIsNotNone(server['adminPass']) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_get_server_console(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) # _get_server_console_output does not trap HTTP exceptions, so this # returning a string tests that the call is correct. Testing that # the cloud returns actual data in the output is out of scope. 
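# six.string_types is (str,) on Python 3 and (basestring,) on Python 2, so this assertion works under every interpreter the suite runs on.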
log = self.user_cloud._get_server_console_output(server_id=server.id) self.assertTrue(isinstance(log, six.string_types)) def test_get_server_console_name_or_id(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) log = self.user_cloud.get_server_console(server=self.server_name) self.assertTrue(isinstance(log, six.string_types)) def test_list_availability_zone_names(self): self.assertEqual( ['nova'], self.user_cloud.list_availability_zone_names()) def test_get_server_console_bad_server(self): self.assertRaises( exc.OpenStackCloudException, self.user_cloud.get_server_console, server=self.server_name) def test_create_and_delete_server_with_admin_pass(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, admin_pass='sheiqu9loegahSh', wait=True) self.assertEqual(self.server_name, server['name']) self.assertEqual(self.image.id, server['image']['id']) self.assertEqual(self.flavor.id, server['flavor']['id']) self.assertEqual(server['adminPass'], 'sheiqu9loegahSh') self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_get_image_id(self): self.assertEqual( self.image.id, self.user_cloud.get_image_id(self.image.id)) self.assertEqual( self.image.id, self.user_cloud.get_image_id(self.image.name)) def test_get_image_name(self): self.assertEqual( self.image.name, self.user_cloud.get_image_name(self.image.id)) self.assertEqual( self.image.name, self.user_cloud.get_image_name(self.image.name)) def _assert_volume_attach(self, server, volume_id=None, image=''): self.assertEqual(self.server_name, server['name']) self.assertEqual(image, server['image']) self.assertEqual(self.flavor.id, server['flavor']['id']) volumes = self.user_cloud.get_volumes(server) self.assertEqual(1, len(volumes)) volume = volumes[0] if volume_id: self.assertEqual(volume_id, volume['id']) else: volume_id = volume['id'] self.assertEqual(1, len(volume['attachments'])) self.assertEqual(server['id'], volume['attachments'][0]['server_id']) return volume_id def test_create_boot_from_volume_image(self): self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, boot_from_volume=True, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server) volume = self.user_cloud.get_volume(volume_id) self.assertIsNotNone(volume) self.assertEqual(volume['name'], volume['display_name']) self.assertTrue(volume['bootable']) self.assertEqual(server['id'], volume['attachments'][0]['server_id']) self.assertTrue(self.user_cloud.delete_server(server.id, wait=True)) self._wait_for_detach(volume.id) self.assertTrue(self.user_cloud.delete_volume(volume.id, wait=True)) self.assertIsNone(self.user_cloud.get_server(server.id)) self.assertIsNone(self.user_cloud.get_volume(volume.id)) def _wait_for_detach(self, volume_id): # Volumes do not show up as unattached for a bit immediately after # deleting a server that had had a volume attached. Yay for eventual # consistency!
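# _utils._iterate_timeout yields until the deadline and then raises OpenStackCloudTimeout, so returning from the loop body is the success path.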
for count in _utils._iterate_timeout( 60, 'Timeout waiting for volume {volume_id} to detach'.format( volume_id=volume_id)): volume = self.user_cloud.get_volume(volume_id) if volume.status in ( 'available', 'error', 'error_restoring', 'error_extending'): return def test_create_terminate_volume_image(self): self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, boot_from_volume=True, terminate_volume=True, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) volume = self.user_cloud.get_volume(volume_id) # We can either get None (if the volume delete was quick), or a volume # that is in the process of being deleted. if volume: self.assertEqual('deleting', volume.status) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_create_boot_from_volume_preexisting(self): self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) volume = self.user_cloud.create_volume( size=1, name=self.server_name, image=self.image, wait=True) self.addCleanup(self.user_cloud.delete_volume, volume.id) server = self.user_cloud.create_server( name=self.server_name, image=None, flavor=self.flavor, boot_volume=volume, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server, volume_id=volume['id']) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) volume = self.user_cloud.get_volume(volume_id) self.assertIsNotNone(volume) self.assertEqual(volume['name'], volume['display_name']) self.assertTrue(volume['bootable']) self.assertEqual([], volume['attachments']) self._wait_for_detach(volume.id) self.assertTrue(self.user_cloud.delete_volume(volume_id)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) self.assertIsNone(self.user_cloud.get_volume(volume_id)) def test_create_boot_attach_volume(self): self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) volume = self.user_cloud.create_volume( size=1, name=self.server_name, image=self.image, wait=True) self.addCleanup(self.user_cloud.delete_volume, volume['id']) server = self.user_cloud.create_server( name=self.server_name, flavor=self.flavor, image=self.image, boot_from_volume=False, volumes=[volume], wait=True) volume_id = self._assert_volume_attach( server, volume_id=volume['id'], image={'id': self.image['id']}) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) volume = self.user_cloud.get_volume(volume_id) self.assertIsNotNone(volume) self.assertEqual(volume['name'], volume['display_name']) self.assertEqual([], volume['attachments']) self._wait_for_detach(volume.id) self.assertTrue(self.user_cloud.delete_volume(volume_id)) self.assertIsNone(self.user_cloud.get_server(self.server_name)) self.assertIsNone(self.user_cloud.get_volume(volume_id)) def test_create_boot_from_volume_preexisting_terminate(self): self.skipTest('Volume functional tests temporarily disabled') 
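# The skipTest call above raises immediately, so the rest of this test is currently dead code retained for when the volume tests are re-enabled.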
if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) volume = self.user_cloud.create_volume( size=1, name=self.server_name, image=self.image, wait=True) server = self.user_cloud.create_server( name=self.server_name, image=None, flavor=self.flavor, boot_volume=volume, terminate_volume=True, volume_size=1, wait=True) volume_id = self._assert_volume_attach(server, volume_id=volume['id']) self.assertTrue( self.user_cloud.delete_server(self.server_name, wait=True)) volume = self.user_cloud.get_volume(volume_id) # We can either get None (if the volume delete was quick), or a volume # that is in the process of being deleted. if volume: self.assertEqual('deleting', volume.status) self.assertIsNone(self.user_cloud.get_server(self.server_name)) def test_create_image_snapshot_wait_active(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) server = self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, admin_pass='sheiqu9loegahSh', wait=True) image = self.user_cloud.create_image_snapshot('test-snapshot', server, wait=True) self.addCleanup(self.user_cloud.delete_image, image['id']) self.assertEqual('active', image['status']) def test_set_and_delete_metadata(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) self.user_cloud.set_server_metadata(self.server_name, {'key1': 'value1', 'key2': 'value2'}) updated_server = self.user_cloud.get_server(self.server_name) self.assertEqual(set(updated_server.metadata.items()), set({'key1': 'value1', 'key2': 'value2'}.items())) self.user_cloud.set_server_metadata(self.server_name, {'key2': 'value3'}) updated_server = self.user_cloud.get_server(self.server_name) self.assertEqual(set(updated_server.metadata.items()), set({'key1': 'value1', 'key2': 'value3'}.items())) self.user_cloud.delete_server_metadata(self.server_name, ['key2']) updated_server = self.user_cloud.get_server(self.server_name) self.assertEqual(set(updated_server.metadata.items()), set({'key1': 'value1'}.items())) self.user_cloud.delete_server_metadata(self.server_name, ['key1']) updated_server = self.user_cloud.get_server(self.server_name) self.assertEqual(set(updated_server.metadata.items()), set([])) self.assertRaises( exc.OpenStackCloudURINotFound, self.user_cloud.delete_server_metadata, self.server_name, ['key1']) def test_update_server(self): self.addCleanup(self._cleanup_servers_and_volumes, self.server_name) self.user_cloud.create_server( name=self.server_name, image=self.image, flavor=self.flavor, wait=True) server_updated = self.user_cloud.update_server( self.server_name, name='new_name' ) self.assertEqual('new_name', server_updated['name']) shade-1.31.0/shade/tests/functional/test_volume_type.py0000666000175000017500000001071713440327640023266 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ test_volume_type ---------------------------------- Functional tests for `shade` volume type methods. """ import testtools from shade import exc from shade.tests.functional import base class TestVolumeType(base.BaseFunctionalTestCase): def _assert_project(self, volume_name_or_id, project_id, allowed=True): acls = self.operator_cloud.get_volume_type_access(volume_name_or_id) allowed_projects = [x.get('project_id') for x in acls] self.assertEqual(allowed, project_id in allowed_projects) def setUp(self): super(TestVolumeType, self).setUp() if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') volume_type = { "name": 'test-volume-type', "description": None, "os-volume-type-access:is_public": False} self.operator_cloud._volume_client.post( '/types', json={'volume_type': volume_type}) def tearDown(self): ret = self.operator_cloud.get_volume_type('test-volume-type') if ret.get('id'): self.operator_cloud._volume_client.delete( '/types/{volume_type_id}'.format(volume_type_id=ret.id)) super(TestVolumeType, self).tearDown() def test_list_volume_types(self): volume_types = self.operator_cloud.list_volume_types() self.assertTrue(volume_types) self.assertTrue(any( x for x in volume_types if x.name == 'test-volume-type')) def test_add_remove_volume_type_access(self): volume_type = self.operator_cloud.get_volume_type('test-volume-type') self.assertEqual('test-volume-type', volume_type.name) self.operator_cloud.add_volume_type_access( 'test-volume-type', self.operator_cloud.current_project_id) self._assert_project( 'test-volume-type', self.operator_cloud.current_project_id, allowed=True) self.operator_cloud.remove_volume_type_access( 'test-volume-type', self.operator_cloud.current_project_id) self._assert_project( 'test-volume-type', self.operator_cloud.current_project_id, allowed=False) def test_add_volume_type_access_missing_project(self): # Project id is not validated and it may not exist. self.operator_cloud.add_volume_type_access( 'test-volume-type', '00000000000000000000000000000000') self.operator_cloud.remove_volume_type_access( 'test-volume-type', '00000000000000000000000000000000') def test_add_volume_type_access_missing_volume(self): with testtools.ExpectedException( exc.OpenStackCloudException, "VolumeType not found.*" ): self.operator_cloud.add_volume_type_access( 'MISSING_VOLUME_TYPE', self.operator_cloud.current_project_id) def test_remove_volume_type_access_missing_volume(self): with testtools.ExpectedException( exc.OpenStackCloudException, "VolumeType not found.*" ): self.operator_cloud.remove_volume_type_access( 'MISSING_VOLUME_TYPE', self.operator_cloud.current_project_id) def test_add_volume_type_access_bad_project(self): with testtools.ExpectedException( exc.OpenStackCloudBadRequest, "Unable to authorize.*" ): self.operator_cloud.add_volume_type_access( 'test-volume-type', 'BAD_PROJECT_ID') def test_remove_volume_type_access_missing_project(self): with testtools.ExpectedException( exc.OpenStackCloudURINotFound, "Unable to revoke.*" ): self.operator_cloud.remove_volume_type_access( 'test-volume-type', '00000000000000000000000000000000') shade-1.31.0/shade/tests/functional/test_qos_bandwidth_limit_rule.py0000666000175000017500000001000413440327640025756 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_qos_bandwidth_limit_rule ---------------------------------- Functional tests for `shade` QoS bandwidth limit methods. """ from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestQosBandwidthLimitRule(base.BaseFunctionalTestCase): def setUp(self): super(TestQosBandwidthLimitRule, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') policy_name = self.getUniqueString('qos_policy') self.policy = self.operator_cloud.create_qos_policy(name=policy_name) self.addCleanup(self._cleanup_qos_policy) def _cleanup_qos_policy(self): try: self.operator_cloud.delete_qos_policy(self.policy['id']) except Exception as e: raise OpenStackCloudException(e) def test_qos_bandwidth_limit_rule_lifecycle(self): max_kbps = 1500 max_burst_kbps = 500 updated_max_kbps = 2000 # Create bw limit rule rule = self.operator_cloud.create_qos_bandwidth_limit_rule( self.policy['id'], max_kbps=max_kbps, max_burst_kbps=max_burst_kbps) self.assertIn('id', rule) self.assertEqual(max_kbps, rule['max_kbps']) self.assertEqual(max_burst_kbps, rule['max_burst_kbps']) # Now try to update rule updated_rule = self.operator_cloud.update_qos_bandwidth_limit_rule( self.policy['id'], rule['id'], max_kbps=updated_max_kbps) self.assertIn('id', updated_rule) self.assertEqual(updated_max_kbps, updated_rule['max_kbps']) self.assertEqual(max_burst_kbps, updated_rule['max_burst_kbps']) # List rules from policy policy_rules = self.operator_cloud.list_qos_bandwidth_limit_rules( self.policy['id']) self.assertEqual([updated_rule], policy_rules) # Delete rule self.operator_cloud.delete_qos_bandwidth_limit_rule( self.policy['id'], updated_rule['id']) # Check that there are no rules left in the policy policy_rules = self.operator_cloud.list_qos_bandwidth_limit_rules( self.policy['id']) self.assertEqual([], policy_rules) def test_create_qos_bandwidth_limit_rule_direction(self): if not self.operator_cloud._has_neutron_extension( 'qos-bw-limit-direction'): self.skipTest("'qos-bw-limit-direction' network extension " "not supported by cloud") max_kbps = 1500 direction = "ingress" updated_direction = "egress" # Create bw limit rule rule = self.operator_cloud.create_qos_bandwidth_limit_rule( self.policy['id'], max_kbps=max_kbps, direction=direction) self.assertIn('id', rule) self.assertEqual(max_kbps, rule['max_kbps']) self.assertEqual(direction, rule['direction']) # Now try to update direction in rule updated_rule = self.operator_cloud.update_qos_bandwidth_limit_rule( self.policy['id'], rule['id'], direction=updated_direction) self.assertIn('id', updated_rule) self.assertEqual(max_kbps, updated_rule['max_kbps']) self.assertEqual(updated_direction, updated_rule['direction']) shade-1.31.0/shade/tests/functional/test_router.py0000666000175000017500000003313213440327640022232 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_router ---------------------------------- Functional tests for `shade` router methods. """ import ipaddress from shade.exc import OpenStackCloudException from shade.tests.functional import base EXPECTED_TOPLEVEL_FIELDS = ( 'id', 'name', 'admin_state_up', 'external_gateway_info', 'tenant_id', 'routes', 'status' ) EXPECTED_GW_INFO_FIELDS = ('network_id', 'enable_snat', 'external_fixed_ips') class TestRouter(base.BaseFunctionalTestCase): def setUp(self): super(TestRouter, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') self.router_prefix = self.getUniqueString('router') self.network_prefix = self.getUniqueString('network') self.subnet_prefix = self.getUniqueString('subnet') # NOTE(Shrews): Order matters! self.addCleanup(self._cleanup_networks) self.addCleanup(self._cleanup_subnets) self.addCleanup(self._cleanup_routers) def _cleanup_routers(self): exception_list = list() for router in self.operator_cloud.list_routers(): if router['name'].startswith(self.router_prefix): try: self.operator_cloud.delete_router(router['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_networks(self): exception_list = list() for network in self.operator_cloud.list_networks(): if network['name'].startswith(self.network_prefix): try: self.operator_cloud.delete_network(network['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_subnets(self): exception_list = list() for subnet in self.operator_cloud.list_subnets(): if subnet['name'].startswith(self.subnet_prefix): try: self.operator_cloud.delete_subnet(subnet['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_router_basic(self): net1_name = self.network_prefix + '_net1' net1 = self.operator_cloud.create_network( name=net1_name, external=True) router_name = self.router_prefix + '_create_basic' router = self.operator_cloud.create_router( name=router_name, admin_state_up=True, ext_gateway_net_id=net1['id'], ) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, router) ext_gw_info = router['external_gateway_info'] for field in EXPECTED_GW_INFO_FIELDS: self.assertIn(field, ext_gw_info) self.assertEqual(router_name, router['name']) self.assertEqual('ACTIVE', router['status']) self.assertEqual(net1['id'], ext_gw_info['network_id']) self.assertTrue(ext_gw_info['enable_snat']) def test_create_router_project(self): project = self.operator_cloud.get_project('demo') self.assertIsNotNone(project) proj_id = project['id'] net1_name = self.network_prefix + '_net1' net1 = self.operator_cloud.create_network( name=net1_name, external=True, project_id=proj_id) router_name = self.router_prefix + '_create_project' router = self.operator_cloud.create_router( name=router_name, admin_state_up=True, ext_gateway_net_id=net1['id'], project_id=proj_id ) for field in 
EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, router) ext_gw_info = router['external_gateway_info'] for field in EXPECTED_GW_INFO_FIELDS: self.assertIn(field, ext_gw_info) self.assertEqual(router_name, router['name']) self.assertEqual('ACTIVE', router['status']) self.assertEqual(proj_id, router['tenant_id']) self.assertEqual(net1['id'], ext_gw_info['network_id']) self.assertTrue(ext_gw_info['enable_snat']) def _create_and_verify_advanced_router(self, external_cidr, external_gateway_ip=None): # external_cidr must be passed in as unicode (u'') # NOTE(Shrews): The arguments are needed because these tests # will run in parallel and we want to make sure that each test # is using different resources to prevent race conditions. net1_name = self.network_prefix + '_net1' sub1_name = self.subnet_prefix + '_sub1' net1 = self.operator_cloud.create_network( name=net1_name, external=True) sub1 = self.operator_cloud.create_subnet( net1['id'], external_cidr, subnet_name=sub1_name, gateway_ip=external_gateway_ip ) ip_net = ipaddress.IPv4Network(external_cidr) last_ip = str(list(ip_net.hosts())[-1]) router_name = self.router_prefix + '_create_advanced' router = self.operator_cloud.create_router( name=router_name, admin_state_up=False, ext_gateway_net_id=net1['id'], enable_snat=False, ext_fixed_ips=[ {'subnet_id': sub1['id'], 'ip_address': last_ip} ] ) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, router) ext_gw_info = router['external_gateway_info'] for field in EXPECTED_GW_INFO_FIELDS: self.assertIn(field, ext_gw_info) self.assertEqual(router_name, router['name']) self.assertEqual('ACTIVE', router['status']) self.assertFalse(router['admin_state_up']) self.assertEqual(1, len(ext_gw_info['external_fixed_ips'])) self.assertEqual( sub1['id'], ext_gw_info['external_fixed_ips'][0]['subnet_id'] ) self.assertEqual( last_ip, ext_gw_info['external_fixed_ips'][0]['ip_address'] ) return router def test_create_router_advanced(self): self._create_and_verify_advanced_router(external_cidr=u'10.2.2.0/24') def test_add_remove_router_interface(self): router = self._create_and_verify_advanced_router( external_cidr=u'10.3.3.0/24') net_name = self.network_prefix + '_intnet1' sub_name = self.subnet_prefix + '_intsub1' net = self.operator_cloud.create_network(name=net_name) sub = self.operator_cloud.create_subnet( net['id'], '10.4.4.0/24', subnet_name=sub_name, gateway_ip='10.4.4.1' ) iface = self.operator_cloud.add_router_interface( router, subnet_id=sub['id']) self.assertIsNone( self.operator_cloud.remove_router_interface( router, subnet_id=sub['id']) ) # Test return values *after* the interface is detached so the # resources we've created can be cleaned up if these asserts fail. 
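# add_router_interface returns neutron's response body, whose 'id' is the router id and whose 'subnet_id' is the attached subnet, matching the assertions below.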
self.assertIsNotNone(iface) for key in ('id', 'subnet_id', 'port_id', 'tenant_id'): self.assertIn(key, iface) self.assertEqual(router['id'], iface['id']) self.assertEqual(sub['id'], iface['subnet_id']) def test_list_router_interfaces(self): router = self._create_and_verify_advanced_router( external_cidr=u'10.5.5.0/24') net_name = self.network_prefix + '_intnet1' sub_name = self.subnet_prefix + '_intsub1' net = self.operator_cloud.create_network(name=net_name) sub = self.operator_cloud.create_subnet( net['id'], '10.6.6.0/24', subnet_name=sub_name, gateway_ip='10.6.6.1' ) iface = self.operator_cloud.add_router_interface( router, subnet_id=sub['id']) all_ifaces = self.operator_cloud.list_router_interfaces(router) int_ifaces = self.operator_cloud.list_router_interfaces( router, interface_type='internal') ext_ifaces = self.operator_cloud.list_router_interfaces( router, interface_type='external') self.assertIsNone( self.operator_cloud.remove_router_interface( router, subnet_id=sub['id']) ) # Test return values *after* the interface is detached so the # resources we've created can be cleaned up if these asserts fail. self.assertIsNotNone(iface) self.assertEqual(2, len(all_ifaces)) self.assertEqual(1, len(int_ifaces)) self.assertEqual(1, len(ext_ifaces)) ext_fixed_ips = router['external_gateway_info']['external_fixed_ips'] self.assertEqual(ext_fixed_ips[0]['subnet_id'], ext_ifaces[0]['fixed_ips'][0]['subnet_id']) self.assertEqual(sub['id'], int_ifaces[0]['fixed_ips'][0]['subnet_id']) def test_update_router_name(self): router = self._create_and_verify_advanced_router( external_cidr=u'10.7.7.0/24') new_name = self.router_prefix + '_update_name' updated = self.operator_cloud.update_router( router['id'], name=new_name) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # Name is the only change we expect self.assertEqual(new_name, updated['name']) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['admin_state_up'], updated['admin_state_up']) self.assertEqual(router['external_gateway_info'], updated['external_gateway_info']) def test_update_router_routes(self): router = self._create_and_verify_advanced_router( external_cidr=u'10.7.7.0/24') routes = [{ "destination": "10.7.7.0/24", "nexthop": "10.7.7.99" }] updated = self.operator_cloud.update_router( router['id'], routes=routes) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # Name is the only change we expect self.assertEqual(routes, updated['routes']) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['admin_state_up'], updated['admin_state_up']) self.assertEqual(router['external_gateway_info'], updated['external_gateway_info']) def test_update_router_admin_state(self): router = self._create_and_verify_advanced_router( external_cidr=u'10.8.8.0/24') updated = self.operator_cloud.update_router( router['id'], admin_state_up=True) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # admin_state_up is the only change we expect self.assertTrue(updated['admin_state_up']) self.assertNotEqual(router['admin_state_up'], updated['admin_state_up']) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['name'], updated['name']) self.assertEqual(router['external_gateway_info'], updated['external_gateway_info']) def 
test_update_router_ext_gw_info(self): router = self._create_and_verify_advanced_router( external_cidr=u'10.9.9.0/24') # create a new subnet existing_net_id = router['external_gateway_info']['network_id'] sub_name = self.subnet_prefix + '_update' sub = self.operator_cloud.create_subnet( existing_net_id, '10.10.10.0/24', subnet_name=sub_name, gateway_ip='10.10.10.1' ) updated = self.operator_cloud.update_router( router['id'], ext_gateway_net_id=existing_net_id, ext_fixed_ips=[ {'subnet_id': sub['id'], 'ip_address': '10.10.10.77'} ] ) self.assertIsNotNone(updated) for field in EXPECTED_TOPLEVEL_FIELDS: self.assertIn(field, updated) # external_gateway_info is the only change we expect ext_gw_info = updated['external_gateway_info'] self.assertEqual(1, len(ext_gw_info['external_fixed_ips'])) self.assertEqual( sub['id'], ext_gw_info['external_fixed_ips'][0]['subnet_id'] ) self.assertEqual( '10.10.10.77', ext_gw_info['external_fixed_ips'][0]['ip_address'] ) # Validate nothing else changed self.assertEqual(router['status'], updated['status']) self.assertEqual(router['name'], updated['name']) self.assertEqual(router['admin_state_up'], updated['admin_state_up']) shade-1.31.0/shade/tests/functional/__init__.py0000666000175000017500000000000013440327640021376 0ustar zuulzuul00000000000000shade-1.31.0/shade/tests/functional/test_security_groups.py0000666000175000017500000000475013440327640024164 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_security_groups ---------------------------------- Functional tests for `shade` security_groups resource. 
""" from shade.tests.functional import base class TestSecurityGroups(base.BaseFunctionalTestCase): def test_create_list_security_groups(self): sg1 = self.user_cloud.create_security_group( name="sg1", description="sg1") self.addCleanup(self.user_cloud.delete_security_group, sg1['id']) sg2 = self.operator_cloud.create_security_group( name="sg2", description="sg2") self.addCleanup(self.operator_cloud.delete_security_group, sg2['id']) if self.user_cloud.has_service('network'): # Neutron defaults to all_tenants=1 when admin sg_list = self.operator_cloud.list_security_groups() self.assertIn(sg1['id'], [sg['id'] for sg in sg_list]) # Filter by tenant_id (filtering by project_id won't work with # Keystone V2) sg_list = self.operator_cloud.list_security_groups( filters={'tenant_id': self.user_cloud.current_project_id}) self.assertIn(sg1['id'], [sg['id'] for sg in sg_list]) self.assertNotIn(sg2['id'], [sg['id'] for sg in sg_list]) else: # Nova does not list all tenants by default sg_list = self.operator_cloud.list_security_groups() self.assertIn(sg2['id'], [sg['id'] for sg in sg_list]) self.assertNotIn(sg1['id'], [sg['id'] for sg in sg_list]) sg_list = self.operator_cloud.list_security_groups( filters={'all_tenants': 1}) self.assertIn(sg1['id'], [sg['id'] for sg in sg_list]) def test_get_security_group_by_id(self): sg = self.user_cloud.create_security_group(name='sg', description='sg') self.addCleanup(self.user_cloud.delete_security_group, sg['id']) ret_sg = self.user_cloud.get_security_group_by_id(sg['id']) self.assertEqual(sg, ret_sg) shade-1.31.0/shade/tests/functional/test_volume.py0000666000175000017500000001406213440327640022222 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_volume ---------------------------------- Functional tests for `shade` block storage methods. 
""" from fixtures import TimeoutException from testtools import content from shade import _utils from shade import exc from shade.tests.functional import base class TestVolume(base.BaseFunctionalTestCase): # Creating and deleting volumes is slow TIMEOUT_SCALING_FACTOR = 1.5 def setUp(self): super(TestVolume, self).setUp() self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') def test_volumes(self): '''Test volume and snapshot functionality''' volume_name = self.getUniqueString() snapshot_name = self.getUniqueString() self.addDetail('volume', content.text_content(volume_name)) self.addCleanup(self.cleanup, volume_name, snapshot_name=snapshot_name) volume = self.user_cloud.create_volume( display_name=volume_name, size=1) snapshot = self.user_cloud.create_volume_snapshot( volume['id'], display_name=snapshot_name ) ret_volume = self.user_cloud.get_volume_by_id(volume['id']) self.assertEqual(volume['id'], ret_volume['id']) volume_ids = [v['id'] for v in self.user_cloud.list_volumes()] self.assertIn(volume['id'], volume_ids) snapshot_list = self.user_cloud.list_volume_snapshots() snapshot_ids = [s['id'] for s in snapshot_list] self.assertIn(snapshot['id'], snapshot_ids) ret_snapshot = self.user_cloud.get_volume_snapshot_by_id( snapshot['id']) self.assertEqual(snapshot['id'], ret_snapshot['id']) self.user_cloud.delete_volume_snapshot(snapshot_name, wait=True) self.user_cloud.delete_volume(volume_name, wait=True) def test_volume_to_image(self): '''Test volume export to image functionality''' volume_name = self.getUniqueString() image_name = self.getUniqueString() self.addDetail('volume', content.text_content(volume_name)) self.addCleanup(self.cleanup, volume_name, image_name=image_name) volume = self.user_cloud.create_volume( display_name=volume_name, size=1) image = self.user_cloud.create_image( image_name, volume=volume, wait=True) volume_ids = [v['id'] for v in self.user_cloud.list_volumes()] self.assertIn(volume['id'], volume_ids) image_list = self.user_cloud.list_images() image_ids = [s['id'] for s in image_list] self.assertIn(image['id'], image_ids) self.user_cloud.delete_image(image_name, wait=True) self.user_cloud.delete_volume(volume_name, wait=True) def cleanup(self, volume, snapshot_name=None, image_name=None): # Need to delete snapshots before volumes if snapshot_name: snapshot = self.user_cloud.get_volume_snapshot(snapshot_name) if snapshot: self.user_cloud.delete_volume_snapshot( snapshot_name, wait=True) if image_name: image = self.user_cloud.get_image(image_name) if image: self.user_cloud.delete_image(image_name, wait=True) if not isinstance(volume, list): self.user_cloud.delete_volume(volume, wait=True) else: # We have more than one volume to clean up - submit all of the # deletes without wait, then poll until none of them are found # in the volume list anymore for v in volume: self.user_cloud.delete_volume(v, wait=False) try: for count in _utils._iterate_timeout( 180, "Timeout waiting for volume cleanup"): found = False for existing in self.user_cloud.list_volumes(): for v in volume: if v['id'] == existing['id']: found = True break if found: break if not found: break except (exc.OpenStackCloudTimeout, TimeoutException): # NOTE(slaweq): ups, some volumes are still not removed # so we should try to force delete it once again and move # forward for existing in self.user_cloud.list_volumes(): for v in volume: if v['id'] == existing['id']: self.operator_cloud.delete_volume( v, 
wait=False, force=True) def test_list_volumes_pagination(self): '''Test pagination for list volumes functionality''' volumes = [] # the number of created volumes needs to be higher than # CONF.osapi_max_limit but not higher than volume quotas for # the test user in the tenant (default quota is set to 10) num_volumes = 8 for i in range(num_volumes): name = self.getUniqueString() v = self.user_cloud.create_volume(display_name=name, size=1) volumes.append(v) self.addCleanup(self.cleanup, volumes) result = [] for i in self.user_cloud.list_volumes(): if i['name'] and i['name'].startswith(self.id()): result.append(i['id']) self.assertEqual( sorted([i['id'] for i in volumes]), sorted(result)) shade-1.31.0/shade/tests/functional/test_port.py0000666000175000017500000001334313440327640021700 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_port ---------------------------------- Functional tests for `shade` port resource. """ import string import random from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestPort(base.BaseFunctionalTestCase): def setUp(self): super(TestPort, self).setUp() # Skip Neutron tests if neutron is not present if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') # Generate a unique port name to allow concurrent tests self.new_port_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_ports) def _cleanup_ports(self): exception_list = list() for p in self.operator_cloud.list_ports(): if p['name'].startswith(self.new_port_name): try: self.operator_cloud.delete_port(name_or_id=p['id']) except Exception as e: # We were unable to delete this port, let's try the next one exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_port(self): port_name = self.new_port_name + '_create' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) def test_get_port(self): port_name = self.new_port_name + '_get' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) # extra_dhcp_opts is added later by Neutron...
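        # (A hedged aside: the freshly-created port may lack the key entirely while the re-fetched copy carries it, so the fetched dict is normalized below before the two are compared.)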
if 'extra_dhcp_opts' in updated_port and 'extra_dhcp_opts' not in port: del updated_port['extra_dhcp_opts'] self.assertEqual(port, updated_port) def test_get_port_by_id(self): port_name = self.new_port_name + '_get_by_id' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port_by_id(port['id']) # extra_dhcp_opts is added later by Neutron... if 'extra_dhcp_opts' in updated_port and 'extra_dhcp_opts' not in port: del updated_port['extra_dhcp_opts'] self.assertEqual(port, updated_port) def test_update_port(self): port_name = self.new_port_name + '_update' new_port_name = port_name + '_new' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) port = self.operator_cloud.update_port( name_or_id=port_name, name=new_port_name) self.assertIsInstance(port, dict) self.assertEqual(port.get('name'), new_port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertEqual(port.get('name'), new_port_name) port.pop('revision_number', None) port.pop(u'revision_number', None) port.pop('updated_at', None) port.pop(u'updated_at', None) updated_port.pop('revision_number', None) updated_port.pop(u'revision_number', None) updated_port.pop('updated_at', None) updated_port.pop(u'updated_at', None) self.assertEqual(port, updated_port) def test_delete_port(self): port_name = self.new_port_name + '_delete' networks = self.operator_cloud.list_networks() if not networks: self.assertFalse('no sensible network available') port = self.operator_cloud.create_port( network_id=networks[0]['id'], name=port_name) self.assertIsInstance(port, dict) self.assertIn('id', port) self.assertEqual(port.get('name'), port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertIsNotNone(updated_port) self.operator_cloud.delete_port(name_or_id=port_name) updated_port = self.operator_cloud.get_port(name_or_id=port['id']) self.assertIsNone(updated_port) shade-1.31.0/shade/tests/functional/test_limits.py0000666000175000017500000000346013440327640022214 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
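# A minimal usage sketch (the bare 'cloud' handle is an assumption; it mirrors the assertions below): limits = cloud.get_compute_limits() returns limits normalized to snake_case attributes such as limits.max_server_meta, while raw camelCase keys like 'maxImageMeta' are dropped.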
""" test_limits ---------------------------------- Functional tests for `shade` limits method """ from shade.tests.functional import base class TestUsage(base.BaseFunctionalTestCase): def test_get_our_compute_limits(self): '''Test quotas functionality''' limits = self.user_cloud.get_compute_limits() self.assertIsNotNone(limits) self.assertTrue(hasattr(limits, 'max_server_meta')) # Test normalize limits self.assertFalse(hasattr(limits, 'maxImageMeta')) def test_get_other_compute_limits(self): '''Test quotas functionality''' limits = self.operator_cloud.get_compute_limits('demo') self.assertIsNotNone(limits) self.assertTrue(hasattr(limits, 'max_server_meta')) # Test normalize limits self.assertFalse(hasattr(limits, 'maxImageMeta')) def test_get_our_volume_limits(self): '''Test quotas functionality''' limits = self.user_cloud.get_volume_limits() self.assertIsNotNone(limits) self.assertFalse(hasattr(limits, 'maxTotalVolumes')) def test_get_other_volume_limits(self): '''Test quotas functionality''' limits = self.operator_cloud.get_volume_limits('demo') self.assertFalse(hasattr(limits, 'maxTotalVolumes')) shade-1.31.0/shade/tests/functional/test_users.py0000666000175000017500000001475013440327640022060 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_users ---------------------------------- Functional tests for `shade` user methods. 
""" from shade import OpenStackCloudException from shade.tests.functional import base class TestUsers(base.KeystoneBaseFunctionalTestCase): def setUp(self): super(TestUsers, self).setUp() self.user_prefix = self.getUniqueString('user') self.addCleanup(self._cleanup_users) def _cleanup_users(self): exception_list = list() for user in self.operator_cloud.list_users(): if user['name'].startswith(self.user_prefix): try: self.operator_cloud.delete_user(user['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _create_user(self, **kwargs): domain_id = None i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver not in ('2', '2.0'): domain = self.operator_cloud.get_domain('default') domain_id = domain['id'] return self.operator_cloud.create_user(domain_id=domain_id, **kwargs) def test_list_users(self): users = self.operator_cloud.list_users() self.assertIsNotNone(users) self.assertNotEqual([], users) def test_get_user(self): user = self.operator_cloud.get_user('admin') self.assertIsNotNone(user) self.assertIn('id', user) self.assertIn('name', user) self.assertEqual('admin', user['name']) def test_search_users(self): users = self.operator_cloud.search_users(filters={'enabled': True}) self.assertIsNotNone(users) def test_search_users_jmespath(self): users = self.operator_cloud.search_users(filters="[?enabled]") self.assertIsNotNone(users) def test_create_user(self): user_name = self.user_prefix + '_create' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertEqual(user_name, user['name']) self.assertEqual(user_email, user['email']) self.assertTrue(user['enabled']) def test_delete_user(self): user_name = self.user_prefix + '_delete' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertTrue(self.operator_cloud.delete_user(user['id'])) def test_delete_user_not_found(self): self.assertFalse(self.operator_cloud.delete_user('does_not_exist')) def test_update_user(self): user_name = self.user_prefix + '_updatev3' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) self.assertTrue(user['enabled']) # Pass some keystone v3 params. This should work no matter which # version of keystone we are testing against. 
new_user = self.operator_cloud.update_user( user['id'], name=user_name + '2', email='somebody@nowhere.com', enabled=False, password='secret', description='') self.assertIsNotNone(new_user) self.assertEqual(user['id'], new_user['id']) self.assertEqual(user_name + '2', new_user['name']) self.assertEqual('somebody@nowhere.com', new_user['email']) self.assertFalse(new_user['enabled']) def test_update_user_password(self): user_name = self.user_prefix + '_password' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email, password='old_secret') self.assertIsNotNone(user) self.assertTrue(user['enabled']) # This should work for both v2 and v3 new_user = self.operator_cloud.update_user( user['id'], password='new_secret') self.assertIsNotNone(new_user) self.assertEqual(user['id'], new_user['id']) self.assertEqual(user_name, new_user['name']) self.assertEqual(user_email, new_user['email']) self.assertTrue(new_user['enabled']) self.assertTrue(self.operator_cloud.grant_role( 'member', user=user['id'], project='demo', wait=True)) self.addCleanup( self.operator_cloud.revoke_role, 'member', user=user['id'], project='demo', wait=True) new_cloud = self.operator_cloud.connect_as( user_id=user['id'], password='new_secret', project_name='demo') self.assertIsNotNone(new_cloud) location = new_cloud.current_location self.assertEqual(location['project']['name'], 'demo') self.assertIsNotNone(new_cloud.service_catalog) def test_users_and_groups(self): i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support groups') group_name = self.getUniqueString('group') self.addCleanup(self.operator_cloud.delete_group, group_name) # Create a group group = self.operator_cloud.create_group(group_name, 'test group') self.assertIsNotNone(group) # Create a user user_name = self.user_prefix + '_ug' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email) self.assertIsNotNone(user) # Add the user to the group self.operator_cloud.add_user_to_group(user_name, group_name) self.assertTrue( self.operator_cloud.is_user_in_group(user_name, group_name)) # Remove them from the group self.operator_cloud.remove_user_from_group(user_name, group_name) self.assertFalse( self.operator_cloud.is_user_in_group(user_name, group_name)) shade-1.31.0/shade/tests/functional/test_stack.py0000666000175000017500000001313313440327640022016 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_stack ---------------------------------- Functional tests for `shade` stack methods. 
""" import tempfile from shade import exc from shade.tests import fakes from shade.tests.functional import base simple_template = '''heat_template_version: 2014-10-16 parameters: length: type: number default: 10 resources: my_rand: type: OS::Heat::RandomString properties: length: {get_param: length} outputs: rand: value: get_attr: [my_rand, value] ''' root_template = '''heat_template_version: 2014-10-16 parameters: length: type: number default: 10 count: type: number default: 5 resources: my_rands: type: OS::Heat::ResourceGroup properties: count: {get_param: count} resource_def: type: My::Simple::Template properties: length: {get_param: length} outputs: rands: value: get_attr: [my_rands, attributes, rand] ''' environment = ''' resource_registry: My::Simple::Template: %s ''' validate_template = '''heat_template_version: asdf-no-such-version ''' class TestStack(base.BaseFunctionalTestCase): def setUp(self): super(TestStack, self).setUp() if not self.user_cloud.has_service('orchestration'): self.skipTest('Orchestration service not supported by cloud') def _cleanup_stack(self): self.user_cloud.delete_stack(self.stack_name, wait=True) self.assertIsNone(self.user_cloud.get_stack(self.stack_name)) def test_stack_validation(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(validate_template.encode('utf-8')) test_template.close() stack_name = self.getUniqueString('validate_template') self.assertRaises(exc.OpenStackCloudException, self.user_cloud.create_stack, name=stack_name, template_file=test_template.name) def test_stack_simple(self): test_template = tempfile.NamedTemporaryFile(delete=False) test_template.write(fakes.FAKE_TEMPLATE.encode('utf-8')) test_template.close() self.stack_name = self.getUniqueString('simple_stack') self.addCleanup(self._cleanup_stack) stack = self.user_cloud.create_stack( name=self.stack_name, template_file=test_template.name, wait=True) # assert expected values in stack self.assertEqual('CREATE_COMPLETE', stack['stack_status']) rand = stack['outputs'][0]['output_value'] self.assertEqual(10, len(rand)) # assert get_stack matches returned create_stack stack = self.user_cloud.get_stack(self.stack_name) self.assertEqual('CREATE_COMPLETE', stack['stack_status']) self.assertEqual(rand, stack['outputs'][0]['output_value']) # assert stack is in list_stacks stacks = self.user_cloud.list_stacks() stack_ids = [s['id'] for s in stacks] self.assertIn(stack['id'], stack_ids) # update with no changes stack = self.user_cloud.update_stack( self.stack_name, template_file=test_template.name, wait=True) # assert no change in updated stack self.assertEqual('UPDATE_COMPLETE', stack['stack_status']) rand = stack['outputs'][0]['output_value'] self.assertEqual(rand, stack['outputs'][0]['output_value']) # update with changes stack = self.user_cloud.update_stack( self.stack_name, template_file=test_template.name, wait=True, length=12) # assert changed output in updated stack stack = self.user_cloud.get_stack(self.stack_name) self.assertEqual('UPDATE_COMPLETE', stack['stack_status']) new_rand = stack['outputs'][0]['output_value'] self.assertNotEqual(rand, new_rand) self.assertEqual(12, len(new_rand)) def test_stack_nested(self): test_template = tempfile.NamedTemporaryFile( suffix='.yaml', delete=False) test_template.write(root_template.encode('utf-8')) test_template.close() simple_tmpl = tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) simple_tmpl.write(fakes.FAKE_TEMPLATE.encode('utf-8')) simple_tmpl.close() env = 
tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) expanded_env = environment % simple_tmpl.name env.write(expanded_env.encode('utf-8')) env.close() self.stack_name = self.getUniqueString('nested_stack') self.addCleanup(self._cleanup_stack) stack = self.user_cloud.create_stack( name=self.stack_name, template_file=test_template.name, environment_files=[env.name], wait=True) # assert expected values in stack self.assertEqual('CREATE_COMPLETE', stack['stack_status']) rands = stack['outputs'][0]['output_value'] self.assertEqual(['0', '1', '2', '3', '4'], sorted(rands.keys())) for rand in rands.values(): self.assertEqual(10, len(rand)) shade-1.31.0/shade/tests/functional/test_qos_dscp_marking_rule.py0000666000175000017500000000531513440327640025266 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_qos_dscp_marking_rule ---------------------------------- Functional tests for `shade`QoS DSCP marking rule methods. """ from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestQosDscpMarkingRule(base.BaseFunctionalTestCase): def setUp(self): super(TestQosDscpMarkingRule, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') policy_name = self.getUniqueString('qos_policy') self.policy = self.operator_cloud.create_qos_policy(name=policy_name) self.addCleanup(self._cleanup_qos_policy) def _cleanup_qos_policy(self): try: self.operator_cloud.delete_qos_policy(self.policy['id']) except Exception as e: raise OpenStackCloudException(e) def test_qos_dscp_marking_rule_lifecycle(self): dscp_mark = 16 updated_dscp_mark = 32 # Create DSCP marking rule rule = self.operator_cloud.create_qos_dscp_marking_rule( self.policy['id'], dscp_mark=dscp_mark) self.assertIn('id', rule) self.assertEqual(dscp_mark, rule['dscp_mark']) # Now try to update rule updated_rule = self.operator_cloud.update_qos_dscp_marking_rule( self.policy['id'], rule['id'], dscp_mark=updated_dscp_mark) self.assertIn('id', updated_rule) self.assertEqual(updated_dscp_mark, updated_rule['dscp_mark']) # List rules from policy policy_rules = self.operator_cloud.list_qos_dscp_marking_rules( self.policy['id']) self.assertEqual([updated_rule], policy_rules) # Delete rule self.operator_cloud.delete_qos_dscp_marking_rule( self.policy['id'], updated_rule['id']) # Check if there is no rules in policy policy_rules = self.operator_cloud.list_qos_dscp_marking_rules( self.policy['id']) self.assertEqual([], policy_rules) shade-1.31.0/shade/tests/functional/test_aggregate.py0000666000175000017500000000373613440327640022647 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_aggregate ---------------------------------- Functional tests for `shade` aggregate resource. """ from shade.tests.functional import base class TestAggregate(base.BaseFunctionalTestCase): def test_aggregates(self): aggregate_name = self.getUniqueString() availability_zone = self.getUniqueString() self.addCleanup(self.cleanup, aggregate_name) aggregate = self.operator_cloud.create_aggregate(aggregate_name) aggregate_ids = [v['id'] for v in self.operator_cloud.list_aggregates()] self.assertIn(aggregate['id'], aggregate_ids) aggregate = self.operator_cloud.update_aggregate( aggregate_name, availability_zone=availability_zone ) self.assertEqual(availability_zone, aggregate['availability_zone']) aggregate = self.operator_cloud.set_aggregate_metadata( aggregate_name, {'key': 'value'} ) self.assertIn('key', aggregate['metadata']) aggregate = self.operator_cloud.set_aggregate_metadata( aggregate_name, {'key': None} ) self.assertNotIn('key', aggregate['metadata']) self.operator_cloud.delete_aggregate(aggregate_name) def cleanup(self, aggregate_name): aggregate = self.operator_cloud.get_aggregate(aggregate_name) if aggregate: self.operator_cloud.delete_aggregate(aggregate_name) shade-1.31.0/shade/tests/functional/test_floating_ip.py0000666000175000017500000002572413440327640023215 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ test_floating_ip ---------------------------------- Functional tests for floating IP resource. 
""" import pprint from testtools import content from shade import _utils from shade import meta from shade.exc import OpenStackCloudException from shade.tests.functional import base from shade.tests.functional.util import pick_flavor class TestFloatingIP(base.BaseFunctionalTestCase): timeout = 60 def setUp(self): super(TestFloatingIP, self).setUp() self.flavor = pick_flavor( self.user_cloud.list_flavors(get_extra=False)) if self.flavor is None: self.assertFalse('no sensible flavor available') self.image = self.pick_image() # Generate a random name for these tests self.new_item_name = self.getUniqueString() self.addCleanup(self._cleanup_network) self.addCleanup(self._cleanup_servers) def _cleanup_network(self): exception_list = list() # Delete stale networks as well as networks created for this test if self.user_cloud.has_service('network'): # Delete routers for r in self.user_cloud.list_routers(): try: if r['name'].startswith(self.new_item_name): self.user_cloud.update_router( r['id'], ext_gateway_net_id=None) for s in self.user_cloud.list_subnets(): if s['name'].startswith(self.new_item_name): try: self.user_cloud.remove_router_interface( r, subnet_id=s['id']) except Exception: pass self.user_cloud.delete_router(name_or_id=r['id']) except Exception as e: exception_list.append(str(e)) continue # Delete subnets for s in self.user_cloud.list_subnets(): if s['name'].startswith(self.new_item_name): try: self.user_cloud.delete_subnet(name_or_id=s['id']) except Exception as e: exception_list.append(str(e)) continue # Delete networks for n in self.user_cloud.list_networks(): if n['name'].startswith(self.new_item_name): try: self.user_cloud.delete_network(name_or_id=n['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_servers(self): exception_list = list() # Delete stale servers as well as server created for this test for i in self.user_cloud.list_servers(bare=True): if i.name.startswith(self.new_item_name): try: self.user_cloud.delete_server(i, wait=True) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_ips(self, server): exception_list = list() fixed_ip = meta.get_server_private_ip(server) for ip in self.user_cloud.list_floating_ips(): if (ip.get('fixed_ip', None) == fixed_ip or ip.get('fixed_ip_address', None) == fixed_ip): try: self.user_cloud.delete_floating_ip(ip['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _setup_networks(self): if self.user_cloud.has_service('network'): # Create a network self.test_net = self.user_cloud.create_network( name=self.new_item_name + '_net') # Create a subnet on it self.test_subnet = self.user_cloud.create_subnet( subnet_name=self.new_item_name + '_subnet', network_name_or_id=self.test_net['id'], cidr='10.24.4.0/24', enable_dhcp=True ) # Create a router self.test_router = self.user_cloud.create_router( name=self.new_item_name + '_router') # Attach the router to an external network ext_nets = self.user_cloud.search_networks( filters={'router:external': True}) self.user_cloud.update_router( name_or_id=self.test_router['id'], 
ext_gateway_net_id=ext_nets[0]['id']) # Attach the router to the internal subnet self.user_cloud.add_router_interface( self.test_router, subnet_id=self.test_subnet['id']) # Select the network for creating new servers self.nic = {'net-id': self.test_net['id']} self.addDetail( 'networks-neutron', content.text_content(pprint.pformat( self.user_cloud.list_networks()))) else: # Find network names for nova-net data = self.user_cloud._compute_client.get('/os-tenant-networks') nets = meta.get_and_munchify('networks', data) self.addDetail( 'networks-nova', content.text_content(pprint.pformat( nets))) self.nic = {'net-id': nets[0].id} def test_private_ip(self): self._setup_networks() new_server = self.user_cloud.get_openstack_vars( self.user_cloud.create_server( wait=True, name=self.new_item_name + '_server', image=self.image, flavor=self.flavor, nics=[self.nic])) self.addDetail( 'server', content.text_content(pprint.pformat(new_server))) self.assertNotEqual(new_server['private_v4'], '') def test_add_auto_ip(self): self._setup_networks() new_server = self.user_cloud.create_server( wait=True, name=self.new_item_name + '_server', image=self.image, flavor=self.flavor, nics=[self.nic]) # ToDo: remove the following iteration when create_server waits for # the IP to be attached ip = None for _ in _utils._iterate_timeout( self.timeout, "Timeout waiting for IP address to be attached"): ip = meta.get_server_external_ipv4(self.user_cloud, new_server) if ip is not None: break new_server = self.user_cloud.get_server(new_server.id) self.addCleanup(self._cleanup_ips, new_server) def test_detach_ip_from_server(self): self._setup_networks() new_server = self.user_cloud.create_server( wait=True, name=self.new_item_name + '_server', image=self.image, flavor=self.flavor, nics=[self.nic]) # ToDo: remove the following iteration when create_server waits for # the IP to be attached ip = None for _ in _utils._iterate_timeout( self.timeout, "Timeout waiting for IP address to be attached"): ip = meta.get_server_external_ipv4(self.user_cloud, new_server) if ip is not None: break new_server = self.user_cloud.get_server(new_server.id) self.addCleanup(self._cleanup_ips, new_server) f_ip = self.user_cloud.get_floating_ip( id=None, filters={'floating_ip_address': ip}) self.user_cloud.detach_ip_from_server( server_id=new_server.id, floating_ip_id=f_ip['id']) def test_list_floating_ips(self): fip_admin = self.operator_cloud.create_floating_ip() self.addCleanup(self.operator_cloud.delete_floating_ip, fip_admin.id) fip_user = self.user_cloud.create_floating_ip() self.addCleanup(self.user_cloud.delete_floating_ip, fip_user.id) # Get all the floating ips. fip_id_list = [ fip.id for fip in self.operator_cloud.list_floating_ips() ] if self.user_cloud.has_service('network'): # Neutron returns all FIP for all projects by default self.assertIn(fip_admin.id, fip_id_list) self.assertIn(fip_user.id, fip_id_list) # Ask Neutron for only a subset of all the FIPs. filtered_fip_id_list = [ fip.id for fip in self.operator_cloud.list_floating_ips( {'tenant_id': self.user_cloud.current_project_id} ) ] self.assertNotIn(fip_admin.id, filtered_fip_id_list) self.assertIn(fip_user.id, filtered_fip_id_list) else: self.assertIn(fip_admin.id, fip_id_list) # By default, Nova returns only the FIPs that belong to the # project which made the listing request. 
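            # so the user's FIP should be absent from the operator's default listing: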
self.assertNotIn(fip_user.id, fip_id_list) self.assertRaisesRegex( ValueError, "Nova-network don't support server-side.*", self.operator_cloud.list_floating_ips, filters={'foo': 'bar'} ) def test_search_floating_ips(self): fip_user = self.user_cloud.create_floating_ip() self.addCleanup(self.user_cloud.delete_floating_ip, fip_user.id) self.assertIn( fip_user['id'], [fip.id for fip in self.user_cloud.search_floating_ips( filters={"attached": False})] ) self.assertNotIn( fip_user['id'], [fip.id for fip in self.user_cloud.search_floating_ips( filters={"attached": True})] ) def test_get_floating_ip_by_id(self): fip_user = self.user_cloud.create_floating_ip() self.addCleanup(self.user_cloud.delete_floating_ip, fip_user.id) ret_fip = self.user_cloud.get_floating_ip_by_id(fip_user.id) self.assertEqual(fip_user, ret_fip) shade-1.31.0/shade/tests/functional/test_cluster_templates.py0000666000175000017500000001025213440327640024447 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_cluster_templates ---------------------------------- Functional tests for `shade` cluster_template methods. """ from testtools import content from shade.tests.functional import base import os import subprocess class TestClusterTemplate(base.BaseFunctionalTestCase): def setUp(self): super(TestClusterTemplate, self).setUp() if not self.user_cloud.has_service('container-infra'): self.skipTest('Container service not supported by cloud') self.ct = None def test_cluster_templates(self): '''Test cluster_templates functionality''' name = 'fake-cluster_template' server_type = 'vm' public = False image_id = 'fedora-atomic-f23-dib' tls_disabled = False registry_enabled = False coe = 'kubernetes' keypair_id = 'testkey' self.addDetail('cluster_template', content.text_content(name)) self.addCleanup(self.cleanup, name) # generate a keypair to add to nova ssh_directory = '/tmp/.ssh' if not os.path.isdir(ssh_directory): os.mkdir(ssh_directory) subprocess.call( ['ssh-keygen', '-t', 'rsa', '-N', '', '-f', '%s/id_rsa_shade' % ssh_directory]) # add keypair to nova with open('%s/id_rsa_shade.pub' % ssh_directory) as f: key_content = f.read() self.user_cloud.create_keypair('testkey', key_content) # Test we can create a cluster_template and we get it returned self.ct = self.user_cloud.create_cluster_template( name=name, image_id=image_id, keypair_id=keypair_id, coe=coe) self.assertEqual(self.ct['name'], name) self.assertEqual(self.ct['image_id'], image_id) self.assertEqual(self.ct['keypair_id'], keypair_id) self.assertEqual(self.ct['coe'], coe) self.assertEqual(self.ct['registry_enabled'], registry_enabled) self.assertEqual(self.ct['tls_disabled'], tls_disabled) self.assertEqual(self.ct['public'], public) self.assertEqual(self.ct['server_type'], server_type) # Test that we can list cluster_templates cluster_templates = self.user_cloud.list_cluster_templates() self.assertIsNotNone(cluster_templates) # Test we get the same cluster_template with the # get_cluster_template method cluster_template_get =
self.user_cloud.get_cluster_template( self.ct['uuid']) self.assertEqual(cluster_template_get['uuid'], self.ct['uuid']) # Test the get method also works by name cluster_template_get = self.user_cloud.get_cluster_template(name) self.assertEqual(cluster_template_get['name'], self.ct['name']) # Test we can update a field on the cluster_template and only that # field is updated cluster_template_update = self.user_cloud.update_cluster_template( self.ct['uuid'], 'replace', tls_disabled=True) self.assertEqual( cluster_template_update['uuid'], self.ct['uuid']) self.assertTrue(cluster_template_update['tls_disabled']) # Test we can delete and get True returned cluster_template_delete = self.user_cloud.delete_cluster_template( self.ct['uuid']) self.assertTrue(cluster_template_delete) def cleanup(self, name): if self.ct: try: self.user_cloud.delete_cluster_template(self.ct['name']) except Exception: pass # delete keypair self.user_cloud.delete_keypair('testkey') os.unlink('/tmp/.ssh/id_rsa_shade') os.unlink('/tmp/.ssh/id_rsa_shade.pub') shade-1.31.0/shade/tests/functional/test_qos_minimum_bandwidth_rule.py0000666000175000017500000000535613440327640026331 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_qos_minimum_bandwidth_rule ---------------------------------- Functional tests for `shade`QoS minimum bandwidth methods.
""" from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestQosMinimumBandwidthRule(base.BaseFunctionalTestCase): def setUp(self): super(TestQosMinimumBandwidthRule, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') policy_name = self.getUniqueString('qos_policy') self.policy = self.operator_cloud.create_qos_policy(name=policy_name) self.addCleanup(self._cleanup_qos_policy) def _cleanup_qos_policy(self): try: self.operator_cloud.delete_qos_policy(self.policy['id']) except Exception as e: raise OpenStackCloudException(e) def test_qos_minimum_bandwidth_rule_lifecycle(self): min_kbps = 1500 updated_min_kbps = 2000 # Create min bw rule rule = self.operator_cloud.create_qos_minimum_bandwidth_rule( self.policy['id'], min_kbps=min_kbps) self.assertIn('id', rule) self.assertEqual(min_kbps, rule['min_kbps']) # Now try to update rule updated_rule = self.operator_cloud.update_qos_minimum_bandwidth_rule( self.policy['id'], rule['id'], min_kbps=updated_min_kbps) self.assertIn('id', updated_rule) self.assertEqual(updated_min_kbps, updated_rule['min_kbps']) # List rules from policy policy_rules = self.operator_cloud.list_qos_minimum_bandwidth_rules( self.policy['id']) self.assertEqual([updated_rule], policy_rules) # Delete rule self.operator_cloud.delete_qos_minimum_bandwidth_rule( self.policy['id'], updated_rule['id']) # Check if there is no rules in policy policy_rules = self.operator_cloud.list_qos_minimum_bandwidth_rules( self.policy['id']) self.assertEqual([], policy_rules) shade-1.31.0/shade/tests/functional/test_identity.py0000666000175000017500000002453213440327640022547 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_identity ---------------------------------- Functional tests for `shade` identity methods. 
""" import random import string from shade import OpenStackCloudException from shade.tests.functional import base class TestIdentity(base.KeystoneBaseFunctionalTestCase): def setUp(self): super(TestIdentity, self).setUp() self.role_prefix = 'test_role' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.user_prefix = self.getUniqueString('user') self.group_prefix = self.getUniqueString('group') self.addCleanup(self._cleanup_users) if self.identity_version not in ('2', '2.0'): self.addCleanup(self._cleanup_groups) self.addCleanup(self._cleanup_roles) def _cleanup_groups(self): exception_list = list() for group in self.operator_cloud.list_groups(): if group['name'].startswith(self.group_prefix): try: self.operator_cloud.delete_group(group['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_users(self): exception_list = list() for user in self.operator_cloud.list_users(): if user['name'].startswith(self.user_prefix): try: self.operator_cloud.delete_user(user['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_roles(self): exception_list = list() for role in self.operator_cloud.list_roles(): if role['name'].startswith(self.role_prefix): try: self.operator_cloud.delete_role(role['name']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def _create_user(self, **kwargs): domain_id = None if self.identity_version not in ('2', '2.0'): domain = self.operator_cloud.get_domain('default') domain_id = domain['id'] return self.operator_cloud.create_user(domain_id=domain_id, **kwargs) def test_list_roles(self): roles = self.operator_cloud.list_roles() self.assertIsNotNone(roles) self.assertNotEqual([], roles) def test_get_role(self): role = self.operator_cloud.get_role('admin') self.assertIsNotNone(role) self.assertIn('id', role) self.assertIn('name', role) self.assertEqual('admin', role['name']) def test_search_roles(self): roles = self.operator_cloud.search_roles(filters={'name': 'admin'}) self.assertIsNotNone(roles) self.assertEqual(1, len(roles)) self.assertEqual('admin', roles[0]['name']) def test_create_role(self): role_name = self.role_prefix + '_create_role' role = self.operator_cloud.create_role(role_name) self.assertIsNotNone(role) self.assertIn('id', role) self.assertIn('name', role) self.assertEqual(role_name, role['name']) def test_delete_role(self): role_name = self.role_prefix + '_delete_role' role = self.operator_cloud.create_role(role_name) self.assertIsNotNone(role) self.assertTrue(self.operator_cloud.delete_role(role_name)) # TODO(Shrews): Once we can support assigning roles within shade, we # need to make this test a little more specific, and add more for testing # filtering functionality. 
def test_list_role_assignments(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support role assignments") assignments = self.operator_cloud.list_role_assignments() self.assertIsInstance(assignments, list) self.assertGreater(len(assignments), 0) def test_list_role_assignments_v2(self): user = self.operator_cloud.get_user('demo') project = self.operator_cloud.get_project('demo') assignments = self.operator_cloud.list_role_assignments( filters={'user': user['id'], 'project': project['id']}) self.assertIsInstance(assignments, list) self.assertGreater(len(assignments), 0) def test_grant_revoke_role_user_project(self): user_name = self.user_prefix + '_user_project' user_email = 'nobody@nowhere.com' role_name = self.role_prefix + '_grant_user_project' role = self.operator_cloud.create_role(role_name) user = self._create_user(name=user_name, email=user_email, default_project='demo') self.assertTrue(self.operator_cloud.grant_role( role_name, user=user['id'], project='demo', wait=True)) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, user=user['id'], project='demo', wait=True)) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) def test_grant_revoke_role_group_project(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support group") role_name = self.role_prefix + '_grant_group_project' role = self.operator_cloud.create_role(role_name) group_name = self.group_prefix + '_group_project' group = self.operator_cloud.create_group( name=group_name, description='test group', domain='default') self.assertTrue(self.operator_cloud.grant_role( role_name, group=group['id'], project='demo')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, group=group['id'], project='demo')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'project': self.operator_cloud.get_project('demo')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) def test_grant_revoke_role_user_domain(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support domain") role_name = self.role_prefix + '_grant_user_domain' role = self.operator_cloud.create_role(role_name) user_name = self.user_prefix + '_user_domain' user_email = 'nobody@nowhere.com' user = self._create_user(name=user_name, email=user_email, default_project='demo') self.assertTrue(self.operator_cloud.grant_role( role_name, user=user['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, user=user['id'], domain='default')) assignments = 
self.operator_cloud.list_role_assignments({ 'role': role['id'], 'user': user['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) def test_grant_revoke_role_group_domain(self): if self.identity_version in ('2', '2.0'): self.skipTest("Identity service does not support domain or group") role_name = self.role_prefix + '_grant_group_domain' role = self.operator_cloud.create_role(role_name) group_name = self.group_prefix + '_group_domain' group = self.operator_cloud.create_group( name=group_name, description='test group', domain='default') self.assertTrue(self.operator_cloud.grant_role( role_name, group=group['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(1, len(assignments)) self.assertTrue(self.operator_cloud.revoke_role( role_name, group=group['id'], domain='default')) assignments = self.operator_cloud.list_role_assignments({ 'role': role['id'], 'group': group['id'], 'domain': self.operator_cloud.get_domain('default')['id'] }) self.assertIsInstance(assignments, list) self.assertEqual(0, len(assignments)) shade-1.31.0/shade/tests/functional/test_magnum_services.py0000666000175000017500000000260113440327640024076 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_magnum_services -------------------- Functional tests for `shade` services method. """ from shade.tests.functional import base class TestMagnumServices(base.BaseFunctionalTestCase): def setUp(self): super(TestMagnumServices, self).setUp() if not self.operator_cloud.has_service('container-infra'): self.skipTest('Container service not supported by cloud') def test_magnum_services(self): '''Test magnum services functionality''' # Test that we can list services services = self.operator_cloud.list_magnum_services() self.assertEqual(1, len(services)) self.assertEqual(services[0]['id'], 1) self.assertEqual('up', services[0]['state']) self.assertEqual('magnum-conductor', services[0]['binary']) self.assertGreater(services[0]['report_count'], 0) shade-1.31.0/shade/tests/functional/test_floating_ip_pool.py0000666000175000017500000000340013440327640024231 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
""" test_floating_ip_pool ---------------------------------- Functional tests for floating IP pool resource (managed by nova) """ from shade.tests.functional import base # When using nova-network, floating IP pools are created with nova-manage # command. # When using Neutron, floating IP pools in Nova are mapped from external # network names. This only if the floating-ip-pools nova extension is # available. # For instance, for current implementation of hpcloud that's not true: # nova floating-ip-pool-list returns 404. class TestFloatingIPPool(base.BaseFunctionalTestCase): def setUp(self): super(TestFloatingIPPool, self).setUp() if not self.user_cloud._has_nova_extension('os-floating-ip-pools'): # Skipping this test is floating-ip-pool extension is not # available on the testing cloud self.skip( 'Floating IP pools extension is not available') def test_list_floating_ip_pools(self): pools = self.user_cloud.list_floating_ip_pools() if not pools: self.assertFalse('no floating-ip pool available') for pool in pools: self.assertIn('name', pool) shade-1.31.0/shade/tests/functional/test_network.py0000666000175000017500000001137113440327640022404 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_network ---------------------------------- Functional tests for `shade` network methods. 
""" from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestNetwork(base.BaseFunctionalTestCase): def setUp(self): super(TestNetwork, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') self.network_name = self.getUniqueString('network') self.addCleanup(self._cleanup_networks) def _cleanup_networks(self): exception_list = list() for network in self.operator_cloud.list_networks(): if network['name'].startswith(self.network_name): try: self.operator_cloud.delete_network(network['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_network_basic(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertFalse(net1['shared']) self.assertFalse(net1['router:external']) self.assertTrue(net1['admin_state_up']) self.assertTrue(net1['port_security_enabled']) def test_get_network_by_id(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertFalse(net1['shared']) self.assertFalse(net1['router:external']) self.assertTrue(net1['admin_state_up']) ret_net1 = self.operator_cloud.get_network_by_id(net1.id) self.assertIn('id', ret_net1) self.assertEqual(self.network_name, ret_net1['name']) self.assertFalse(ret_net1['shared']) self.assertFalse(ret_net1['router:external']) self.assertTrue(ret_net1['admin_state_up']) def test_create_network_advanced(self): net1 = self.operator_cloud.create_network( name=self.network_name, shared=True, external=True, admin_state_up=False, ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertTrue(net1['router:external']) self.assertTrue(net1['shared']) self.assertFalse(net1['admin_state_up']) def test_create_network_provider_flat(self): existing_public = self.operator_cloud.search_networks( filters={'provider:network_type': 'flat'}) if existing_public: self.skipTest('Physical network already allocated') net1 = self.operator_cloud.create_network( name=self.network_name, shared=True, provider={ 'physical_network': 'public', 'network_type': 'flat', } ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertEqual('flat', net1['provider:network_type']) self.assertEqual('public', net1['provider:physical_network']) self.assertIsNone(net1['provider:segmentation_id']) def test_create_network_port_security_disabled(self): net1 = self.operator_cloud.create_network( name=self.network_name, port_security_enabled=False, ) self.assertIn('id', net1) self.assertEqual(self.network_name, net1['name']) self.assertTrue(net1['admin_state_up']) self.assertFalse(net1['shared']) self.assertFalse(net1['router:external']) self.assertFalse(net1['port_security_enabled']) def test_list_networks_filtered(self): net1 = self.operator_cloud.create_network(name=self.network_name) self.assertIsNotNone(net1) net2 = self.operator_cloud.create_network( name=self.network_name + 'other') self.assertIsNotNone(net2) match = self.operator_cloud.list_networks( filters=dict(name=self.network_name)) self.assertEqual(1, len(match)) self.assertEqual(net1['name'], match[0]['name']) shade-1.31.0/shade/tests/functional/test_range_search.py0000666000175000017500000001374313440327640023341 0ustar zuulzuul00000000000000# Copyright (c) 2016 IBM # # 
Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. from shade import exc from shade.tests.functional import base class TestRangeSearch(base.BaseFunctionalTestCase): def _filter_m1_flavors(self, results): """The m1 flavors are the original devstack flavors""" new_results = [] for flavor in results: if flavor['name'].startswith("m1."): new_results.append(flavor) return new_results def test_range_search_bad_range(self): flavors = self.user_cloud.list_flavors(get_extra=False) self.assertRaises( exc.OpenStackCloudException, self.user_cloud.range_search, flavors, {"ram": "<1a0"}) def test_range_search_exact(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": "4096"}) self.assertIsInstance(result, list) # should only be 1 m1 flavor with 4096 ram result = self._filter_m1_flavors(result) self.assertEqual(1, len(result)) self.assertEqual("m1.medium", result[0]['name']) def test_range_search_min(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": "MIN"}) self.assertIsInstance(result, list) self.assertEqual(1, len(result)) # older devstack does not have cirros256 self.assertIn(result[0]['name'], ('cirros256', 'm1.tiny')) def test_range_search_max(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": "MAX"}) self.assertIsInstance(result, list) self.assertEqual(1, len(result)) self.assertEqual("m1.xlarge", result[0]['name']) def test_range_search_lt(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": "<1024"}) self.assertIsInstance(result, list) # should only be 1 m1 flavor with <1024 ram result = self._filter_m1_flavors(result) self.assertEqual(1, len(result)) self.assertEqual("m1.tiny", result[0]['name']) def test_range_search_gt(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": ">4096"}) self.assertIsInstance(result, list) # should only be 2 m1 flavors with >4096 ram result = self._filter_m1_flavors(result) self.assertEqual(2, len(result)) flavor_names = [r['name'] for r in result] self.assertIn("m1.large", flavor_names) self.assertIn("m1.xlarge", flavor_names) def test_range_search_le(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": "<=4096"}) self.assertIsInstance(result, list) # should only be 3 m1 flavors with <=4096 ram result = self._filter_m1_flavors(result) self.assertEqual(3, len(result)) flavor_names = [r['name'] for r in result] self.assertIn("m1.tiny", flavor_names) self.assertIn("m1.small", flavor_names) self.assertIn("m1.medium", flavor_names) def test_range_search_ge(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search(flavors, {"ram": ">=4096"}) self.assertIsInstance(result, list) # should only be 3 m1 flavors with >=4096 ram result = 
self._filter_m1_flavors(result) self.assertEqual(3, len(result)) flavor_names = [r['name'] for r in result] self.assertIn("m1.medium", flavor_names) self.assertIn("m1.large", flavor_names) self.assertIn("m1.xlarge", flavor_names) def test_range_search_multi_1(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search( flavors, {"ram": "MIN", "vcpus": "MIN"}) self.assertIsInstance(result, list) self.assertEqual(1, len(result)) # older devstack does not have cirros256 self.assertIn(result[0]['name'], ('cirros256', 'm1.tiny')) def test_range_search_multi_2(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search( flavors, {"ram": "<1024", "vcpus": "MIN"}) self.assertIsInstance(result, list) result = self._filter_m1_flavors(result) self.assertEqual(1, len(result)) flavor_names = [r['name'] for r in result] self.assertIn("m1.tiny", flavor_names) def test_range_search_multi_3(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search( flavors, {"ram": ">=4096", "vcpus": "<6"}) self.assertIsInstance(result, list) result = self._filter_m1_flavors(result) self.assertEqual(2, len(result)) flavor_names = [r['name'] for r in result] self.assertIn("m1.medium", flavor_names) self.assertIn("m1.large", flavor_names) def test_range_search_multi_4(self): flavors = self.user_cloud.list_flavors(get_extra=False) result = self.user_cloud.range_search( flavors, {"ram": ">=4096", "vcpus": "MAX"}) self.assertIsInstance(result, list) self.assertEqual(1, len(result)) # This is the only result that should have max vcpu self.assertEqual("m1.xlarge", result[0]['name']) shade-1.31.0/shade/tests/functional/util.py0000666000175000017500000000245313440327640020632 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ util -------------------------------- Util methods for functional tests """ import operator import os def pick_flavor(flavors): """Given a flavor list pick the smallest one.""" # Enable running functional tests against rax - which requires # performance flavors be used for boot from volume flavor_name = os.environ.get('SHADE_FLAVOR') if flavor_name: for flavor in flavors: if flavor.name == flavor_name: return flavor return None for flavor in sorted( flavors, key=operator.attrgetter('ram')): if 'performance' in flavor.name: return flavor for flavor in sorted( flavors, key=operator.attrgetter('ram')): return flavor shade-1.31.0/shade/tests/functional/test_object.py0000666000175000017500000001577313440327640022173 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_object
----------------------------------

Functional tests for `shade` object methods.
"""

import random
import string
import tempfile

from testtools import content

from shade import exc
from shade.tests.functional import base


class TestObject(base.BaseFunctionalTestCase):

    def setUp(self):
        super(TestObject, self).setUp()
        if not self.user_cloud.has_service('object-store'):
            self.skipTest('Object service not supported by cloud')

    def test_create_object(self):
        '''Test uploading small and large files.'''
        container_name = self.getUniqueString('container')
        self.addDetail('container', content.text_content(container_name))
        self.addCleanup(self.user_cloud.delete_container, container_name)
        self.user_cloud.create_container(container_name)
        self.assertEqual(container_name,
                         self.user_cloud.list_containers()[0]['name'])
        sizes = (
            (64 * 1024, 1),  # 64K, one segment
            (64 * 1024, 5)   # 64K, 5 segments
        )
        for size, nseg in sizes:
            segment_size = int(round(size / nseg))
            with tempfile.NamedTemporaryFile() as fake_file:
                fake_content = ''.join(random.SystemRandom().choice(
                    string.ascii_uppercase + string.digits)
                    for _ in range(size)).encode('latin-1')
                fake_file.write(fake_content)
                fake_file.flush()
                name = 'test-%d' % size
                self.addCleanup(
                    self.user_cloud.delete_object, container_name, name)
                self.user_cloud.create_object(
                    container_name, name, fake_file.name,
                    segment_size=segment_size,
                    metadata={'foo': 'bar'})
                self.assertFalse(self.user_cloud.is_object_stale(
                    container_name, name, fake_file.name
                ))
                self.assertEqual(
                    'bar', self.user_cloud.get_object_metadata(
                        container_name, name)['x-object-meta-foo']
                )
                self.user_cloud.update_object(container=container_name,
                                              name=name,
                                              metadata={'testk': 'testv'})
                self.assertEqual(
                    'testv', self.user_cloud.get_object_metadata(
                        container_name, name)['x-object-meta-testk']
                )
                try:
                    self.assertIsNotNone(
                        self.user_cloud.get_object(container_name, name))
                except exc.OpenStackCloudException as e:
                    self.addDetail(
                        'failed_response',
                        content.text_content(str(e.response.headers)))
                    self.addDetail(
                        'failed_response',
                        content.text_content(e.response.text))
                self.assertEqual(
                    name,
                    self.user_cloud.list_objects(container_name)[0]['name'])
                self.assertTrue(
                    self.user_cloud.delete_object(container_name, name))
                self.assertEqual(
                    [], self.user_cloud.list_objects(container_name))
        self.assertEqual(container_name,
                         self.user_cloud.list_containers()[0]['name'])
        self.user_cloud.delete_container(container_name)

    def test_download_object_to_file(self):
        '''Test downloading small and large files.'''
        container_name = self.getUniqueString('container')
        self.addDetail('container', content.text_content(container_name))
        self.addCleanup(self.user_cloud.delete_container, container_name)
        self.user_cloud.create_container(container_name)
        self.assertEqual(container_name,
                         self.user_cloud.list_containers()[0]['name'])
        sizes = (
            (64 * 1024, 1),  # 64K, one segment
            (64 * 1024, 5)   # 64K, 5 segments
        )
        for size, nseg in sizes:
            fake_content = ''
            segment_size = int(round(size / nseg))
            with tempfile.NamedTemporaryFile() as fake_file:
                fake_content = ''.join(random.SystemRandom().choice(
                    string.ascii_uppercase + string.digits)
                    for _ in range(size)).encode('latin-1')
                fake_file.write(fake_content)
                fake_file.flush()
                name = 'test-%d' % size
                self.addCleanup(
                    self.user_cloud.delete_object, container_name, name)
                self.user_cloud.create_object(
                    container_name, name, fake_file.name,
                    segment_size=segment_size,
                    metadata={'foo': 'bar'})
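                # NOTE: illustrative arithmetic, not part of the original
                # test: for the 5-segment case above, segment_size is
                # int(round(65536 / 5)) == 13107 bytes, so create_object
                # splits the 64K payload into five segments plus a manifest
                # object, exercising the same segmented-upload path a
                # multi-gigabyte object would take.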
                self.assertFalse(self.user_cloud.is_object_stale(
                    container_name, name, fake_file.name
                ))
                self.assertEqual(
                    'bar', self.user_cloud.get_object_metadata(
                        container_name, name)['x-object-meta-foo']
                )
                self.user_cloud.update_object(container=container_name,
                                              name=name,
                                              metadata={'testk': 'testv'})
                self.assertEqual(
                    'testv', self.user_cloud.get_object_metadata(
                        container_name, name)['x-object-meta-testk']
                )
                try:
                    with tempfile.NamedTemporaryFile() as fake_file:
                        self.user_cloud.get_object(
                            container_name, name, outfile=fake_file.name)
                        with open(fake_file.name, 'rb') as downloaded:
                            downloaded_content = downloaded.read()
                        self.assertEqual(fake_content, downloaded_content)
                except exc.OpenStackCloudException as e:
                    self.addDetail(
                        'failed_response',
                        content.text_content(str(e.response.headers)))
                    self.addDetail(
                        'failed_response',
                        content.text_content(e.response.text))
                    raise
                self.assertEqual(
                    name,
                    self.user_cloud.list_objects(container_name)[0]['name'])
                self.assertTrue(
                    self.user_cloud.delete_object(container_name, name))
                self.assertEqual(
                    [], self.user_cloud.list_objects(container_name))
        self.assertEqual(container_name,
                         self.user_cloud.list_containers()[0]['name'])
        self.user_cloud.delete_container(container_name)
shade-1.31.0/shade/tests/functional/test_qos_policy.py0000666000175000017500000000762313440327640023075 0ustar zuulzuul00000000000000# Copyright 2017 OVH SAS
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
test_qos_policy
----------------------------------

Functional tests for `shade` QoS policy methods.
""" from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestQosPolicy(base.BaseFunctionalTestCase): def setUp(self): super(TestQosPolicy, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('Network service not supported by cloud') if not self.operator_cloud._has_neutron_extension('qos'): self.skipTest('QoS network extension not supported by cloud') self.policy_name = self.getUniqueString('qos_policy') self.addCleanup(self._cleanup_policies) def _cleanup_policies(self): exception_list = list() for policy in self.operator_cloud.list_qos_policies(): if policy['name'].startswith(self.policy_name): try: self.operator_cloud.delete_qos_policy(policy['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_qos_policy_basic(self): policy = self.operator_cloud.create_qos_policy(name=self.policy_name) self.assertIn('id', policy) self.assertEqual(self.policy_name, policy['name']) self.assertFalse(policy['shared']) self.assertFalse(policy['is_default']) def test_create_qos_policy_shared(self): policy = self.operator_cloud.create_qos_policy( name=self.policy_name, shared=True) self.assertIn('id', policy) self.assertEqual(self.policy_name, policy['name']) self.assertTrue(policy['shared']) self.assertFalse(policy['is_default']) def test_create_qos_policy_default(self): if not self.operator_cloud._has_neutron_extension('qos-default'): self.skipTest("'qos-default' network extension not supported " "by cloud") policy = self.operator_cloud.create_qos_policy( name=self.policy_name, default=True) self.assertIn('id', policy) self.assertEqual(self.policy_name, policy['name']) self.assertFalse(policy['shared']) self.assertTrue(policy['is_default']) def test_update_qos_policy(self): policy = self.operator_cloud.create_qos_policy(name=self.policy_name) self.assertEqual(self.policy_name, policy['name']) self.assertFalse(policy['shared']) self.assertFalse(policy['is_default']) updated_policy = self.operator_cloud.update_qos_policy( policy['id'], shared=True, default=True) self.assertEqual(self.policy_name, updated_policy['name']) self.assertTrue(updated_policy['shared']) self.assertTrue(updated_policy['is_default']) def test_list_qos_policies_filtered(self): policy1 = self.operator_cloud.create_qos_policy(name=self.policy_name) self.assertIsNotNone(policy1) policy2 = self.operator_cloud.create_qos_policy( name=self.policy_name + 'other') self.assertIsNotNone(policy2) match = self.operator_cloud.list_qos_policies( filters=dict(name=self.policy_name)) self.assertEqual(1, len(match)) self.assertEqual(policy1['name'], match[0]['name']) shade-1.31.0/shade/tests/functional/test_devstack.py0000666000175000017500000000332413440327640022516 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_devstack ------------- Throw errors if we do not actually detect the services we're supposed to. 
""" import os from testscenarios import load_tests_apply_scenarios as load_tests # noqa from shade.tests.functional import base class TestDevstack(base.BaseFunctionalTestCase): scenarios = [ ('designate', dict(env='DESIGNATE', service='dns')), ('heat', dict(env='HEAT', service='orchestration')), ('magnum', dict(env='MAGNUM', service='container-infra')), ('neutron', dict(env='NEUTRON', service='network')), ('swift', dict(env='SWIFT', service='object-store')), ] def test_has_service(self): if os.environ.get('SHADE_HAS_{env}'.format(env=self.env), '0') == '1': self.assertTrue(self.user_cloud.has_service(self.service)) class TestKeystoneVersion(base.BaseFunctionalTestCase): def test_keystone_version(self): use_keystone_v2 = os.environ.get('SHADE_USE_KEYSTONE_V2', False) if use_keystone_v2 and use_keystone_v2 != '0': self.assertEqual('2.0', self.identity_version) else: self.assertEqual('3', self.identity_version) shade-1.31.0/shade/tests/functional/test_usage.py0000666000175000017500000000237013440327640022016 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_usage ---------------------------------- Functional tests for `shade` usage method """ import datetime from shade.tests.functional import base class TestUsage(base.BaseFunctionalTestCase): def test_get_compute_usage(self): '''Test usage functionality''' start = datetime.datetime.now() - datetime.timedelta(seconds=5) usage = self.operator_cloud.get_compute_usage('demo', start) self.add_info_on_exception('usage', usage) self.assertIsNotNone(usage) self.assertIn('total_hours', usage) self.assertIn('started_at', usage) self.assertEqual(start.isoformat(), usage['started_at']) self.assertIn('location', usage) shade-1.31.0/shade/tests/functional/base.py0000666000175000017500000000625613440327640020574 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os import os_client_config as occ import shade from shade.tests import base class BaseFunctionalTestCase(base.TestCase): def setUp(self): super(BaseFunctionalTestCase, self).setUp() self._demo_name = os.environ.get('SHADE_DEMO_CLOUD', 'devstack') self._op_name = os.environ.get( 'SHADE_OPERATOR_CLOUD', 'devstack-admin') self.config = occ.OpenStackConfig() self._set_user_cloud() self._set_operator_cloud() self.identity_version = \ self.operator_cloud.cloud_config.get_api_version('identity') def _set_user_cloud(self, **kwargs): user_config = self.config.get_one_cloud( cloud=self._demo_name, **kwargs) self.user_cloud = shade.OpenStackCloud( cloud_config=user_config) def _set_operator_cloud(self, **kwargs): operator_config = self.config.get_one_cloud( cloud=self._op_name, **kwargs) self.operator_cloud = shade.OperatorCloud( cloud_config=operator_config) def pick_image(self): images = self.user_cloud.list_images() self.add_info_on_exception('images', images) image_name = os.environ.get('SHADE_IMAGE') if image_name: for image in images: if image.name == image_name: return image self.assertFalse( "Cloud does not have {image}".format(image=image_name)) for image in images: if image.name.startswith('cirros') and image.name.endswith('-uec'): return image for image in images: if (image.name.startswith('cirros') and image.disk_format == 'qcow2'): return image for image in images: if image.name.lower().startswith('ubuntu'): return image for image in images: if image.name.lower().startswith('centos'): return image self.assertFalse('no sensible image available') class KeystoneBaseFunctionalTestCase(BaseFunctionalTestCase): def setUp(self): super(KeystoneBaseFunctionalTestCase, self).setUp() use_keystone_v2 = os.environ.get('SHADE_USE_KEYSTONE_V2', False) if use_keystone_v2: # keystone v2 has special behavior for the admin # interface and some of the operations, so make a new cloud # object with interface set to admin. # We only do it for keystone tests on v2 because otherwise # the admin interface is not a thing that wants to actually # be used self._set_operator_cloud(interface='admin') shade-1.31.0/shade/tests/functional/test_flavor.py0000666000175000017500000001473313440327640022211 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_flavor ---------------------------------- Functional tests for `shade` flavor resource. 
""" from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestFlavor(base.BaseFunctionalTestCase): def setUp(self): super(TestFlavor, self).setUp() # Generate a random name for flavors in this test self.new_item_name = self.getUniqueString('flavor') self.addCleanup(self._cleanup_flavors) def _cleanup_flavors(self): exception_list = list() for f in self.operator_cloud.list_flavors(get_extra=False): if f['name'].startswith(self.new_item_name): try: self.operator_cloud.delete_flavor(f['id']) except Exception as e: # We were unable to delete a flavor, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_flavor(self): flavor_name = self.new_item_name + '_create' flavor_kwargs = dict( name=flavor_name, ram=1024, vcpus=2, disk=10, ephemeral=5, swap=100, rxtx_factor=1.5, is_public=True ) flavor = self.operator_cloud.create_flavor(**flavor_kwargs) self.assertIsNotNone(flavor['id']) # When properly normalized, we should always get an extra_specs # and expect empty dict on create. self.assertIn('extra_specs', flavor) self.assertEqual({}, flavor['extra_specs']) # We should also always have ephemeral and public attributes self.assertIn('ephemeral', flavor) self.assertIn('OS-FLV-EXT-DATA:ephemeral', flavor) self.assertEqual(5, flavor['ephemeral']) self.assertIn('is_public', flavor) self.assertIn('os-flavor-access:is_public', flavor) self.assertTrue(flavor['is_public']) for key in flavor_kwargs.keys(): self.assertIn(key, flavor) for key, value in flavor_kwargs.items(): self.assertEqual(value, flavor[key]) def test_list_flavors(self): pub_flavor_name = self.new_item_name + '_public' priv_flavor_name = self.new_item_name + '_private' public_kwargs = dict( name=pub_flavor_name, ram=1024, vcpus=2, disk=10, is_public=True ) private_kwargs = dict( name=priv_flavor_name, ram=1024, vcpus=2, disk=10, is_public=False ) # Create a public and private flavor. We expect both to be listed # for an operator. self.operator_cloud.create_flavor(**public_kwargs) self.operator_cloud.create_flavor(**private_kwargs) flavors = self.operator_cloud.list_flavors(get_extra=False) # Flavor list will include the standard devstack flavors. We just want # to make sure both of the flavors we just created are present. found = [] for f in flavors: # extra_specs should be added within list_flavors() self.assertIn('extra_specs', f) if f['name'] in (pub_flavor_name, priv_flavor_name): found.append(f) self.assertEqual(2, len(found)) def test_flavor_access(self): priv_flavor_name = self.new_item_name + '_private' private_kwargs = dict( name=priv_flavor_name, ram=1024, vcpus=2, disk=10, is_public=False ) new_flavor = self.operator_cloud.create_flavor(**private_kwargs) # Validate the 'demo' user cannot see the new flavor flavors = self.user_cloud.search_flavors(priv_flavor_name) self.assertEqual(0, len(flavors)) # We need the tenant ID for the 'demo' user project = self.operator_cloud.get_project('demo') self.assertIsNotNone(project) # Now give 'demo' access self.operator_cloud.add_flavor_access(new_flavor['id'], project['id']) # Now see if the 'demo' user has access to it flavors = self.user_cloud.search_flavors(priv_flavor_name) self.assertEqual(1, len(flavors)) self.assertEqual(priv_flavor_name, flavors[0]['name']) # Now see if the 'demo' user has access to it without needing # the demo_cloud access. 
acls = self.operator_cloud.list_flavor_access(new_flavor['id']) self.assertEqual(1, len(acls)) self.assertEqual(project['id'], acls[0]['project_id']) # Now revoke the access and make sure we can't find it self.operator_cloud.remove_flavor_access(new_flavor['id'], project['id']) flavors = self.user_cloud.search_flavors(priv_flavor_name) self.assertEqual(0, len(flavors)) def test_set_unset_flavor_specs(self): """ Test setting and unsetting flavor extra specs """ flavor_name = self.new_item_name + '_spec_test' kwargs = dict( name=flavor_name, ram=1024, vcpus=2, disk=10 ) new_flavor = self.operator_cloud.create_flavor(**kwargs) # Expect no extra_specs self.assertEqual({}, new_flavor['extra_specs']) # Now set them extra_specs = {'foo': 'aaa', 'bar': 'bbb'} self.operator_cloud.set_flavor_specs(new_flavor['id'], extra_specs) mod_flavor = self.operator_cloud.get_flavor(new_flavor['id']) # Verify extra_specs were set self.assertIn('extra_specs', mod_flavor) self.assertEqual(extra_specs, mod_flavor['extra_specs']) # Unset the 'foo' value self.operator_cloud.unset_flavor_specs(mod_flavor['id'], ['foo']) mod_flavor = self.operator_cloud.get_flavor_by_id(new_flavor['id']) # Verify 'foo' is unset and 'bar' is still set self.assertEqual({'bar': 'bbb'}, mod_flavor['extra_specs']) shade-1.31.0/shade/tests/functional/test_quotas.py0000666000175000017500000000626213440327640022232 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_quotas ---------------------------------- Functional tests for `shade` quotas methods. 
""" from shade.tests.functional import base class TestComputeQuotas(base.BaseFunctionalTestCase): def test_quotas(self): '''Test quotas functionality''' quotas = self.operator_cloud.get_compute_quotas('demo') cores = quotas['cores'] self.operator_cloud.set_compute_quotas('demo', cores=cores + 1) self.assertEqual( cores + 1, self.operator_cloud.get_compute_quotas('demo')['cores']) self.operator_cloud.delete_compute_quotas('demo') self.assertEqual( cores, self.operator_cloud.get_compute_quotas('demo')['cores']) class TestVolumeQuotas(base.BaseFunctionalTestCase): def setUp(self): super(TestVolumeQuotas, self).setUp() if not self.operator_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') def test_quotas(self): '''Test quotas functionality''' quotas = self.operator_cloud.get_volume_quotas('demo') volumes = quotas['volumes'] self.operator_cloud.set_volume_quotas('demo', volumes=volumes + 1) self.assertEqual( volumes + 1, self.operator_cloud.get_volume_quotas('demo')['volumes']) self.operator_cloud.delete_volume_quotas('demo') self.assertEqual( volumes, self.operator_cloud.get_volume_quotas('demo')['volumes']) class TestNetworkQuotas(base.BaseFunctionalTestCase): def setUp(self): super(TestNetworkQuotas, self).setUp() if not self.operator_cloud.has_service('network'): self.skipTest('network service not supported by cloud') def test_quotas(self): '''Test quotas functionality''' quotas = self.operator_cloud.get_network_quotas('demo') network = quotas['network'] self.operator_cloud.set_network_quotas('demo', network=network + 1) self.assertEqual( network + 1, self.operator_cloud.get_network_quotas('demo')['network']) self.operator_cloud.delete_network_quotas('demo') self.assertEqual( network, self.operator_cloud.get_network_quotas('demo')['network']) def test_get_quotas_details(self): expected_keys = ['limit', 'used', 'reserved'] '''Test getting details about quota usage''' quota_details = self.operator_cloud.get_network_quotas( 'demo', details=True) for quota_values in quota_details.values(): for expected_key in expected_keys: self.assertTrue(expected_key in quota_values.keys()) shade-1.31.0/shade/tests/functional/test_image.py0000666000175000017500000001421513440327640021775 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_compute ---------------------------------- Functional tests for `shade` image methods. 
""" import filecmp import os import tempfile from shade.tests.functional import base class TestImage(base.BaseFunctionalTestCase): def setUp(self): super(TestImage, self).setUp() self.image = self.pick_image() def test_create_image(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) finally: self.user_cloud.delete_image(image_name, wait=True) def test_download_image(self): test_image = tempfile.NamedTemporaryFile(delete=False) self.addCleanup(os.remove, test_image.name) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.addCleanup(self.user_cloud.delete_image, image_name, wait=True) output = os.path.join(tempfile.gettempdir(), self.getUniqueString()) self.user_cloud.download_image(image_name, output) self.addCleanup(os.remove, output) self.assertTrue(filecmp.cmp(test_image.name, output), "Downloaded contents don't match created image") def test_create_image_skip_duplicate(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: first_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) second_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.assertEqual(first_image.id, second_image.id) finally: self.user_cloud.delete_image(image_name, wait=True) def test_create_image_force_duplicate(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') first_image = None second_image = None try: first_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) second_image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, allow_duplicates=True, wait=True) self.assertNotEqual(first_image.id, second_image.id) finally: if first_image: self.user_cloud.delete_image(first_image.id, wait=True) if second_image: self.user_cloud.delete_image(second_image.id, wait=True) def test_create_image_update_properties(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) self.user_cloud.update_image_properties( image=image, name=image_name, foo='bar') image = self.user_cloud.get_image(image_name) self.assertIn('foo', image.properties) self.assertEqual(image.properties['foo'], 'bar') finally: self.user_cloud.delete_image(image_name, wait=True) def test_get_image_by_id(self): test_image = tempfile.NamedTemporaryFile(delete=False) test_image.write(b'\0' * 
1024 * 1024) test_image.close() image_name = self.getUniqueString('image') try: image = self.user_cloud.create_image( name=image_name, filename=test_image.name, disk_format='raw', container_format='bare', min_disk=10, min_ram=1024, wait=True) image = self.user_cloud.get_image_by_id(image.id) self.assertEqual(image_name, image.name) self.assertEqual('raw', image.disk_format) finally: self.user_cloud.delete_image(image_name, wait=True) shade-1.31.0/shade/tests/functional/test_recordset.py0000666000175000017500000000761213440327640022710 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_recordset ---------------------------------- Functional tests for `shade` recordset methods. """ from testtools import content from shade.tests.functional import base class TestRecordset(base.BaseFunctionalTestCase): def setUp(self): super(TestRecordset, self).setUp() if not self.user_cloud.has_service('dns'): self.skipTest('dns service not supported by cloud') def test_recordsets(self): '''Test DNS recordsets functionality''' zone = 'example2.net.' email = 'test@example2.net' name = 'www' type_ = 'a' description = 'Test recordset' ttl = 3600 records = ['192.168.1.1'] self.addDetail('zone', content.text_content(zone)) self.addDetail('recordset', content.text_content(name)) self.addCleanup(self.cleanup, zone, name) # Create a zone to hold the tested recordset zone_obj = self.user_cloud.create_zone(name=zone, email=email) # Test we can create a recordset and we get it returned created_recordset = self.user_cloud.create_recordset(zone, name, type_, records, description, ttl) self.assertEqual(created_recordset['zone_id'], zone_obj['id']) self.assertEqual(created_recordset['name'], name + '.' + zone) self.assertEqual(created_recordset['type'], type_.upper()) self.assertEqual(created_recordset['records'], records) self.assertEqual(created_recordset['description'], description) self.assertEqual(created_recordset['ttl'], ttl) # Test that we can list recordsets recordsets = self.user_cloud.list_recordsets(zone) self.assertIsNotNone(recordsets) # Test we get the same recordset with the get_recordset method get_recordset = self.user_cloud.get_recordset(zone, created_recordset['id']) self.assertEqual(get_recordset['id'], created_recordset['id']) # Test the get method also works by name get_recordset = self.user_cloud.get_recordset(zone, name + '.' + zone) self.assertEqual(get_recordset['id'], created_recordset['id']) # Test we can update a field on the recordset and only that field # is updated updated_recordset = self.user_cloud.update_recordset(zone_obj['id'], name + '.' + zone, ttl=7200) self.assertEqual(updated_recordset['id'], created_recordset['id']) self.assertEqual(updated_recordset['name'], name + '.' 
+ zone) self.assertEqual(updated_recordset['type'], type_.upper()) self.assertEqual(updated_recordset['records'], records) self.assertEqual(updated_recordset['description'], description) self.assertEqual(updated_recordset['ttl'], 7200) # Test we can delete and get True returned deleted_recordset = self.user_cloud.delete_recordset( zone, name + '.' + zone) self.assertTrue(deleted_recordset) def cleanup(self, zone_name, recordset_name): self.user_cloud.delete_recordset( zone_name, recordset_name + '.' + zone_name) self.user_cloud.delete_zone(zone_name) shade-1.31.0/shade/tests/functional/test_server_group.py0000666000175000017500000000265113440327640023436 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_server_group ---------------------------------- Functional tests for `shade` server_group resource. """ from shade.tests.functional import base class TestServerGroup(base.BaseFunctionalTestCase): def test_server_group(self): server_group_name = self.getUniqueString() self.addCleanup(self.cleanup, server_group_name) server_group = self.user_cloud.create_server_group( server_group_name, ['affinity']) server_group_ids = [v['id'] for v in self.user_cloud.list_server_groups()] self.assertIn(server_group['id'], server_group_ids) self.user_cloud.delete_server_group(server_group_name) def cleanup(self, server_group_name): server_group = self.user_cloud.get_server_group(server_group_name) if server_group: self.user_cloud.delete_server_group(server_group['id']) shade-1.31.0/shade/tests/functional/test_groups.py0000666000175000017500000000762613440327640022242 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_groups ---------------------------------- Functional tests for `shade` keystone group resource. 
""" import shade from shade.tests.functional import base class TestGroup(base.BaseFunctionalTestCase): def setUp(self): super(TestGroup, self).setUp() i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support groups') self.group_prefix = self.getUniqueString('group') self.addCleanup(self._cleanup_groups) def _cleanup_groups(self): exception_list = list() for group in self.operator_cloud.list_groups(): if group['name'].startswith(self.group_prefix): try: self.operator_cloud.delete_group(group['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise shade.OpenStackCloudException('\n'.join(exception_list)) def test_create_group(self): group_name = self.group_prefix + '_create' group = self.operator_cloud.create_group(group_name, 'test group') for key in ('id', 'name', 'description', 'domain_id'): self.assertIn(key, group) self.assertEqual(group_name, group['name']) self.assertEqual('test group', group['description']) def test_delete_group(self): group_name = self.group_prefix + '_delete' group = self.operator_cloud.create_group(group_name, 'test group') self.assertIsNotNone(group) self.assertTrue(self.operator_cloud.delete_group(group_name)) results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(0, len(results)) def test_delete_group_not_exists(self): self.assertFalse(self.operator_cloud.delete_group('xInvalidGroupx')) def test_search_groups(self): group_name = self.group_prefix + '_search' # Shouldn't find any group with this name yet results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(0, len(results)) # Now create a new group group = self.operator_cloud.create_group(group_name, 'test group') self.assertEqual(group_name, group['name']) # Now we should find only the new group results = self.operator_cloud.search_groups( filters=dict(name=group_name)) self.assertEqual(1, len(results)) self.assertEqual(group_name, results[0]['name']) def test_update_group(self): group_name = self.group_prefix + '_update' group_desc = 'test group' group = self.operator_cloud.create_group(group_name, group_desc) self.assertEqual(group_name, group['name']) self.assertEqual(group_desc, group['description']) updated_group_name = group_name + '_xyz' updated_group_desc = group_desc + ' updated' updated_group = self.operator_cloud.update_group( group_name, name=updated_group_name, description=updated_group_desc) self.assertEqual(updated_group_name, updated_group['name']) self.assertEqual(updated_group_desc, updated_group['description']) shade-1.31.0/shade/tests/functional/test_services.py0000666000175000017500000001231213440327640022532 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. 
""" test_services ---------------------------------- Functional tests for `shade` service resource. """ import string import random from shade.exc import OpenStackCloudException from shade.exc import OpenStackCloudUnavailableFeature from shade.tests.functional import base class TestServices(base.KeystoneBaseFunctionalTestCase): service_attributes = ['id', 'name', 'type', 'description'] def setUp(self): super(TestServices, self).setUp() # Generate a random name for services in this test self.new_service_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_services) def _cleanup_services(self): exception_list = list() for s in self.operator_cloud.list_services(): if s['name'] is not None and \ s['name'].startswith(self.new_service_name): try: self.operator_cloud.delete_service(name_or_id=s['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_service(self): service = self.operator_cloud.create_service( name=self.new_service_name + '_create', type='test_type', description='this is a test description') self.assertIsNotNone(service.get('id')) def test_update_service(self): ver = self.operator_cloud.cloud_config.get_api_version('identity') if ver.startswith('2'): # NOTE(SamYaple): Update service only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.operator_cloud.update_service, 'service_id', name='new name') else: service = self.operator_cloud.create_service( name=self.new_service_name + '_create', type='test_type', description='this is a test description', enabled=True) new_service = self.operator_cloud.update_service( service.id, name=self.new_service_name + '_update', description='this is an updated description', enabled=False ) self.assertEqual(new_service.name, self.new_service_name + '_update') self.assertEqual(new_service.description, 'this is an updated description') self.assertFalse(new_service.enabled) self.assertEqual(service.id, new_service.id) def test_list_services(self): service = self.operator_cloud.create_service( name=self.new_service_name + '_list', type='test_type') observed_services = self.operator_cloud.list_services() self.assertIsInstance(observed_services, list) found = False for s in observed_services: # Test all attributes are returned if s['id'] == service['id']: self.assertEqual(self.new_service_name + '_list', s.get('name')) self.assertEqual('test_type', s.get('type')) found = True self.assertTrue(found, msg='new service not found in service list!') def test_delete_service_by_name(self): # Test delete by name service = self.operator_cloud.create_service( name=self.new_service_name + '_delete_by_name', type='test_type') self.operator_cloud.delete_service(name_or_id=service['name']) observed_services = self.operator_cloud.list_services() found = False for s in observed_services: if s['id'] == service['id']: found = True break self.failUnlessEqual(False, found, message='service was not deleted!') def test_delete_service_by_id(self): # Test delete by id service = self.operator_cloud.create_service( name=self.new_service_name + '_delete_by_id', type='test_type') self.operator_cloud.delete_service(name_or_id=service['id']) observed_services = self.operator_cloud.list_services() found = False for s in observed_services: if s['id'] == service['id']: found = 
True self.failUnlessEqual(False, found, message='service was not deleted!') shade-1.31.0/shade/tests/functional/test_endpoints.py0000666000175000017500000001722713440327640022724 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_endpoint ---------------------------------- Functional tests for `shade` endpoint resource. """ import string import random from shade.exc import OpenStackCloudException from shade.exc import OpenStackCloudUnavailableFeature from shade.tests.functional import base class TestEndpoints(base.KeystoneBaseFunctionalTestCase): endpoint_attributes = ['id', 'region', 'publicurl', 'internalurl', 'service_id', 'adminurl'] def setUp(self): super(TestEndpoints, self).setUp() # Generate a random name for services and regions in this test self.new_item_name = 'test_' + ''.join( random.choice(string.ascii_lowercase) for _ in range(5)) self.addCleanup(self._cleanup_services) self.addCleanup(self._cleanup_endpoints) def _cleanup_endpoints(self): exception_list = list() for e in self.operator_cloud.list_endpoints(): if e.get('region') is not None and \ e['region'].startswith(self.new_item_name): try: self.operator_cloud.delete_endpoint(id=e['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def _cleanup_services(self): exception_list = list() for s in self.operator_cloud.list_services(): if s['name'] is not None and \ s['name'].startswith(self.new_item_name): try: self.operator_cloud.delete_service(name_or_id=s['id']) except Exception as e: # We were unable to delete a service, let's try with next exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise OpenStackCloudException('\n'.join(exception_list)) def test_create_endpoint(self): service_name = self.new_item_name + '_create' service = self.operator_cloud.create_service( name=service_name, type='test_type', description='this is a test description') endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', internal_url='http://internal.test/', admin_url='http://admin.url/', region=service_name) self.assertNotEqual([], endpoints) self.assertIsNotNone(endpoints[0].get('id')) # Test None parameters endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', region=service_name) self.assertNotEqual([], endpoints) self.assertIsNotNone(endpoints[0].get('id')) def test_update_endpoint(self): ver = self.operator_cloud.cloud_config.get_api_version('identity') if ver.startswith('2'): # NOTE(SamYaple): Update endpoint only works with v3 api self.assertRaises(OpenStackCloudUnavailableFeature, self.operator_cloud.update_endpoint, 'endpoint_id1') 
else: service = self.operator_cloud.create_service( name='service1', type='test_type') endpoint = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], url='http://admin.url/', interface='admin', region='orig_region', enabled=False)[0] new_service = self.operator_cloud.create_service( name='service2', type='test_type') new_endpoint = self.operator_cloud.update_endpoint( endpoint.id, service_name_or_id=new_service.id, url='http://public.url/', interface='public', region='update_region', enabled=True) self.assertEqual(new_endpoint.url, 'http://public.url/') self.assertEqual(new_endpoint.interface, 'public') self.assertEqual(new_endpoint.region, 'update_region') self.assertEqual(new_endpoint.service_id, new_service.id) self.assertTrue(new_endpoint.enabled) def test_list_endpoints(self): service_name = self.new_item_name + '_list' service = self.operator_cloud.create_service( name=service_name, type='test_type', description='this is a test description') endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', internal_url='http://internal.test/', region=service_name) observed_endpoints = self.operator_cloud.list_endpoints() found = False for e in observed_endpoints: # Test all attributes are returned for endpoint in endpoints: if e['id'] == endpoint['id']: found = True self.assertEqual(service['id'], e['service_id']) if 'interface' in e: if 'interface' == 'internal': self.assertEqual('http://internal.test/', e['url']) elif 'interface' == 'public': self.assertEqual('http://public.test/', e['url']) else: self.assertEqual('http://public.test/', e['publicurl']) self.assertEqual('http://internal.test/', e['internalurl']) self.assertEqual(service_name, e['region']) self.assertTrue(found, msg='new endpoint not found in endpoints list!') def test_delete_endpoint(self): service_name = self.new_item_name + '_delete' service = self.operator_cloud.create_service( name=service_name, type='test_type', description='this is a test description') endpoints = self.operator_cloud.create_endpoint( service_name_or_id=service['id'], public_url='http://public.test/', internal_url='http://internal.test/', region=service_name) self.assertNotEqual([], endpoints) for endpoint in endpoints: self.operator_cloud.delete_endpoint(endpoint['id']) observed_endpoints = self.operator_cloud.list_endpoints() found = False for e in observed_endpoints: for endpoint in endpoints: if e['id'] == endpoint['id']: found = True break self.failUnlessEqual( False, found, message='new endpoint was not deleted!') shade-1.31.0/shade/tests/functional/test_project.py0000666000175000017500000001127513440327640022364 0ustar zuulzuul00000000000000# Copyright (c) 2016 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_project ---------------------------------- Functional tests for `shade` project resource. 
""" import pprint from shade.exc import OpenStackCloudException from shade.tests.functional import base class TestProject(base.KeystoneBaseFunctionalTestCase): def setUp(self): super(TestProject, self).setUp() self.new_project_name = self.getUniqueString('project') self.addCleanup(self._cleanup_projects) def _cleanup_projects(self): exception_list = list() for p in self.operator_cloud.list_projects(): if p['name'].startswith(self.new_project_name): try: self.operator_cloud.delete_project(p['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: raise OpenStackCloudException('\n'.join(exception_list)) def test_create_project(self): project_name = self.new_project_name + '_create' params = { 'name': project_name, 'description': 'test_create_project', } if self.identity_version == '3': params['domain_id'] = \ self.operator_cloud.get_domain('default')['id'] project = self.operator_cloud.create_project(**params) self.assertIsNotNone(project) self.assertEqual(project_name, project['name']) self.assertEqual('test_create_project', project['description']) user_id = self.operator_cloud.current_user_id # Grant the current user access to the project self.assertTrue(self.operator_cloud.grant_role( 'member', user=user_id, project=project['id'], wait=True)) self.addCleanup( self.operator_cloud.revoke_role, 'member', user=user_id, project=project['id'], wait=True) new_cloud = self.operator_cloud.connect_as_project(project) self.add_info_on_exception( 'new_cloud_config', pprint.pformat(new_cloud.cloud_config.config)) location = new_cloud.current_location self.assertEqual(project_name, location['project']['name']) def test_update_project(self): project_name = self.new_project_name + '_update' params = { 'name': project_name, 'description': 'test_update_project', 'enabled': True } if self.identity_version == '3': params['domain_id'] = \ self.operator_cloud.get_domain('default')['id'] project = self.operator_cloud.create_project(**params) updated_project = self.operator_cloud.update_project( project_name, enabled=False, description='new') self.assertIsNotNone(updated_project) self.assertEqual(project['id'], updated_project['id']) self.assertEqual(project['name'], updated_project['name']) self.assertEqual(updated_project['description'], 'new') self.assertTrue(project['enabled']) self.assertFalse(updated_project['enabled']) # Revert the description and verify the project is still disabled updated_project = self.operator_cloud.update_project( project_name, description=params['description']) self.assertIsNotNone(updated_project) self.assertEqual(project['id'], updated_project['id']) self.assertEqual(project['name'], updated_project['name']) self.assertEqual(project['description'], updated_project['description']) self.assertTrue(project['enabled']) self.assertFalse(updated_project['enabled']) def test_delete_project(self): project_name = self.new_project_name + '_delete' params = {'name': project_name} if self.identity_version == '3': params['domain_id'] = \ self.operator_cloud.get_domain('default')['id'] project = self.operator_cloud.create_project(**params) self.assertIsNotNone(project) self.assertTrue(self.operator_cloud.delete_project(project['id'])) def test_delete_project_not_found(self): self.assertFalse(self.operator_cloud.delete_project('doesNotExist')) shade-1.31.0/shade/tests/functional/test_domain.py0000666000175000017500000001172213440327640022162 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file 
except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # # See the License for the specific language governing permissions and # limitations under the License. """ test_domain ---------------------------------- Functional tests for `shade` keystone domain resource. """ import shade from shade.tests.functional import base class TestDomain(base.BaseFunctionalTestCase): def setUp(self): super(TestDomain, self).setUp() i_ver = self.operator_cloud.cloud_config.get_api_version('identity') if i_ver in ('2', '2.0'): self.skipTest('Identity service does not support domains') self.domain_prefix = self.getUniqueString('domain') self.addCleanup(self._cleanup_domains) def _cleanup_domains(self): exception_list = list() for domain in self.operator_cloud.list_domains(): if domain['name'].startswith(self.domain_prefix): try: self.operator_cloud.delete_domain(domain['id']) except Exception as e: exception_list.append(str(e)) continue if exception_list: # Raise an error: we must make users aware that something went # wrong raise shade.OpenStackCloudException('\n'.join(exception_list)) def test_search_domains(self): domain_name = self.domain_prefix + '_search' # Shouldn't find any domain with this name yet results = self.operator_cloud.search_domains( filters=dict(name=domain_name)) self.assertEqual(0, len(results)) # Now create a new domain domain = self.operator_cloud.create_domain(domain_name) self.assertEqual(domain_name, domain['name']) # Now we should find only the new domain results = self.operator_cloud.search_domains( filters=dict(name=domain_name)) self.assertEqual(1, len(results)) self.assertEqual(domain_name, results[0]['name']) # Now we search by name with name_or_id, should find only new domain results = self.operator_cloud.search_domains(name_or_id=domain_name) self.assertEqual(1, len(results)) self.assertEqual(domain_name, results[0]['name']) def test_update_domain(self): domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) updated = self.operator_cloud.update_domain( domain['id'], name='updated name', description='updated description', enabled=False) self.assertEqual('updated name', updated['name']) self.assertEqual('updated description', updated['description']) self.assertFalse(updated['enabled']) # Now we update domain by name with name_or_id updated = self.operator_cloud.update_domain( None, name_or_id='updated name', name='updated name 2', description='updated description 2', enabled=True) self.assertEqual('updated name 2', updated['name']) self.assertEqual('updated description 2', updated['description']) self.assertTrue(updated['enabled']) def test_delete_domain(self): domain = self.operator_cloud.create_domain(self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) deleted = self.operator_cloud.delete_domain(domain['id']) self.assertTrue(deleted) # Now we delete domain by name with name_or_id domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, 
domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) deleted = self.operator_cloud.delete_domain(None, domain['name']) self.assertTrue(deleted) # Finally, we assert we get False from delete_domain if domain does # not exist domain = self.operator_cloud.create_domain( self.domain_prefix, 'description') self.assertEqual(self.domain_prefix, domain['name']) self.assertEqual('description', domain['description']) self.assertTrue(domain['enabled']) deleted = self.operator_cloud.delete_domain(None, 'bogus_domain') self.assertFalse(deleted) shade-1.31.0/shade/tests/functional/test_volume_backup.py0000666000175000017500000000632113440327640023546 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from shade.tests.functional import base class TestVolume(base.BaseFunctionalTestCase): # Creating a volume backup is incredibly slow. TIMEOUT_SCALING_FACTOR = 1.5 def setUp(self): super(TestVolume, self).setUp() self.skipTest('Volume functional tests temporarily disabled') if not self.user_cloud.has_service('volume'): self.skipTest('volume service not supported by cloud') if not self.user_cloud.has_service('object-store'): self.skipTest('volume backups require swift') def test_create_get_delete_volume_backup(self): volume = self.user_cloud.create_volume( display_name=self.getUniqueString(), size=1) self.addCleanup(self.user_cloud.delete_volume, volume['id']) backup_name_1 = self.getUniqueString() backup_desc_1 = self.getUniqueString() backup = self.user_cloud.create_volume_backup( volume_id=volume['id'], name=backup_name_1, description=backup_desc_1, wait=True) self.assertEqual(backup_name_1, backup['name']) backup = self.user_cloud.get_volume_backup(backup['id']) self.assertEqual("available", backup['status']) self.assertEqual(backup_desc_1, backup['description']) self.user_cloud.delete_volume_backup(backup['id'], wait=True) self.assertIsNone(self.user_cloud.get_volume_backup(backup['id'])) def test_list_volume_backups(self): vol1 = self.user_cloud.create_volume( display_name=self.getUniqueString(), size=1) self.addCleanup(self.user_cloud.delete_volume, vol1['id']) # We create 2 volumes to create 2 backups. We could have created 2 # backups from the same volume but taking 2 successive backups seems # to be race-condition prone. And I didn't want to use an ugly sleep() # here. 
vol2 = self.user_cloud.create_volume( display_name=self.getUniqueString(), size=1) self.addCleanup(self.user_cloud.delete_volume, vol2['id']) backup_name_1 = self.getUniqueString() backup = self.user_cloud.create_volume_backup( volume_id=vol1['id'], name=backup_name_1) self.addCleanup(self.user_cloud.delete_volume_backup, backup['id']) backup = self.user_cloud.create_volume_backup(volume_id=vol2['id']) self.addCleanup(self.user_cloud.delete_volume_backup, backup['id']) backups = self.user_cloud.list_volume_backups() self.assertEqual(2, len(backups)) backups = self.user_cloud.list_volume_backups( search_opts={"name": backup_name_1}) self.assertEqual(1, len(backups)) self.assertEqual(backup_name_1, backups[0]['name']) shade-1.31.0/shade/tests/functional/test_zone.py0000666000175000017500000000600713440327640021666 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ test_zone ---------------------------------- Functional tests for `shade` zone methods. """ from testtools import content from shade.tests.functional import base class TestZone(base.BaseFunctionalTestCase): def setUp(self): super(TestZone, self).setUp() if not self.user_cloud.has_service('dns'): self.skipTest('dns service not supported by cloud') def test_zones(self): '''Test DNS zones functionality''' name = 'example.net.' 
zone_type = 'primary' email = 'test@example.net' description = 'Test zone' ttl = 3600 masters = None self.addDetail('zone', content.text_content(name)) self.addCleanup(self.cleanup, name) # Test we can create a zone and we get it returned zone = self.user_cloud.create_zone( name=name, zone_type=zone_type, email=email, description=description, ttl=ttl, masters=masters) self.assertEqual(zone['name'], name) self.assertEqual(zone['type'], zone_type.upper()) self.assertEqual(zone['email'], email) self.assertEqual(zone['description'], description) self.assertEqual(zone['ttl'], ttl) self.assertEqual(zone['masters'], []) # Test that we can list zones zones = self.user_cloud.list_zones() self.assertIsNotNone(zones) # Test we get the same zone with the get_zone method zone_get = self.user_cloud.get_zone(zone['id']) self.assertEqual(zone_get['id'], zone['id']) # Test the get method also works by name zone_get = self.user_cloud.get_zone(name) self.assertEqual(zone_get['name'], zone['name']) # Test we can update a field on the zone and only that field # is updated zone_update = self.user_cloud.update_zone(zone['id'], ttl=7200) self.assertEqual(zone_update['id'], zone['id']) self.assertEqual(zone_update['name'], zone['name']) self.assertEqual(zone_update['type'], zone['type']) self.assertEqual(zone_update['email'], zone['email']) self.assertEqual(zone_update['description'], zone['description']) self.assertEqual(zone_update['ttl'], 7200) self.assertEqual(zone_update['masters'], zone['masters']) # Test we can delete and get True returned zone_delete = self.user_cloud.delete_zone(zone['id']) self.assertTrue(zone_delete) def cleanup(self, name): self.user_cloud.delete_zone(name) shade-1.31.0/shade/operatorcloud.py0000666000175000017500000000011113440327640017220 0ustar zuulzuul00000000000000from shade.openstackcloud import OpenStackCloud as OperatorCloud # noqa shade-1.31.0/shade/exc.py0000666000175000017500000001342613440327640015132 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys import json import munch from requests import exceptions as _rex from shade import _log class OpenStackCloudException(Exception): log_inner_exceptions = False def __init__(self, message, extra_data=None, **kwargs): args = [message] if extra_data: if isinstance(extra_data, munch.Munch): extra_data = extra_data.toDict() args.append("Extra: {0}".format(str(extra_data))) super(OpenStackCloudException, self).__init__(*args, **kwargs) self.extra_data = extra_data # NOTE(mordred) The next two are not used for anything, but # they are public attributes so we keep them around. self.inner_exception = sys.exc_info() self.orig_message = message def log_error(self, logger=None): # NOTE(mordred) This method is here for backwards compat. As shade # no longer wraps any exceptions, this doesn't do anything. 
pass class OpenStackCloudCreateException(OpenStackCloudException): def __init__(self, resource, resource_id, extra_data=None, **kwargs): super(OpenStackCloudCreateException, self).__init__( message="Error creating {resource}: {resource_id}".format( resource=resource, resource_id=resource_id), extra_data=extra_data, **kwargs) self.resource_id = resource_id class OpenStackCloudTimeout(OpenStackCloudException): pass class OpenStackCloudUnavailableExtension(OpenStackCloudException): pass class OpenStackCloudUnavailableFeature(OpenStackCloudException): pass class OpenStackCloudHTTPError(OpenStackCloudException, _rex.HTTPError): def __init__(self, *args, **kwargs): OpenStackCloudException.__init__(self, *args, **kwargs) _rex.HTTPError.__init__(self, *args, **kwargs) class OpenStackCloudBadRequest(OpenStackCloudHTTPError): """There is something wrong with the request payload. Possible reasons can include malformed json or invalid values to parameters such as flavorRef to a server create. """ class OpenStackCloudURINotFound(OpenStackCloudHTTPError): pass # Backwards compat OpenStackCloudResourceNotFound = OpenStackCloudURINotFound def _log_response_extras(response): # Sometimes we get weird HTML errors. This is usually from load balancers # or other things. Log them to a special logger so that they can be # toggled independently - and at debug level so that a person logging # shade.* only gets them at debug. if response.headers.get('content-type') != 'text/html': return try: if int(response.headers.get('content-length', 0)) == 0: return except Exception: return logger = _log.setup_logging('shade.http') if response.reason: logger.debug( "Non-standard error '{reason}' returned from {url}:".format( reason=response.reason, url=response.url)) else: logger.debug( "Non-standard error returned from {url}:".format( url=response.url)) for response_line in response.text.split('\n'): logger.debug(response_line) # Logic shamelessly stolen from requests def raise_from_response(response, error_message=None): msg = '' if 400 <= response.status_code < 500: source = "Client" elif 500 <= response.status_code < 600: source = "Server" else: return remote_error = "Error for url: {url}".format(url=response.url) try: details = response.json() # Nova returns documents that look like # {statusname: {'message': message, 'code': code}} detail_keys = list(details.keys()) if len(detail_keys) == 1: detail_key = detail_keys[0] detail_message = details[detail_key].get('message') if detail_message: remote_error += " {message}".format(message=detail_message) except ValueError: if response.reason: remote_error += " {reason}".format(reason=response.reason) except AttributeError: if response.reason: remote_error += " {reason}".format(reason=response.reason) try: json_resp = json.loads(details[detail_key]) fault_string = json_resp.get('faultstring') if fault_string: remote_error += " {fault}".format(fault=fault_string) except Exception: pass _log_response_extras(response) if error_message: msg = '{error_message}. 
({code}) {source} {remote_error}'.format( error_message=error_message, source=source, code=response.status_code, remote_error=remote_error) else: msg = '({code}) {source} {remote_error}'.format( code=response.status_code, source=source, remote_error=remote_error) # Special case 404 since we raised a specific one for neutron exceptions # before if response.status_code == 404: raise OpenStackCloudURINotFound(msg, response=response) elif response.status_code == 400: raise OpenStackCloudBadRequest(msg, response=response) if msg: raise OpenStackCloudHTTPError(msg, response=response) shade-1.31.0/shade/_utils.py0000666000175000017500000006161713440327640015657 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import fnmatch import functools import inspect import jmespath import munch import netifaces import re import six import sre_constants import sys import time import uuid from decorator import decorator from shade import _log from shade import exc from shade import meta _decorated_methods = [] def _exc_clear(): """Because sys.exc_clear is gone in py3 and is not in six.""" if sys.version_info[0] == 2: sys.exc_clear() def _iterate_timeout(timeout, message, wait=2): """Iterate and raise an exception on timeout. This is a generator that will continually yield and sleep for wait seconds, and if the timeout is reached, will raise an exception with <message>. """ log = _log.setup_logging('shade.iterate_timeout') try: # None as a wait winds up flowing well in the per-resource cache # flow. We could spread this logic around to all of the calling # points, but just having this treat None as "I don't have a value" # seems friendlier if wait is None: wait = 2 elif wait == 0: # wait should be < timeout, unless timeout is None wait = 0.1 if timeout is None else min(0.1, timeout) wait = float(wait) except ValueError: raise exc.OpenStackCloudException( "Wait value must be an int or float value. {wait} given" " instead".format(wait=wait)) start = time.time() count = 0 while (timeout is None) or (time.time() < start + timeout): count += 1 yield count log.debug('Waiting %s seconds', wait) time.sleep(wait) raise exc.OpenStackCloudTimeout(message) def _make_unicode(input): """Turn an input into unicode unconditionally :param input: A unicode, string or other object """ try: if isinstance(input, unicode): return input if isinstance(input, str): return input.decode('utf-8') else: # int, for example return unicode(input) except NameError: # python3! return str(input) def _dictify_resource(resource): if isinstance(resource, list): return [_dictify_resource(r) for r in resource] else: if hasattr(resource, 'toDict'): return resource.toDict() else: return resource def _filter_list(data, name_or_id, filters): """Filter a list by name/ID and arbitrary meta data. :param list data: The list of dictionary data to filter. It is expected that each dictionary contains an 'id' and 'name' key if a value for name_or_id is given. 
:param string name_or_id: The name or ID of the entity being filtered. Can be a glob pattern, such as 'nb01*'. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. """ # The logger is shade.fnmatch to allow a user/operator to configure logging # not to communicate about fnmatch misses (they shouldn't be too spammy, # but one never knows) log = _log.setup_logging('shade.fnmatch') if name_or_id: # name_or_id might already be unicode name_or_id = _make_unicode(name_or_id) identifier_matches = [] bad_pattern = False try: fn_reg = re.compile(fnmatch.translate(name_or_id)) except sre_constants.error: # If the fnmatch re doesn't compile, then we don't care, # but log it in case the user DID pass a pattern but did # it poorly and wants to know what went wrong with their # search fn_reg = None for e in data: e_id = _make_unicode(e.get('id', None)) e_name = _make_unicode(e.get('name', None)) if ((e_id and e_id == name_or_id) or (e_name and e_name == name_or_id)): identifier_matches.append(e) else: # Only try fnmatch if we don't match exactly if not fn_reg: # If we don't have a pattern, skip this, but set the flag # so that we log the bad pattern bad_pattern = True continue if ((e_id and fn_reg.match(e_id)) or (e_name and fn_reg.match(e_name))): identifier_matches.append(e) if not identifier_matches and bad_pattern: log.debug("Bad pattern passed to fnmatch", exc_info=True) data = identifier_matches if not filters: return data if isinstance(filters, six.string_types): return jmespath.search(filters, data) def _dict_filter(f, d): if not d: return False for key in f.keys(): if isinstance(f[key], dict): if not _dict_filter(f[key], d.get(key, None)): return False elif d.get(key, None) != f[key]: return False return True filtered = [] for e in data: filtered.append(e) for key in filters.keys(): if isinstance(filters[key], dict): if not _dict_filter(filters[key], e.get(key, None)): filtered.pop() break elif e.get(key, None) != filters[key]: filtered.pop() break return filtered def _get_entity(cloud, resource, name_or_id, filters, **kwargs): """Return a single entity from the list returned by a given method. :param object cloud: The controller class (Example: the main OpenStackCloud object). :param string or callable resource: The string that identifies the resource to use to lookup the get_<resource_name>_by_id or search_<resource_name>s methods (Example: network) or a callable to invoke. :param string name_or_id: The name or ID of the entity being filtered or an object or dict. If this is an object/dict with an 'id' attr/key, we return it and bypass resource lookup. :param filters: A dictionary of meta data to use for further filtering. OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" """ # Sometimes in the control flow of shade, we already have an object # fetched. Rather than then needing to pull the name or id out of that # object, pass it in here and rely on caching to prevent us from making # an additional call, it's simple enough to test to see if we got an # object and just short-circuit return it. 
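# For example (illustrative only): _get_entity(cloud, 'network', 'net1', None) # resolves via search_networks() and raises OpenStackCloudException when # more than one match is found.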
if (hasattr(name_or_id, 'id') or (isinstance(name_or_id, dict) and 'id' in name_or_id)): return name_or_id # If a uuid is passed, short-circuit by calling the # get_<resource_name>_by_id method if getattr(cloud, 'use_direct_get', False) and _is_uuid_like(name_or_id): get_resource = getattr(cloud, 'get_%s_by_id' % resource, None) if get_resource: return get_resource(name_or_id) search = resource if callable(resource) else getattr( cloud, 'search_%ss' % resource, None) if search: entities = search(name_or_id, filters, **kwargs) if entities: if len(entities) > 1: raise exc.OpenStackCloudException( "Multiple matches found for %s" % name_or_id) return entities[0] return None def normalize_keystone_services(services): """Normalize the structure of keystone services In keystone v2, there is a field called "service_type". In v3, it's "type". Just make the returned dict have both. :param list services: A list of keystone service dicts :returns: A list of normalized dicts. """ ret = [] for service in services: service_type = service.get('type', service.get('service_type')) new_service = { 'id': service['id'], 'name': service['name'], 'description': service.get('description', None), 'type': service_type, 'service_type': service_type, 'enabled': service['enabled'] } ret.append(new_service) return meta.obj_list_to_munch(ret) def localhost_supports_ipv6(): """Determine whether the local host supports IPv6 We look for a default route that supports the IPv6 address family, and assume that if it is present, this host has globally routable IPv6 connectivity. """ try: return netifaces.AF_INET6 in netifaces.gateways()['default'] except AttributeError: return False def normalize_users(users): ret = [ dict( id=user.get('id'), email=user.get('email'), name=user.get('name'), username=user.get('username'), default_project_id=user.get('default_project_id', user.get('tenantId')), domain_id=user.get('domain_id'), enabled=user.get('enabled'), description=user.get('description') ) for user in users ] return meta.obj_list_to_munch(ret) def normalize_domains(domains): ret = [ dict( id=domain.get('id'), name=domain.get('name'), description=domain.get('description'), enabled=domain.get('enabled'), ) for domain in domains ] return meta.obj_list_to_munch(ret) def normalize_groups(domains): """Normalize Identity groups.""" ret = [ dict( id=domain.get('id'), name=domain.get('name'), description=domain.get('description'), domain_id=domain.get('domain_id'), ) for domain in domains ] return meta.obj_list_to_munch(ret) def normalize_role_assignments(assignments): """Put role_assignments into a form that works with search/get interface. Role assignments have the structure:: [ { "role": { "id": "--role-id--" }, "scope": { "domain": { "id": "--domain-id--" } }, "user": { "id": "--user-id--" } }, ] Which is hard to work with in the rest of our interface. Map this to be:: [ { "id": "--role-id--", "domain": "--domain-id--", "user": "--user-id--", } ] Scope can be "domain" or "project" and "user" can also be "group". :param list assignments: A list of dictionaries of role assignments. :returns: A list of flattened/normalized role assignment dicts. 
""" new_assignments = [] for assignment in assignments: new_val = munch.Munch({'id': assignment['role']['id']}) for scope in ('project', 'domain'): if scope in assignment['scope']: new_val[scope] = assignment['scope'][scope]['id'] for assignee in ('user', 'group'): if assignee in assignment: new_val[assignee] = assignment[assignee]['id'] new_assignments.append(new_val) return new_assignments def normalize_flavor_accesses(flavor_accesses): """Normalize Flavor access list.""" return [munch.Munch( dict( flavor_id=acl.get('flavor_id'), project_id=acl.get('project_id') or acl.get('tenant_id'), ) ) for acl in flavor_accesses ] def valid_kwargs(*valid_args): # This decorator checks if argument passed as **kwargs to a function are # present in valid_args. # # Typically, valid_kwargs is used when we want to distinguish between # None and omitted arguments and we still want to validate the argument # list. # # Example usage: # # @valid_kwargs('opt_arg1', 'opt_arg2') # def my_func(self, mandatory_arg1, mandatory_arg2, **kwargs): # ... # @decorator def func_wrapper(func, *args, **kwargs): argspec = inspect.getargspec(func) for k in kwargs: if k not in argspec.args[1:] and k not in valid_args: raise TypeError( "{f}() got an unexpected keyword argument " "'{arg}'".format(f=inspect.stack()[1][3], arg=k)) return func(*args, **kwargs) return func_wrapper def _func_wrap(f): # NOTE(morgan): This extra wrapper is intended to eliminate ever # passing a bound method to dogpile.cache's cache_on_arguments. In # 0.7.0 and later it is impossible to pass bound methods to the # decorator. This was introduced when utilizing the decorate module in # lieu of a direct wrap implementation. @functools.wraps(f) def inner(*args, **kwargs): return f(*args, **kwargs) return inner def cache_on_arguments(*cache_on_args, **cache_on_kwargs): _cache_name = cache_on_kwargs.pop('resource', None) def _inner_cache_on_arguments(func): def _cache_decorator(obj, *args, **kwargs): the_method = obj._get_cache(_cache_name).cache_on_arguments( *cache_on_args, **cache_on_kwargs)( _func_wrap(func.__get__(obj, type(obj)))) return the_method(*args, **kwargs) def invalidate(obj, *args, **kwargs): return obj._get_cache( _cache_name).cache_on_arguments()(func).invalidate( *args, **kwargs) _cache_decorator.invalidate = invalidate _cache_decorator.func = func _decorated_methods.append(func.__name__) return _cache_decorator return _inner_cache_on_arguments @contextlib.contextmanager def shade_exceptions(error_message=None): """Context manager for dealing with shade exceptions. :param string error_message: String to use for the exception message content on non-OpenStackCloudExceptions. Useful for avoiding wrapping shade OpenStackCloudException exceptions within themselves. Code called from within the context may throw such exceptions without having to catch and reraise them. Non-OpenStackCloudException exceptions thrown within the context will be wrapped and the exception message will be appended to the given error message. """ try: yield except exc.OpenStackCloudException: raise except Exception as e: if error_message is None: error_message = str(e) raise exc.OpenStackCloudException(error_message) def safe_dict_min(key, data): """Safely find the minimum for a given key in a list of dict objects. This will find the minimum integer value for specific dictionary key across a list of dictionaries. The values for the given key MUST be integers, or string representations of an integer. 
The dictionary key does not have to be present in all (or any) of the elements/dicts within the data set. :param string key: The dictionary key to search for the minimum value. :param list data: List of dicts to use for the data set. :returns: None if the field was not not found in any elements, or the minimum value for the field otherwise. """ min_value = None for d in data: if (key in d) and (d[key] is not None): try: val = int(d[key]) except ValueError: raise exc.OpenStackCloudException( "Search for minimum value failed. " "Value for {key} is not an integer: {value}".format( key=key, value=d[key]) ) if (min_value is None) or (val < min_value): min_value = val return min_value def safe_dict_max(key, data): """Safely find the maximum for a given key in a list of dict objects. This will find the maximum integer value for specific dictionary key across a list of dictionaries. The values for the given key MUST be integers, or string representations of an integer. The dictionary key does not have to be present in all (or any) of the elements/dicts within the data set. :param string key: The dictionary key to search for the maximum value. :param list data: List of dicts to use for the data set. :returns: None if the field was not not found in any elements, or the maximum value for the field otherwise. """ max_value = None for d in data: if (key in d) and (d[key] is not None): try: val = int(d[key]) except ValueError: raise exc.OpenStackCloudException( "Search for maximum value failed. " "Value for {key} is not an integer: {value}".format( key=key, value=d[key]) ) if (max_value is None) or (val > max_value): max_value = val return max_value def _call_client_and_retry(client, url, retry_on=None, call_retries=3, retry_wait=2, **kwargs): """Method to provide retry operations. Some APIs utilize HTTP errors on certian operations to indicate that the resource is presently locked, and as such this mechanism provides the ability to retry upon known error codes. :param object client: The client method, such as: ``self.baremetal_client.post`` :param string url: The URL to perform the operation upon. :param integer retry_on: A list of error codes that can be retried on. The method also supports a single integer to be defined. :param integer call_retries: The number of times to retry the call upon the error code defined by the 'retry_on' parameter. Default: 3 :param integer retry_wait: The time in seconds to wait between retry attempts. Default: 2 :returns: The object returned by the client call. """ # NOTE(TheJulia): This method, as of this note, does not have direct # unit tests, although is fairly well tested by the tests checking # retry logic in test_baremetal_node.py. log = _log.setup_logging('shade.http') if isinstance(retry_on, int): retry_on = [retry_on] count = 0 while (count < call_retries): count += 1 try: ret_val = client(url, **kwargs) except exc.OpenStackCloudHTTPError as e: if (retry_on is not None and e.response.status_code in retry_on): log.debug('Received retryable error {err}, waiting ' '{wait} seconds to retry', { 'err': e.response.status_code, 'wait': retry_wait }) time.sleep(retry_wait) continue else: raise # Break out of the loop, since the loop should only continue # when we encounter a known connection error. return ret_val def parse_range(value): """Parse a numerical range string. Breakdown a range expression into its operater and numerical parts. This expression must be a string. 
Valid values must be an integer string, optionally preceeded by one of the following operators:: - "<" : Less than - ">" : Greater than - "<=" : Less than or equal to - ">=" : Greater than or equal to Some examples of valid values and function return values:: - "1024" : returns (None, 1024) - "<5" : returns ("<", 5) - ">=100" : returns (">=", 100) :param string value: The range expression to be parsed. :returns: A tuple with the operator string (or None if no operator was given) and the integer value. None is returned if parsing failed. """ if value is None: return None range_exp = re.match('(<|>|<=|>=){0,1}(\d+)$', value) if range_exp is None: return None op = range_exp.group(1) num = int(range_exp.group(2)) return (op, num) def range_filter(data, key, range_exp): """Filter a list by a single range expression. :param list data: List of dictionaries to be searched. :param string key: Key name to search within the data set. :param string range_exp: The expression describing the range of values. :returns: A list subset of the original data set. :raises: OpenStackCloudException on invalid range expressions. """ filtered = [] range_exp = str(range_exp).upper() if range_exp == "MIN": key_min = safe_dict_min(key, data) if key_min is None: return [] for d in data: if int(d[key]) == key_min: filtered.append(d) return filtered elif range_exp == "MAX": key_max = safe_dict_max(key, data) if key_max is None: return [] for d in data: if int(d[key]) == key_max: filtered.append(d) return filtered # Not looking for a min or max, so a range or exact value must # have been supplied. val_range = parse_range(range_exp) # If parsing the range fails, it must be a bad value. if val_range is None: raise exc.OpenStackCloudException( "Invalid range value: {value}".format(value=range_exp)) op = val_range[0] if op: # Range matching for d in data: d_val = int(d[key]) if op == '<': if d_val < val_range[1]: filtered.append(d) elif op == '>': if d_val > val_range[1]: filtered.append(d) elif op == '<=': if d_val <= val_range[1]: filtered.append(d) elif op == '>=': if d_val >= val_range[1]: filtered.append(d) return filtered else: # Exact number match for d in data: if int(d[key]) == val_range[1]: filtered.append(d) return filtered def generate_patches_from_kwargs(operation, **kwargs): """Given a set of parameters, returns a list with the valid patch values. :param string operation: The operation to perform. :param list kwargs: Dict of parameters. :returns: A list with the right patch values. 
""" patches = [] for k, v in kwargs.items(): patch = {'op': operation, 'value': v, 'path': '/%s' % k} patches.append(patch) return sorted(patches) class FileSegment(object): """File-like object to pass to requests.""" def __init__(self, filename, offset, length): self.filename = filename self.offset = offset self.length = length self.pos = 0 self._file = open(filename, 'rb') self.seek(0) def tell(self): return self._file.tell() - self.offset def seek(self, offset, whence=0): if whence == 0: self._file.seek(self.offset + offset, whence) elif whence == 1: self._file.seek(offset, whence) elif whence == 2: self._file.seek(self.offset + self.length - offset, 0) def read(self, size=-1): remaining = self.length - self.pos if remaining <= 0: return b'' to_read = remaining if size < 0 else min(size, remaining) chunk = self._file.read(to_read) self.pos += len(chunk) return chunk def reset(self): self._file.seek(self.offset, 0) def _format_uuid_string(string): return (string.replace('urn:', '') .replace('uuid:', '') .strip('{}') .replace('-', '') .lower()) def _is_uuid_like(val): """Returns validation of a value as a UUID. :param val: Value to verify :type val: string :returns: bool .. versionchanged:: 1.1.1 Support non-lowercase UUIDs. """ try: return str(uuid.UUID(val)).replace('-', '') == _format_uuid_string(val) except (TypeError, ValueError, AttributeError): return False shade-1.31.0/shade/_adapter.py0000666000175000017500000001347513440327640016136 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ''' Wrapper around keystoneauth Session to wrap calls in TaskManager ''' import functools from keystoneauth1 import adapter from six.moves import urllib from shade import _log from shade import exc from shade import task_manager def extract_name(url): '''Produce a key name to use in logging/metrics from the URL path. We want to be able to logic/metric sane general things, so we pull the url apart to generate names. 
The function returns a list because there are two different ways in which the elements want to be combined below (one for logging, one for statsd) Some examples are likely useful: /servers -> ['servers'] /servers/{id} -> ['servers'] /servers/{id}/os-security-groups -> ['servers', 'os-security-groups'] /v2.0/networks.json -> ['networks'] ''' url_path = urllib.parse.urlparse(url).path.strip() # Remove / from the beginning to keep the list indexes of interesting # things consistent if url_path.startswith('/'): url_path = url_path[1:] # Special case for neutron, which puts .json on the end of urls if url_path.endswith('.json'): url_path = url_path[:-len('.json')] url_parts = url_path.split('/') if url_parts[-1] == 'detail': # Special case detail calls # GET /servers/detail # returns ['servers', 'detail'] name_parts = url_parts[-2:] else: # Strip leading version piece so that # GET /v2.0/networks # returns ['networks'] if url_parts[0] in ('v1', 'v2', 'v2.0'): url_parts = url_parts[1:] name_parts = [] # Pull out every other URL portion - so that # GET /servers/{id}/os-security-groups # returns ['servers', 'os-security-groups'] for idx in range(0, len(url_parts)): if not idx % 2 and url_parts[idx]: name_parts.append(url_parts[idx]) # Keystone Token fetching is a special case, so we name it "tokens" if url_path.endswith('tokens'): name_parts = ['tokens'] # Getting the root of an endpoint is doing version discovery if not name_parts: name_parts = ['discovery'] # Strip out anything that's empty or None return [part for part in name_parts if part] class ShadeAdapter(adapter.Adapter): def __init__(self, shade_logger, manager, *args, **kwargs): super(ShadeAdapter, self).__init__(*args, **kwargs) self.shade_logger = shade_logger self.manager = manager self.request_log = _log.setup_logging('shade.request_ids') def _log_request_id(self, response, obj=None): # Log the request id and object id in a specific logger. This way # someone can turn it on if they're interested in this kind of tracing. request_id = response.headers.get('x-openstack-request-id') if not request_id: return response tmpl = "{meth} call to {service} for {url} used request id {req}" kwargs = dict( meth=response.request.method, service=self.service_type, url=response.request.url, req=request_id) if isinstance(obj, dict): obj_id = obj.get('id', obj.get('uuid')) if obj_id: kwargs['obj_id'] = obj_id tmpl += " returning object {obj_id}" self.request_log.debug(tmpl.format(**kwargs)) return response def _munch_response(self, response, result_key=None, error_message=None): exc.raise_from_response(response, error_message=error_message) if not response.content: # This doens't have any content return self._log_request_id(response) # Some REST calls do not return json content. Don't decode it. 
if 'application/json' not in response.headers.get('Content-Type'): return self._log_request_id(response) try: result_json = response.json() self._log_request_id(response, result_json) except Exception: return self._log_request_id(response) return result_json def request( self, url, method, run_async=False, error_message=None, *args, **kwargs): name_parts = extract_name(url) name = '.'.join([self.service_type, method] + name_parts) class_name = "".join([ part.lower().capitalize() for part in name.split('.')]) request_method = functools.partial( super(ShadeAdapter, self).request, url, method) class RequestTask(task_manager.BaseTask): def __init__(self, **kw): super(RequestTask, self).__init__(**kw) self.name = name self.__class__.__name__ = str(class_name) self.run_async = run_async def main(self, client): self.args.setdefault('raise_exc', False) return request_method(**self.args) response = self.manager.submit_task(RequestTask(**kwargs)) if run_async: return response else: return self._munch_response(response, error_message=error_message) def _version_matches(self, version): api_version = self.get_api_major_version() if api_version: return api_version[0] == version return False shade-1.31.0/shade/inventory.py0000666000175000017500000000573713440327640016416 0ustar zuulzuul00000000000000# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import functools import os_client_config import shade from shade import _utils class OpenStackInventory(object): # Put this here so the capability can be detected with hasattr on the class extra_config = None def __init__( self, config_files=None, refresh=False, private=False, config_key=None, config_defaults=None, cloud=None, use_direct_get=False): if config_files is None: config_files = [] config = os_client_config.config.OpenStackConfig( config_files=os_client_config.config.CONFIG_FILES + config_files) self.extra_config = config.get_extra_config( config_key, config_defaults) if cloud is None: self.clouds = [ shade.OpenStackCloud(cloud_config=cloud_config) for cloud_config in config.get_all_clouds() ] else: try: self.clouds = [ shade.OpenStackCloud( cloud_config=config.get_one_cloud(cloud)) ] except os_client_config.exceptions.OpenStackConfigException as e: raise shade.OpenStackCloudException(e) if private: for cloud in self.clouds: cloud.private = True # Handle manual invalidation of entire persistent cache if refresh: for cloud in self.clouds: cloud._cache.invalidate() def list_hosts(self, expand=True, fail_on_cloud_config=True): hostvars = [] for cloud in self.clouds: try: # Cycle on servers for server in cloud.list_servers(detailed=expand): hostvars.append(server) except shade.OpenStackCloudException: # Don't fail on one particular cloud as others may work if fail_on_cloud_config: raise return hostvars def search_hosts(self, name_or_id=None, filters=None, expand=True): hosts = self.list_hosts(expand=expand) return _utils._filter_list(hosts, name_or_id, filters) def get_host(self, name_or_id, filters=None, expand=True): if expand: func = self.search_hosts else: func = functools.partial(self.search_hosts, expand=False) return _utils._get_entity(self, func, name_or_id, filters) shade-1.31.0/shade/_heat/0000775000175000017500000000000013440330010015032 5ustar zuulzuul00000000000000shade-1.31.0/shade/_heat/event_utils.py0000666000175000017500000000675113440327640017777 0ustar zuulzuul00000000000000# Copyright 2015 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import time from shade import meta def get_events(cloud, stack_id, event_args, marker=None, limit=None): # TODO(mordred) FIX THIS ONCE assert_calls CAN HANDLE QUERY STRINGS params = collections.OrderedDict() for k in sorted(event_args.keys()): params[k] = event_args[k] if marker: event_args['marker'] = marker if limit: event_args['limit'] = limit data = cloud._orchestration_client.get( '/stacks/{id}/events'.format(id=stack_id), params=params) events = meta.get_and_munchify('events', data) # Show which stack the event comes from (for nested events) for e in events: e['stack_name'] = stack_id.split("/")[0] return events def poll_for_events( cloud, stack_name, action=None, poll_period=5, marker=None): """Continuously poll events and logs for performed action on stack.""" def stop_check_action(a): stop_status = ('%s_FAILED' % action, '%s_COMPLETE' % action) return a in stop_status def stop_check_no_action(a): return a.endswith('_COMPLETE') or a.endswith('_FAILED') if action: stop_check = stop_check_action else: stop_check = stop_check_no_action no_event_polls = 0 msg_template = "\n Stack %(name)s %(status)s \n" def is_stack_event(event): if event.get('resource_name', '') != stack_name: return False phys_id = event.get('physical_resource_id', '') links = dict((l.get('rel'), l.get('href')) for l in event.get('links', [])) stack_id = links.get('stack', phys_id).rsplit('/', 1)[-1] return stack_id == phys_id while True: events = get_events( cloud, stack_id=stack_name, event_args={'sort_dir': 'asc', 'marker': marker}) if len(events) == 0: no_event_polls += 1 else: no_event_polls = 0 # set marker to last event that was received. marker = getattr(events[-1], 'id', None) for event in events: # check if stack event was also received if is_stack_event(event): stack_status = getattr(event, 'resource_status', '') msg = msg_template % dict( name=stack_name, status=stack_status) if stop_check(stack_status): return stack_status, msg if no_event_polls >= 2: # after 2 polls with no events, fall back to a stack get stack = cloud.get_stack(stack_name, resolve_outputs=False) stack_status = stack['stack_status'] msg = msg_template % dict( name=stack_name, status=stack_status) if stop_check(stack_status): return stack_status, msg # go back to event polling again no_event_polls = 0 time.sleep(poll_period) shade-1.31.0/shade/_heat/environment_format.py0000666000175000017500000000352013440327640021341 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import yaml from shade._heat import template_format SECTIONS = ( PARAMETER_DEFAULTS, PARAMETERS, RESOURCE_REGISTRY, ENCRYPTED_PARAM_NAMES, EVENT_SINKS, PARAMETER_MERGE_STRATEGIES ) = ( 'parameter_defaults', 'parameters', 'resource_registry', 'encrypted_param_names', 'event_sinks', 'parameter_merge_strategies' ) def parse(env_str): """Takes a string and returns a dict containing the parsed structure. This includes determination of whether the string is using the YAML format. 
""" try: env = yaml.load(env_str, Loader=template_format.yaml_loader) except yaml.YAMLError: # NOTE(prazumovsky): we need to return more informative error for # user, so use SafeLoader, which return error message with template # snippet where error has been occurred. try: env = yaml.load(env_str, Loader=yaml.SafeLoader) except yaml.YAMLError as yea: raise ValueError(yea) else: if env is None: env = {} elif not isinstance(env, dict): raise ValueError( 'The environment is not a valid YAML mapping data type.') for param in env: if param not in SECTIONS: raise ValueError('environment has wrong section "%s"' % param) return env shade-1.31.0/shade/_heat/template_format.py0000666000175000017500000000503513440327640020613 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json import yaml if hasattr(yaml, 'CSafeLoader'): yaml_loader = yaml.CSafeLoader else: yaml_loader = yaml.SafeLoader if hasattr(yaml, 'CSafeDumper'): yaml_dumper = yaml.CSafeDumper else: yaml_dumper = yaml.SafeDumper def _construct_yaml_str(self, node): # Override the default string handling function # to always return unicode objects return self.construct_scalar(node) yaml_loader.add_constructor(u'tag:yaml.org,2002:str', _construct_yaml_str) # Unquoted dates like 2013-05-23 in yaml files get loaded as objects of type # datetime.data which causes problems in API layer when being processed by # openstack.common.jsonutils. Therefore, make unicode string out of timestamps # until jsonutils can handle dates. yaml_loader.add_constructor(u'tag:yaml.org,2002:timestamp', _construct_yaml_str) def parse(tmpl_str): """Takes a string and returns a dict containing the parsed structure. This includes determination of whether the string is using the JSON or YAML format. """ # strip any whitespace before the check tmpl_str = tmpl_str.strip() if tmpl_str.startswith('{'): tpl = json.loads(tmpl_str) else: try: tpl = yaml.load(tmpl_str, Loader=yaml_loader) except yaml.YAMLError: # NOTE(prazumovsky): we need to return more informative error for # user, so use SafeLoader, which return error message with template # snippet where error has been occurred. try: tpl = yaml.load(tmpl_str, Loader=yaml.SafeLoader) except yaml.YAMLError as yea: raise ValueError(yea) else: if tpl is None: tpl = {} # Looking for supported version keys in the loaded template if not ('HeatTemplateFormatVersion' in tpl or 'heat_template_version' in tpl or 'AWSTemplateFormatVersion' in tpl): raise ValueError("Template format version not found.") return tpl shade-1.31.0/shade/_heat/utils.py0000666000175000017500000000346213440327640016572 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import base64 import os from six.moves.urllib import error from six.moves.urllib import parse from six.moves.urllib import request from shade import exc def base_url_for_url(url): parsed = parse.urlparse(url) parsed_dir = os.path.dirname(parsed.path) return parse.urljoin(url, parsed_dir) def normalise_file_path_to_url(path): if parse.urlparse(path).scheme: return path path = os.path.abspath(path) return parse.urljoin('file:', request.pathname2url(path)) def read_url_content(url): try: # TODO(mordred) Use requests content = request.urlopen(url).read() except error.URLError: raise exc.OpenStackCloudException( 'Could not fetch contents for %s' % url) if content: try: content.decode('utf-8') except ValueError: content = base64.encodestring(content) return content def resource_nested_identifier(rsrc): nested_link = [l for l in rsrc.links or [] if l.get('rel') == 'nested'] if nested_link: nested_href = nested_link[0].get('href') nested_identifier = nested_href.split("/")[-2:] return "/".join(nested_identifier) shade-1.31.0/shade/_heat/__init__.py0000666000175000017500000000000013440327640017152 0ustar zuulzuul00000000000000shade-1.31.0/shade/_heat/template_utils.py0000666000175000017500000002577413440327640020477 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import json import six from six.moves.urllib import parse from six.moves.urllib import request from shade._heat import environment_format from shade._heat import template_format from shade._heat import utils from shade import exc def get_template_contents(template_file=None, template_url=None, template_object=None, object_request=None, files=None, existing=False): is_object = False tpl = None # Transform a bare file path to a file:// URL. 
if template_file: template_url = utils.normalise_file_path_to_url(template_file) if template_url: tpl = request.urlopen(template_url).read() elif template_object: is_object = True template_url = template_object tpl = object_request and object_request('GET', template_object) elif existing: return {}, None else: raise exc.OpenStackCloudException( 'Must provide one of template_file,' ' template_url or template_object') if not tpl: raise exc.OpenStackCloudException( 'Could not fetch template from %s' % template_url) try: if isinstance(tpl, six.binary_type): tpl = tpl.decode('utf-8') template = template_format.parse(tpl) except ValueError as e: raise exc.OpenStackCloudException( 'Error parsing template %(url)s %(error)s' % {'url': template_url, 'error': e}) tmpl_base_url = utils.base_url_for_url(template_url) if files is None: files = {} resolve_template_get_files(template, files, tmpl_base_url, is_object, object_request) return files, template def resolve_template_get_files(template, files, template_base_url, is_object=False, object_request=None): def ignore_if(key, value): if key != 'get_file' and key != 'type': return True if not isinstance(value, six.string_types): return True if (key == 'type' and not value.endswith(('.yaml', '.template'))): return True return False def recurse_if(value): return isinstance(value, (dict, list)) get_file_contents(template, files, template_base_url, ignore_if, recurse_if, is_object, object_request) def is_template(file_content): try: if isinstance(file_content, six.binary_type): file_content = file_content.decode('utf-8') template_format.parse(file_content) except (ValueError, TypeError): return False return True def get_file_contents(from_data, files, base_url=None, ignore_if=None, recurse_if=None, is_object=False, object_request=None): if recurse_if and recurse_if(from_data): if isinstance(from_data, dict): recurse_data = from_data.values() else: recurse_data = from_data for value in recurse_data: get_file_contents(value, files, base_url, ignore_if, recurse_if, is_object, object_request) if isinstance(from_data, dict): for key, value in from_data.items(): if ignore_if and ignore_if(key, value): continue if base_url and not base_url.endswith('/'): base_url = base_url + '/' str_url = parse.urljoin(base_url, value) if str_url not in files: if is_object and object_request: file_content = object_request('GET', str_url) else: file_content = utils.read_url_content(str_url) if is_template(file_content): if is_object: template = get_template_contents( template_object=str_url, files=files, object_request=object_request)[1] else: template = get_template_contents( template_url=str_url, files=files)[1] file_content = json.dumps(template) files[str_url] = file_content # replace the data value with the normalised absolute URL from_data[key] = str_url def deep_update(old, new): '''Merge nested dictionaries.''' # Prevents an error if in a previous iteration # old[k] = None but v[k] = {...}, if old is None: old = {} for k, v in new.items(): if isinstance(v, collections.Mapping): r = deep_update(old.get(k, {}), v) old[k] = r else: old[k] = new[k] return old def process_multiple_environments_and_files(env_paths=None, template=None, template_url=None, env_path_is_object=None, object_request=None, env_list_tracker=None): """Reads one or more environment files. Reads in each specified environment file and returns a dictionary of the filenames->contents (suitable for the files dict) and the consolidated environment (after having applied the correct overrides based on order). 
If a list is provided in the env_list_tracker parameter, the behavior is altered to take advantage of server-side environment resolution. Specifically, this means: * Populating env_list_tracker with an ordered list of environment file URLs to be passed to the server * Including the contents of each environment file in the returned files dict, keyed by one of the URLs in env_list_tracker :param env_paths: list of paths to the environment files to load; if None, empty results will be returned :type env_paths: list or None :param template: unused; only included for API compatibility :param template_url: unused; only included for API compatibility :param env_list_tracker: if specified, environment filenames will be stored within :type env_list_tracker: list or None :return: tuple of files dict and a dict of the consolidated environment :rtype: tuple """ merged_files = {} merged_env = {} # If we're keeping a list of environment files separately, include the # contents of the files in the files dict include_env_in_files = env_list_tracker is not None if env_paths: for env_path in env_paths: files, env = process_environment_and_files( env_path=env_path, template=template, template_url=template_url, env_path_is_object=env_path_is_object, object_request=object_request, include_env_in_files=include_env_in_files) # 'files' looks like {"filename1": contents, "filename2": contents} # so a simple update is enough for merging merged_files.update(files) # 'env' can be a deeply nested dictionary, so a simple update is # not enough merged_env = deep_update(merged_env, env) if env_list_tracker is not None: env_url = utils.normalise_file_path_to_url(env_path) env_list_tracker.append(env_url) return merged_files, merged_env def process_environment_and_files(env_path=None, template=None, template_url=None, env_path_is_object=None, object_request=None, include_env_in_files=False): """Loads a single environment file. Returns an entry suitable for the files dict which maps the environment filename to its contents. :param env_path: full path to the file to load :type env_path: str or None :param include_env_in_files: if specified, the raw environment file itself will be included in the returned files dict :type include_env_in_files: bool :return: tuple of files dict and the loaded environment as a dict :rtype: (dict, dict) """ files = {} env = {} is_object = env_path_is_object and env_path_is_object(env_path) if is_object: raw_env = object_request and object_request('GET', env_path) env = environment_format.parse(raw_env) env_base_url = utils.base_url_for_url(env_path) resolve_environment_urls( env.get('resource_registry'), files, env_base_url, is_object=True, object_request=object_request) elif env_path: env_url = utils.normalise_file_path_to_url(env_path) env_base_url = utils.base_url_for_url(env_url) raw_env = request.urlopen(env_url).read() env = environment_format.parse(raw_env) resolve_environment_urls( env.get('resource_registry'), files, env_base_url) if include_env_in_files: files[env_url] = json.dumps(env) return files, env def resolve_environment_urls(resource_registry, files, env_base_url, is_object=False, object_request=None): """Handles any resource URLs specified in an environment. 
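Relative template paths found in the registry are resolved against
``env_base_url`` (or a ``base_url`` entry in the registry itself) and
their contents are loaded into ``files``.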
:param resource_registry: mapping of type name to template filename :type resource_registry: dict :param files: dict to store loaded file contents into :type files: dict :param env_base_url: base URL to look in when loading files :type env_base_url: str or None """ if resource_registry is None: return rr = resource_registry base_url = rr.get('base_url', env_base_url) def ignore_if(key, value): if key == 'base_url': return True if isinstance(value, dict): return True if '::' in value: # Built in providers like: "X::Compute::Server" # don't need downloading. return True if key in ['hooks', 'restricted_actions']: return True get_file_contents(rr, files, base_url, ignore_if, is_object=is_object, object_request=object_request) for res_name, res_dict in rr.get('resources', {}).items(): res_base_url = res_dict.get('base_url', base_url) get_file_contents( res_dict, files, res_base_url, ignore_if, is_object=is_object, object_request=object_request) shade-1.31.0/shade/openstackcloud.py0000666000175000017500000165602513440327640017372 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import base64 import collections import copy import datetime import functools import hashlib import ipaddress import iso8601 import json import jsonpatch import operator import os_client_config.defaults import six import threading import time import warnings import dogpile.cache import munch import requestsexceptions from six.moves import urllib import keystoneauth1.exceptions import keystoneauth1.session import os import os_client_config import shade from shade import _adapter from shade import exc from shade._heat import event_utils from shade._heat import template_utils from shade import _log from shade import _legacy_clients from shade import _normalize from shade import meta from shade import task_manager from shade import _utils OBJECT_MD5_KEY = 'x-object-meta-x-shade-md5' OBJECT_SHA256_KEY = 'x-object-meta-x-shade-sha256' OBJECT_AUTOCREATE_KEY = 'x-object-meta-x-shade-autocreated' OBJECT_AUTOCREATE_CONTAINER = 'images' IMAGE_MD5_KEY = 'owner_specified.shade.md5' IMAGE_SHA256_KEY = 'owner_specified.shade.sha256' IMAGE_OBJECT_KEY = 'owner_specified.shade.object' # Rackspace returns this for intermittent import errors IMAGE_ERROR_396 = "Image cannot be imported.
Error code: '396'" DEFAULT_OBJECT_SEGMENT_SIZE = 1073741824 # 1GB # This halves the current default for Swift DEFAULT_MAX_FILE_SIZE = (5 * 1024 * 1024 * 1024 + 2) / 2 DEFAULT_SERVER_AGE = 5 DEFAULT_PORT_AGE = 5 DEFAULT_FLOAT_AGE = 5 _OCC_DOC_URL = "https://docs.openstack.org/os-client-config/latest/" OBJECT_CONTAINER_ACLS = { 'public': '.r:*,.rlistings', 'private': '', } def _no_pending_volumes(volumes): """If there are any volumes not in a steady state, don't cache""" for volume in volumes: if volume['status'] not in ('available', 'error', 'in-use'): return False return True def _no_pending_images(images): """If there are any images not in a steady state, don't cache""" for image in images: if image.status not in ('active', 'deleted', 'killed'): return False return True def _no_pending_stacks(stacks): """If there are any stacks not in a steady state, don't cache""" for stack in stacks: status = stack['stack_status'] if '_COMPLETE' not in status and '_FAILED' not in status: return False return True class OpenStackCloud( _normalize.Normalizer, _legacy_clients.LegacyClientFactoryMixin): """Represent a connection to an OpenStack Cloud. OpenStackCloud is the entry point for all cloud operations, regardless of which OpenStack service those operations may ultimately come from. The operations on an OpenStackCloud are resource oriented rather than REST API operation oriented. For instance, one will request a Floating IP and that Floating IP will be actualized either via neutron or via nova depending on how this particular cloud has decided to arrange itself. :param TaskManager manager: Optional task manager to use for running OpenStack API tasks. Unless you're doing rate limiting client side, you almost certainly don't need this. (optional) :param bool log_inner_exceptions: Ignored. Exists for backwards compat. :param bool strict: Only return documented attributes for each resource as per the shade Data Model contract. (Default False) :param app_name: Name of the application to be appended to the user-agent string. Optional, defaults to None. :param app_version: Version of the application to be appended to the user-agent string. Optional, defaults to None. :param CloudConfig cloud_config: Cloud config object from os-client-config In the future, this will be the only way to pass in cloud configuration, but is being phased in currently. """ def __init__( self, cloud_config=None, manager=None, log_inner_exceptions=False, strict=False, app_name=None, app_version=None, use_direct_get=False, **kwargs): self.log = _log.setup_logging('shade') if not cloud_config: config = os_client_config.OpenStackConfig( app_name=app_name, app_version=app_version) cloud_config = config.get_one_cloud(**kwargs) self.name = cloud_config.name self.auth = cloud_config.get_auth_args() self.region_name = cloud_config.region_name self.default_interface = cloud_config.get_interface() self.private = cloud_config.config.get('private', False) self.api_timeout = cloud_config.config['api_timeout'] self.image_api_use_tasks = cloud_config.config['image_api_use_tasks'] self.secgroup_source = cloud_config.config['secgroup_source'] self.force_ipv4 = cloud_config.force_ipv4 self.strict_mode = strict # TODO(mordred) When os-client-config adds a "get_client_settings()" # method to CloudConfig - remove this. 
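# The 'shade' extra-config section comes from the user's cloud
# configuration (e.g. clouds.yaml); the dict passed below only
# supplies defaults for keys the user has not set.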
self._extra_config = cloud_config._openstack_config.get_extra_config( 'shade', { 'get_flavor_extra_specs': True, }) if manager is not None: self.manager = manager else: self.manager = task_manager.TaskManager( name=':'.join([self.name, self.region_name]), client=self) self._external_ipv4_names = cloud_config.get_external_ipv4_networks() self._internal_ipv4_names = cloud_config.get_internal_ipv4_networks() self._external_ipv6_names = cloud_config.get_external_ipv6_networks() self._internal_ipv6_names = cloud_config.get_internal_ipv6_networks() self._nat_destination = cloud_config.get_nat_destination() self._default_network = cloud_config.get_default_network() self._floating_ip_source = cloud_config.config.get( 'floating_ip_source') if self._floating_ip_source: if self._floating_ip_source.lower() == 'none': self._floating_ip_source = None else: self._floating_ip_source = self._floating_ip_source.lower() self._use_external_network = cloud_config.config.get( 'use_external_network', True) self._use_internal_network = cloud_config.config.get( 'use_internal_network', True) # Work around older TaskManager objects that don't have submit_task if not hasattr(self.manager, 'submit_task'): self.manager.submit_task = self.manager.submitTask (self.verify, self.cert) = cloud_config.get_requests_verify_args() # Turn off urllib3 warnings about insecure certs if we have # explicitly configured requests to tell it we do not want # cert verification if not self.verify: self.log.debug( "Turning off Insecure SSL warnings since verify=False") category = requestsexceptions.InsecureRequestWarning if category: # InsecureRequestWarning references a Warning class or is None warnings.filterwarnings('ignore', category=category) self._disable_warnings = {} self.use_direct_get = use_direct_get self._servers = None self._servers_time = 0 self._servers_lock = threading.Lock() self._ports = None self._ports_time = 0 self._ports_lock = threading.Lock() self._floating_ips = None self._floating_ips_time = 0 self._floating_ips_lock = threading.Lock() self._floating_network_by_router = None self._floating_network_by_router_run = False self._floating_network_by_router_lock = threading.Lock() self._networks_lock = threading.Lock() self._reset_network_caches() cache_expiration_time = int(cloud_config.get_cache_expiration_time()) cache_class = cloud_config.get_cache_class() cache_arguments = cloud_config.get_cache_arguments() self._resource_caches = {} if cache_class != 'dogpile.cache.null': self.cache_enabled = True self._cache = self._make_cache( cache_class, cache_expiration_time, cache_arguments) expirations = cloud_config.get_cache_expiration() for expire_key in expirations.keys(): # Only build caches for things we have list operations for if getattr( self, 'list_{0}'.format(expire_key), None): self._resource_caches[expire_key] = self._make_cache( cache_class, expirations[expire_key], cache_arguments) self._SERVER_AGE = DEFAULT_SERVER_AGE self._PORT_AGE = DEFAULT_PORT_AGE self._FLOAT_AGE = DEFAULT_FLOAT_AGE else: self.cache_enabled = False def _fake_invalidate(unused): pass class _FakeCache(object): def invalidate(self): pass # Don't cache list_servers if we're not caching things. # Replace this with a more specific cache configuration # soon. self._SERVER_AGE = 0 self._PORT_AGE = 0 self._FLOAT_AGE = 0 self._cache = _FakeCache() # Undecorate cache decorated methods. 
Otherwise the call stacks # wind up being stupidly long and hard to debug for method in _utils._decorated_methods: meth_obj = getattr(self, method, None) if not meth_obj: continue if (hasattr(meth_obj, 'invalidate') and hasattr(meth_obj, 'func')): new_func = functools.partial(meth_obj.func, self) new_func.invalidate = _fake_invalidate setattr(self, method, new_func) # If server expiration time is set explicitly, use that. Otherwise # fall back to whatever it was before self._SERVER_AGE = cloud_config.get_cache_resource_expiration( 'server', self._SERVER_AGE) self._PORT_AGE = cloud_config.get_cache_resource_expiration( 'port', self._PORT_AGE) self._FLOAT_AGE = cloud_config.get_cache_resource_expiration( 'floating_ip', self._FLOAT_AGE) self._container_cache = dict() self._file_hash_cache = dict() self._keystone_session = None self._legacy_clients = {} self._raw_clients = {} self._local_ipv6 = ( _utils.localhost_supports_ipv6() if not self.force_ipv4 else False) self.cloud_config = cloud_config def connect_as(self, **kwargs): """Make a new OpenStackCloud object with new auth context. Take the existing settings from the current cloud and construct a new OpenStackCloud object with some of the auth settings overridden. This is useful for getting an object to perform tasks with as another user, or in the context of a different project. .. code-block:: python cloud = shade.openstack_cloud(cloud='example') # Work normally servers = cloud.list_servers() cloud2 = cloud.connect_as(username='different-user', password='') # Work as different-user servers = cloud2.list_servers() :param kwargs: keyword arguments can contain anything that would normally go in an auth dict. They will override the same settings from the parent cloud as appropriate. Entries that should not be overridden can be omitted. """ if self.cloud_config._openstack_config: config = self.cloud_config._openstack_config else: config = os_client_config.OpenStackConfig( app_name=self.cloud_config._app_name, app_version=self.cloud_config._app_version, load_yaml_config=False) params = copy.deepcopy(self.cloud_config.config) # Remove profile from current cloud so that overriding works params.pop('profile', None) # Utility function to help with the stripping below. def pop_keys(params, auth, name_key, id_key): if name_key in auth or id_key in auth: params['auth'].pop(name_key, None) params['auth'].pop(id_key, None) # If there are user, project or domain settings in the incoming auth # dict, strip out both id and name so that a user can say: # cloud.connect_as(project_name='foo') # and have that work with clouds that have a project_id set in their # config. for prefix in ('user', 'project'): if prefix == 'user': name_key = 'username' else: name_key = 'project_name' id_key = '{prefix}_id'.format(prefix=prefix) pop_keys(params, kwargs, name_key, id_key) id_key = '{prefix}_domain_id'.format(prefix=prefix) name_key = '{prefix}_domain_name'.format(prefix=prefix) pop_keys(params, kwargs, name_key, id_key) for key, value in kwargs.items(): params['auth'][key] = value # Closure to pass to OpenStackConfig to ensure the new cloud shares # the Session with the current cloud. This will ensure that version # discovery cache will be re-used. def session_constructor(*args, **kwargs): # We need to pass our current keystone session to the Session # Constructor, otherwise the new auth plugin doesn't get used.
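# Any positional or keyword arguments the config machinery passes in
# are deliberately ignored; only the shared session matters here.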
return keystoneauth1.session.Session(session=self.keystone_session) cloud_config = config.get_one_cloud( session_constructor=session_constructor, **params) # Override the cloud name so that logging/location work right if hasattr(cloud_config, '_name'): cloud_config._name = self.name else: cloud_config.name = self.name cloud_config.config['profile'] = self.name # Use self.__class__ so that OperatorCloud will return an OperatorCloud # instance. This should also help passthrough from sdk work better when # we have it. return self.__class__(cloud_config=cloud_config) def connect_as_project(self, project): """Make a new OpenStackCloud object with a new project. Take the existing settings from the current cloud and construct a new OpenStackCloud object with the project settings overridden. This is useful for getting an object to perform tasks with as another user, or in the context of a different project. .. code-block:: python cloud = shade.openstack_cloud(cloud='example') # Work normally servers = cloud.list_servers() cloud2 = cloud.connect_as_project('different-project') # Work in different-project servers = cloud2.list_servers() :param project: Either a project name or a project dict as returned by `list_projects`. """ auth = {} if isinstance(project, dict): auth['project_id'] = project.get('id') auth['project_name'] = project.get('name') if project.get('domain_id'): auth['project_domain_id'] = project['domain_id'] else: auth['project_name'] = project return self.connect_as(**auth) def _make_cache(self, cache_class, expiration_time, arguments): return dogpile.cache.make_region( function_key_generator=self._make_cache_key ).configure( cache_class, expiration_time=expiration_time, arguments=arguments) def _make_cache_key(self, namespace, fn): fname = fn.__name__ if namespace is None: name_key = self.name else: name_key = '%s:%s' % (self.name, namespace) def generate_key(*args, **kwargs): arg_key = ','.join(args) kw_keys = sorted(kwargs.keys()) kwargs_key = ','.join( ['%s:%s' % (k, kwargs[k]) for k in kw_keys if k != 'cache']) ans = "_".join( [str(name_key), fname, arg_key, kwargs_key]) return ans return generate_key def _get_cache(self, resource_name): if resource_name and resource_name in self._resource_caches: return self._resource_caches[resource_name] else: return self._cache def _get_client( self, service_key, client_class=None, interface_key=None, pass_version_arg=True, **kwargs): try: client = self.cloud_config.get_legacy_client( service_key=service_key, client_class=client_class, interface_key=interface_key, pass_version_arg=pass_version_arg, **kwargs) except Exception: self.log.debug( "Couldn't construct %(service)s object", {'service': service_key}, exc_info=True) raise if client is None: raise exc.OpenStackCloudException( "Failed to instantiate {service} client." " This could mean that your credentials are wrong.".format( service=service_key)) return client def _get_major_version_id(self, version): if isinstance(version, int): return version elif isinstance(version, six.string_types + (tuple,)): return int(version[0]) return version def _get_versioned_client( self, service_type, min_version=None, max_version=None): config_version = self.cloud_config.get_api_version(service_type) config_major = self._get_major_version_id(config_version) max_major = self._get_major_version_id(max_version) min_major = self._get_major_version_id(min_version) # NOTE(mordred) The shade logic for versions is slightly different # than the ksa Adapter constructor logic. 
shade knows the versions # it knows, and uses them when it detects them. However, if a user # requests a version, and it's not found, and a different one shade # does know about is found, that's a warning in shade. if config_version: if min_major and config_major < min_major: raise exc.OpenStackCloudException( "Version {config_version} requested for {service_type}" " but shade understands a minimum of {min_version}".format( config_version=config_version, service_type=service_type, min_version=min_version)) elif max_major and config_major > max_major: raise exc.OpenStackCloudException( "Version {config_version} requested for {service_type}" " but shade understands a maximum of {max_version}".format( config_version=config_version, service_type=service_type, max_version=max_version)) request_min_version = config_version request_max_version = '{version}.latest'.format( version=config_major) adapter = _adapter.ShadeAdapter( session=self.keystone_session, manager=self.manager, service_type=self.cloud_config.get_service_type(service_type), service_name=self.cloud_config.get_service_name(service_type), interface=self.cloud_config.get_interface(service_type), endpoint_override=self.cloud_config.get_endpoint(service_type), region_name=self.cloud_config.region, min_version=request_min_version, max_version=request_max_version, shade_logger=self.log) if adapter.get_endpoint(): return adapter adapter = _adapter.ShadeAdapter( session=self.keystone_session, manager=self.manager, service_type=self.cloud_config.get_service_type(service_type), service_name=self.cloud_config.get_service_name(service_type), interface=self.cloud_config.get_interface(service_type), endpoint_override=self.cloud_config.get_endpoint(service_type), region_name=self.cloud_config.region, min_version=min_version, max_version=max_version, shade_logger=self.log) # data.api_version can be None if no version was detected, such # as with neutron api_version = adapter.get_api_major_version( endpoint_override=self.cloud_config.get_endpoint(service_type)) api_major = self._get_major_version_id(api_version) # If we detect a different version than was configured, warn the user. # shade still knows what to do - but if the user gave us an explicit # version and we couldn't find it, they may want to investigate. if api_version and (api_major != config_major): warning_msg = ( '{service_type} is configured for {config_version}' ' but only {api_version} is available. shade is happy' ' with this version, but if you were trying to force an' ' override, that did not happen.
You may want to check' ' your cloud, or remove the version specification from' ' your config.'.format( service_type=service_type, config_version=config_version, api_version='.'.join([str(f) for f in api_version]))) self.log.debug(warning_msg) warnings.warn(warning_msg) return adapter def _get_raw_client( self, service_type, api_version=None, endpoint_override=None): return _adapter.ShadeAdapter( session=self.keystone_session, manager=self.manager, service_type=self.cloud_config.get_service_type(service_type), service_name=self.cloud_config.get_service_name(service_type), interface=self.cloud_config.get_interface(service_type), endpoint_override=self.cloud_config.get_endpoint( service_type) or endpoint_override, region_name=self.cloud_config.region, shade_logger=self.log) def _is_client_version(self, client, version): client_name = '_{client}_client'.format(client=client) client = getattr(self, client_name) return client._version_matches(version) @property def _application_catalog_client(self): if 'application-catalog' not in self._raw_clients: self._raw_clients['application-catalog'] = self._get_raw_client( 'application-catalog') return self._raw_clients['application-catalog'] @property def _baremetal_client(self): if 'baremetal' not in self._raw_clients: client = self._get_raw_client('baremetal') # Do this to force version discovery. We need to do that, because # the endpoint-override trick we do for neutron because # ironicclient just appends a /v1 won't work and will break # keystoneauth - because ironic's versioned discovery endpoint # is non-compliant and doesn't return an actual version dict. client = self._get_versioned_client( 'baremetal', min_version=1, max_version='1.latest') self._raw_clients['baremetal'] = client return self._raw_clients['baremetal'] @property def _container_infra_client(self): if 'container-infra' not in self._raw_clients: self._raw_clients['container-infra'] = self._get_raw_client( 'container-infra') return self._raw_clients['container-infra'] @property def _compute_client(self): # TODO(mordred) Deal with microversions if 'compute' not in self._raw_clients: self._raw_clients['compute'] = self._get_raw_client('compute') return self._raw_clients['compute'] @property def _database_client(self): if 'database' not in self._raw_clients: self._raw_clients['database'] = self._get_raw_client('database') return self._raw_clients['database'] @property def _dns_client(self): if 'dns' not in self._raw_clients: dns_client = self._get_versioned_client( 'dns', min_version=2, max_version='2.latest') self._raw_clients['dns'] = dns_client return self._raw_clients['dns'] @property def _identity_client(self): if 'identity' not in self._raw_clients: self._raw_clients['identity'] = self._get_versioned_client( 'identity', min_version=2, max_version='3.latest') return self._raw_clients['identity'] @property def _raw_image_client(self): if 'raw-image' not in self._raw_clients: image_client = self._get_raw_client('image') self._raw_clients['raw-image'] = image_client return self._raw_clients['raw-image'] @property def _image_client(self): if 'image' not in self._raw_clients: self._raw_clients['image'] = self._get_versioned_client( 'image', min_version=1, max_version='2.latest') return self._raw_clients['image'] @property def _network_client(self): if 'network' not in self._raw_clients: client = self._get_raw_client('network') # TODO(mordred) I don't care if this is what neutronclient does, # fix this. # Don't bother with version discovery - there is only one version # of neutron. 
This is what neutronclient does, fwiw. endpoint = client.get_endpoint() if not endpoint.rstrip('/').endswith('v2.0'): if not endpoint.endswith('/'): endpoint += '/' endpoint = urllib.parse.urljoin( endpoint, 'v2.0') client.endpoint_override = endpoint self._raw_clients['network'] = client return self._raw_clients['network'] @property def _object_store_client(self): if 'object-store' not in self._raw_clients: raw_client = self._get_raw_client('object-store') self._raw_clients['object-store'] = raw_client return self._raw_clients['object-store'] @property def _orchestration_client(self): if 'orchestration' not in self._raw_clients: raw_client = self._get_raw_client('orchestration') self._raw_clients['orchestration'] = raw_client return self._raw_clients['orchestration'] @property def _volume_client(self): if 'volume' not in self._raw_clients: self._raw_clients['volume'] = self._get_raw_client('volume') return self._raw_clients['volume'] def pprint(self, resource): """Wrapper around pprint that groks munch objects""" # import late since this is a utility function import pprint new_resource = _utils._dictify_resource(resource) pprint.pprint(new_resource) def pformat(self, resource): """Wrapper around pformat that groks munch objects""" # import late since this is a utility function import pprint new_resource = _utils._dictify_resource(resource) return pprint.pformat(new_resource) @property def keystone_session(self): if self._keystone_session is None: try: self._keystone_session = self.cloud_config.get_session() if hasattr(self._keystone_session, 'additional_user_agent'): self._keystone_session.additional_user_agent.append( ('shade', shade.__version__)) except Exception as e: raise exc.OpenStackCloudException( "Error authenticating to keystone: %s " % str(e)) return self._keystone_session @property def _keystone_catalog(self): return self.keystone_session.auth.get_access( self.keystone_session).service_catalog @property def service_catalog(self): return self._keystone_catalog.catalog def endpoint_for(self, service_type, interface='public'): return self._keystone_catalog.url_for( service_type=service_type, interface=interface) @property def auth_token(self): # Keystone's session will reuse a token if it is still valid. # We don't need to track validity here, just get_token() each time. return self.keystone_session.get_token() @property def current_user_id(self): """Get the id of the currently logged-in user from the token.""" return self.keystone_session.auth.get_access( self.keystone_session).user_id @property def current_project_id(self): """Get the current project ID. Returns the project_id of the current token scope. None means that the token is domain scoped or unscoped. :raises keystoneauth1.exceptions.auth.AuthorizationFailure: if a new token fetch fails. :raises keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: if a plugin is not available. """ return self.keystone_session.get_project_id() @property def current_project(self): """Return a ``munch.Munch`` describing the current project""" return self._get_project_info() def _get_project_info(self, project_id=None): project_info = munch.Munch( id=project_id, name=None, domain_id=None, domain_name=None, ) if not project_id or project_id == self.current_project_id: # If we don't have a project_id parameter, it means a user is # directly asking what the current state is. # Alternately, if we have one, that means we're calling this # from within a normalize function, which means the object has # a project_id associated with it.
If the project_id matches # the project_id of our current token, that means we can supplement # the info with human readable info about names if we have them. # If they don't match, that means we're an admin who has pulled # an object from a different project, so adding info from the # current token would be wrong. auth_args = self.cloud_config.config.get('auth', {}) project_info['id'] = self.current_project_id project_info['name'] = auth_args.get('project_name') project_info['domain_id'] = auth_args.get('project_domain_id') project_info['domain_name'] = auth_args.get('project_domain_name') return project_info @property def current_location(self): """Return a ``munch.Munch`` explaining the current cloud location.""" return self._get_current_location() def _get_current_location(self, project_id=None, zone=None): return munch.Munch( cloud=self.name, region_name=self.region_name, zone=zone, project=self._get_project_info(project_id), ) def _get_identity_location(self): '''Identity resources do not exist inside of projects.''' return munch.Munch( cloud=self.name, region_name=None, zone=None, project=munch.Munch( id=None, name=None, domain_id=None, domain_name=None)) def _get_project_id_param_dict(self, name_or_id): if name_or_id: project = self.get_project(name_or_id) if not project: return {} if self._is_client_version('identity', 3): return {'default_project_id': project['id']} else: return {'tenant_id': project['id']} else: return {} def _get_domain_id_param_dict(self, domain_id): """Get a usable domain.""" # Keystone v3 requires domains for user and project creation. v2 does # not. However, keystone v2 does not allow user creation by non-admin # users, so we can throw an error to the user that does not need to # mention api versions if self._is_client_version('identity', 3): if not domain_id: raise exc.OpenStackCloudException( "User or project creation requires an explicit" " domain_id argument.") else: return {'domain_id': domain_id} else: return {} def _get_identity_params(self, domain_id=None, project=None): """Get the domain and project/tenant parameters if needed. keystone v2 and v3 are divergent enough that we need to pass or not pass project or tenant_id or domain or nothing in a sane manner. """ ret = {} ret.update(self._get_domain_id_param_dict(domain_id)) ret.update(self._get_project_id_param_dict(project)) return ret def range_search(self, data, filters): """Perform integer range searches across a list of dictionaries. Given a list of dictionaries, search across the list using the given dictionary keys and a range of integer values for each key. Only dictionaries that match ALL search filters across the entire original data set will be returned. It is not a requirement that each dictionary contain the key used for searching. Those without the key will be considered non-matching. The range values must be string values and are either a set of digits representing an integer for matching, or a range operator followed by a set of digits representing an integer for matching. If a range operator is not given, exact value matching will be used. Valid operators are one of: <,>,<=,>= :param list data: List of dictionaries to be searched. :param filters: Dict describing the one or more range searches to perform. If more than one search is given, the result will be the members of the original data set that match ALL searches. An example of filtering by multiple ranges:: {"vcpus": "<=5", "ram": "<=2048", "disk": "1"} :returns: A list subset of the original data set.
:raises: OpenStackCloudException on invalid range expressions. """ filtered = [] for key, range_value in filters.items(): # We always want to operate on the full data set so that # calculations for minimum and maximum are correct. results = _utils.range_filter(data, key, range_value) if not filtered: # First set of results filtered = results else: # The combination of all searches should be the intersection of # all result sets from each search. So adjust the current set # of filtered data by computing its intersection with the # latest result set. filtered = [r for r in results for f in filtered if r == f] return filtered def _get_and_munchify(self, key, data): """Wrapper around meta.get_and_munchify. Some of the methods expect a `meta` attribute to be passed in as part of the method signature. In those methods the meta param is overriding the meta module making the call to meta.get_and_munchify to fail. """ return meta.get_and_munchify(key, data) @_utils.cache_on_arguments() def list_projects(self, domain_id=None, name_or_id=None, filters=None): """List projects. With no parameters, returns a full listing of all visible projects. :param domain_id: domain ID to scope the searched projects. :param name_or_id: project name or ID. :param filters: a dict containing additional filters to use OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: a list of ``munch.Munch`` containing the projects :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ kwargs = dict( filters=filters, domain_id=domain_id) if self._is_client_version('identity', 3): kwargs['obj_name'] = 'project' pushdown, filters = _normalize._split_filters(**kwargs) try: if self._is_client_version('identity', 3): key = 'projects' else: key = 'tenants' data = self._identity_client.get( '/{endpoint}'.format(endpoint=key), params=pushdown) projects = self._normalize_projects( self._get_and_munchify(key, data)) except Exception as e: self.log.debug("Failed to list projects", exc_info=True) raise exc.OpenStackCloudException(str(e)) return _utils._filter_list(projects, name_or_id, filters) def search_projects(self, name_or_id=None, filters=None, domain_id=None): '''Backwards compatibility method for search_projects search_projects originally took name_or_id and filters first, while list_projects takes domain_id first. This method exists in this form to allow code written with positional parameters to still work. But really, use keyword arguments. ''' return self.list_projects( domain_id=domain_id, name_or_id=name_or_id, filters=filters) def get_project(self, name_or_id, filters=None, domain_id=None): """Get exactly one project. :param name_or_id: project name or ID. :param filters: a dict containing additional filters to use. :param domain_id: domain ID (identity v3 only). :returns: a single ``munch.Munch`` containing the project description. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ return _utils._get_entity(self, 'project', name_or_id, filters, domain_id=domain_id) @_utils.valid_kwargs('description') def update_project(self, name_or_id, enabled=None, domain_id=None, **kwargs): with _utils.shade_exceptions( "Error in updating project {project}".format( project=name_or_id)): proj = self.get_project(name_or_id, domain_id=domain_id) if not proj: raise exc.OpenStackCloudException( "Project %s not found."
% name_or_id) if enabled is not None: kwargs.update({'enabled': enabled}) # NOTE(samueldmq): Current code only allows updates of description # or enabled fields. if self._is_client_version('identity', 3): data = self._identity_client.patch( '/projects/' + proj['id'], json={'project': kwargs}) project = self._get_and_munchify('project', data) else: data = self._identity_client.post( '/tenants/' + proj['id'], json={'tenant': kwargs}) project = self._get_and_munchify('tenant', data) project = self._normalize_project(project) self.list_projects.invalidate(self) return project def create_project( self, name, description=None, domain_id=None, enabled=True): """Create a project.""" with _utils.shade_exceptions( "Error in creating project {project}".format(project=name)): project_ref = self._get_domain_id_param_dict(domain_id) project_ref.update({'name': name, 'description': description, 'enabled': enabled}) endpoint, key = ('tenants', 'tenant') if self._is_client_version('identity', 3): endpoint, key = ('projects', 'project') data = self._identity_client.post( '/{endpoint}'.format(endpoint=endpoint), json={key: project_ref}) project = self._normalize_project( self._get_and_munchify(key, data)) self.list_projects.invalidate(self) return project def delete_project(self, name_or_id, domain_id=None): """Delete a project. :param string name_or_id: Project name or ID. :param string domain_id: Domain ID containing the project (identity v3 only). :returns: True if delete succeeded, False if the project was not found. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ with _utils.shade_exceptions( "Error in deleting project {project}".format( project=name_or_id)): project = self.get_project(name_or_id, domain_id=domain_id) if project is None: self.log.debug( "Project %s not found for deleting", name_or_id) return False if self._is_client_version('identity', 3): self._identity_client.delete('/projects/' + project['id']) else: self._identity_client.delete('/tenants/' + project['id']) return True @_utils.valid_kwargs('domain_id') @_utils.cache_on_arguments() def list_users(self, **kwargs): """List users. :param domain_id: Domain ID. (v3) :returns: a list of ``munch.Munch`` containing the user description. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ data = self._identity_client.get('/users', params=kwargs) return _utils.normalize_users( self._get_and_munchify('users', data)) @_utils.valid_kwargs('domain_id') def search_users(self, name_or_id=None, filters=None, **kwargs): """Search users. :param string name_or_id: user name or ID. :param domain_id: Domain ID. (v3) :param filters: a dict containing additional filters to use. OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: a list of ``munch.Munch`` containing the users :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ users = self.list_users(**kwargs) return _utils._filter_list(users, name_or_id, filters) @_utils.valid_kwargs('domain_id') def get_user(self, name_or_id, filters=None, **kwargs): """Get exactly one user. :param string name_or_id: user name or ID. :param domain_id: Domain ID. (v3) :param filters: a dict containing additional filters to use. OR A string containing a jmespath expression for further filtering.
Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: a single ``munch.Munch`` containing the user description. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ return _utils._get_entity(self, 'user', name_or_id, filters, **kwargs) def get_user_by_id(self, user_id, normalize=True): """Get a user by ID. :param string user_id: user ID :param bool normalize: Flag to control dict normalization :returns: a single ``munch.Munch`` containing the user description """ data = self._identity_client.get( '/users/{user}'.format(user=user_id), error_message="Error getting user with ID {user_id}".format( user_id=user_id)) user = self._get_and_munchify('user', data) if user and normalize: user = _utils.normalize_users(user) return user # NOTE(Shrews): Keystone v2 supports updating only name, email and enabled. @_utils.valid_kwargs('name', 'email', 'enabled', 'domain_id', 'password', 'description', 'default_project') def update_user(self, name_or_id, **kwargs): self.list_users.invalidate(self) user_kwargs = {} if 'domain_id' in kwargs and kwargs['domain_id']: user_kwargs['domain_id'] = kwargs['domain_id'] user = self.get_user(name_or_id, **user_kwargs) # TODO(mordred) When this changes to REST, force interface=admin # in the adapter call if it's an admin force call (and figure out how # to make that disctinction) if self._is_client_version('identity', 2): # Do not pass v3 args to a v2 keystone. kwargs.pop('domain_id', None) kwargs.pop('description', None) kwargs.pop('default_project', None) password = kwargs.pop('password', None) if password is not None: with _utils.shade_exceptions( "Error updating password for {user}".format( user=name_or_id)): error_msg = "Error updating password for user {}".format( name_or_id) data = self._identity_client.put( '/users/{u}/OS-KSADM/password'.format(u=user['id']), json={'user': {'password': password}}, error_message=error_msg) # Identity v2.0 implements PUT. v3 PATCH. Both work as PATCH. data = self._identity_client.put( '/users/{user}'.format(user=user['id']), json={'user': kwargs}, error_message="Error in updating user {}".format(name_or_id)) else: # NOTE(samueldmq): now this is a REST call and domain_id is dropped # if None. keystoneclient drops keys with None values. if 'domain_id' in kwargs and kwargs['domain_id'] is None: del kwargs['domain_id'] data = self._identity_client.patch( '/users/{user}'.format(user=user['id']), json={'user': kwargs}, error_message="Error in updating user {}".format(name_or_id)) user = self._get_and_munchify('user', data) self.list_users.invalidate(self) return _utils.normalize_users([user])[0] def create_user( self, name, password=None, email=None, default_project=None, enabled=True, domain_id=None, description=None): """Create a user.""" params = self._get_identity_params(domain_id, default_project) params.update({'name': name, 'password': password, 'email': email, 'enabled': enabled}) if self._is_client_version('identity', 3): params['description'] = description elif description is not None: self.log.info( "description parameter is not supported on Keystone v2") error_msg = "Error in creating user {user}".format(user=name) data = self._identity_client.post('/users', json={'user': params}, error_message=error_msg) user = self._get_and_munchify('user', data) self.list_users.invalidate(self) return _utils.normalize_users([user])[0] @_utils.valid_kwargs('domain_id') def delete_user(self, name_or_id, **kwargs): # TODO(mordred) Why are we invalidating at the TOP? 
self.list_users.invalidate(self) user = self.get_user(name_or_id, **kwargs) if not user: self.log.debug( "User {0} not found for deleting".format(name_or_id)) return False # TODO(mordred) Extra GET only needed to support keystoneclient. # Can be removed as a follow-on. user = self.get_user_by_id(user['id'], normalize=False) self._identity_client.delete( '/users/{user}'.format(user=user['id']), error_message="Error in deleting user {user}".format( user=name_or_id)) self.list_users.invalidate(self) return True def _get_user_and_group(self, user_name_or_id, group_name_or_id): user = self.get_user(user_name_or_id) if not user: raise exc.OpenStackCloudException( 'User {user} not found'.format(user=user_name_or_id)) group = self.get_group(group_name_or_id) if not group: raise exc.OpenStackCloudException( 'Group {user} not found'.format(user=group_name_or_id)) return (user, group) def add_user_to_group(self, name_or_id, group_name_or_id): """Add a user to a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) error_msg = "Error adding user {user} to group {group}".format( user=name_or_id, group=group_name_or_id) self._identity_client.put( '/groups/{g}/users/{u}'.format(g=group['id'], u=user['id']), error_message=error_msg) def is_user_in_group(self, name_or_id, group_name_or_id): """Check to see if a user is in a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :returns: True if user is in the group, False otherwise :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) try: self._identity_client.head( '/groups/{g}/users/{u}'.format(g=group['id'], u=user['id'])) return True except exc.OpenStackCloudURINotFound: # NOTE(samueldmq): knowing this URI exists, let's interpret this as # user not found in group rather than URI not found. return False def remove_user_from_group(self, name_or_id, group_name_or_id): """Remove a user from a group. :param string name_or_id: User name or ID :param string group_name_or_id: Group name or ID :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ user, group = self._get_user_and_group(name_or_id, group_name_or_id) error_msg = "Error removing user {user} from group {group}".format( user=name_or_id, group=group_name_or_id) self._identity_client.delete( '/groups/{g}/users/{u}'.format(g=group['id'], u=user['id']), error_message=error_msg) def get_template_contents( self, template_file=None, template_url=None, template_object=None, files=None): try: return template_utils.get_template_contents( template_file=template_file, template_url=template_url, template_object=template_object, files=files) except Exception as e: raise exc.OpenStackCloudException( "Error in processing template files: %s" % str(e)) def create_stack( self, name, tags=None, template_file=None, template_url=None, template_object=None, files=None, rollback=True, wait=False, timeout=3600, environment_files=None, **parameters): """Create a stack. :param string name: Name of the stack. :param tags: List of tag(s) of the stack. (optional) :param string template_file: Path to the template. :param string template_url: URL of template. :param string template_object: URL to retrieve template object. 
:param dict files: dict of additional file content to include. :param boolean rollback: Enable rollback on create failure. :param boolean wait: Whether to wait for the create to finish. :param int timeout: Stack create timeout in seconds. None will use the server default. :param list environment_files: Paths to environment files to apply. Other arguments will be passed as stack parameters which will take precedence over any parameters specified in the environments. Only one of template_file, template_url, template_object should be specified. :returns: a dict containing the stack description :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ if timeout: timeout = timeout // 60 envfiles, env = template_utils.process_multiple_environments_and_files( env_paths=environment_files) tpl_files, template = template_utils.get_template_contents( template_file=template_file, template_url=template_url, template_object=template_object, files=files) params = dict( stack_name=name, tags=tags, disable_rollback=not rollback, parameters=parameters, template=template, files=dict(list(tpl_files.items()) + list(envfiles.items())), environment=env, timeout_mins=timeout, ) self._orchestration_client.post('/stacks', json=params) if wait: event_utils.poll_for_events(self, stack_name=name, action='CREATE') return self.get_stack(name) def update_stack( self, name_or_id, template_file=None, template_url=None, template_object=None, files=None, rollback=True, wait=False, timeout=3600, environment_files=None, **parameters): """Update a stack. :param string name_or_id: Name or ID of the stack to update. :param string template_file: Path to the template. :param string template_url: URL of template. :param string template_object: URL to retrieve template object. :param dict files: dict of additional file content to include. :param boolean rollback: Enable rollback on update failure. :param boolean wait: Whether to wait for the update to finish. :param int timeout: Stack update timeout in seconds. None will use the server default. :param list environment_files: Paths to environment files to apply. Other arguments will be passed as stack parameters which will take precedence over any parameters specified in the environments. Only one of template_file, template_url, template_object should be specified. :returns: a dict containing the stack description :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API calls """ if timeout: timeout = timeout // 60 envfiles, env = template_utils.process_multiple_environments_and_files( env_paths=environment_files) tpl_files, template = template_utils.get_template_contents( template_file=template_file, template_url=template_url, template_object=template_object, files=files) params = dict( disable_rollback=not rollback, parameters=parameters, template=template, files=dict(list(tpl_files.items()) + list(envfiles.items())), environment=env, timeout_mins=timeout, ) if wait: # find the last event to use as the marker events = event_utils.get_events( self, name_or_id, event_args={'sort_dir': 'desc', 'limit': 1}) marker = events[0].id if events else None self._orchestration_client.put( '/stacks/{name_or_id}'.format(name_or_id=name_or_id), json=params) if wait: event_utils.poll_for_events(self, name_or_id, action='UPDATE', marker=marker) return self.get_stack(name_or_id) def delete_stack(self, name_or_id, wait=False): """Delete a stack :param string name_or_id: Stack name or ID.
:param boolean wait: Whether to wait for the delete to finish :returns: True if delete succeeded, False if the stack was not found. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ stack = self.get_stack(name_or_id, resolve_outputs=False) if stack is None: self.log.debug("Stack %s not found for deleting", name_or_id) return False if wait: # find the last event to use as the marker events = event_utils.get_events( self, name_or_id, event_args={'sort_dir': 'desc', 'limit': 1}) marker = events[0].id if events else None self._orchestration_client.delete( '/stacks/{id}'.format(id=stack['id'])) if wait: try: event_utils.poll_for_events(self, stack_name=name_or_id, action='DELETE', marker=marker) except exc.OpenStackCloudHTTPError: pass stack = self.get_stack(name_or_id, resolve_outputs=False) if stack and stack['stack_status'] == 'DELETE_FAILED': raise exc.OpenStackCloudException( "Failed to delete stack {id}: {reason}".format( id=name_or_id, reason=stack['stack_status_reason'])) return True def get_name(self): return self.name def get_region(self): return self.region_name def get_flavor_name(self, flavor_id): flavor = self.get_flavor(flavor_id, get_extra=False) if flavor: return flavor['name'] return None def get_flavor_by_ram(self, ram, include=None, get_extra=True): """Get a flavor based on amount of RAM available. Finds the flavor with the least amount of RAM that is at least as much as the specified amount. If `include` is given, further filter based on matching flavor name. :param int ram: Minimum amount of RAM. :param string include: If given, will return a flavor whose name contains this string as a substring. """ flavors = self.list_flavors(get_extra=get_extra) for flavor in sorted(flavors, key=operator.itemgetter('ram')): if (flavor['ram'] >= ram and (not include or include in flavor['name'])): return flavor raise exc.OpenStackCloudException( "Could not find a flavor with {ram} and '{include}'".format( ram=ram, include=include)) def get_session_endpoint(self, service_key): try: return self.cloud_config.get_session_endpoint(service_key) except keystoneauth1.exceptions.catalog.EndpointNotFound as e: self.log.debug( "Endpoint not found in %s cloud: %s", self.name, str(e)) endpoint = None except exc.OpenStackCloudException: raise except Exception as e: raise exc.OpenStackCloudException( "Error getting {service} endpoint on {cloud}:{region}:" " {error}".format( service=service_key, cloud=self.name, region=self.region_name, error=str(e))) return endpoint def has_service(self, service_key): if not self.cloud_config.config.get('has_%s' % service_key, True): # TODO(mordred) add a stamp here so that we only report this once if not (service_key in self._disable_warnings and self._disable_warnings[service_key]): self.log.debug( "Disabling %(service_key)s entry in catalog" " per config", {'service_key': service_key}) self._disable_warnings[service_key] = True return False try: endpoint = self.get_session_endpoint(service_key) except exc.OpenStackCloudException: return False if endpoint: return True else: return False @_utils.cache_on_arguments() def _nova_extensions(self): extensions = set() data = self._compute_client.get( '/extensions', error_message="Error fetching extension list for nova") for extension in self._get_and_munchify('extensions', data): extensions.add(extension['alias']) return extensions def _has_nova_extension(self, extension_name): return extension_name in self._nova_extensions() def search_keypairs(self, name_or_id=None, 
filters=None): keypairs = self.list_keypairs() return _utils._filter_list(keypairs, name_or_id, filters) @_utils.cache_on_arguments() def _neutron_extensions(self): extensions = set() data = self._network_client.get( '/extensions.json', error_message="Error fetching extension list for neutron") for extension in self._get_and_munchify('extensions', data): extensions.add(extension['alias']) return extensions def _has_neutron_extension(self, extension_alias): return extension_alias in self._neutron_extensions() def search_networks(self, name_or_id=None, filters=None): """Search networks :param name_or_id: Name or ID of the desired network. :param filters: a dict containing additional filters to use. e.g. {'router:external': True} :returns: a list of ``munch.Munch`` containing the network description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ networks = self.list_networks(filters) return _utils._filter_list(networks, name_or_id, filters) def search_routers(self, name_or_id=None, filters=None): """Search routers :param name_or_id: Name or ID of the desired router. :param filters: a dict containing additional filters to use. e.g. {'admin_state_up': True} :returns: a list of ``munch.Munch`` containing the router description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ routers = self.list_routers(filters) return _utils._filter_list(routers, name_or_id, filters) def search_subnets(self, name_or_id=None, filters=None): """Search subnets :param name_or_id: Name or ID of the desired subnet. :param filters: a dict containing additional filters to use. e.g. {'enable_dhcp': True} :returns: a list of ``munch.Munch`` containing the subnet description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ subnets = self.list_subnets(filters) return _utils._filter_list(subnets, name_or_id, filters) def search_ports(self, name_or_id=None, filters=None): """Search ports :param name_or_id: Name or ID of the desired port. :param filters: a dict containing additional filters to use. e.g. {'device_id': '2711c67a-b4a7-43dd-ace7-6187b791c3f0'} :returns: a list of ``munch.Munch`` containing the port description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ # If port caching is enabled, do not push the filter down to # neutron; get all the ports (potentially from the cache) and # filter locally. if self._PORT_AGE: pushdown_filters = None else: pushdown_filters = filters ports = self.list_ports(pushdown_filters) return _utils._filter_list(ports, name_or_id, filters) def search_qos_policies(self, name_or_id=None, filters=None): """Search QoS policies :param name_or_id: Name or ID of the desired policy. :param filters: a dict containing additional filters to use. e.g. {'shared': True} :returns: a list of ``munch.Munch`` containing the network description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. 
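Example (an illustrative sketch; assumes an OpenStackCloud instance
named ``cloud``)::

    policies = cloud.search_qos_policies(
        'bw-limit', filters={'shared': True})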
""" policies = self.list_qos_policies(filters) return _utils._filter_list(policies, name_or_id, filters) def search_volumes(self, name_or_id=None, filters=None): volumes = self.list_volumes() return _utils._filter_list( volumes, name_or_id, filters) def search_volume_snapshots(self, name_or_id=None, filters=None): volumesnapshots = self.list_volume_snapshots() return _utils._filter_list( volumesnapshots, name_or_id, filters) def search_volume_backups(self, name_or_id=None, filters=None): volume_backups = self.list_volume_backups() return _utils._filter_list( volume_backups, name_or_id, filters) def search_volume_types( self, name_or_id=None, filters=None, get_extra=True): volume_types = self.list_volume_types(get_extra=get_extra) return _utils._filter_list(volume_types, name_or_id, filters) def search_flavors(self, name_or_id=None, filters=None, get_extra=True): flavors = self.list_flavors(get_extra=get_extra) return _utils._filter_list(flavors, name_or_id, filters) def search_security_groups(self, name_or_id=None, filters=None): # `filters` could be a dict or a jmespath (str) groups = self.list_security_groups( filters=filters if isinstance(filters, dict) else None ) return _utils._filter_list(groups, name_or_id, filters) def search_servers( self, name_or_id=None, filters=None, detailed=False, all_projects=False, bare=False): servers = self.list_servers( detailed=detailed, all_projects=all_projects, bare=bare) return _utils._filter_list(servers, name_or_id, filters) def search_server_groups(self, name_or_id=None, filters=None): """Seach server groups. :param name: server group name or ID. :param filters: a dict containing additional filters to use. :returns: a list of dicts containing the server groups :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ server_groups = self.list_server_groups() return _utils._filter_list(server_groups, name_or_id, filters) def search_images(self, name_or_id=None, filters=None): images = self.list_images() return _utils._filter_list(images, name_or_id, filters) def search_floating_ip_pools(self, name=None, filters=None): pools = self.list_floating_ip_pools() return _utils._filter_list(pools, name, filters) # With Neutron, there are some cases in which full server side filtering is # not possible (e.g. nested attributes or list of objects) so we also need # to use the client-side filtering # The same goes for all neutron-related search/get methods! def search_floating_ips(self, id=None, filters=None): # `filters` could be a jmespath expression which Neutron server doesn't # understand, obviously. if self._use_neutron_floating() and isinstance(filters, dict): filter_keys = ['router_id', 'status', 'tenant_id', 'project_id', 'revision_number', 'description', 'floating_network_id', 'fixed_ip_address', 'floating_ip_address', 'port_id', 'sort_dir', 'sort_key', 'tags', 'tags-any', 'not-tags', 'not-tags-any', 'fields'] neutron_filters = {k: v for k, v in filters.items() if k in filter_keys} kwargs = {'filters': neutron_filters} else: kwargs = {} floating_ips = self.list_floating_ips(**kwargs) return _utils._filter_list(floating_ips, id, filters) def search_stacks(self, name_or_id=None, filters=None): """Search stacks. :param name_or_id: Name or ID of the desired stack. :param filters: a dict containing additional filters to use. e.g. {'stack_status': 'CREATE_COMPLETE'} :returns: a list of ``munch.Munch`` containing the stack description. 
:raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ stacks = self.list_stacks() return _utils._filter_list(stacks, name_or_id, filters) def list_keypairs(self): """List all available keypairs. :returns: A list of ``munch.Munch`` containing keypair info. """ data = self._compute_client.get( '/os-keypairs', error_message="Error fetching keypair list") return self._normalize_keypairs([ k['keypair'] for k in self._get_and_munchify('keypairs', data)]) def list_networks(self, filters=None): """List all available networks. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing network info. """ # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get("/networks.json", params=filters) return self._get_and_munchify('networks', data) def list_routers(self, filters=None): """List all available routers. :param filters: (optional) dict of filter conditions to push down :returns: A list of router ``munch.Munch``. """ # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/routers.json", params=filters, error_message="Error fetching router list") return self._get_and_munchify('routers', data) def list_subnets(self, filters=None): """List all available subnets. :param filters: (optional) dict of filter conditions to push down :returns: A list of subnet ``munch.Munch``. """ # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get("/subnets.json", params=filters) return self._get_and_munchify('subnets', data) def list_ports(self, filters=None): """List all available ports. :param filters: (optional) dict of filter conditions to push down :returns: A list of port ``munch.Munch``. """ # If pushdown filters are specified and we do not have batched caching # enabled, bypass local caching and push down the filters. if filters and self._PORT_AGE == 0: return self._list_ports(filters) # Translate None from search interface to empty {} for kwargs below filters = {} if (time.time() - self._ports_time) >= self._PORT_AGE: # Since we're using cached data anyway, we don't need to # have more than one thread actually submit the list # ports task. Let the first one submit it while holding # a lock, and the non-blocking acquire method will cause # subsequent threads to just skip this and use the old # data until it succeeds. # Initially when we never got data, block to retrieve some data. first_run = self._ports is None if self._ports_lock.acquire(first_run): try: if not (first_run and self._ports is not None): self._ports = self._list_ports(filters) self._ports_time = time.time() finally: self._ports_lock.release() return self._ports def _list_ports(self, filters): data = self._network_client.get( "/ports.json", params=filters, error_message="Error fetching port list") return self._get_and_munchify('ports', data) def list_qos_rule_types(self, filters=None): """List all available QoS rule types. :param filters: (optional) dict of filter conditions to push down :returns: A list of rule types ``munch.Munch``. 
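
        Example (illustrative usage, not upstream documentation; ``cloud``
        is assumed to be an ``OpenStackCloud`` instance)::

            for rule_type in cloud.list_qos_rule_types():
                print(rule_type['type'])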
""" if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/rule-types.json", params=filters, error_message="Error fetching QoS rule types list") return self._get_and_munchify('rule_types', data) def get_qos_rule_type_details(self, rule_type, filters=None): """Get a QoS rule type details by rule type name. :param string rule_type: Name of the QoS rule type. :returns: A rule type details ``munch.Munch`` or None if no matching rule type is found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') if not self._has_neutron_extension('qos-rule-type-details'): raise exc.OpenStackCloudUnavailableExtension( 'qos-rule-type-details extension is not available ' 'on target cloud') data = self._network_client.get( "/qos/rule-types/{rule_type}.json".format(rule_type=rule_type), error_message="Error fetching QoS details of {rule_type} " "rule type".format(rule_type=rule_type)) return self._get_and_munchify('rule_type', data) def list_qos_policies(self, filters=None): """List all available QoS policies. :param filters: (optional) dict of filter conditions to push down :returns: A list of policies ``munch.Munch``. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies.json", params=filters, error_message="Error fetching QoS policies list") return self._get_and_munchify('policies', data) @_utils.cache_on_arguments(should_cache_fn=_no_pending_volumes) def list_volumes(self, cache=True): """List all available volumes. :returns: A list of volume ``munch.Munch``. """ def _list(data): volumes.extend(data.get('volumes', [])) endpoint = None for l in data.get('volumes_links', []): if 'rel' in l and 'next' == l['rel']: endpoint = l['href'] break if endpoint: try: _list(self._volume_client.get(endpoint)) except exc.OpenStackCloudURINotFound: # Catch and re-raise here because we are making recursive # calls and we just have context for the log here self.log.debug( "While listing volumes, could not find next link" " {link}.".format(link=data)) raise if not cache: warnings.warn('cache argument to list_volumes is deprecated. Use ' 'invalidate instead.') # Fetching paginated volumes can fails for several reasons, if # something goes wrong we'll have to start fetching volumes from # scratch attempts = 5 for _ in range(attempts): volumes = [] data = self._volume_client.get('/volumes/detail') if 'volumes_links' not in data: # no pagination needed volumes.extend(data.get('volumes', [])) break try: _list(data) break except exc.OpenStackCloudURINotFound: pass else: self.log.debug( "List volumes failed to retrieve all volumes after" " {attempts} attempts. Returning what we found.".format( attempts=attempts)) # list volumes didn't complete succesfully so just return what # we found return self._normalize_volumes( self._get_and_munchify(key=None, data=volumes)) @_utils.cache_on_arguments() def list_volume_types(self, get_extra=True): """List all available volume types. :returns: A list of volume ``munch.Munch``. 
""" data = self._volume_client.get( '/types', params=dict(is_public='None'), error_message='Error fetching volume_type list') return self._normalize_volume_types( self._get_and_munchify('volume_types', data)) @_utils.cache_on_arguments() def list_availability_zone_names(self, unavailable=False): """List names of availability zones. :param bool unavailable: Whether or not to include unavailable zones in the output. Defaults to False. :returns: A list of availability zone names, or an empty list if the list could not be fetched. """ try: data = self._compute_client.get('/os-availability-zone') except exc.OpenStackCloudHTTPError: self.log.debug( "Availability zone list could not be fetched", exc_info=True) return [] zones = self._get_and_munchify('availabilityZoneInfo', data) ret = [] for zone in zones: if zone['zoneState']['available'] or unavailable: ret.append(zone['zoneName']) return ret @_utils.cache_on_arguments() def list_flavors(self, get_extra=None): """List all available flavors. :param get_extra: Whether or not to fetch extra specs for each flavor. Defaults to True. Default behavior value can be overridden in clouds.yaml by setting shade.get_extra_specs to False. :returns: A list of flavor ``munch.Munch``. """ if get_extra is None: get_extra = self._extra_config['get_flavor_extra_specs'] data = self._compute_client.get( '/flavors/detail', params=dict(is_public='None'), error_message="Error fetching flavor list") flavors = self._normalize_flavors( self._get_and_munchify('flavors', data)) for flavor in flavors: if not flavor.extra_specs and get_extra: endpoint = "/flavors/{id}/os-extra_specs".format( id=flavor.id) try: data = self._compute_client.get( endpoint, error_message="Error fetching flavor extra specs") flavor.extra_specs = self._get_and_munchify( 'extra_specs', data) except exc.OpenStackCloudHTTPError as e: flavor.extra_specs = {} self.log.debug( 'Fetching extra specs for flavor failed:' ' %(msg)s', {'msg': str(e)}) return flavors @_utils.cache_on_arguments(should_cache_fn=_no_pending_stacks) def list_stacks(self): """List all stacks. :returns: a list of ``munch.Munch`` containing the stack description. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ data = self._orchestration_client.get( '/stacks', error_message="Error fetching stack list") return self._normalize_stacks( self._get_and_munchify('stacks', data)) def list_server_security_groups(self, server): """List all security groups associated with the given server. :returns: A list of security group ``munch.Munch``. 
""" # Don't even try if we're a cloud that doesn't have them if not self._has_secgroups(): return [] data = self._compute_client.get( '/servers/{server_id}/os-security-groups'.format( server_id=server['id'])) return self._normalize_secgroups( self._get_and_munchify('security_groups', data)) def _get_server_security_groups(self, server, security_groups): if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) if not isinstance(server, dict): server = self.get_server(server, bare=True) if server is None: self.log.debug('Server %s not found', server) return None, None if not isinstance(security_groups, (list, tuple)): security_groups = [security_groups] sec_group_objs = [] for sg in security_groups: if not isinstance(sg, dict): sg = self.get_security_group(sg) if sg is None: self.log.debug('Security group %s not found for adding', sg) return None, None sec_group_objs.append(sg) return server, sec_group_objs def add_server_security_groups(self, server, security_groups): """Add security groups to a server. Add existing security groups to an existing server. If the security groups are already present on the server this will continue unaffected. :returns: False if server or security groups are undefined, True otherwise. :raises: ``OpenStackCloudException``, on operation error. """ server, security_groups = self._get_server_security_groups( server, security_groups) if not (server and security_groups): return False for sg in security_groups: self._compute_client.post( '/servers/%s/action' % server['id'], json={'addSecurityGroup': {'name': sg.name}}) return True def remove_server_security_groups(self, server, security_groups): """Remove security groups from a server Remove existing security groups from an existing server. If the security groups are not present on the server this will continue unaffected. :returns: False if server or security groups are undefined, True otherwise. :raises: ``OpenStackCloudException``, on operation error. """ server, security_groups = self._get_server_security_groups( server, security_groups) if not (server and security_groups): return False ret = True for sg in security_groups: try: self._compute_client.post( '/servers/%s/action' % server['id'], json={'removeSecurityGroup': {'name': sg.name}}) except exc.OpenStackCloudURINotFound: # NOTE(jamielennox): Is this ok? If we remove something that # isn't present should we just conclude job done or is that an # error? Nova returns ok if you try to add a group twice. self.log.debug( "The security group %s was not present on server %s so " "no action was performed", sg.name, server.name) ret = False return ret def list_security_groups(self, filters=None): """List all available security groups. :param filters: (optional) dict of filter conditions to push down :returns: A list of security group ``munch.Munch``. """ # Security groups not supported if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) if not filters: filters = {} data = [] # Handle neutron security groups if self._use_neutron_secgroups(): # Neutron returns dicts, so no need to convert objects here. 
data = self._network_client.get( '/security-groups.json', params=filters, error_message="Error fetching security group list") return self._get_and_munchify('security_groups', data) # Handle nova security groups else: data = self._compute_client.get( '/os-security-groups', params=filters) return self._normalize_secgroups( self._get_and_munchify('security_groups', data)) def iter_servers(self, detailed=False, all_projects=False, bare=False, filters=None): """Iterate over all available servers. :param detailed: Whether or not to add detailed additional information. Defaults to False. :param all_projects: Whether to list servers from all projects or just the current auth scoped project. :param bare: Whether to skip adding any additional information to the server record. Defaults to False, meaning the addresses dict will be populated as needed from neutron. Setting to True implies detailed = False. :param filters: Additional query parameters passed to the API server. :yields: Lists of server ``munch.Munch`` (one list per each chunk). """ for servers in self._iter_servers(detailed=detailed, all_projects=all_projects, bare=bare, filters=filters): yield servers def list_servers(self, detailed=False, all_projects=False, bare=False, filters=None): """List all available servers. :param detailed: Whether or not to add detailed additional information. Defaults to False. :param all_projects: Whether to list servers from all projects or just the current auth scoped project. :param bare: Whether to skip adding any additional information to the server record. Defaults to False, meaning the addresses dict will be populated as needed from neutron. Setting to True implies detailed = False. :param filters: Additional query parameters passed to the API server. :returns: A list of server ``munch.Munch``. """ if (time.time() - self._servers_time) >= self._SERVER_AGE: # Since we're using cached data anyway, we don't need to # have more than one thread actually submit the list # servers task. Let the first one submit it while holding # a lock, and the non-blocking acquire method will cause # subsequent threads to just skip this and use the old # data until it succeeds. # Initially when we never got data, block to retrieve some data. 
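            # (Added note, describing intent as understood rather than
            # documented upstream: this is a double-checked locking
            # pattern. acquire(False) returns immediately for threads
            # that lose the race, so they serve the stale cache instead
            # of stampeding the compute API.)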
first_run = self._servers is None if self._servers_lock.acquire(first_run): try: if not (first_run and self._servers is not None): servers = [] for chunk in self._iter_servers( detailed=detailed, all_projects=all_projects, bare=bare, filters=filters): servers.extend(chunk) self._servers = servers self._servers_time = time.time() finally: self._servers_lock.release() return self._servers def _iter_servers(self, detailed=False, all_projects=False, bare=False, filters=None): error_msg = "Error fetching server list on {cloud}:{region}:".format( cloud=self.name, region=self.region_name) params = filters or {} if all_projects: params['all_tenants'] = True data = self._compute_client.get( '/servers/detail', params=params, error_message=error_msg) while 'servers_links' in data: servers = self._normalize_servers( self._get_and_munchify('servers', data)) yield [ self._expand_server(server, detailed, bare) for server in servers ] parse_result = urllib.parse.urlparse( data['servers_links'][0]['href']) pagination_params = dict( urllib.parse.parse_qsl(parse_result.query)) params.update(pagination_params) data = self._compute_client.get( '/servers/detail', params=params, error_message=error_msg) servers = self._normalize_servers( self._get_and_munchify('servers', data)) yield [ self._expand_server(server, detailed, bare) for server in servers ] def list_server_groups(self): """List all available server groups. :returns: A list of server group dicts. """ data = self._compute_client.get( '/os-server-groups', error_message="Error fetching server group list") return self._get_and_munchify('server_groups', data) def get_compute_limits(self, name_or_id=None): """ Get compute limits for a project :param name_or_id: (optional) project name or ID to get limits for if different from the current project :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the limits """ params = {} project_id = None error_msg = "Failed to get limits" if name_or_id: proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") project_id = proj.id params['tenant_id'] = project_id error_msg = "{msg} for the project: {project} ".format( msg=error_msg, project=name_or_id) data = self._compute_client.get('/limits', params=params) limits = self._get_and_munchify('limits', data) return self._normalize_compute_limits(limits, project_id=project_id) @_utils.cache_on_arguments(should_cache_fn=_no_pending_images) def list_images(self, filter_deleted=True, show_all=False): """Get available images. :param filter_deleted: Control whether deleted images are returned. :param show_all: Show all images, including images that are shared but not accepted. (By default in glance v2 shared image that have not been accepted are not shown) show_all will override the value of filter_deleted to False. :returns: A list of glance images. 
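
        Example (illustrative usage sketch, not upstream documentation)::

            active_images = [i for i in cloud.list_images()
                             if i['status'] == 'active']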
""" if show_all: filter_deleted = False # First, try to actually get images from glance, it's more efficient images = [] params = {} image_list = [] try: if self._is_client_version('image', 2): endpoint = '/images' if show_all: params['member_status'] = 'all' else: endpoint = '/images/detail' response = self._image_client.get(endpoint, params=params) except keystoneauth1.exceptions.catalog.EndpointNotFound: # We didn't have glance, let's try nova # If this doesn't work - we just let the exception propagate response = self._compute_client.get('/images/detail') while 'next' in response: image_list.extend(meta.obj_list_to_munch(response['images'])) endpoint = response['next'] # next links from glance have the version prefix. If the catalog # has a versioned endpoint, then we can't append the next link to # it. Strip the absolute prefix (/v1/ or /v2/ to turn it into # a proper relative link. if endpoint.startswith('/v'): endpoint = endpoint[4:] response = self._image_client.get(endpoint) if 'images' in response: image_list.extend(meta.obj_list_to_munch(response['images'])) else: image_list.extend(response) for image in image_list: # The cloud might return DELETED for invalid images. # While that's cute and all, that's an implementation detail. if not filter_deleted: images.append(image) elif image.status.lower() != 'deleted': images.append(image) return self._normalize_images(images) def list_floating_ip_pools(self): """List all available floating IP pools. NOTE: This function supports the nova-net view of the world. nova-net has been deprecated, so it's highly recommended to switch to using neutron. `get_external_ipv4_floating_networks` is what you should almost certainly be using. :returns: A list of floating IP pool ``munch.Munch``. """ if not self._has_nova_extension('os-floating-ip-pools'): raise exc.OpenStackCloudUnavailableExtension( 'Floating IP pools extension is not available on target cloud') data = self._compute_client.get( 'os-floating-ip-pools', error_message="Error fetching floating IP pool list") pools = self._get_and_munchify('floating_ip_pools', data) return [{'name': p['name']} for p in pools] def _list_floating_ips(self, filters=None): if self._use_neutron_floating(): try: return self._normalize_floating_ips( self._neutron_list_floating_ips(filters)) except exc.OpenStackCloudURINotFound as e: # Nova-network don't support server-side floating ips # filtering, so it's safer to return and empty list than # to fallback to Nova which may return more results that # expected. if filters: self.log.error( "Neutron returned NotFound for floating IPs, which" " means this cloud doesn't have neutron floating ips." " shade can't fallback to trying Nova since nova" " doesn't support server-side filtering when listing" " floating ips and filters were given. If you do not" " think shade should be attempting to list floating" " ips on neutron, it is possible to control the" " behavior by setting floating_ip_source to 'nova' or" " None for cloud: %(cloud)s. If you are not already" " using clouds.yaml to configure settings for your" " cloud(s), and you want to configure this setting," " you will need a clouds.yaml file. For more" " information, please see %(doc_url)s", { 'cloud': self.name, 'doc_url': _OCC_DOC_URL, } ) # We can't fallback to nova because we push-down filters. # We got a 404 which means neutron doesn't exist. If the # user return [] self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova else: if filters: raise ValueError( "Nova-network don't support server-side floating ips " "filtering. Use the search_floatting_ips method instead" ) floating_ips = self._nova_list_floating_ips() return self._normalize_floating_ips(floating_ips) def list_floating_ips(self, filters=None): """List all available floating IPs. :param filters: (optional) dict of filter conditions to push down :returns: A list of floating IP ``munch.Munch``. """ # If pushdown filters are specified and we do not have batched caching # enabled, bypass local caching and push down the filters. if filters and self._FLOAT_AGE == 0: return self._list_floating_ips(filters) if (time.time() - self._floating_ips_time) >= self._FLOAT_AGE: # Since we're using cached data anyway, we don't need to # have more than one thread actually submit the list # floating ips task. Let the first one submit it while holding # a lock, and the non-blocking acquire method will cause # subsequent threads to just skip this and use the old # data until it succeeds. # Initially when we never got data, block to retrieve some data. first_run = self._floating_ips is None if self._floating_ips_lock.acquire(first_run): try: if not (first_run and self._floating_ips is not None): self._floating_ips = self._list_floating_ips() self._floating_ips_time = time.time() finally: self._floating_ips_lock.release() return self._floating_ips def _neutron_list_floating_ips(self, filters=None): if not filters: filters = {} data = self._network_client.get('/floatingips.json', params=filters) return self._get_and_munchify('floatingips', data) def _nova_list_floating_ips(self): try: data = self._compute_client.get('/os-floating-ips') except exc.OpenStackCloudURINotFound: return [] return self._get_and_munchify('floating_ips', data) def use_external_network(self): return self._use_external_network def use_internal_network(self): return self._use_internal_network def _reset_network_caches(self): # Variables to prevent us from going through the network finding # logic again if we've done it once. This is different from just # the cached value, since "None" is a valid value to find. with self._networks_lock: self._external_ipv4_networks = [] self._external_ipv4_floating_networks = [] self._internal_ipv4_networks = [] self._external_ipv6_networks = [] self._internal_ipv6_networks = [] self._nat_destination_network = None self._default_network_network = None self._network_list_stamp = False def _set_interesting_networks(self): external_ipv4_networks = [] external_ipv4_floating_networks = [] internal_ipv4_networks = [] external_ipv6_networks = [] internal_ipv6_networks = [] nat_destination = None default_network = None all_subnets = None # Filter locally because we have an or condition try: # TODO(mordred): Rackspace exposes neutron but it does not # work. I think that overriding what the service catalog # reports should be a thing os-client-config should handle # in a vendor profile - but for now it does not. That means # this search_networks can just totally fail. If it does # though, that's fine, clearly the neutron introspection is # not going to work. 
            all_networks = self.list_networks()
        except exc.OpenStackCloudException:
            self._network_list_stamp = True
            return

        for network in all_networks:

            # External IPv4 networks
            if (network['name'] in self._external_ipv4_names
                    or network['id'] in self._external_ipv4_names):
                external_ipv4_networks.append(network)
            elif ((('router:external' in network
                    and network['router:external'])
                   or network.get('provider:physical_network'))
                  and network['name'] not in self._internal_ipv4_names
                  and network['id'] not in self._internal_ipv4_names):
                external_ipv4_networks.append(network)

            # External Floating IPv4 networks
            if ('router:external' in network and network['router:external']):
                external_ipv4_floating_networks.append(network)

            # Internal IPv4 networks
            if (network['name'] in self._internal_ipv4_names
                    or network['id'] in self._internal_ipv4_names):
                internal_ipv4_networks.append(network)
            elif (not network.get('router:external', False)
                  and not network.get('provider:physical_network')
                  and network['name'] not in self._external_ipv4_names
                  and network['id'] not in self._external_ipv4_names):
                internal_ipv4_networks.append(network)

            # External IPv6 networks
            if (network['name'] in self._external_ipv6_names
                    or network['id'] in self._external_ipv6_names):
                external_ipv6_networks.append(network)
            elif (network.get('router:external')
                  and network['name'] not in self._internal_ipv6_names
                  and network['id'] not in self._internal_ipv6_names):
                external_ipv6_networks.append(network)

            # Internal IPv6 networks
            if (network['name'] in self._internal_ipv6_names
                    or network['id'] in self._internal_ipv6_names):
                internal_ipv6_networks.append(network)
            elif (not network.get('router:external', False)
                  and network['name'] not in self._external_ipv6_names
                  and network['id'] not in self._external_ipv6_names):
                internal_ipv6_networks.append(network)

            # NAT Destination
            if self._nat_destination in (
                    network['name'], network['id']):
                if nat_destination:
                    raise exc.OpenStackCloudException(
                        'Multiple networks were found matching'
                        ' {nat_net} which is the network configured'
                        ' to be the NAT destination. Please check your'
                        ' cloud resources. It is probably a good idea'
                        ' to configure this network by ID rather than'
                        ' by name.'.format(
                            nat_net=self._nat_destination))
                nat_destination = network
            elif self._nat_destination is None:
                # TODO(mordred) need a config value for floating
                # ips for this cloud so that we can skip this
                # No configured nat destination; we have to figure
                # it out.
                if all_subnets is None:
                    try:
                        all_subnets = self.list_subnets()
                    except exc.OpenStackCloudException:
                        # Thanks Rackspace broken neutron
                        all_subnets = []

                for subnet in all_subnets:
                    # TODO(mordred) trap for detecting more than
                    # one network with a gateway_ip without a config
                    if ('gateway_ip' in subnet and subnet['gateway_ip']
                            and network['id'] == subnet['network_id']):
                        nat_destination = network
                        break

            # Default network
            if self._default_network in (
                    network['name'], network['id']):
                if default_network:
                    raise exc.OpenStackCloudException(
                        'Multiple networks were found matching'
                        ' {default_net} which is the network'
                        ' configured to be the default interface'
                        ' network. Please check your cloud resources.'
                        ' It is probably a good idea'
                        ' to configure this network by ID rather than'
                        ' by name.'.format(
                            default_net=self._default_network))
                default_network = network

        # Validate config vs. reality
        for net_name in self._external_ipv4_names:
            if net_name not in [net['name'] for net in external_ipv4_networks]:
                raise exc.OpenStackCloudException(
                    "Network {network} was provided for external IPv4"
                    " access but could not be found".format(
                        network=net_name))

        for net_name in self._internal_ipv4_names:
            if net_name not in [net['name'] for net in internal_ipv4_networks]:
                raise exc.OpenStackCloudException(
                    "Network {network} was provided for internal IPv4"
                    " access but could not be found".format(
                        network=net_name))

        for net_name in self._external_ipv6_names:
            if net_name not in [net['name'] for net in external_ipv6_networks]:
                raise exc.OpenStackCloudException(
                    "Network {network} was provided for external IPv6"
                    " access but could not be found".format(
                        network=net_name))

        for net_name in self._internal_ipv6_names:
            if net_name not in [net['name'] for net in internal_ipv6_networks]:
                raise exc.OpenStackCloudException(
                    "Network {network} was provided for internal IPv6"
                    " access but could not be found".format(
                        network=net_name))

        if self._nat_destination and not nat_destination:
            raise exc.OpenStackCloudException(
                'Network {network} was configured to be the'
                ' destination for inbound NAT but it could not be'
                ' found'.format(
                    network=self._nat_destination))

        if self._default_network and not default_network:
            raise exc.OpenStackCloudException(
                'Network {network} was configured to be the'
                ' default network interface but it could not be'
                ' found'.format(
                    network=self._default_network))

        self._external_ipv4_networks = external_ipv4_networks
        self._external_ipv4_floating_networks = \
            external_ipv4_floating_networks
        self._internal_ipv4_networks = internal_ipv4_networks
        self._external_ipv6_networks = external_ipv6_networks
        self._internal_ipv6_networks = internal_ipv6_networks
        self._nat_destination_network = nat_destination
        self._default_network_network = default_network

    def _find_interesting_networks(self):
        if self._networks_lock.acquire():
            try:
                if self._network_list_stamp:
                    return
                if (not self._use_external_network
                        and not self._use_internal_network):
                    # Both have been flagged as skip - don't do a list
                    return
                if not self.has_service('network'):
                    return
                self._set_interesting_networks()
                self._network_list_stamp = True
            finally:
                self._networks_lock.release()

    def get_nat_destination(self):
        """Return the network that is configured to be the NAT destination.

        :returns: A network dict if one is found
        """
        self._find_interesting_networks()
        return self._nat_destination_network

    def get_default_network(self):
        """Return the network that is configured to be the default interface.

        :returns: A network dict if one is found
        """
        self._find_interesting_networks()
        return self._default_network_network

    def get_external_networks(self):
        """Return the networks that are configured to route northbound.

        This should be avoided in favor of the specific ipv4/ipv6 method,
        but is here for backwards compatibility.

        :returns: A list of network ``munch.Munch`` if one is found
        """
        self._find_interesting_networks()
        return list(
            set(self._external_ipv4_networks)
            | set(self._external_ipv6_networks))

    def get_internal_networks(self):
        """Return the networks that are configured to not route northbound.

        This should be avoided in favor of the specific ipv4/ipv6 method,
        but is here for backwards compatibility.
:returns: A list of network ``munch.Munch`` if one is found """ self._find_interesting_networks() return list( set(self._internal_ipv4_networks) | set(self._internal_ipv6_networks)) def get_external_ipv4_networks(self): """Return the networks that are configured to route northbound. :returns: A list of network ``munch.Munch`` if one is found """ self._find_interesting_networks() return self._external_ipv4_networks def get_external_ipv4_floating_networks(self): """Return the networks that are configured to route northbound. :returns: A list of network ``munch.Munch`` if one is found """ self._find_interesting_networks() return self._external_ipv4_floating_networks def get_internal_ipv4_networks(self): """Return the networks that are configured to not route northbound. :returns: A list of network ``munch.Munch`` if one is found """ self._find_interesting_networks() return self._internal_ipv4_networks def get_external_ipv6_networks(self): """Return the networks that are configured to route northbound. :returns: A list of network ``munch.Munch`` if one is found """ self._find_interesting_networks() return self._external_ipv6_networks def get_internal_ipv6_networks(self): """Return the networks that are configured to not route northbound. :returns: A list of network ``munch.Munch`` if one is found """ self._find_interesting_networks() return self._internal_ipv6_networks def _has_floating_ips(self): if not self._floating_ip_source: return False else: return self._floating_ip_source in ('nova', 'neutron') def _use_neutron_floating(self): return (self.has_service('network') and self._floating_ip_source == 'neutron') def _has_secgroups(self): if not self.secgroup_source: return False else: return self.secgroup_source.lower() in ('nova', 'neutron') def _use_neutron_secgroups(self): return (self.has_service('network') and self.secgroup_source == 'neutron') def get_keypair(self, name_or_id, filters=None): """Get a keypair by name or ID. :param name_or_id: Name or ID of the keypair. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A keypair ``munch.Munch`` or None if no matching keypair is found. """ return _utils._get_entity(self, 'keypair', name_or_id, filters) def get_network(self, name_or_id, filters=None): """Get a network by name or ID. :param name_or_id: Name or ID of the network. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A network ``munch.Munch`` or None if no matching network is found. """ return _utils._get_entity(self, 'network', name_or_id, filters) def get_network_by_id(self, id): """ Get a network by ID :param id: ID of the network. :returns: A network ``munch.Munch``. """ data = self._network_client.get( '/networks/{id}'.format(id=id), error_message="Error getting network with ID {id}".format(id=id) ) network = self._get_and_munchify('network', data) return network def get_router(self, name_or_id, filters=None): """Get a router by name or ID. :param name_or_id: Name or ID of the router. 
:param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A router ``munch.Munch`` or None if no matching router is found. """ return _utils._get_entity(self, 'router', name_or_id, filters) def get_subnet(self, name_or_id, filters=None): """Get a subnet by name or ID. :param name_or_id: Name or ID of the subnet. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } :returns: A subnet ``munch.Munch`` or None if no matching subnet is found. """ return _utils._get_entity(self, 'subnet', name_or_id, filters) def get_subnet_by_id(self, id): """ Get a subnet by ID :param id: ID of the subnet. :returns: A subnet ``munch.Munch``. """ data = self._network_client.get( '/subnets/{id}'.format(id=id), error_message="Error getting subnet with ID {id}".format(id=id) ) subnet = self._get_and_munchify('subnet', data) return subnet def get_port(self, name_or_id, filters=None): """Get a port by name or ID. :param name_or_id: Name or ID of the port. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A port ``munch.Munch`` or None if no matching port is found. """ return _utils._get_entity(self, 'port', name_or_id, filters) def get_port_by_id(self, id): """ Get a port by ID :param id: ID of the port. :returns: A port ``munch.Munch``. """ data = self._network_client.get( '/ports/{id}'.format(id=id), error_message="Error getting port with ID {id}".format(id=id) ) port = self._get_and_munchify('port', data) return port def get_qos_policy(self, name_or_id, filters=None): """Get a QoS policy by name or ID. :param name_or_id: Name or ID of the policy. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A policy ``munch.Munch`` or None if no matching network is found. """ return _utils._get_entity( self, 'qos_policie', name_or_id, filters) def get_volume(self, name_or_id, filters=None): """Get a volume by name or ID. :param name_or_id: Name or ID of the volume. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A volume ``munch.Munch`` or None if no matching volume is found. """ return _utils._get_entity(self, 'volume', name_or_id, filters) def get_volume_by_id(self, id): """ Get a volume by ID :param id: ID of the volume. :returns: A volume ``munch.Munch``. 
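
        Example (illustrative usage; the UUID is a placeholder)::

            volume = cloud.get_volume_by_id(
                '8a9287a3-0000-0000-0000-000000000000')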
""" data = self._volume_client.get( '/volumes/{id}'.format(id=id), error_message="Error getting volume with ID {id}".format(id=id) ) volume = self._normalize_volume( self._get_and_munchify('volume', data)) return volume def get_volume_type(self, name_or_id, filters=None): """Get a volume type by name or ID. :param name_or_id: Name or ID of the volume. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A volume ``munch.Munch`` or None if no matching volume is found. """ return _utils._get_entity( self, 'volume_type', name_or_id, filters) def get_flavor(self, name_or_id, filters=None, get_extra=True): """Get a flavor by name or ID. :param name_or_id: Name or ID of the flavor. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :param get_extra: Whether or not the list_flavors call should get the extra flavor specs. :returns: A flavor ``munch.Munch`` or None if no matching flavor is found. """ search_func = functools.partial( self.search_flavors, get_extra=get_extra) return _utils._get_entity(self, search_func, name_or_id, filters) def get_flavor_by_id(self, id, get_extra=True): """ Get a flavor by ID :param id: ID of the flavor. :param get_extra: Whether or not the list_flavors call should get the extra flavor specs. :returns: A flavor ``munch.Munch``. """ data = self._compute_client.get( '/flavors/{id}'.format(id=id), error_message="Error getting flavor with ID {id}".format(id=id) ) flavor = self._normalize_flavor( self._get_and_munchify('flavor', data)) if get_extra is None: get_extra = self._extra_config['get_flavor_extra_specs'] if not flavor.extra_specs and get_extra: endpoint = "/flavors/{id}/os-extra_specs".format( id=flavor.id) try: data = self._compute_client.get( endpoint, error_message="Error fetching flavor extra specs") flavor.extra_specs = self._get_and_munchify( 'extra_specs', data) except exc.OpenStackCloudHTTPError as e: flavor.extra_specs = {} self.log.debug( 'Fetching extra specs for flavor failed:' ' %(msg)s', {'msg': str(e)}) return flavor def get_security_group(self, name_or_id, filters=None): """Get a security group by name or ID. :param name_or_id: Name or ID of the security group. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A security group ``munch.Munch`` or None if no matching security group is found. """ return _utils._get_entity( self, 'security_group', name_or_id, filters) def get_security_group_by_id(self, id): """ Get a security group by ID :param id: ID of the security group. :returns: A security group ``munch.Munch``. 
""" if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) error_message = ("Error getting security group with" " ID {id}".format(id=id)) if self._use_neutron_secgroups(): data = self._network_client.get( '/security-groups/{id}'.format(id=id), error_message=error_message) else: data = self._compute_client.get( '/os-security-groups/{id}'.format(id=id), error_message=error_message) return self._normalize_secgroup( self._get_and_munchify('security_group', data)) def get_server_console(self, server, length=None): """Get the console log for a server. :param server: The server to fetch the console log for. Can be either a server dict or the Name or ID of the server. :param int length: The number of lines you would like to retrieve from the end of the log. (optional, defaults to all) :returns: A string containing the text of the console log or an empty string if the cloud does not support console logs. :raises: OpenStackCloudException if an invalid server argument is given or if something else unforseen happens """ if not isinstance(server, dict): server = self.get_server(server, bare=True) if not server: raise exc.OpenStackCloudException( "Console log requested for invalid server") try: return self._get_server_console_output(server['id'], length) except exc.OpenStackCloudBadRequest: return "" def _get_server_console_output(self, server_id, length=None): data = self._compute_client.post( '/servers/{server_id}/action'.format(server_id=server_id), json={'os-getConsoleOutput': {'length': length}}) return self._get_and_munchify('output', data) def get_server( self, name_or_id=None, filters=None, detailed=False, bare=False, all_projects=False): """Get a server by name or ID. :param name_or_id: Name or ID of the server. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :param detailed: Whether or not to add detailed additional information. Defaults to False. :param bare: Whether to skip adding any additional information to the server record. Defaults to False, meaning the addresses dict will be populated as needed from neutron. Setting to True implies detailed = False. :param all_projects: Whether to get server from all projects or just the current auth scoped project. :returns: A server ``munch.Munch`` or None if no matching server is found. """ if self.use_direct_get: searchfunc = functools.partial(self.get_server_by_id, bare=True) else: searchfunc = functools.partial(self.search_servers, detailed=detailed, bare=True, all_projects=all_projects) server = _utils._get_entity(self, searchfunc, name_or_id, filters) return self._expand_server(server, detailed, bare) def _expand_server(self, server, detailed, bare): if bare or not server: return server elif detailed: return meta.get_hostvars_from_server(self, server) else: return meta.add_server_interfaces(self, server) def get_server_by_id( self, id=None, detailed=False, bare=False, all_projects=False): """Get a server by ID. :param id: ID of the server. :param detailed: Whether or not to add detailed additional information. Defaults to False. :param bare: Whether to skip adding any additional information to the server record. Defaults to False, meaning the addresses dict will be populated as needed from neutron. 
Setting to True implies detailed = False. :param all_projects: Whether to get server from all projects or just the current auth scoped project. :returns: A server ``munch.Munch`` or None if no matching server is found. """ params = {} if all_projects: params['all_tenants'] = True data = self._compute_client.get( '/servers/{id}'.format(id=id), params=params) server = self._get_and_munchify('server', data) return self._expand_server( self._normalize_server(server), detailed=detailed, bare=bare) def get_server_group(self, name_or_id=None, filters=None): """Get a server group by name or ID. :param name_or_id: Name or ID of the server group. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'policy': 'affinity', } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A server groups dict or None if no matching server group is found. """ return _utils._get_entity(self, 'server_group', name_or_id, filters) def get_image(self, name_or_id, filters=None): """Get an image by name or ID. :param name_or_id: Name or ID of the image. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: An image ``munch.Munch`` or None if no matching image is found """ return _utils._get_entity(self, 'image', name_or_id, filters) def get_image_by_id(self, id): """ Get a image by ID :param id: ID of the image. :returns: An image ``munch.Munch``. """ data = self._image_client.get( '/images/{id}'.format(id=id), error_message="Error getting image with ID {id}".format(id=id) ) key = 'image' if 'image' in data else None image = self._normalize_image( self._get_and_munchify(key, data)) return image def download_image( self, name_or_id, output_path=None, output_file=None, chunk_size=1024): """Download an image by name or ID :param str name_or_id: Name or ID of the image. :param output_path: the output path to write the image to. Either this or output_file must be specified :param output_file: a file object (or file-like object) to write the image data to. Only write() will be called on this object. Either this or output_path must be specified :param int chunk_size: size in bytes to read from the wire and buffer at one time. 
Defaults to 1024 :raises: OpenStackCloudException in the event download_image is called without exactly one of either output_path or output_file :raises: OpenStackCloudResourceNotFound if no images are found matching the name or ID provided """ if output_path is None and output_file is None: raise exc.OpenStackCloudException( 'No output specified, an output path' ' or file object is necessary to ' 'write the image data to') elif output_path is not None and output_file is not None: raise exc.OpenStackCloudException( 'Both an output path and file object' ' were provided, however only one ' 'can be used at once') image = self.search_images(name_or_id) if len(image) == 0: raise exc.OpenStackCloudResourceNotFound( "No images with name or ID %s were found" % name_or_id, None) if self._is_client_version('image', 2): endpoint = '/images/{id}/file'.format(id=image[0]['id']) else: endpoint = '/images/{id}'.format(id=image[0]['id']) response = self._image_client.get(endpoint, stream=True) with _utils.shade_exceptions("Unable to download image"): if output_path: with open(output_path, 'wb') as fd: for chunk in response.iter_content(chunk_size=chunk_size): fd.write(chunk) return elif output_file: for chunk in response.iter_content(chunk_size=chunk_size): output_file.write(chunk) return def get_floating_ip(self, id, filters=None): """Get a floating IP by ID :param id: ID of the floating IP. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A floating IP ``munch.Munch`` or None if no matching floating IP is found. """ return _utils._get_entity(self, 'floating_ip', id, filters) def get_floating_ip_by_id(self, id): """ Get a floating ip by ID :param id: ID of the floating ip. :returns: A floating ip ``munch.Munch``. """ error_message = "Error getting floating ip with ID {id}".format(id=id) if self._use_neutron_floating(): data = self._network_client.get( '/floatingips/{id}'.format(id=id), error_message=error_message) return self._normalize_floating_ip( self._get_and_munchify('floatingip', data)) else: data = self._compute_client.get( '/os-floating-ips/{id}'.format(id=id), error_message=error_message) return self._normalize_floating_ip( self._get_and_munchify('floating_ip', data)) def get_stack(self, name_or_id, filters=None, resolve_outputs=True): """Get exactly one stack. :param name_or_id: Name or ID of the desired stack. :param filters: a dict containing additional filters to use. e.g. {'stack_status': 'CREATE_COMPLETE'} :param resolve_outputs: If True, then outputs for this stack will be resolved :returns: a ``munch.Munch`` containing the stack description :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call or if multiple matches are found. """ def _search_one_stack(name_or_id=None, filters=None): # stack names are mandatory and enforced unique in the project # so a StackGet can always be used for name or ID. 
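            # Illustrative call patterns (added comment; a usage sketch
            # rather than upstream documentation):
            #
            #   cloud.get_stack('mystack')
            #   cloud.get_stack('mystack', resolve_outputs=False)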
try: url = '/stacks/{name_or_id}'.format(name_or_id=name_or_id) if not resolve_outputs: url = '{url}?resolve_outputs=False'.format(url=url) data = self._orchestration_client.get( url, error_message="Error fetching stack") stack = self._get_and_munchify('stack', data) # Treat DELETE_COMPLETE stacks as a NotFound if stack['stack_status'] == 'DELETE_COMPLETE': return [] except exc.OpenStackCloudURINotFound: return [] stack = self._normalize_stack(stack) return _utils._filter_list([stack], name_or_id, filters) return _utils._get_entity( self, _search_one_stack, name_or_id, filters) def create_keypair(self, name, public_key=None): """Create a new keypair. :param name: Name of the keypair being created. :param public_key: Public key for the new keypair. :raises: OpenStackCloudException on operation error. """ keypair = { 'name': name, } if public_key: keypair['public_key'] = public_key data = self._compute_client.post( '/os-keypairs', json={'keypair': keypair}, error_message="Unable to create keypair {name}".format(name=name)) return self._normalize_keypair( self._get_and_munchify('keypair', data)) def delete_keypair(self, name): """Delete a keypair. :param name: Name of the keypair to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ try: self._compute_client.delete('/os-keypairs/{name}'.format( name=name)) except exc.OpenStackCloudURINotFound: self.log.debug("Keypair %s not found for deleting", name) return False return True def create_network(self, name, shared=False, admin_state_up=True, external=False, provider=None, project_id=None, availability_zone_hints=None, port_security_enabled=None, mtu_size=None): """Create a network. :param string name: Name of the network being created. :param bool shared: Set the network as shared. :param bool admin_state_up: Set the network administrative state to up. :param bool external: Whether this network is externally accessible. :param dict provider: A dict of network provider options. Example:: { 'network_type': 'vlan', 'segmentation_id': 'vlan1' } :param string project_id: Specify the project ID this network will be created on (admin-only). :param list availability_zone_hints: A list of availability zone hints. :param bool port_security_enabled: Enable / Disable port security :param int mtu_size: maximum transmission unit value to address fragmentation. Minimum value is 68 for IPv4, and 1280 for IPv6. :returns: The network object. :raises: OpenStackCloudException on operation error. """ network = { 'name': name, 'admin_state_up': admin_state_up, } if shared: network['shared'] = shared if project_id is not None: network['tenant_id'] = project_id if availability_zone_hints is not None: if not isinstance(availability_zone_hints, list): raise exc.OpenStackCloudException( "Parameter 'availability_zone_hints' must be a list") if not self._has_neutron_extension('network_availability_zone'): raise exc.OpenStackCloudUnavailableExtension( 'network_availability_zone extension is not available on ' 'target cloud') network['availability_zone_hints'] = availability_zone_hints if provider: if not isinstance(provider, dict): raise exc.OpenStackCloudException( "Parameter 'provider' must be a dict") # Only pass what we know for attr in ('physical_network', 'network_type', 'segmentation_id'): if attr in provider: arg = "provider:" + attr network[arg] = provider[attr] # Do not send 'router:external' unless it is explicitly # set since sending it *might* cause "Forbidden" errors in # some situations. 
It defaults to False in the client, anyway. if external: network['router:external'] = True if port_security_enabled is not None: if not isinstance(port_security_enabled, bool): raise exc.OpenStackCloudException( "Parameter 'port_security_enabled' must be a bool") network['port_security_enabled'] = port_security_enabled if mtu_size: if not isinstance(mtu_size, int): raise exc.OpenStackCloudException( "Parameter 'mtu_size' must be an integer.") if mtu_size < 68: raise exc.OpenStackCloudException( "Parameter 'mtu_size' must be at least 68.") network['mtu'] = mtu_size data = self._network_client.post("/networks.json", json={'network': network}) # Reset cache so the new network is picked up self._reset_network_caches() return self._get_and_munchify('network', data) def delete_network(self, name_or_id): """Delete a network. :param name_or_id: Name or ID of the network being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ network = self.get_network(name_or_id) if not network: self.log.debug("Network %s not found for deleting", name_or_id) return False self._network_client.delete( "/networks/{network_id}.json".format(network_id=network['id'])) # Reset cache so the deleted network is removed self._reset_network_caches() return True @_utils.valid_kwargs("name", "description", "shared", "default", "project_id") def create_qos_policy(self, **kwargs): """Create a QoS policy. :param string name: Name of the QoS policy being created. :param string description: Description of created QoS policy. :param bool shared: Set the QoS policy as shared. :param bool default: Set the QoS policy as default for project. :param string project_id: Specify the project ID this QoS policy will be created on (admin-only). :returns: The QoS policy object. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') default = kwargs.pop("default", None) if default is not None: if self._has_neutron_extension('qos-default'): kwargs['is_default'] = default else: self.log.debug("'qos-default' extension is not available on " "target cloud") data = self._network_client.post("/qos/policies.json", json={'policy': kwargs}) return self._get_and_munchify('policy', data) @_utils.valid_kwargs("name", "description", "shared", "default", "project_id") def update_qos_policy(self, name_or_id, **kwargs): """Update an existing QoS policy. :param string name_or_id: Name or ID of the QoS policy to update. :param string name: The new name of the QoS policy. :param string description: The new description of the QoS policy. :param bool shared: If True, the QoS policy will be set as shared. :param bool default: If True, the QoS policy will be set as default for project. :returns: The updated QoS policy object. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') default = kwargs.pop("default", None) if default is not None: if self._has_neutron_extension('qos-default'): kwargs['is_default'] = default else: self.log.debug("'qos-default' extension is not available on " "target cloud") if not kwargs: self.log.debug("No QoS policy data to update") return curr_policy = self.get_qos_policy(name_or_id) if not curr_policy: raise exc.OpenStackCloudException( "QoS policy %s not found."
% name_or_id) data = self._network_client.put( "/qos/policies/{policy_id}.json".format( policy_id=curr_policy['id']), json={'policy': kwargs}) return self._get_and_munchify('policy', data) def delete_qos_policy(self, name_or_id): """Delete a QoS policy. :param name_or_id: Name or ID of the policy being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(name_or_id) if not policy: self.log.debug("QoS policy %s not found for deleting", name_or_id) return False self._network_client.delete( "/qos/policies/{policy_id}.json".format(policy_id=policy['id'])) return True def search_qos_bandwidth_limit_rules(self, policy_name_or_id, rule_id=None, filters=None): """Search QoS bandwidth limit rules :param string policy_name_or_id: Name or ID of the QoS policy to which rules should be associated. :param string rule_id: ID of searched rule. :param filters: a dict containing additional filters to use. e.g. {'max_kbps': 1000} :returns: a list of ``munch.Munch`` containing the bandwidth limit rule descriptions. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ rules = self.list_qos_bandwidth_limit_rules(policy_name_or_id, filters) return _utils._filter_list(rules, rule_id, filters) def list_qos_bandwidth_limit_rules(self, policy_name_or_id, filters=None): """List all available QoS bandwidth limit rules. :param string policy_name_or_id: Name or ID of the QoS policy from which rules should be listed. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing rule info. :raises: ``OpenStackCloudResourceNotFound`` if the QoS policy is not found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies/{policy_id}/bandwidth_limit_rules.json".format( policy_id=policy['id']), params=filters, error_message="Error fetching QoS bandwidth limit rules from " "{policy}".format(policy=policy['id'])) return self._get_and_munchify('bandwidth_limit_rules', data) def get_qos_bandwidth_limit_rule(self, policy_name_or_id, rule_id): """Get a QoS bandwidth limit rule by name or ID. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param rule_id: ID of the rule. :returns: A bandwidth limit rule ``munch.Munch`` or None if no matching rule is found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) data = self._network_client.get( "/qos/policies/{policy_id}/bandwidth_limit_rules/{rule_id}.json".
format(policy_id=policy['id'], rule_id=rule_id), error_message="Error fetching QoS bandwidth limit rule {rule_id} " "from {policy}".format(rule_id=rule_id, policy=policy['id'])) return self._get_and_munchify('bandwidth_limit_rule', data) @_utils.valid_kwargs("max_burst_kbps", "direction") def create_qos_bandwidth_limit_rule(self, policy_name_or_id, max_kbps, **kwargs): """Create a QoS bandwidth limit rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param int max_kbps: Maximum bandwidth limit value (in kilobits per second). :param int max_burst_kbps: Maximum burst value (in kilobits). :param string direction: Ingress or egress. The direction in which the traffic will be limited. :returns: The QoS bandwidth limit rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if kwargs.get("direction") is not None: if not self._has_neutron_extension('qos-bw-limit-direction'): kwargs.pop("direction") self.log.debug( "'qos-bw-limit-direction' extension is not available on " "target cloud") kwargs['max_kbps'] = max_kbps data = self._network_client.post( "/qos/policies/{policy_id}/bandwidth_limit_rules".format( policy_id=policy['id']), json={'bandwidth_limit_rule': kwargs}) return self._get_and_munchify('bandwidth_limit_rule', data) @_utils.valid_kwargs("max_kbps", "max_burst_kbps", "direction") def update_qos_bandwidth_limit_rule(self, policy_name_or_id, rule_id, **kwargs): """Update a QoS bandwidth limit rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to update. :param int max_kbps: Maximum bandwidth limit value (in kilobits per second). :param int max_burst_kbps: Maximum burst value (in kilobits). :param string direction: Ingress or egress. The direction in which the traffic will be limited. :returns: The updated QoS bandwidth limit rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if kwargs.get("direction") is not None: if not self._has_neutron_extension('qos-bw-limit-direction'): kwargs.pop("direction") self.log.debug( "'qos-bw-limit-direction' extension is not available on " "target cloud") if not kwargs: self.log.debug("No QoS bandwidth limit rule data to update") return curr_rule = self.get_qos_bandwidth_limit_rule( policy_name_or_id, rule_id) if not curr_rule: raise exc.OpenStackCloudException( "QoS bandwidth_limit_rule {rule_id} not found in policy " "{policy_id}".format(rule_id=rule_id, policy_id=policy['id'])) data = self._network_client.put( "/qos/policies/{policy_id}/bandwidth_limit_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), json={'bandwidth_limit_rule': kwargs}) return self._get_and_munchify('bandwidth_limit_rule', data) def delete_qos_bandwidth_limit_rule(self, policy_name_or_id, rule_id): """Delete a QoS bandwidth limit rule.
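Example (a minimal sketch; the policy name and the ``rule_id`` variable are hypothetical)::

    cloud.delete_qos_bandwidth_limit_rule('my-policy', rule_id)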
:param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to delete. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) try: self._network_client.delete( "/qos/policies/{policy}/bandwidth_limit_rules/{rule}.json". format(policy=policy['id'], rule=rule_id)) except exc.OpenStackCloudURINotFound: self.log.debug( "QoS bandwidth limit rule {rule_id} not found in policy " "{policy_id}. Ignoring.".format(rule_id=rule_id, policy_id=policy['id'])) return False return True def search_qos_dscp_marking_rules(self, policy_name_or_id, rule_id=None, filters=None): """Search QoS DSCP marking rules :param string policy_name_or_id: Name or ID of the QoS policy to which rules should be associated. :param string rule_id: ID of searched rule. :param filters: a dict containing additional filters to use. e.g. {'dscp_mark': 32} :returns: a list of ``munch.Munch`` containing the DSCP marking rule descriptions. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ rules = self.list_qos_dscp_marking_rules(policy_name_or_id, filters) return _utils._filter_list(rules, rule_id, filters) def list_qos_dscp_marking_rules(self, policy_name_or_id, filters=None): """List all available QoS DSCP marking rules. :param string policy_name_or_id: Name or ID of the QoS policy from which rules should be listed. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing rule info. :raises: ``OpenStackCloudResourceNotFound`` if the QoS policy is not found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies/{policy_id}/dscp_marking_rules.json".format( policy_id=policy['id']), params=filters, error_message="Error fetching QoS DSCP marking rules from " "{policy}".format(policy=policy['id'])) return self._get_and_munchify('dscp_marking_rules', data) def get_qos_dscp_marking_rule(self, policy_name_or_id, rule_id): """Get a QoS DSCP marking rule by name or ID. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param rule_id: ID of the rule. :returns: A DSCP marking rule ``munch.Munch`` or None if no matching rule is found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) data = self._network_client.get( "/qos/policies/{policy_id}/dscp_marking_rules/{rule_id}.json".
format(policy_id=policy['id'], rule_id=rule_id), error_message="Error fetching QoS DSCP marking rule {rule_id} " "from {policy}".format(rule_id=rule_id, policy=policy['id'])) return self._get_and_munchify('dscp_marking_rule', data) def create_qos_dscp_marking_rule(self, policy_name_or_id, dscp_mark): """Create a QoS DSCP marking rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param int dscp_mark: DSCP mark value :returns: The QoS DSCP marking rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) body = { 'dscp_mark': dscp_mark } data = self._network_client.post( "/qos/policies/{policy_id}/dscp_marking_rules".format( policy_id=policy['id']), json={'dscp_marking_rule': body}) return self._get_and_munchify('dscp_marking_rule', data) @_utils.valid_kwargs("dscp_mark") def update_qos_dscp_marking_rule(self, policy_name_or_id, rule_id, **kwargs): """Update a QoS DSCP marking rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to update. :param int dscp_mark: DSCP mark value :returns: The updated QoS DSCP marking rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if not kwargs: self.log.debug("No QoS DSCP marking rule data to update") return curr_rule = self.get_qos_dscp_marking_rule( policy_name_or_id, rule_id) if not curr_rule: raise exc.OpenStackCloudException( "QoS dscp_marking_rule {rule_id} not found in policy " "{policy_id}".format(rule_id=rule_id, policy_id=policy['id'])) data = self._network_client.put( "/qos/policies/{policy_id}/dscp_marking_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), json={'dscp_marking_rule': kwargs}) return self._get_and_munchify('dscp_marking_rule', data) def delete_qos_dscp_marking_rule(self, policy_name_or_id, rule_id): """Delete a QoS DSCP marking rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to delete. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) try: self._network_client.delete( "/qos/policies/{policy}/dscp_marking_rules/{rule}.json". format(policy=policy['id'], rule=rule_id)) except exc.OpenStackCloudURINotFound: self.log.debug( "QoS DSCP marking rule {rule_id} not found in policy " "{policy_id}.
Ignoring.".format(rule_id=rule_id, policy_id=policy['id'])) return False return True def search_qos_minimum_bandwidth_rules(self, policy_name_or_id, rule_id=None, filters=None): """Search QoS minimum bandwidth rules :param string policy_name_or_id: Name or ID of the QoS policy to which rules should be associated. :param string rule_id: ID of searched rule. :param filters: a dict containing additional filters to use. e.g. {'min_kbps': 1000} :returns: a list of ``munch.Munch`` containing the minimum bandwidth rule descriptions. :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call. """ rules = self.list_qos_minimum_bandwidth_rules( policy_name_or_id, filters) return _utils._filter_list(rules, rule_id, filters) def list_qos_minimum_bandwidth_rules(self, policy_name_or_id, filters=None): """List all available QoS minimum bandwidth rules. :param string policy_name_or_id: Name or ID of the QoS policy from which rules should be listed. :param filters: (optional) dict of filter conditions to push down :returns: A list of ``munch.Munch`` containing rule info. :raises: ``OpenStackCloudResourceNotFound`` if the QoS policy is not found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) # Translate None from search interface to empty {} for kwargs below if not filters: filters = {} data = self._network_client.get( "/qos/policies/{policy_id}/minimum_bandwidth_rules.json".format( policy_id=policy['id']), params=filters, error_message="Error fetching QoS minimum bandwidth rules from " "{policy}".format(policy=policy['id'])) return self._get_and_munchify('minimum_bandwidth_rules', data) def get_qos_minimum_bandwidth_rule(self, policy_name_or_id, rule_id): """Get a QoS minimum bandwidth rule by name or ID. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param rule_id: ID of the rule. :returns: A minimum bandwidth rule ``munch.Munch`` or None if no matching rule is found. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) data = self._network_client.get( "/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), error_message="Error fetching QoS minimum_bandwidth rule {rule_id} " "from {policy}".format(rule_id=rule_id, policy=policy['id'])) return self._get_and_munchify('minimum_bandwidth_rule', data) @_utils.valid_kwargs("direction") def create_qos_minimum_bandwidth_rule(self, policy_name_or_id, min_kbps, **kwargs): """Create a QoS minimum bandwidth rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule should be associated. :param int min_kbps: Minimum bandwidth value (in kilobits per second). :param string direction: Ingress or egress. The direction in which the traffic will be available. :returns: The QoS minimum bandwidth rule. :raises: OpenStackCloudException on operation error.
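Example (a minimal sketch; the policy name is hypothetical)::

    rule = cloud.create_qos_minimum_bandwidth_rule(
        'my-policy', min_kbps=1000, direction='egress')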
""" if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) kwargs['min_kbps'] = min_kbps data = self._network_client.post( "/qos/policies/{policy_id}/minimum_bandwidth_rules".format( policy_id=policy['id']), json={'minimum_bandwidth_rule': kwargs}) return self._get_and_munchify('minimum_bandwidth_rule', data) @_utils.valid_kwargs("min_kbps", "direction") def update_qos_minimum_bandwidth_rule(self, policy_name_or_id, rule_id, **kwargs): """Update a QoS minimum bandwidth rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to update. :param int min_kbps: Minimum bandwidth value (in kilobits per second). :param string direction: Ingress or egress. The direction in which the traffic will be available. :returns: The updated QoS minimum bandwidth rule. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) if not kwargs: self.log.debug("No QoS minimum bandwidth rule data to update") return curr_rule = self.get_qos_minimum_bandwidth_rule( policy_name_or_id, rule_id) if not curr_rule: raise exc.OpenStackCloudException( "QoS minimum_bandwidth_rule {rule_id} not found in policy " "{policy_id}".format(rule_id=rule_id, policy_id=policy['id'])) data = self._network_client.put( "/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id}.json". format(policy_id=policy['id'], rule_id=rule_id), json={'minimum_bandwidth_rule': kwargs}) return self._get_and_munchify('minimum_bandwidth_rule', data) def delete_qos_minimum_bandwidth_rule(self, policy_name_or_id, rule_id): """Delete a QoS minimum bandwidth rule. :param string policy_name_or_id: Name or ID of the QoS policy to which rule is associated. :param string rule_id: ID of rule to delete. :raises: OpenStackCloudException on operation error. """ if not self._has_neutron_extension('qos'): raise exc.OpenStackCloudUnavailableExtension( 'QoS extension is not available on target cloud') policy = self.get_qos_policy(policy_name_or_id) if not policy: raise exc.OpenStackCloudResourceNotFound( "QoS policy {name_or_id} not Found.".format( name_or_id=policy_name_or_id)) try: self._network_client.delete( "/qos/policies/{policy}/minimum_bandwidth_rules/{rule}.json". format(policy=policy['id'], rule=rule_id)) except exc.OpenStackCloudURINotFound: self.log.debug( "QoS minimum bandwidth rule {rule_id} not found in policy " "{policy_id}. Ignoring.".format(rule_id=rule_id, policy_id=policy['id'])) return False return True def _build_external_gateway_info(self, ext_gateway_net_id, enable_snat, ext_fixed_ips): info = {} if ext_gateway_net_id: info['network_id'] = ext_gateway_net_id # Only send enable_snat if it is explicitly set. if enable_snat is not None: info['enable_snat'] = enable_snat if ext_fixed_ips: info['external_fixed_ips'] = ext_fixed_ips if info: return info return None def add_router_interface(self, router, subnet_id=None, port_id=None): """Attach a subnet to an internal router interface. 
Either a subnet ID or port ID must be specified for the internal interface. Supplying both will result in an error. :param dict router: The dict object of the router being changed :param string subnet_id: The ID of the subnet to use for the interface :param string port_id: The ID of the port to use for the interface :returns: A ``munch.Munch`` with the router ID (ID), subnet ID (subnet_id), port ID (port_id) and tenant ID (tenant_id). :raises: OpenStackCloudException on operation error. """ json_body = {} if subnet_id: json_body['subnet_id'] = subnet_id if port_id: json_body['port_id'] = port_id return self._network_client.put( "/routers/{router_id}/add_router_interface.json".format( router_id=router['id']), json=json_body, error_message="Error attaching interface to router {0}".format( router['id'])) def remove_router_interface(self, router, subnet_id=None, port_id=None): """Detach a subnet from an internal router interface. At least one of subnet_id or port_id must be supplied. If you specify both subnet and port ID, the subnet ID must correspond to the subnet ID of the first IP address on the port specified by the port ID. Otherwise an error occurs. :param dict router: The dict object of the router being changed :param string subnet_id: The ID of the subnet to use for the interface :param string port_id: The ID of the port to use for the interface :returns: None on success :raises: OpenStackCloudException on operation error. """ json_body = {} if subnet_id: json_body['subnet_id'] = subnet_id if port_id: json_body['port_id'] = port_id if not json_body: raise ValueError( "At least one of subnet_id or port_id must be supplied.") self._network_client.put( "/routers/{router_id}/remove_router_interface.json".format( router_id=router['id']), json=json_body, error_message="Error detaching interface from router {0}".format( router['id'])) def list_router_interfaces(self, router, interface_type=None): """List all interfaces for a router. :param dict router: A router dict object. :param string interface_type: One of None, "internal", or "external". Controls whether all interfaces, only internal interfaces, or only external interfaces are returned. :returns: A list of port ``munch.Munch`` objects. """ # Find only router interface and gateway ports, ignore L3 HA ports etc. router_interfaces = self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:router_interface'} ) + self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:router_interface_distributed'} ) + self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:ha_router_replicated_interface'}) router_gateways = self.search_ports(filters={ 'device_id': router['id'], 'device_owner': 'network:router_gateway'}) ports = router_interfaces + router_gateways if interface_type: if interface_type == 'internal': return router_interfaces if interface_type == 'external': return router_gateways return ports def create_router(self, name=None, admin_state_up=True, ext_gateway_net_id=None, enable_snat=None, ext_fixed_ips=None, project_id=None, availability_zone_hints=None): """Create a logical router. :param string name: The router name. :param bool admin_state_up: The administrative state of the router. :param string ext_gateway_net_id: Network ID for the external gateway. :param bool enable_snat: Enable Source NAT (SNAT) attribute. :param list ext_fixed_ips: List of dictionaries of desired IP and/or subnet on the external network.
Example:: [ { "subnet_id": "8ca37218-28ff-41cb-9b10-039601ea7e6b", "ip_address": "192.168.10.2" } ] :param string project_id: Project ID for the router. :param list availability_zone_hints: A list of availability zone hints. :returns: The router object. :raises: OpenStackCloudException on operation error. """ router = { 'admin_state_up': admin_state_up } if project_id is not None: router['tenant_id'] = project_id if name: router['name'] = name ext_gw_info = self._build_external_gateway_info( ext_gateway_net_id, enable_snat, ext_fixed_ips ) if ext_gw_info: router['external_gateway_info'] = ext_gw_info if availability_zone_hints is not None: if not isinstance(availability_zone_hints, list): raise exc.OpenStackCloudException( "Parameter 'availability_zone_hints' must be a list") if not self._has_neutron_extension('router_availability_zone'): raise exc.OpenStackCloudUnavailableExtension( 'router_availability_zone extension is not available on ' 'target cloud') router['availability_zone_hints'] = availability_zone_hints data = self._network_client.post( "/routers.json", json={"router": router}, error_message="Error creating router {0}".format(name)) return self._get_and_munchify('router', data) def update_router(self, name_or_id, name=None, admin_state_up=None, ext_gateway_net_id=None, enable_snat=None, ext_fixed_ips=None, routes=None): """Update an existing logical router. :param string name_or_id: The name or UUID of the router to update. :param string name: The new router name. :param bool admin_state_up: The administrative state of the router. :param string ext_gateway_net_id: The network ID for the external gateway. :param bool enable_snat: Enable Source NAT (SNAT) attribute. :param list ext_fixed_ips: List of dictionaries of desired IP and/or subnet on the external network. Example:: [ { "subnet_id": "8ca37218-28ff-41cb-9b10-039601ea7e6b", "ip_address": "192.168.10.2" } ] :param list routes: A list of dictionaries with destination and nexthop parameters. Example:: [ { "destination": "179.24.1.0/24", "nexthop": "172.24.3.99" } ] :returns: The router object. :raises: OpenStackCloudException on operation error. """ router = {} if name: router['name'] = name if admin_state_up is not None: router['admin_state_up'] = admin_state_up ext_gw_info = self._build_external_gateway_info( ext_gateway_net_id, enable_snat, ext_fixed_ips ) if ext_gw_info: router['external_gateway_info'] = ext_gw_info if routes: if self._has_neutron_extension('extraroute'): router['routes'] = routes else: self.log.warn( 'extra routes extension is not available on target cloud') if not router: self.log.debug("No router data to update") return curr_router = self.get_router(name_or_id) if not curr_router: raise exc.OpenStackCloudException( "Router %s not found." % name_or_id) data = self._network_client.put( "/routers/{router_id}.json".format(router_id=curr_router['id']), json={"router": router}, error_message="Error updating router {0}".format(name_or_id)) return self._get_and_munchify('router', data) def delete_router(self, name_or_id): """Delete a logical router. If a name, instead of a unique UUID, is supplied, it is possible that we could find more than one matching router since names are not required to be unique. An error will be raised in this case. :param name_or_id: Name or ID of the router being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. 
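Example (a minimal sketch; the router name is hypothetical)::

    deleted = cloud.delete_router('my-router')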
""" router = self.get_router(name_or_id) if not router: self.log.debug("Router %s not found for deleting", name_or_id) return False self._network_client.delete( "/routers/{router_id}.json".format(router_id=router['id']), error_message="Error deleting router {0}".format(name_or_id)) return True def get_image_exclude(self, name_or_id, exclude): for image in self.search_images(name_or_id): if exclude: if exclude not in image.name: return image else: return image return None def get_image_name(self, image_id, exclude=None): image = self.get_image_exclude(image_id, exclude) if image: return image.name return None def get_image_id(self, image_name, exclude=None): image = self.get_image_exclude(image_name, exclude) if image: return image.id return None def create_image_snapshot( self, name, server, wait=False, timeout=3600, **metadata): """Create an image by snapshotting an existing server. ..note:: On most clouds this is a cold snapshot - meaning that the server in question will be shutdown before taking the snapshot. It is possible that it's a live snapshot - but there is no way to know as a user, so caveat emptor. :param name: Name of the image to be created :param server: Server name or ID or dict representing the server to be snapshotted :param wait: If true, waits for image to be created. :param timeout: Seconds to wait for image creation. None is forever. :param metadata: Metadata to give newly-created image entity :returns: A ``munch.Munch`` of the Image object :raises: OpenStackCloudException if there are problems uploading """ if not isinstance(server, dict): server_obj = self.get_server(server, bare=True) if not server_obj: raise exc.OpenStackCloudException( "Server {server} could not be found and therefore" " could not be snapshotted.".format(server=server)) server = server_obj response = self._compute_client.post( '/servers/{server_id}/action'.format(server_id=server['id']), json={ "createImage": { "name": name, "metadata": metadata, } }) # You won't believe it - wait, who am I kidding - of course you will! # Nova returns the URL of the image created in the Location # header of the response. (what?) But, even better, the URL it responds # with has a very good chance of being wrong (it is built from # nova.conf values that point to internal API servers in any cloud # large enough to have both public and internal endpoints. # However, nobody has ever noticed this because novaclient doesn't # actually use that URL - it extracts the id from the end of # the url, then returns the id. This leads us to question: # a) why Nova is going to return a value in a header # b) why it's going to return data that probably broken # c) indeed the very nature of the fabric of reality # Although it fills us with existential dread, we have no choice but # to follow suit like a lemming being forced over a cliff by evil # producers from Disney. # TODO(mordred) Update this to consume json microversion when it is # available. 
# blueprint:remove-create-image-location-header-response image_id = response.headers['Location'].rsplit('/', 1)[1] self.list_images.invalidate(self) image = self.get_image(image_id) if not wait: return image return self.wait_for_image(image, timeout=timeout) def wait_for_image(self, image, timeout=3600): image_id = image['id'] for count in _utils._iterate_timeout( timeout, "Timeout waiting for image to snapshot"): self.list_images.invalidate(self) image = self.get_image(image_id) if not image: continue if image['status'] == 'active': return image elif image['status'] == 'error': raise exc.OpenStackCloudException( 'Image {image} hit error state'.format(image=image_id)) def delete_image( self, name_or_id, wait=False, timeout=3600, delete_objects=True): """Delete an existing image. :param name_or_id: Name or ID of the image to be deleted. :param wait: If True, waits for image to be deleted. :param timeout: Seconds to wait for image deletion. None is forever. :param delete_objects: If True, also deletes uploaded swift objects. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException if there are problems deleting. """ image = self.get_image(name_or_id) if not image: return False self._image_client.delete( '/images/{id}'.format(id=image.id), error_message="Error in deleting image") self.list_images.invalidate(self) # Task API means an image was uploaded to swift if self.image_api_use_tasks and IMAGE_OBJECT_KEY in image: (container, objname) = image[IMAGE_OBJECT_KEY].split('/', 1) self.delete_object(container=container, name=objname) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the image to be deleted."): self._get_cache(None).invalidate() if self.get_image(image.id) is None: break return True def _get_name_and_filename(self, name): # See if name points to an existing file if os.path.exists(name): # Neat. Easy enough return (os.path.splitext(os.path.basename(name))[0], name) # Try appending the disk format name_with_ext = '.'.join(( name, self.cloud_config.config['image_format'])) if os.path.exists(name_with_ext): return (os.path.basename(name), name_with_ext) raise exc.OpenStackCloudException( 'No filename parameter was given to create_image,' ' and {name} was not the path to an existing file.' ' Please provide either a path to an existing file' ' or a name and a filename'.format(name=name)) def _hashes_up_to_date(self, md5, sha256, md5_key, sha256_key): '''Compare md5 and sha256 hashes for being up to date md5 and sha256 are the current values. md5_key and sha256_key are the previous values. ''' up_to_date = False if md5 and md5_key == md5: up_to_date = True if sha256 and sha256_key == sha256: up_to_date = True if md5 and md5_key != md5: up_to_date = False if sha256 and sha256_key != sha256: up_to_date = False return up_to_date def create_image( self, name, filename=None, container=OBJECT_AUTOCREATE_CONTAINER, md5=None, sha256=None, disk_format=None, container_format=None, disable_vendor_agent=True, wait=False, timeout=3600, allow_duplicates=False, meta=None, volume=None, **kwargs): """Upload an image. :param str name: Name of the image to create. If it is a pathname of an image, the name will be constructed from the extensionless basename of the path. :param str filename: The path to the file to upload, if needed. (optional, defaults to None) :param str container: Name of the container in swift where images should be uploaded for import if the cloud requires such a thing.
(optional, defaults to 'images') :param str md5: md5 sum of the image file. If not given, an md5 will be calculated. :param str sha256: sha256 sum of the image file. If not given, a sha256 will be calculated. :param str disk_format: The disk format the image is in. (optional, defaults to the os-client-config config value for this cloud) :param str container_format: The container format the image is in. (optional, defaults to the os-client-config config value for this cloud) :param bool disable_vendor_agent: Whether or not to append metadata flags to the image to inform the cloud in question to not expect a vendor agent to be running. (optional, defaults to True) :param bool wait: If true, waits for image to be created. Defaults to true - however, be aware that one of the upload methods is always synchronous. :param timeout: Seconds to wait for image creation. None is forever. :param allow_duplicates: If true, skips checks that enforce unique image name. (optional, defaults to False) :param meta: A dict of key/value pairs to use for metadata that bypasses automatic type conversion. :param volume: Name or ID or volume object of a volume to create an image from. Mutually exclusive with filename. (optional, defaults to None) Additional kwargs will be passed to the image creation as additional metadata for the image and will have all values converted to string except for min_disk, min_ram, size and virtual_size which will be converted to int. If you are sure you have all of your data types correct or have an advanced need to be explicit, use meta. If you are just a normal consumer, using kwargs is likely the right choice. If a value is in meta and kwargs, meta wins. :returns: A ``munch.Munch`` of the Image object :raises: OpenStackCloudException if there are problems uploading """ if not meta: meta = {} if not disk_format: disk_format = self.cloud_config.config['image_format'] if not container_format: # https://docs.openstack.org/image-guide/image-formats.html container_format = 'bare' if volume: if 'id' in volume: volume_id = volume['id'] else: volume_obj = self.get_volume(volume) if not volume_obj: raise exc.OpenStackCloudException( "Volume {volume} given to create_image could" " not be found".format(volume=volume)) volume_id = volume_obj['id'] return self._upload_image_from_volume( name=name, volume_id=volume_id, allow_duplicates=allow_duplicates, container_format=container_format, disk_format=disk_format, wait=wait, timeout=timeout) # If there is no filename, see if name is actually the filename if not filename: name, filename = self._get_name_and_filename(name) if not (md5 or sha256): (md5, sha256) = self._get_file_hashes(filename) if allow_duplicates: current_image = None else: current_image = self.get_image(name) if current_image: md5_key = current_image.get(IMAGE_MD5_KEY, '') sha256_key = current_image.get(IMAGE_SHA256_KEY, '') up_to_date = self._hashes_up_to_date( md5=md5, sha256=sha256, md5_key=md5_key, sha256_key=sha256_key) if up_to_date: self.log.debug( "image %(name)s exists and is up to date", {'name': name}) return current_image kwargs[IMAGE_MD5_KEY] = md5 or '' kwargs[IMAGE_SHA256_KEY] = sha256 or '' kwargs[IMAGE_OBJECT_KEY] = '/'.join([container, name]) if disable_vendor_agent: kwargs.update(self.cloud_config.config['disable_vendor_agent']) # We can never have nice things. Glance v1 took "is_public" as a # boolean. Glance v2 takes "visibility". If the user gives us # is_public, we know what they mean. If they give us visibility, they # know what they mean.
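# A sketch of the mapping performed below:
#   is_public=True  -> visibility='public'
#   is_public=False -> visibility='private'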
if self._is_client_version('image', 2): if 'is_public' in kwargs: is_public = kwargs.pop('is_public') if is_public: kwargs['visibility'] = 'public' else: kwargs['visibility'] = 'private' try: # This makes me want to die inside if self.image_api_use_tasks: return self._upload_image_task( name, filename, container, current_image=current_image, wait=wait, timeout=timeout, md5=md5, sha256=sha256, meta=meta, **kwargs) else: # If a user used the v1 calling format, they will have # passed a dict called properties along properties = kwargs.pop('properties', {}) kwargs.update(properties) image_kwargs = dict(properties=kwargs) if disk_format: image_kwargs['disk_format'] = disk_format if container_format: image_kwargs['container_format'] = container_format return self._upload_image_put( name, filename, meta=meta, wait=wait, timeout=timeout, **image_kwargs) except exc.OpenStackCloudException: self.log.debug("Image creation failed", exc_info=True) raise except Exception as e: raise exc.OpenStackCloudException( "Image creation failed: {message}".format(message=str(e))) def _make_v2_image_params(self, meta, properties): ret = {} for k, v in iter(properties.items()): if k in ('min_disk', 'min_ram', 'size', 'virtual_size'): ret[k] = int(v) elif k == 'protected': ret[k] = v else: if v is None: ret[k] = None else: ret[k] = str(v) ret.update(meta) return ret def _upload_image_from_volume( self, name, volume_id, allow_duplicates, container_format, disk_format, wait, timeout): data = self._volume_client.post( '/volumes/{id}/action'.format(id=volume_id), json={ 'os-volume_upload_image': { 'force': allow_duplicates, 'image_name': name, 'container_format': container_format, 'disk_format': disk_format}}) response = self._get_and_munchify('os-volume_upload_image', data) if not wait: return self.get_image(response['image_id']) try: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the image to finish."): image_obj = self.get_image(response['image_id']) if image_obj and image_obj.status not in ('queued', 'saving'): return image_obj except exc.OpenStackCloudTimeout: self.log.debug( "Timeout waiting for image to become ready. 
Deleting.") self.delete_image(response['image_id'], wait=True) raise def _upload_image_put_v2(self, name, image_data, meta, **image_kwargs): properties = image_kwargs.pop('properties', {}) image_kwargs.update(self._make_v2_image_params(meta, properties)) image_kwargs['name'] = name data = self._image_client.post('/images', json=image_kwargs) image = self._get_and_munchify(key=None, data=data) try: self._image_client.put( '/images/{id}/file'.format(id=image.id), headers={'Content-Type': 'application/octet-stream'}, data=image_data) except Exception: self.log.debug("Deleting failed upload of image %s", name) try: self._image_client.delete( '/images/{id}'.format(id=image.id)) except exc.OpenStackCloudHTTPError: # We're just trying to clean up - if it doesn't work - shrug self.log.debug( "Failed deleting image after we failed uploading it.", exc_info=True) raise return image def _upload_image_put_v1( self, name, image_data, meta, **image_kwargs): image_kwargs['properties'].update(meta) image_kwargs['name'] = name image = self._get_and_munchify( 'image', self._image_client.post('/images', json=image_kwargs)) checksum = image_kwargs['properties'].get(IMAGE_MD5_KEY, '') try: # Let us all take a brief moment to be grateful that this # is not actually how OpenStack APIs work anymore headers = { 'x-glance-registry-purge-props': 'false', } if checksum: headers['x-image-meta-checksum'] = checksum image = self._get_and_munchify( 'image', self._image_client.put( '/images/{id}'.format(id=image.id), headers=headers, data=image_data)) except exc.OpenStackCloudHTTPError: self.log.debug("Deleting failed upload of image %s", name) try: self._image_client.delete( '/images/{id}'.format(id=image.id)) except exc.OpenStackCloudHTTPError: # We're just trying to clean up - if it doesn't work - shrug self.log.debug( "Failed deleting image after we failed uploading it.", exc_info=True) raise return self._normalize_image(image) def _upload_image_put( self, name, filename, meta, wait, timeout, **image_kwargs): image_data = open(filename, 'rb') # Because reasons and crying bunnies if self._is_client_version('image', 2): image = self._upload_image_put_v2( name, image_data, meta, **image_kwargs) else: image = self._upload_image_put_v1( name, image_data, meta, **image_kwargs) self._get_cache(None).invalidate() if not wait: return image try: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the image to finish."): image_obj = self.get_image(image.id) if image_obj and image_obj.status not in ('queued', 'saving'): return image_obj except exc.OpenStackCloudTimeout: self.log.debug( "Timeout waiting for image to become ready. 
Deleting.") self.delete_image(image.id, wait=True) raise def _upload_image_task( self, name, filename, container, current_image, wait, timeout, meta, md5=None, sha256=None, **image_kwargs): parameters = image_kwargs.pop('parameters', {}) image_kwargs.update(parameters) self.create_object( container, name, filename, md5=md5, sha256=sha256, metadata={OBJECT_AUTOCREATE_KEY: 'true'}, **{'content-type': 'application/octet-stream'}) if not current_image: current_image = self.get_image(name) # TODO(mordred): Can we do something similar to what nodepool does # using glance properties to not delete then upload but instead make a # new "good" image and then mark the old one as "bad" task_args = dict( type='import', input=dict( import_from='{container}/{name}'.format( container=container, name=name), image_properties=dict(name=name))) data = self._image_client.post('/tasks', json=task_args) glance_task = self._get_and_munchify(key=None, data=data) self.list_images.invalidate(self) if wait: start = time.time() image_id = None for count in _utils._iterate_timeout( timeout, "Timeout waiting for the image to import."): try: if image_id is None: status = self._image_client.get( '/tasks/{id}'.format(id=glance_task.id)) except exc.OpenStackCloudHTTPError as e: if e.response.status_code == 503: # Clear the exception so that it doesn't linger # and get reported as an Inner Exception later _utils._exc_clear() # Intermittent failure - catch and try again continue raise if status['status'] == 'success': image_id = status['result']['image_id'] try: image = self.get_image(image_id) except exc.OpenStackCloudHTTPError as e: if e.response.status_code == 503: # Clear the exception so that it doesn't linger # and get reported as an Inner Exception later _utils._exc_clear() # Intermittent failure - catch and try again continue raise if image is None: continue self.update_image_properties( image=image, meta=meta, **image_kwargs) self.log.debug( "Image Task %s imported %s in %s", glance_task.id, image_id, (time.time() - start)) # Clean up after ourselves. The object we created is not # needed after the import is done. self.delete_object(container, name) return self.get_image(image_id) elif status['status'] == 'failure': if status['message'] == IMAGE_ERROR_396: glance_task = self._image_client.post( '/tasks', json=task_args) self.list_images.invalidate(self) else: # Clean up after ourselves. The image did not import # and this isn't a 'just retry' error - glance didn't # like the content. So we don't want to keep it for # next time.
self.delete_object(container, name) raise exc.OpenStackCloudException( "Image creation failed: {message}".format( message=status['message']), extra_data=status) else: return glance_task def update_image_properties( self, image=None, name_or_id=None, meta=None, **properties): if image is None: image = self.get_image(name_or_id) if not meta: meta = {} img_props = {} for k, v in iter(properties.items()): if v and k in ['ramdisk', 'kernel']: v = self.get_image_id(v) k = '{0}_id'.format(k) img_props[k] = v # This makes me want to die inside if self._is_client_version('image', 2): return self._update_image_properties_v2(image, meta, img_props) else: return self._update_image_properties_v1(image, meta, img_props) def _update_image_properties_v2(self, image, meta, properties): img_props = image.properties.copy() for k, v in iter(self._make_v2_image_params(meta, properties).items()): if image.get(k, None) != v: img_props[k] = v if not img_props: return False headers = { 'Content-Type': 'application/openstack-images-v2.1-json-patch'} patch = sorted(list(jsonpatch.JsonPatch.from_diff( image.properties, img_props)), key=operator.itemgetter('value')) # No need to fire an API call if there is an empty patch if patch: self._image_client.patch( '/images/{id}'.format(id=image.id), headers=headers, data=json.dumps(patch)) self.list_images.invalidate(self) return True def _update_image_properties_v1(self, image, meta, properties): properties.update(meta) img_props = {} for k, v in iter(properties.items()): if image.properties.get(k, None) != v: img_props['x-image-meta-{key}'.format(key=k)] = v if not img_props: return False self._image_client.put( '/images/{id}'.format(id=image.id), headers=img_props) self.list_images.invalidate(self) return True def create_volume( self, size, wait=True, timeout=None, image=None, bootable=None, **kwargs): """Create a volume. :param size: Size, in GB of the volume to create. :param name: (optional) Name for the volume. :param description: (optional) Description for the volume. :param wait: If true, waits for volume to be created. :param timeout: Seconds to wait for volume creation. None is forever. :param image: (optional) Image name, ID or object from which to create the volume :param bootable: (optional) Make this volume bootable. If set, wait will also be set to true. :param kwargs: Keyword arguments as expected for cinder client. :returns: The created volume object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error.
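Example (a minimal sketch; the volume and image names are hypothetical)::

    volume = cloud.create_volume(
        size=10, name='my-volume', image='my-image', bootable=True)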
""" if bootable is not None: wait = True if image: image_obj = self.get_image(image) if not image_obj: raise exc.OpenStackCloudException( "Image {image} was requested as the basis for a new" " volume, but was not found on the cloud".format( image=image)) kwargs['imageRef'] = image_obj['id'] kwargs = self._get_volume_kwargs(kwargs) kwargs['size'] = size payload = dict(volume=kwargs) if 'scheduler_hints' in kwargs: payload['OS-SCH-HNT:scheduler_hints'] = kwargs.pop( 'scheduler_hints', None) data = self._volume_client.post( '/volumes', json=dict(payload), error_message='Error in creating volume') volume = self._get_and_munchify('volume', data) self.list_volumes.invalidate(self) if volume['status'] == 'error': raise exc.OpenStackCloudException("Error in creating volume") if wait: vol_id = volume['id'] for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume to be available."): volume = self.get_volume(vol_id) if not volume: continue if volume['status'] == 'available': if bootable is not None: self.set_volume_bootable(volume, bootable=bootable) # no need to re-fetch to update the flag, just set it. volume['bootable'] = bootable return volume if volume['status'] == 'error': raise exc.OpenStackCloudException( "Error in creating volume") return self._normalize_volume(volume) def set_volume_bootable(self, name_or_id, bootable=True): """Set a volume's bootable flag. :param name_or_id: Name, unique ID of the volume or a volume dict. :param bool bootable: Whether the volume should be bootable. (Defaults to True) :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ volume = self.get_volume(name_or_id) if not volume: raise exc.OpenStackCloudException( "Volume {name_or_id} does not exist".format( name_or_id=name_or_id)) self._volume_client.post( 'volumes/{id}/action'.format(id=volume['id']), json={'os-set_bootable': {'bootable': bootable}}, error_message="Error setting bootable on volume {volume}".format( volume=volume['id']) ) def delete_volume(self, name_or_id=None, wait=True, timeout=None, force=False): """Delete a volume. :param name_or_id: Name or unique ID of the volume. :param wait: If true, waits for volume to be deleted. :param timeout: Seconds to wait for volume deletion. None is forever. :param force: Force delete volume even if the volume is in deleting or error_deleting state. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ self.list_volumes.invalidate(self) volume = self.get_volume(name_or_id) if not volume: self.log.debug( "Volume %(name_or_id)s does not exist", {'name_or_id': name_or_id}, exc_info=True) return False with _utils.shade_exceptions("Error in deleting volume"): try: if force: self._volume_client.post( 'volumes/{id}/action'.format(id=volume['id']), json={'os-force_delete': None}) else: self._volume_client.delete( 'volumes/{id}'.format(id=volume['id'])) except exc.OpenStackCloudURINotFound: self.log.debug( "Volume {id} not found when deleting. 
Ignoring.".format( id=volume['id'])) return False self.list_volumes.invalidate(self) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume to be deleted."): if not self.get_volume(volume['id']): break return True def get_volumes(self, server, cache=True): volumes = [] for volume in self.list_volumes(cache=cache): for attach in volume['attachments']: if attach['server_id'] == server['id']: volumes.append(volume) return volumes def get_volume_limits(self, name_or_id=None): """ Get volume limits for a project :param name_or_id: (optional) project name or ID to get limits for if different from the current project :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the limits """ params = {} project_id = None error_msg = "Failed to get limits" if name_or_id: proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") project_id = proj.id params['tenant_id'] = project_id error_msg = "{msg} for the project: {project} ".format( msg=error_msg, project=name_or_id) data = self._volume_client.get('/limits', params=params) limits = self._get_and_munchify('limits', data) return limits def get_volume_id(self, name_or_id): volume = self.get_volume(name_or_id) if volume: return volume['id'] return None def volume_exists(self, name_or_id): return self.get_volume(name_or_id) is not None def get_volume_attach_device(self, volume, server_id): """Return the device name a volume is attached to for a server. This can also be used to verify if a volume is attached to a particular server. :param volume: Volume dict :param server_id: ID of server to check :returns: Device name if attached, None if volume is not attached. """ for attach in volume['attachments']: if server_id == attach['server_id']: return attach['device'] return None def detach_volume(self, server, volume, wait=True, timeout=None): """Detach a volume from a server. :param server: The server dict to detach from. :param volume: The volume dict to detach. :param wait: If true, waits for volume to be detached. :param timeout: Seconds to wait for volume detachment. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ self._compute_client.delete( '/servers/{server_id}/os-volume_attachments/{volume_id}'.format( server_id=server['id'], volume_id=volume['id']), error_message=( "Error detaching volume {volume} from server {server}".format( volume=volume['id'], server=server['id']))) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for volume %s to detach." % volume['id']): try: vol = self.get_volume(volume['id']) except Exception: self.log.debug( "Error getting volume info %s", volume['id'], exc_info=True) continue if vol['status'] == 'available': return if vol['status'] == 'error': raise exc.OpenStackCloudException( "Error in detaching volume %s" % volume['id'] ) def attach_volume(self, server, volume, device=None, wait=True, timeout=None): """Attach a volume to a server. This will attach a volume, described by the passed in volume dict (as returned by get_volume()), to the server described by the passed in server dict (as returned by get_server()) on the named device on the server. If the volume is already attached to the server, or generally not available, then an exception is raised. To re-attach to a server, but under a different device, the user must detach it first. :param server: The server dict to attach to. 
:param volume: The volume dict to attach. :param device: The device name where the volume will attach. :param wait: If true, waits for volume to be attached. :param timeout: Seconds to wait for volume attachment. None is forever. :returns: a volume attachment object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ dev = self.get_volume_attach_device(volume, server['id']) if dev: raise exc.OpenStackCloudException( "Volume %s already attached to server %s on device %s" % (volume['id'], server['id'], dev) ) if volume['status'] != 'available': raise exc.OpenStackCloudException( "Volume %s is not available. Status is '%s'" % (volume['id'], volume['status']) ) payload = {'volumeId': volume['id']} if device: payload['device'] = device data = self._compute_client.post( '/servers/{server_id}/os-volume_attachments'.format( server_id=server['id']), json=dict(volumeAttachment=payload), error_message="Error attaching volume {volume_id} to server " "{server_id}".format(volume_id=volume['id'], server_id=server['id'])) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for volume %s to attach." % volume['id']): try: self.list_volumes.invalidate(self) vol = self.get_volume(volume['id']) except Exception: self.log.debug( "Error getting volume info %s", volume['id'], exc_info=True) continue if self.get_volume_attach_device(vol, server['id']): break # TODO(Shrews) check to see if a volume can be in error status # and also attached. If so, we should move this # above the get_volume_attach_device call if vol['status'] == 'error': raise exc.OpenStackCloudException( "Error in attaching volume %s" % volume['id'] ) return self._normalize_volume_attachment( self._get_and_munchify('volumeAttachment', data)) def _get_volume_kwargs(self, kwargs): name = kwargs.pop('name', kwargs.pop('display_name', None)) description = kwargs.pop('description', kwargs.pop('display_description', None)) if name: if self._is_client_version('volume', 2): kwargs['name'] = name else: kwargs['display_name'] = name if description: if self._is_client_version('volume', 2): kwargs['description'] = description else: kwargs['display_description'] = description return kwargs @_utils.valid_kwargs('name', 'display_name', 'description', 'display_description') def create_volume_snapshot(self, volume_id, force=False, wait=True, timeout=None, **kwargs): """Create a volume snapshot. :param volume_id: the ID of the volume to snapshot. :param force: If set to True the snapshot will be created even if the volume is attached to an instance; if False it will not :param name: name of the snapshot, one will be generated if one is not provided :param description: description of the snapshot, one will be generated if one is not provided :param wait: If true, waits for volume snapshot to be created. :param timeout: Seconds to wait for volume snapshot creation. None is forever. :returns: The created volume snapshot object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error.
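
        Example (an illustrative sketch, not executed here; the cloud
        name 'mycloud' and the volume name 'db-data' are placeholder
        values)::

            import shade

            # Assumes 'mycloud' is defined in your clouds.yaml
            cloud = shade.openstack_cloud(cloud='mycloud')
            # Assumes a volume named 'db-data' already exists
            volume = cloud.get_volume('db-data')
            snapshot = cloud.create_volume_snapshot(
                volume['id'], name='db-data-snap', wait=True, timeout=300)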
""" kwargs = self._get_volume_kwargs(kwargs) payload = {'volume_id': volume_id, 'force': force} payload.update(kwargs) data = self._volume_client.post( '/snapshots', json=dict(snapshot=payload), error_message="Error creating snapshot of volume " "{volume_id}".format(volume_id=volume_id)) snapshot = self._get_and_munchify('snapshot', data) if wait: snapshot_id = snapshot['id'] for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume snapshot to be available." ): snapshot = self.get_volume_snapshot_by_id(snapshot_id) if snapshot['status'] == 'available': break if snapshot['status'] == 'error': raise exc.OpenStackCloudException( "Error in creating volume snapshot") # TODO(mordred) need to normalize snapshots. We were normalizing them # as volumes, which is an error. They need to be normalized as # volume snapshots, which are completely different objects return snapshot def get_volume_snapshot_by_id(self, snapshot_id): """Takes a snapshot_id and gets a dict of the snapshot that maches that ID. Note: This is more efficient than get_volume_snapshot. param: snapshot_id: ID of the volume snapshot. """ data = self._volume_client.get( '/snapshots/{snapshot_id}'.format(snapshot_id=snapshot_id), error_message="Error getting snapshot " "{snapshot_id}".format(snapshot_id=snapshot_id)) return self._normalize_volume( self._get_and_munchify('snapshot', data)) def get_volume_snapshot(self, name_or_id, filters=None): """Get a volume by name or ID. :param name_or_id: Name or ID of the volume snapshot. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A volume ``munch.Munch`` or None if no matching volume is found. """ return _utils._get_entity(self, 'volume_snapshot', name_or_id, filters) def create_volume_backup(self, volume_id, name=None, description=None, force=False, wait=True, timeout=None): """Create a volume backup. :param volume_id: the ID of the volume to backup. :param name: name of the backup, one will be generated if one is not provided :param description: description of the backup, one will be generated if one is not provided :param force: If set to True the backup will be created even if the volume is attached to an instance, if False it will not :param wait: If true, waits for volume backup to be created. :param timeout: Seconds to wait for volume backup creation. None is forever. :returns: The created volume backup object. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
""" payload = { 'name': name, 'volume_id': volume_id, 'description': description, 'force': force, } data = self._volume_client.post( '/backups', json=dict(backup=payload), error_message="Error creating backup of volume " "{volume_id}".format(volume_id=volume_id)) backup = self._get_and_munchify('backup', data) if wait: backup_id = backup['id'] msg = ("Timeout waiting for the volume backup {} to be " "available".format(backup_id)) for _ in _utils._iterate_timeout(timeout, msg): backup = self.get_volume_backup(backup_id) if backup['status'] == 'available': break if backup['status'] == 'error': raise exc.OpenStackCloudException( "Error in creating volume backup {id}".format( id=backup_id)) return backup def get_volume_backup(self, name_or_id, filters=None): """Get a volume backup by name or ID. :returns: A backup ``munch.Munch`` or None if no matching backup is found. """ return _utils._get_entity(self, 'volume_backup', name_or_id, filters) def list_volume_snapshots(self, detailed=True, search_opts=None): """List all volume snapshots. :returns: A list of volume snapshots ``munch.Munch``. """ endpoint = '/snapshots/detail' if detailed else '/snapshots' data = self._volume_client.get( endpoint, params=search_opts, error_message="Error getting a list of snapshots") return self._get_and_munchify('snapshots', data) def list_volume_backups(self, detailed=True, search_opts=None): """ List all volume backups. :param bool detailed: Also list details for each entry :param dict search_opts: Search options A dictionary of meta data to use for further filtering. Example:: { 'name': 'my-volume-backup', 'status': 'available', 'volume_id': 'e126044c-7b4c-43be-a32a-c9cbbc9ddb56', 'all_tenants': 1 } :returns: A list of volume backups ``munch.Munch``. """ endpoint = '/backups/detail' if detailed else '/backups' data = self._volume_client.get( endpoint, params=search_opts, error_message="Error getting a list of backups") return self._get_and_munchify('backups', data) def delete_volume_backup(self, name_or_id=None, force=False, wait=False, timeout=None): """Delete a volume backup. :param name_or_id: Name or unique ID of the volume backup. :param force: Allow delete in state other than error or available. :param wait: If true, waits for volume backup to be deleted. :param timeout: Seconds to wait for volume backup deletion. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. """ volume_backup = self.get_volume_backup(name_or_id) if not volume_backup: return False msg = "Error in deleting volume backup" if force: self._volume_client.post( '/backups/{backup_id}/action'.format( backup_id=volume_backup['id']), json={'os-force_delete': None}, error_message=msg) else: self._volume_client.delete( '/backups/{backup_id}'.format( backup_id=volume_backup['id']), error_message=msg) if wait: msg = "Timeout waiting for the volume backup to be deleted." for count in _utils._iterate_timeout(timeout, msg): if not self.get_volume_backup(volume_backup['id']): break return True def delete_volume_snapshot(self, name_or_id=None, wait=False, timeout=None): """Delete a volume snapshot. :param name_or_id: Name or unique ID of the volume snapshot. :param wait: If true, waits for volume snapshot to be deleted. :param timeout: Seconds to wait for volume snapshot deletion. None is forever. :raises: OpenStackCloudTimeout if wait time exceeded. :raises: OpenStackCloudException on operation error. 
""" volumesnapshot = self.get_volume_snapshot(name_or_id) if not volumesnapshot: return False self._volume_client.delete( '/snapshots/{snapshot_id}'.format( snapshot_id=volumesnapshot['id']), error_message="Error in deleting volume snapshot") if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the volume snapshot to be deleted."): if not self.get_volume_snapshot(volumesnapshot['id']): break return True def get_server_id(self, name_or_id): server = self.get_server(name_or_id, bare=True) if server: return server['id'] return None def get_server_private_ip(self, server): return meta.get_server_private_ip(server, self) def get_server_public_ip(self, server): return meta.get_server_external_ipv4(self, server) def get_server_meta(self, server): # TODO(mordred) remove once ansible has moved to Inventory interface server_vars = meta.get_hostvars_from_server(self, server) groups = meta.get_groups_from_server(self, server, server_vars) return dict(server_vars=server_vars, groups=groups) def get_openstack_vars(self, server): return meta.get_hostvars_from_server(self, server) def _expand_server_vars(self, server): # Used by nodepool # TODO(mordred) remove after these make it into what we # actually want the API to be. return meta.expand_server_vars(self, server) def _find_floating_network_by_router(self): """Find the network providing floating ips by looking at routers.""" if self._floating_network_by_router_lock.acquire( not self._floating_network_by_router_run): if self._floating_network_by_router_run: self._floating_network_by_router_lock.release() return self._floating_network_by_router try: for router in self.list_routers(): if router['admin_state_up']: network_id = router.get( 'external_gateway_info', {}).get('network_id') if network_id: self._floating_network_by_router = network_id finally: self._floating_network_by_router_run = True self._floating_network_by_router_lock.release() return self._floating_network_by_router def available_floating_ip(self, network=None, server=None): """Get a floating IP from a network or a pool. Return the first available floating IP or allocate a new one. :param network: Name or ID of the network. :param server: Server the IP is for if known :returns: a (normalized) structure with a floating IP address description. """ if self._use_neutron_floating(): try: f_ips = self._normalize_floating_ips( self._neutron_available_floating_ips( network=network, server=server)) return f_ips[0] except exc.OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova f_ips = self._normalize_floating_ips( self._nova_available_floating_ips(pool=network) ) return f_ips[0] def _get_floating_network_id(self): # Get first existing external IPv4 network networks = self.get_external_ipv4_floating_networks() if networks: floating_network_id = networks[0]['id'] else: floating_network = self._find_floating_network_by_router() if floating_network: floating_network_id = floating_network else: raise exc.OpenStackCloudResourceNotFound( "unable to find an external network") return floating_network_id def _neutron_available_floating_ips( self, network=None, project_id=None, server=None): """Get a floating IP from a network. Return a list of available floating IPs or allocate a new one and return it in a list of 1 element. :param network: A single network name or ID, or a list of them. 
:param server: (optional) Server the floating IP is for :returns: a list of floating IP addresses. :raises: ``OpenStackCloudResourceNotFound``, if an external network that meets the specified criteria cannot be found. """ if project_id is None: # Make sure we are only listing floatingIPs allocated to the # current tenant. This is the default behaviour of Nova project_id = self.current_project_id if network: if isinstance(network, six.string_types): network = [network] # Use given list to get first matching external network floating_network_id = None for net in network: for ext_net in self.get_external_ipv4_floating_networks(): if net in (ext_net['name'], ext_net['id']): floating_network_id = ext_net['id'] break if floating_network_id: break if floating_network_id is None: raise exc.OpenStackCloudResourceNotFound( "unable to find external network {net}".format( net=network) ) else: floating_network_id = self._get_floating_network_id() filters = { 'port': None, 'network': floating_network_id, 'location': {'project': {'id': project_id}}, } floating_ips = self._list_floating_ips() available_ips = _utils._filter_list( floating_ips, name_or_id=None, filters=filters) if available_ips: return available_ips # No available IP found or we did not try. # Allocate a new floating IP f_ip = self._neutron_create_floating_ip( network_id=floating_network_id, server=server) return [f_ip] def _nova_available_floating_ips(self, pool=None): """Get available floating IPs from a floating IP pool. Return a list of available floating IPs or allocate a new one and return it in a list of 1 element. :param pool: Nova floating IP pool name. :returns: a list of floating IP addresses. :raises: ``OpenStackCloudResourceNotFound``, if a floating IP pool is not specified and cannot be found. """ with _utils.shade_exceptions( "Unable to create floating IP in pool {pool}".format( pool=pool)): if pool is None: pools = self.list_floating_ip_pools() if not pools: raise exc.OpenStackCloudResourceNotFound( "unable to find a floating ip pool") pool = pools[0]['name'] filters = { 'instance_id': None, 'pool': pool } floating_ips = self._nova_list_floating_ips() available_ips = _utils._filter_list( floating_ips, name_or_id=None, filters=filters) if available_ips: return available_ips # No available IP found or we did not try. # Allocate a new Floating IP f_ip = self._nova_create_floating_ip(pool=pool) return [f_ip] def create_floating_ip(self, network=None, server=None, fixed_address=None, nat_destination=None, port=None, wait=False, timeout=60): """Allocate a new floating IP from a network or a pool. :param network: Name or ID of the network that the floating IP should come from. :param server: (optional) Server dict for the server to create the IP for and to which it should be attached. :param fixed_address: (optional) Fixed IP to attach the floating ip to. :param nat_destination: (optional) Name or ID of the network that the fixed IP to attach the floating IP to should be on. :param port: (optional) The port ID that the floating IP should be attached to. Specifying a port conflicts with specifying a server, fixed_address or nat_destination. :param wait: (optional) Whether to wait for the IP to be active. Defaults to False. Only applies if a server is provided. :param timeout: (optional) How long to wait for the IP to be active. Defaults to 60. Only applies if a server is provided. :returns: a floating IP address :raises: ``OpenStackCloudException``, on operation error.
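
        Example (illustrative sketch; 'public' and 'web1' are
        placeholder names for an external network and an existing
        server)::

            import shade

            cloud = shade.openstack_cloud(cloud='mycloud')
            server = cloud.get_server('web1')
            # Passing the server lets the cloud attach the new IP to it
            f_ip = cloud.create_floating_ip(
                network='public', server=server, wait=True, timeout=120)
            print(f_ip['floating_ip_address'])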
""" if self._use_neutron_floating(): try: return self._neutron_create_floating_ip( network_name_or_id=network, server=server, fixed_address=fixed_address, nat_destination=nat_destination, port=port, wait=wait, timeout=timeout) except exc.OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova if port: raise exc.OpenStackCloudException( "This cloud uses nova-network which does not support" " arbitrary floating-ip/port mappings. Please nudge" " your cloud provider to upgrade the networking stack" " to neutron, or alternately provide the server," " fixed_address and nat_destination arguments as appropriate") # Else, we are using Nova network f_ips = self._normalize_floating_ips( [self._nova_create_floating_ip(pool=network)]) return f_ips[0] def _submit_create_fip(self, kwargs): # Split into a method to aid in test mocking data = self._network_client.post( "/floatingips.json", json={"floatingip": kwargs}) return self._normalize_floating_ip( self._get_and_munchify('floatingip', data)) def _neutron_create_floating_ip( self, network_name_or_id=None, server=None, fixed_address=None, nat_destination=None, port=None, wait=False, timeout=60, network_id=None): if not network_id: if network_name_or_id: network = self.get_network(network_name_or_id) if not network: raise exc.OpenStackCloudResourceNotFound( "unable to find network for floating ips with ID " "{0}".format(network_name_or_id)) network_id = network['id'] else: network_id = self._get_floating_network_id() kwargs = { 'floating_network_id': network_id, } if not port: if server: (port_obj, fixed_ip_address) = self._nat_destination_port( server, fixed_address=fixed_address, nat_destination=nat_destination) if port_obj: port = port_obj['id'] if fixed_ip_address: kwargs['fixed_ip_address'] = fixed_ip_address if port: kwargs['port_id'] = port fip = self._submit_create_fip(kwargs) fip_id = fip['id'] if port: # The FIP is only going to become active in this context # when we've attached it to something, which only occurs # if we've provided a port as a parameter if wait: try: for count in _utils._iterate_timeout( timeout, "Timeout waiting for the floating IP" " to be ACTIVE", wait=self._FLOAT_AGE): fip = self.get_floating_ip(fip_id) if fip and fip['status'] == 'ACTIVE': break except exc.OpenStackCloudTimeout: self.log.error( "Timed out on floating ip %(fip)s becoming active." 
" Deleting", {'fip': fip_id}) try: self.delete_floating_ip(fip_id) except Exception as e: self.log.error( "FIP LEAK: Attempted to delete floating ip " "%(fip)s but received %(exc)s exception: " "%(err)s", {'fip': fip_id, 'exc': e.__class__, 'err': str(e)}) raise if fip['port_id'] != port: if server: raise exc.OpenStackCloudException( "Attempted to create FIP on port {port} for server" " {server} but FIP has port {port_id}".format( port=port, port_id=fip['port_id'], server=server['id'])) else: raise exc.OpenStackCloudException( "Attempted to create FIP on port {port}" " but something went wrong".format(port=port)) return fip def _nova_create_floating_ip(self, pool=None): with _utils.shade_exceptions( "Unable to create floating IP in pool {pool}".format( pool=pool)): if pool is None: pools = self.list_floating_ip_pools() if not pools: raise exc.OpenStackCloudResourceNotFound( "unable to find a floating ip pool") pool = pools[0]['name'] data = self._compute_client.post( '/os-floating-ips', json=dict(pool=pool)) pool_ip = self._get_and_munchify('floating_ip', data) # TODO(mordred) Remove this - it's just for compat data = self._compute_client.get('/os-floating-ips/{id}'.format( id=pool_ip['id'])) return self._get_and_munchify('floating_ip', data) def delete_floating_ip(self, floating_ip_id, retry=1): """Deallocate a floating IP from a project. :param floating_ip_id: a floating IP address ID. :param retry: number of times to retry. Optional, defaults to 1, which is in addition to the initial delete call. A value of 0 will also cause no checking of results to occur. :returns: True if the IP address has been deleted, False if the IP address was not found. :raises: ``OpenStackCloudException``, on operation error. """ for count in range(0, max(0, retry) + 1): result = self._delete_floating_ip(floating_ip_id) if (retry == 0) or not result: return result # Wait for the cached floating ip list to be regenerated if self._FLOAT_AGE: time.sleep(self._FLOAT_AGE) # neutron sometimes returns success when deleting a floating # ip. That's awesome. SO - verify that the delete actually # worked. Some clouds will set the status to DOWN rather than # deleting the IP immediately. This is, of course, a bit absurd. f_ip = self.get_floating_ip(id=floating_ip_id) if not f_ip or f_ip['status'] == 'DOWN': return True raise exc.OpenStackCloudException( "Attempted to delete Floating IP {ip} with ID {id} a total of" " {retry} times. Although the cloud did not indicate any errors" " the floating ip is still in existence. Aborting further" " operations.".format( id=floating_ip_id, ip=f_ip['floating_ip_address'], retry=retry + 1)) def _delete_floating_ip(self, floating_ip_id): if self._use_neutron_floating(): try: return self._neutron_delete_floating_ip(floating_ip_id) except exc.OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) return self._nova_delete_floating_ip(floating_ip_id) def _neutron_delete_floating_ip(self, floating_ip_id): try: self._network_client.delete( "/floatingips/{fip_id}.json".format(fip_id=floating_ip_id), error_message="unable to delete floating IP") except exc.OpenStackCloudResourceNotFound: return False except Exception as e: raise exc.OpenStackCloudException( "Unable to delete floating IP ID {fip_id}: {msg}".format( fip_id=floating_ip_id, msg=str(e))) return True def _nova_delete_floating_ip(self, floating_ip_id): try: self._compute_client.delete( '/os-floating-ips/{id}'.format(id=floating_ip_id), error_message='Unable to delete floating IP {fip_id}'.format( fip_id=floating_ip_id)) except exc.OpenStackCloudURINotFound: return False return True def delete_unattached_floating_ips(self, retry=1): """Safely delete unattached floating ips. If the cloud can safely purge any unattached floating ips without race conditions, do so. Safely here means a specific thing. It means that you are not running this while another process that might do a two step create/attach is running. You can safely run this method while another process is creating servers and attaching floating IPs to them if either that process is using add_auto_ip from shade, or is creating the floating IPs by passing in a server to the create_floating_ip call. :param retry: number of times to retry. Optional, defaults to 1, which is in addition to the initial delete call. A value of 0 will also cause no checking of results to occur. :returns: True if Floating IPs have been deleted, False if not :raises: ``OpenStackCloudException``, on operation error. """ processed = [] if self._use_neutron_floating(): for ip in self.list_floating_ips(): if not ip['attached']: processed.append(self.delete_floating_ip( floating_ip_id=ip['id'], retry=retry)) return all(processed) if processed else False def _attach_ip_to_server( self, server, floating_ip, fixed_address=None, wait=False, timeout=60, skip_attach=False, nat_destination=None): """Attach a floating IP to a server. :param server: Server dict :param floating_ip: Floating IP dict to attach :param fixed_address: (optional) fixed address to which attach the floating IP to. :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param skip_attach: (optional) Skip the actual attach and just do the wait. Defaults to False. :param nat_destination: The fixed network the server's port for the FIP to attach to will come from. :returns: The server ``munch.Munch`` :raises: OpenStackCloudException, on operation error. """ # Short circuit if we're asking to attach an IP that's already # attached ext_ip = meta.get_server_ip(server, ext_tag='floating', public=True) if ext_ip == floating_ip['floating_ip_address']: return server if self._use_neutron_floating(): if not skip_attach: try: self._neutron_attach_ip_to_server( server=server, floating_ip=floating_ip, fixed_address=fixed_address, nat_destination=nat_destination) except exc.OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova else: # Nova network self._nova_attach_ip_to_server( server_id=server['id'], floating_ip_id=floating_ip['id'], fixed_address=fixed_address) if wait: # Wait for the address to be assigned to the server server_id = server['id'] for _ in _utils._iterate_timeout( timeout, "Timeout waiting for the floating IP to be attached.", wait=self._SERVER_AGE): server = self.get_server(server_id) ext_ip = meta.get_server_ip( server, ext_tag='floating', public=True) if ext_ip == floating_ip['floating_ip_address']: return server return server def _nat_destination_port( self, server, fixed_address=None, nat_destination=None): """Returns server port that is on a nat_destination network Find a port attached to the server which is on a network which has a subnet which can be the destination of NAT. Such a network is referred to in shade as a "nat_destination" network. So this then is a function which returns a port on such a network that is associated with the given server. :param server: Server dict. :param fixed_address: Fixed ip address of the port :param nat_destination: Name or ID of the network of the port. """ # If we are caching port lists, we may not find the port for # our server if the list is old. Try for at least 2 cache # periods if that is the case. if self._PORT_AGE: timeout = self._PORT_AGE * 2 else: timeout = None for count in _utils._iterate_timeout( timeout, "Timeout waiting for port to show up in list", wait=self._PORT_AGE): try: port_filter = {'device_id': server['id']} ports = self.search_ports(filters=port_filter) break except exc.OpenStackCloudTimeout: ports = None if not ports: return (None, None) port = None if not fixed_address: if len(ports) > 1: if nat_destination: nat_network = self.get_network(nat_destination) if not nat_network: raise exc.OpenStackCloudException( 'NAT Destination {nat_destination} was configured' ' but not found on the cloud. Please check your' ' config and your cloud and try again.'.format( nat_destination=nat_destination)) else: nat_network = self.get_nat_destination() if not nat_network: raise exc.OpenStackCloudException( 'Multiple ports were found for server {server}' ' but none of the networks are a valid NAT' ' destination, so it is impossible to add a' ' floating IP. If you have a network that is a valid' ' destination for NAT and we could not find it,' ' please file a bug. But also configure the' ' nat_destination property of the networks list in' ' your clouds.yaml file. If you do not have a' ' clouds.yaml file, please make one - your setup' ' is complicated.'.format(server=server['id'])) maybe_ports = [] for maybe_port in ports: if maybe_port['network_id'] == nat_network['id']: maybe_ports.append(maybe_port) if not maybe_ports: raise exc.OpenStackCloudException( 'No port on server {server} was found matching' ' your NAT destination network {dest}. Please ' ' check your config'.format( server=server['id'], dest=nat_network['name'])) ports = maybe_ports # Select the most recent available IPv4 address # To do this, sort the ports in reverse order by the created_at # field which is a string containing an ISO DateTime (which # thankfully sort properly) This way the most recent port created, # if there are more than one, will be the arbitrary port we # select. 
for port in sorted( ports, key=lambda p: p.get('created_at', 0), reverse=True): for address in port.get('fixed_ips', list()): try: ip = ipaddress.ip_address(address['ip_address']) except Exception: continue if ip.version == 4: fixed_address = address['ip_address'] return port, fixed_address raise exc.OpenStackCloudException( "unable to find a free fixed IPv4 address for server " "{0}".format(server['id'])) # unfortunately a port can have more than one fixed IP: # we can't use the search_ports filtering for fixed_address as # they are contained in a list. e.g. # # "fixed_ips": [ # { # "subnet_id": "008ba151-0b8c-4a67-98b5-0d2b87666062", # "ip_address": "172.24.4.2" # } # ] # # Search fixed_address for p in ports: for fixed_ip in p['fixed_ips']: if fixed_address == fixed_ip['ip_address']: return (p, fixed_address) return (None, None) def _neutron_attach_ip_to_server( self, server, floating_ip, fixed_address=None, nat_destination=None): # Find an available port (port, fixed_address) = self._nat_destination_port( server, fixed_address=fixed_address, nat_destination=nat_destination) if not port: raise exc.OpenStackCloudException( "unable to find a port for server {0}".format( server['id'])) floating_ip_args = {'port_id': port['id']} if fixed_address is not None: floating_ip_args['fixed_ip_address'] = fixed_address return self._network_client.put( "/floatingips/{fip_id}.json".format(fip_id=floating_ip['id']), json={'floatingip': floating_ip_args}, error_message=("Error attaching IP {ip} to " "server {server_id}".format( ip=floating_ip['id'], server_id=server['id']))) def _nova_attach_ip_to_server(self, server_id, floating_ip_id, fixed_address=None): f_ip = self.get_floating_ip( id=floating_ip_id) if f_ip is None: raise exc.OpenStackCloudException( "unable to find floating IP {0}".format(floating_ip_id)) error_message = "Error attaching IP {ip} to instance {id}".format( ip=floating_ip_id, id=server_id) body = { 'address': f_ip['floating_ip_address'] } if fixed_address: body['fixed_address'] = fixed_address return self._compute_client.post( '/servers/{server_id}/action'.format(server_id=server_id), json=dict(addFloatingIp=body), error_message=error_message) def detach_ip_from_server(self, server_id, floating_ip_id): """Detach a floating IP from a server. :param server_id: ID of a server. :param floating_ip_id: Id of the floating IP to detach. :returns: True if the IP has been detached, or False if the IP wasn't attached to any server. :raises: ``OpenStackCloudException``, on operation error. """ if self._use_neutron_floating(): try: return self._neutron_detach_ip_from_server( server_id=server_id, floating_ip_id=floating_ip_id) except exc.OpenStackCloudURINotFound as e: self.log.debug( "Something went wrong talking to neutron API: " "'%(msg)s'. 
Trying with Nova.", {'msg': str(e)}) # Fall-through, trying with Nova # Nova network self._nova_detach_ip_from_server( server_id=server_id, floating_ip_id=floating_ip_id) def _neutron_detach_ip_from_server(self, server_id, floating_ip_id): f_ip = self.get_floating_ip(id=floating_ip_id) if f_ip is None or not f_ip['attached']: return False self._network_client.put( "/floatingips/{fip_id}.json".format(fip_id=floating_ip_id), json={"floatingip": {"port_id": None}}, error_message=("Error detaching IP {ip} from " "server {server_id}".format( ip=floating_ip_id, server_id=server_id))) return True def _nova_detach_ip_from_server(self, server_id, floating_ip_id): f_ip = self.get_floating_ip(id=floating_ip_id) if f_ip is None: raise exc.OpenStackCloudException( "unable to find floating IP {0}".format(floating_ip_id)) error_message = "Error detaching IP {ip} from instance {id}".format( ip=floating_ip_id, id=server_id) return self._compute_client.post( '/servers/{server_id}/action'.format(server_id=server_id), json=dict(removeFloatingIp=dict( address=f_ip['floating_ip_address'])), error_message=error_message) return True def _add_ip_from_pool( self, server, network, fixed_address=None, reuse=True, wait=False, timeout=60, nat_destination=None): """Add a floating IP to a server from a given pool This method reuses available IPs, when possible, or allocate new IPs to the current tenant. The floating IP is attached to the given fixed address or to the first server port/fixed address :param server: Server dict :param network: Name or ID of the network. :param fixed_address: a fixed address :param reuse: Try to reuse existing ips. Defaults to True. :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param nat_destination: (optional) the name of the network of the port to associate with the floating ip. :returns: the updated server ``munch.Munch`` """ if reuse: f_ip = self.available_floating_ip(network=network) else: start_time = time.time() f_ip = self.create_floating_ip( server=server, network=network, nat_destination=nat_destination, wait=wait, timeout=timeout) timeout = timeout - (time.time() - start_time) # Wait for cache invalidation time so that we don't try # to attach the FIP a second time below time.sleep(self._SERVER_AGE) server = self.get_server(server.id) # We run attach as a second call rather than in the create call # because there are code flows where we will not have an attached # FIP yet. However, even if it was attached in the create, we run # the attach function below to get back the server dict refreshed # with the FIP information. return self._attach_ip_to_server( server=server, floating_ip=f_ip, fixed_address=fixed_address, wait=wait, timeout=timeout, nat_destination=nat_destination) def add_ip_list( self, server, ips, wait=False, timeout=60, fixed_address=None): """Attach a list of IPs to a server. :param server: a server object :param ips: list of floating IP addresses or a single address :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param fixed_address: (optional) Fixed address of the server to attach the IP to :returns: The updated server ``munch.Munch`` :raises: ``OpenStackCloudException``, on operation error. 
""" if type(ips) == list: ip = ips[0] else: ip = ips f_ip = self.get_floating_ip( id=None, filters={'floating_ip_address': ip}) return self._attach_ip_to_server( server=server, floating_ip=f_ip, wait=wait, timeout=timeout, fixed_address=fixed_address) def add_auto_ip(self, server, wait=False, timeout=60, reuse=True): """Add a floating IP to a server. This method is intended for basic usage. For advanced network architecture (e.g. multiple external networks or servers with multiple interfaces), use other floating IP methods. This method can reuse available IPs, or allocate new IPs to the current project. :param server: a server dictionary. :param reuse: Whether or not to attempt to reuse IPs, defaults to True. :param wait: (optional) Wait for the address to appear as assigned to the server. Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 60. See the ``wait`` parameter. :param reuse: Try to reuse existing ips. Defaults to True. :returns: Floating IP address attached to server. """ server = self._add_auto_ip( server, wait=wait, timeout=timeout, reuse=reuse) return server['interface_ip'] or None def _add_auto_ip(self, server, wait=False, timeout=60, reuse=True): skip_attach = False created = False if reuse: f_ip = self.available_floating_ip() else: start_time = time.time() f_ip = self.create_floating_ip( server=server, wait=wait, timeout=timeout) timeout = timeout - (time.time() - start_time) if server: # This gets passed in for both nova and neutron # but is only meaningful for the neutron logic branch skip_attach = True created = True try: # We run attach as a second call rather than in the create call # because there are code flows where we will not have an attached # FIP yet. However, even if it was attached in the create, we run # the attach function below to get back the server dict refreshed # with the FIP information. return self._attach_ip_to_server( server=server, floating_ip=f_ip, wait=wait, timeout=timeout, skip_attach=skip_attach) except exc.OpenStackCloudTimeout: if self._use_neutron_floating() and created: # We are here because we created an IP on the port # It failed. Delete so as not to leak an unmanaged # resource self.log.error( "Timeout waiting for floating IP to become" " active. Floating IP %(ip)s:%(id)s was created for" " server %(server)s but is being deleted due to" " activation failure.", { 'ip': f_ip['floating_ip_address'], 'id': f_ip['id'], 'server': server['id']}) try: self.delete_floating_ip(f_ip['id']) except Exception as e: self.log.error( "FIP LEAK: Attempted to delete floating ip " "%(fip)s but received %(exc)s exception: %(err)s", {'fip': f_ip['id'], 'exc': e.__class__, 'err': str(e)}) raise e raise def add_ips_to_server( self, server, auto_ip=True, ips=None, ip_pool=None, wait=False, timeout=60, reuse=True, fixed_address=None, nat_destination=None): if ip_pool: server = self._add_ip_from_pool( server, ip_pool, reuse=reuse, wait=wait, timeout=timeout, fixed_address=fixed_address, nat_destination=nat_destination) elif ips: server = self.add_ip_list( server, ips, wait=wait, timeout=timeout, fixed_address=fixed_address) elif auto_ip: if self._needs_floating_ip(server, nat_destination): server = self._add_auto_ip( server, wait=wait, timeout=timeout, reuse=reuse) return server def _needs_floating_ip(self, server, nat_destination): """Figure out if auto_ip should add a floating ip to this server. If the server has a public_v4 it does not need a floating ip. If the server does not have a private_v4 it does not need a floating ip. 
If self.private then the server does not need a floating ip. If the cloud runs nova, and the server has a private_v4 and not a public_v4, then the server needs a floating ip. If the server has a private_v4 and no public_v4 and the cloud has a network from which floating IPs come that is connected via a router to the network from which the private_v4 address came, then the server needs a floating ip. If the server has a private_v4 and no public_v4 and the cloud does not have a network from which floating ips come, or it has one but that network is not connected to the network from which the server's private_v4 address came via a router, then the server does not need a floating ip. """ if not self._has_floating_ips(): return False if server['public_v4']: return False if not server['private_v4']: return False if self.private: return False if not self.has_service('network'): return True # No floating ip network - no FIPs try: self._get_floating_network_id() except exc.OpenStackCloudException: return False (port_obj, fixed_ip_address) = self._nat_destination_port( server, nat_destination=nat_destination) if not port_obj or not fixed_ip_address: return False return True def _get_boot_from_volume_kwargs( self, image, boot_from_volume, boot_volume, volume_size, terminate_volume, volumes, kwargs): """Return block device mappings :param image: Image dict, name or id to boot with. """ # TODO(mordred) We're only testing this in functional tests. We need # to add unit tests for this too. if boot_volume or boot_from_volume or volumes: kwargs.setdefault('block_device_mapping_v2', []) else: return kwargs # If we have boot_from_volume but no root volume, then we're # booting an image from volume if boot_volume: volume = self.get_volume(boot_volume) if not volume: raise exc.OpenStackCloudException( 'Volume {boot_volume} is not a valid volume' ' in {cloud}:{region}'.format( boot_volume=boot_volume, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '0', 'delete_on_termination': terminate_volume, 'destination_type': 'volume', 'uuid': volume['id'], 'source_type': 'volume', } kwargs['block_device_mapping_v2'].append(block_mapping) kwargs['imageRef'] = '' elif boot_from_volume: if isinstance(image, dict): image_obj = image else: image_obj = self.get_image(image) if not image_obj: raise exc.OpenStackCloudException( 'Image {image} is not a valid image in' ' {cloud}:{region}'.format( image=image, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '0', 'delete_on_termination': terminate_volume, 'destination_type': 'volume', 'uuid': image_obj['id'], 'source_type': 'image', 'volume_size': volume_size, } kwargs['imageRef'] = '' kwargs['block_device_mapping_v2'].append(block_mapping) if volumes and kwargs['imageRef']: # If we're attaching volumes on boot but booting from an image, # we need to specify that in the BDM. 
block_mapping = { u'boot_index': 0, u'delete_on_termination': True, u'destination_type': u'local', u'source_type': u'image', u'uuid': kwargs['imageRef'], } kwargs['block_device_mapping_v2'].append(block_mapping) for volume in volumes: volume_obj = self.get_volume(volume) if not volume_obj: raise exc.OpenStackCloudException( 'Volume {volume} is not a valid volume' ' in {cloud}:{region}'.format( volume=volume, cloud=self.name, region=self.region_name)) block_mapping = { 'boot_index': '-1', 'delete_on_termination': False, 'destination_type': 'volume', 'uuid': volume_obj['id'], 'source_type': 'volume', } kwargs['block_device_mapping_v2'].append(block_mapping) if boot_volume or boot_from_volume or volumes: self.list_volumes.invalidate(self) return kwargs def _encode_server_userdata(self, userdata): if hasattr(userdata, 'read'): userdata = userdata.read() # If the userdata passed in is bytes, just send it unmodified; # anything else must be a string we can encode to bytes if not isinstance(userdata, six.binary_type): if not isinstance(userdata, six.string_types): raise TypeError("%s can't be encoded" % type(userdata)) # If it's not bytes, make it bytes userdata = userdata.encode('utf-8', 'strict') # Once we have base64 bytes, make them into a utf-8 string for REST return base64.b64encode(userdata).decode('utf-8') @_utils.valid_kwargs( 'meta', 'files', 'userdata', 'reservation_id', 'return_raw', 'min_count', 'max_count', 'security_groups', 'key_name', 'availability_zone', 'block_device_mapping', 'block_device_mapping_v2', 'nics', 'scheduler_hints', 'config_drive', 'admin_pass', 'disk_config') def create_server( self, name, image=None, flavor=None, auto_ip=True, ips=None, ip_pool=None, root_volume=None, terminate_volume=False, wait=False, timeout=180, reuse_ips=True, network=None, boot_from_volume=False, volume_size='50', boot_volume=None, volumes=None, nat_destination=None, group=None, **kwargs): """Create a virtual server instance. :param name: Something to name the server. :param image: Image dict, name or ID to boot with. image is required unless boot_volume is given. :param flavor: Flavor dict, name or ID to boot onto. :param auto_ip: Whether to take actions to find a routable IP for the server. (defaults to True) :param ips: List of IPs to attach to the server (defaults to None) :param ip_pool: Name of the network or floating IP pool to get an address from. (defaults to None) :param root_volume: Name or ID of a volume to boot from (defaults to None - deprecated, use boot_volume) :param boot_volume: Name or ID of a volume to boot from (defaults to None) :param terminate_volume: If booting from a volume, whether it should be deleted when the server is destroyed. (defaults to False) :param volumes: (optional) A list of volumes to attach to the server :param meta: (optional) A dict of arbitrary key/value metadata to store for this server. Both keys and values must be <=255 characters. :param files: (optional, deprecated) A dict of files to overwrite on the server upon boot. Keys are file names (i.e. ``/etc/passwd``) and values are the file contents (either as a string or as a file-like object). A maximum of five entries is allowed, and each file must be 10k or less. :param reservation_id: a UUID for the set of servers being requested. :param min_count: (optional extension) The minimum number of servers to launch. :param max_count: (optional extension) The maximum number of servers to launch.
:param security_groups: A list of security group names :param userdata: user data to pass to be exposed by the metadata server; this can be a file-like object or a string. :param key_name: (optional extension) name of previously created keypair to inject into the instance. :param availability_zone: Name of the availability zone for instance placement. :param block_device_mapping: (optional) A dict of block device mappings for this server. :param block_device_mapping_v2: (optional) A dict of block device mappings for this server. :param nics: (optional extension) an ordered list of nics to be added to this server, with information about connected networks, fixed IPs, port etc. :param scheduler_hints: (optional extension) arbitrary key-value pairs specified by the client to help boot an instance :param config_drive: (optional extension) value for config drive either boolean, or volume-id :param disk_config: (optional extension) control how the disk is partitioned when the server is created. Possible values are 'AUTO' or 'MANUAL'. :param admin_pass: (optional extension) add a user supplied admin password. :param wait: (optional) Wait for the server to become active (and, if relevant, for an address to appear as assigned). Defaults to False. :param timeout: (optional) Seconds to wait, defaults to 180. See the ``wait`` parameter. :param reuse_ips: (optional) Whether to attempt to reuse pre-existing floating ips should a floating IP be needed (defaults to True) :param network: (optional) Network dict or name or ID to attach the server to. Mutually exclusive with the nics parameter. Can also be a list of network names or IDs or network dicts. :param boot_from_volume: Whether to boot from volume. 'boot_volume' implies True, but boot_from_volume=True with no boot_volume is valid and will create a volume from the image and use that. :param volume_size: When booting an image from volume, how big should the created volume be? Defaults to 50. :param nat_destination: Which network should a created floating IP be attached to, if it's not possible to infer from the cloud's configuration. (Optional, defaults to None) :param group: ServerGroup dict, name or id to boot the server in. If a group is provided in both scheduler_hints and in the group param, the group param will win. (Optional, defaults to None) :returns: A ``munch.Munch`` representing the created server. :raises: OpenStackCloudException on operation error. """ # TODO(shade) Image is optional but flavor is not - yet flavor comes # after image in the argument list. Doh.
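        # Illustrative caller-side sketch (comments only, not executed;
        # the cloud, image, flavor and network names are placeholders):
        #
        #     cloud = shade.openstack_cloud(cloud='mycloud')
        #     server = cloud.create_server(
        #         'web1', image='ubuntu', flavor='m1.small',
        #         network='private', auto_ip=True, wait=True)
        #
        # The validation below mirrors that contract: flavor is required,
        # and either an image or a boot_volume must be supplied.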
if not flavor: raise TypeError( "create_server() missing 1 required argument: 'flavor'") if not image and not boot_volume: raise TypeError( "create_server() requires either 'image' or 'boot_volume'") # TODO(mordred) Add support for description starting in 2.19 security_groups = kwargs.get('security_groups', []) if security_groups and not isinstance(kwargs['security_groups'], list): security_groups = [security_groups] if security_groups: kwargs['security_groups'] = [] for sec_group in security_groups: kwargs['security_groups'].append(dict(name=sec_group)) if 'userdata' in kwargs: user_data = kwargs.pop('userdata') if user_data: kwargs['user_data'] = self._encode_server_userdata(user_data) for (desired, given) in ( ('OS-DCF:diskConfig', 'disk_config'), ('config_drive', 'config_drive'), ('key_name', 'key_name'), ('metadata', 'meta'), ('adminPass', 'admin_pass')): value = kwargs.pop(given, None) if value: kwargs[desired] = value hints = kwargs.pop('scheduler_hints', {}) if group: group_obj = self.get_server_group(group) if not group_obj: raise exc.OpenStackCloudException( "Server Group {group} was requested but was not found" " on the cloud".format(group=group)) hints['group'] = group_obj['id'] kwargs.setdefault('max_count', 1) kwargs.setdefault('min_count', 1) if 'nics' in kwargs and not isinstance(kwargs['nics'], list): if isinstance(kwargs['nics'], dict): # Be nice and help the user out kwargs['nics'] = [kwargs['nics']] else: raise exc.OpenStackCloudException( 'nics parameter to create_server takes a list of dicts.' ' Got: {nics}'.format(nics=kwargs['nics'])) if network and ('nics' not in kwargs or not kwargs['nics']): nics = [] if not isinstance(network, list): network = [network] for net_name in network: if isinstance(net_name, dict) and 'id' in net_name: network_obj = net_name else: network_obj = self.get_network(name_or_id=net_name) if not network_obj: raise exc.OpenStackCloudException( 'Network {network} is not a valid network in' ' {cloud}:{region}'.format( network=net_name, cloud=self.name, region=self.region_name)) nics.append({'net-id': network_obj['id']}) kwargs['nics'] = nics if not network and ('nics' not in kwargs or not kwargs['nics']): default_network = self.get_default_network() if default_network: kwargs['nics'] = [{'net-id': default_network['id']}] networks = [] for nic in kwargs.pop('nics', []): net = {} if 'net-id' in nic: # TODO(mordred) Make sure this is in uuid format net['uuid'] = nic.pop('net-id') # If there's a net-id, ignore net-name nic.pop('net-name', None) elif 'net-name' in nic: nic_net = self.get_network(nic['net-name']) if not nic_net: raise exc.OpenStackCloudException( "Requested network {net} could not be found.".format( net=nic['net-name'])) net['uuid'] = nic_net['id'] for ip_key in ('v4-fixed-ip', 'v6-fixed-ip', 'fixed_ip'): fixed_ip = nic.pop(ip_key, None) if fixed_ip and net.get('fixed_ip'): raise exc.OpenStackCloudException( "Only one of v4-fixed-ip, v6-fixed-ip or fixed_ip" " may be given") if fixed_ip: net['fixed_ip'] = fixed_ip # TODO(mordred) Add support for tag if server supports microversion # 2.32-2.36 or >= 2.42 for key in ('port', 'port-id'): if key in nic: net['port'] = nic.pop(key) if nic: raise exc.OpenStackCloudException( "Additional unsupported keys given for server network" " creation: {keys}".format(keys=nic.keys())) networks.append(net) if networks: kwargs['networks'] = networks if image: if isinstance(image, dict): kwargs['imageRef'] = image['id'] else: kwargs['imageRef'] =
self.get_image(image).id if isinstance(flavor, dict): kwargs['flavorRef'] = flavor['id'] else: kwargs['flavorRef'] = self.get_flavor(flavor, get_extra=False).id if volumes is None: volumes = [] # nova cli calls this boot_volume. Let's be the same if root_volume and not boot_volume: boot_volume = root_volume kwargs = self._get_boot_from_volume_kwargs( image=image, boot_from_volume=boot_from_volume, boot_volume=boot_volume, volume_size=str(volume_size), terminate_volume=terminate_volume, volumes=volumes, kwargs=kwargs) kwargs['name'] = name endpoint = '/servers' # TODO(mordred) We're only testing this in functional tests. We need # to add unit tests for this too. if 'block_device_mapping_v2' in kwargs: endpoint = '/os-volumes_boot' with _utils.shade_exceptions("Error in creating instance"): server_json = {'server': kwargs} if hints: server_json['os:scheduler_hints'] = hints data = self._compute_client.post( endpoint, json=server_json) server = self._get_and_munchify('server', data) admin_pass = server.get('adminPass') or kwargs.get('admin_pass') if not wait: # This is a direct get call to skip the list_servers # cache which has absolutely no chance of containing the # new server. # Only do this if we're not going to wait for the server # to complete booting, because the only reason we do it # is to get a server record that is the return value from # get/list rather than the return value of create. If we're # going to do the wait loop below, this is a waste of a call server = self.get_server_by_id(server.id) if server.status == 'ERROR': raise exc.OpenStackCloudCreateException( resource='server', resource_id=server.id) if wait: server = self.wait_for_server( server, auto_ip=auto_ip, ips=ips, ip_pool=ip_pool, reuse=reuse_ips, timeout=timeout, nat_destination=nat_destination, ) server.adminPass = admin_pass return server def wait_for_server( self, server, auto_ip=True, ips=None, ip_pool=None, reuse=True, timeout=180, nat_destination=None): """ Wait for a server to reach ACTIVE status. """ server_id = server['id'] timeout_message = "Timeout waiting for the server to come up." start_time = time.time() # There is no point in iterating faster than the list_servers cache for count in _utils._iterate_timeout( timeout, timeout_message, # if _SERVER_AGE is 0 we still want to wait a bit # to be friendly with the server. wait=self._SERVER_AGE or 2): try: # Use the get_server call so that the list_servers # cache can be leveraged server = self.get_server(server_id) except Exception: continue if not server: continue # We have more work to do, but the details of that are # hidden from the user. So, calculate remaining timeout # and pass it down into the IP stack. 
remaining_timeout = timeout - int(time.time() - start_time) if remaining_timeout <= 0: raise exc.OpenStackCloudTimeout(timeout_message) server = self.get_active_server( server=server, reuse=reuse, auto_ip=auto_ip, ips=ips, ip_pool=ip_pool, wait=True, timeout=remaining_timeout, nat_destination=nat_destination) if server is not None and server['status'] == 'ACTIVE': return server def get_active_server( self, server, auto_ip=True, ips=None, ip_pool=None, reuse=True, wait=False, timeout=180, nat_destination=None): if server['status'] == 'ERROR': if 'fault' in server and 'message' in server['fault']: raise exc.OpenStackCloudException( "Error in creating the server: {reason}".format( reason=server['fault']['message']), extra_data=dict(server=server)) raise exc.OpenStackCloudException( "Error in creating the server", extra_data=dict(server=server)) if server['status'] == 'ACTIVE': if 'addresses' in server and server['addresses']: return self.add_ips_to_server( server, auto_ip, ips, ip_pool, reuse=reuse, nat_destination=nat_destination, wait=wait, timeout=timeout) self.log.debug( 'Server %(server)s reached ACTIVE state without' ' being allocated an IP address.' ' Deleting server.', {'server': server['id']}) try: self._delete_server( server=server, wait=wait, timeout=timeout) except Exception as e: raise exc.OpenStackCloudException( 'Server reached ACTIVE state without being' ' allocated an IP address AND then could not' ' be deleted: {0}'.format(e), extra_data=dict(server=server)) raise exc.OpenStackCloudException( 'Server reached ACTIVE state without being' ' allocated an IP address.', extra_data=dict(server=server)) return None def rebuild_server(self, server_id, image_id, admin_pass=None, detailed=False, bare=False, wait=False, timeout=180): kwargs = {} if image_id: kwargs['imageRef'] = image_id if admin_pass: kwargs['adminPass'] = admin_pass data = self._compute_client.post( '/servers/{server_id}/action'.format(server_id=server_id), error_message="Error in rebuilding instance", json={'rebuild': kwargs}) server = self._get_and_munchify('server', data) if not wait: return self._expand_server( self._normalize_server(server), bare=bare, detailed=detailed) admin_pass = server.get('adminPass') or admin_pass for count in _utils._iterate_timeout( timeout, "Timeout waiting for server {0} to " "rebuild.".format(server_id), wait=self._SERVER_AGE): try: server = self.get_server(server_id, bare=True) except Exception: continue if not server: continue if server['status'] == 'ERROR': raise exc.OpenStackCloudException( "Error in rebuilding the server", extra_data=dict(server=server)) if server['status'] == 'ACTIVE': server.adminPass = admin_pass break return self._expand_server(server, detailed=detailed, bare=bare) def set_server_metadata(self, name_or_id, metadata): """Set metadata in a server instance. :param str name_or_id: The name or ID of the server instance to update. :param dict metadata: A dictionary with the key=value pairs to set in the server instance. It only updates the key=value pairs provided. Existing ones will remain untouched. :raises: OpenStackCloudException on operation error. 
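
        Example (illustrative sketch; the server name and keys are
        placeholder values)::

            import shade

            cloud = shade.openstack_cloud(cloud='mycloud')
            # Only the given keys are updated; other metadata on the
            # server remains untouched
            cloud.set_server_metadata('web1', {'env': 'staging'})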
""" server = self.get_server(name_or_id, bare=True) if not server: raise exc.OpenStackCloudException( 'Invalid Server {server}'.format(server=name_or_id)) self._compute_client.post( '/servers/{server_id}/metadata'.format(server_id=server['id']), json={'metadata': metadata}, error_message='Error updating server metadata') def delete_server_metadata(self, name_or_id, metadata_keys): """Delete metadata from a server instance. :param str name_or_id: The name or ID of the server instance to update. :param list metadata_keys: A list with the keys to be deleted from the server instance. :raises: OpenStackCloudException on operation error. """ server = self.get_server(name_or_id, bare=True) if not server: raise exc.OpenStackCloudException( 'Invalid Server {server}'.format(server=name_or_id)) for key in metadata_keys: error_message = 'Error deleting metadata {key} on {server}'.format( key=key, server=name_or_id) self._compute_client.delete( '/servers/{server_id}/metadata/{key}'.format( server_id=server['id'], key=key), error_message=error_message) def delete_server( self, name_or_id, wait=False, timeout=180, delete_ips=False, delete_ip_retry=1): """Delete a server instance. :param name_or_id: name or ID of the server to delete :param bool wait: If true, waits for server to be deleted. :param int timeout: Seconds to wait for server deletion. :param bool delete_ips: If true, deletes any floating IPs associated with the instance. :param int delete_ip_retry: Number of times to retry deleting any floating ips, should the first try be unsuccessful. :returns: True if delete succeeded, False otherwise if the server does not exist. :raises: OpenStackCloudException on operation error. """ # If delete_ips is True, we need the server to not be bare. server = self.get_server(name_or_id, bare=True) if not server: return False # This portion of the code is intentionally left as a separate # private method in order to avoid an unnecessary API call to get # a server we already have. return self._delete_server( server, wait=wait, timeout=timeout, delete_ips=delete_ips, delete_ip_retry=delete_ip_retry) def _delete_server_floating_ips(self, server, delete_ip_retry): # Does the server have floating ips in its # addresses dict? If not, skip this. server_floats = meta.find_nova_interfaces( server['addresses'], ext_tag='floating') for fip in server_floats: try: ip = self.get_floating_ip(id=None, filters={ 'floating_ip_address': fip['addr']}) except exc.OpenStackCloudURINotFound: # We're deleting. If it doesn't exist - awesome # NOTE(mordred) If the cloud is a nova FIP cloud but # floating_ip_source is set to neutron, this # can lead to a FIP leak. continue if not ip: continue deleted = self.delete_floating_ip( ip['id'], retry=delete_ip_retry) if not deleted: raise exc.OpenStackCloudException( "Tried to delete floating ip {floating_ip}" " associated with server {id} but there was" " an error deleting it. 
Not deleting server.".format( floating_ip=ip['floating_ip_address'], id=server['id'])) def _delete_server( self, server, wait=False, timeout=180, delete_ips=False, delete_ip_retry=1): if not server: return False if delete_ips and self._has_floating_ips(): self._delete_server_floating_ips(server, delete_ip_retry) try: self._compute_client.delete( '/servers/{id}'.format(id=server['id']), error_message="Error in deleting server") except exc.OpenStackCloudURINotFound: return False except Exception: raise if not wait: return True # If the server has volume attachments, or if it has booted # from volume, deleting it will change volume state so we will # need to invalidate the cache. Avoid the extra API call if # caching is not enabled. reset_volume_cache = False if (self.cache_enabled and self.has_service('volume') and self.get_volumes(server)): reset_volume_cache = True for count in _utils._iterate_timeout( timeout, "Timed out waiting for server to get deleted.", # if _SERVER_AGE is 0 we still want to wait a bit # to be friendly with the server. wait=self._SERVER_AGE or 2): with _utils.shade_exceptions("Error in deleting server"): server = self.get_server(server['id'], bare=True) if not server: break if reset_volume_cache: self.list_volumes.invalidate(self) # Reset the list servers cache time so that the next list server # call gets a new list self._servers_time = self._servers_time - self._SERVER_AGE return True @_utils.valid_kwargs( 'name', 'description') def update_server(self, name_or_id, detailed=False, bare=False, **kwargs): """Update a server. :param name_or_id: Name of the server to be updated. :param detailed: Whether or not to add detailed additional information. Defaults to False. :param bare: Whether to skip adding any additional information to the server record. Defaults to False, meaning the addresses dict will be populated as needed from neutron. Setting to True implies detailed = False. :name: New name for the server :description: New description for the server :returns: a dictionary representing the updated server. :raises: OpenStackCloudException on operation error. """ server = self.get_server(name_or_id=name_or_id, bare=True) if server is None: raise exc.OpenStackCloudException( "failed to find server '{server}'".format(server=name_or_id)) data = self._compute_client.put( '/servers/{server_id}'.format(server_id=server['id']), error_message="Error updating server {0}".format(name_or_id), json={'server': kwargs}) server = self._normalize_server( self._get_and_munchify('server', data)) return self._expand_server(server, bare=bare, detailed=detailed) def create_server_group(self, name, policies): """Create a new server group. :param name: Name of the server group being created :param policies: List of policies for the server group. :returns: a dict representing the new server group. :raises: OpenStackCloudException on operation error. """ data = self._compute_client.post( '/os-server-groups', json={ 'server_group': { 'name': name, 'policies': policies}}, error_message="Unable to create server group {name}".format( name=name)) return self._get_and_munchify('server_group', data) def delete_server_group(self, name_or_id): """Delete a server group. :param name_or_id: Name or ID of the server group to delete :returns: True if delete succeeded, False otherwise :raises: OpenStackCloudException on operation error. 
""" server_group = self.get_server_group(name_or_id) if not server_group: self.log.debug("Server group %s not found for deleting", name_or_id) return False self._compute_client.delete( '/os-server-groups/{id}'.format(id=server_group['id']), error_message="Error deleting server group {name}".format( name=name_or_id)) return True def list_containers(self, full_listing=True): """List containers. :param full_listing: Ignored. Present for backwards compat :returns: list of Munch of the container objects :raises: OpenStackCloudException on operation error. """ return self._object_store_client.get('/', params=dict(format='json')) def get_container(self, name, skip_cache=False): if skip_cache or name not in self._container_cache: try: container = self._object_store_client.head(name) self._container_cache[name] = container.headers except exc.OpenStackCloudHTTPError as e: if e.response.status_code == 404: return None raise return self._container_cache[name] def create_container(self, name, public=False): container = self.get_container(name) if container: return container self._object_store_client.put(name) if public: self.set_container_access(name, 'public') return self.get_container(name, skip_cache=True) def delete_container(self, name): try: self._object_store_client.delete(name) return True except exc.OpenStackCloudHTTPError as e: if e.response.status_code == 404: return False if e.response.status_code == 409: raise exc.OpenStackCloudException( 'Attempt to delete container {container} failed. The' ' container is not empty. Please delete the objects' ' inside it before deleting the container'.format( container=name)) raise def update_container(self, name, headers): self._object_store_client.post(name, headers=headers) def set_container_access(self, name, access): if access not in OBJECT_CONTAINER_ACLS: raise exc.OpenStackCloudException( "Invalid container access specified: %s. Must be one of %s" % (access, list(OBJECT_CONTAINER_ACLS.keys()))) header = {'x-container-read': OBJECT_CONTAINER_ACLS[access]} self.update_container(name, header) def get_container_access(self, name): container = self.get_container(name, skip_cache=True) if not container: raise exc.OpenStackCloudException("Container not found: %s" % name) acl = container.get('x-container-read', '') for key, value in OBJECT_CONTAINER_ACLS.items(): # Convert to string for the comparison because swiftclient # returns byte values as bytes sometimes and apparently == # on bytes doesn't work like you'd think if str(acl) == str(value): return key raise exc.OpenStackCloudException( "Could not determine container access for ACL: %s." 
% acl) def _get_file_hashes(self, filename): file_key = "{filename}:{mtime}".format( filename=filename, mtime=os.stat(filename).st_mtime) if file_key not in self._file_hash_cache: self.log.debug( 'Calculating hashes for %(filename)s', {'filename': filename}) md5 = hashlib.md5() sha256 = hashlib.sha256() with open(filename, 'rb') as file_obj: for chunk in iter(lambda: file_obj.read(8192), b''): md5.update(chunk) sha256.update(chunk) self._file_hash_cache[file_key] = dict( md5=md5.hexdigest(), sha256=sha256.hexdigest()) self.log.debug( "Image file %(filename)s md5:%(md5)s sha256:%(sha256)s", {'filename': filename, 'md5': self._file_hash_cache[file_key]['md5'], 'sha256': self._file_hash_cache[file_key]['sha256']}) return (self._file_hash_cache[file_key]['md5'], self._file_hash_cache[file_key]['sha256']) @_utils.cache_on_arguments() def get_object_capabilities(self): # The endpoint in the catalog has version and project-id in it # To get capabilities, we have to disassemble and reassemble the URL # This logic is taken from swiftclient endpoint = urllib.parse.urlparse( self._object_store_client.get_endpoint()) url = "{scheme}://{netloc}/info".format( scheme=endpoint.scheme, netloc=endpoint.netloc) return self._object_store_client.get(url) def get_object_segment_size(self, segment_size): """Get a segment size that will work given capabilities""" if segment_size is None: segment_size = DEFAULT_OBJECT_SEGMENT_SIZE min_segment_size = 0 try: caps = self.get_object_capabilities() except exc.OpenStackCloudHTTPError as e: if e.response.status_code in (404, 412): # Clear the exception so that it doesn't linger # and get reported as an Inner Exception later _utils._exc_clear() server_max_file_size = DEFAULT_MAX_FILE_SIZE self.log.info( "Swift capabilities not supported. " "Using default max file size.") else: raise else: server_max_file_size = caps.get('swift', {}).get('max_file_size', 0) min_segment_size = caps.get('slo', {}).get('min_segment_size', 0) if segment_size > server_max_file_size: return server_max_file_size if segment_size < min_segment_size: return min_segment_size return segment_size def is_object_stale( self, container, name, filename, file_md5=None, file_sha256=None): metadata = self.get_object_metadata(container, name) if not metadata: self.log.debug( "swift stale check, no object: {container}/{name}".format( container=container, name=name)) return True if not (file_md5 or file_sha256): (file_md5, file_sha256) = self._get_file_hashes(filename) md5_key = metadata.get(OBJECT_MD5_KEY, '') sha256_key = metadata.get(OBJECT_SHA256_KEY, '') up_to_date = self._hashes_up_to_date( md5=file_md5, sha256=file_sha256, md5_key=md5_key, sha256_key=sha256_key) if not up_to_date: self.log.debug( "swift checksum mismatch: " " %(filename)s!=%(container)s/%(name)s", {'filename': filename, 'container': container, 'name': name}) return True self.log.debug( "swift object up to date: %(container)s/%(name)s", {'container': container, 'name': name}) return False def create_object( self, container, name, filename=None, md5=None, sha256=None, segment_size=None, use_slo=True, metadata=None, **headers): """Create a file object :param container: The name of the container to store the file in. This container will be created if it does not exist already. :param name: Name for the object within the container. :param filename: The path to the local file whose contents will be uploaded. :param md5: A hexadecimal md5 of the file. 
(Optional), if it is known and can be passed here, it will save repeating the expensive md5 process. It is assumed to be accurate. :param sha256: A hexadecimal sha256 of the file. (Optional) See md5. :param segment_size: Break the uploaded object into segments of this many bytes. (Optional) Shade will attempt to discover the maximum value for this from the server if it is not specified, or will use a reasonable default. :param headers: These will be passed through to the object creation API as HTTP Headers. :param use_slo: If the object is large enough to need to be a Large Object, use a static rather than dynamic object. Static Objects will delete segment objects when the manifest object is deleted. (optional, defaults to True) :param metadata: This dict will get changed into headers that set metadata of the object :raises: ``OpenStackCloudException`` on operation error. """ if not metadata: metadata = {} if not filename: filename = name # segment_size gets used as a step value in a range call, so needs # to be an int if segment_size: segment_size = int(segment_size) segment_size = self.get_object_segment_size(segment_size) file_size = os.path.getsize(filename) if not (md5 or sha256): (md5, sha256) = self._get_file_hashes(filename) headers[OBJECT_MD5_KEY] = md5 or '' headers[OBJECT_SHA256_KEY] = sha256 or '' for (k, v) in metadata.items(): headers['x-object-meta-' + k] = v # On some clouds this is not necessary. On others it is. I'm confused. self.create_container(container) if self.is_object_stale(container, name, filename, md5, sha256): endpoint = '{container}/{name}'.format( container=container, name=name) self.log.debug( "swift uploading %(filename)s to %(endpoint)s", {'filename': filename, 'endpoint': endpoint}) if file_size <= segment_size: self._upload_object(endpoint, filename, headers) else: self._upload_large_object( endpoint, filename, headers, file_size, segment_size, use_slo) def _upload_object(self, endpoint, filename, headers): # Open in binary mode so that non-text objects upload intact. return self._object_store_client.put( endpoint, headers=headers, data=open(filename, 'rb')) def _get_file_segments(self, endpoint, filename, file_size, segment_size): # Use an ordered dict here so that testing can replicate things segments = collections.OrderedDict() for (index, offset) in enumerate(range(0, file_size, segment_size)): remaining = file_size - (index * segment_size) segment = _utils.FileSegment( filename, offset, segment_size if segment_size < remaining else remaining) name = '{endpoint}/{index:0>6}'.format( endpoint=endpoint, index=index) segments[name] = segment return segments def _object_name_from_url(self, url): '''Get container_name/object_name from the full URL called. Remove the Swift endpoint from the front of the URL, and remove the leading / that will be left behind.''' endpoint = self._object_store_client.get_endpoint() object_name = url.replace(endpoint, '') if object_name.startswith('/'): object_name = object_name[1:] return object_name def _add_etag_to_manifest(self, segment_results, manifest): for result in segment_results: if 'Etag' not in result.headers: continue name = self._object_name_from_url(result.url) for entry in manifest: if entry['path'] == '/{name}'.format(name=name): entry['etag'] = result.headers['Etag'] def _upload_large_object( self, endpoint, filename, headers, file_size, segment_size, use_slo): # If the object is big, we need to break it up into segments that # are no larger than segment_size, upload each of them individually # and then upload a manifest object.
The segments can be uploaded in # parallel, so we'll use the async feature of the TaskManager. segment_futures = [] segment_results = [] retry_results = [] retry_futures = [] manifest = [] # Get an OrderedDict with keys being the swift location for the # segment, the value a FileSegment file-like object that is a # slice of the data for the segment. segments = self._get_file_segments( endpoint, filename, file_size, segment_size) # Schedule the segments for upload for name, segment in segments.items(): # Async call to put - schedules execution and returns a future segment_future = self._object_store_client.put( name, headers=headers, data=segment, run_async=True) segment_futures.append(segment_future) # TODO(mordred) Collect etags from results to add to this manifest # dict. Then sort the list of dicts by path. manifest.append(dict( path='/{name}'.format(name=name), size_bytes=segment.length)) # Try once and collect failed results to retry segment_results, retry_results = task_manager.wait_for_futures( segment_futures, raise_on_error=False) self._add_etag_to_manifest(segment_results, manifest) for result in retry_results: # Grab the FileSegment for the failed upload so we can retry name = self._object_name_from_url(result.url) segment = segments[name] segment.seek(0) # Async call to put - schedules execution and returns a future segment_future = self._object_store_client.put( name, headers=headers, data=segment, run_async=True) # TODO(mordred) Collect etags from results to add to this manifest # dict. Then sort the list of dicts by path. retry_futures.append(segment_future) # If any segments fail the second time, just throw the error segment_results, retry_results = task_manager.wait_for_futures( retry_futures, raise_on_error=True) self._add_etag_to_manifest(segment_results, manifest) if use_slo: return self._finish_large_object_slo(endpoint, headers, manifest) else: return self._finish_large_object_dlo(endpoint, headers) def _finish_large_object_slo(self, endpoint, headers, manifest): # TODO(mordred) send an etag of the manifest, which is the md5sum # of the concatenation of the etags of the results headers = headers.copy() return self._object_store_client.put( endpoint, params={'multipart-manifest': 'put'}, headers=headers, data=json.dumps(manifest)) def _finish_large_object_dlo(self, endpoint, headers): headers = headers.copy() headers['X-Object-Manifest'] = endpoint return self._object_store_client.put(endpoint, headers=headers) def update_object(self, container, name, metadata=None, **headers): """Update the metadata of an object :param container: The name of the container the object is in :param name: Name for the object within the container. :param metadata: This dict will get changed into headers that set metadata of the object :param headers: These will be passed through to the object update API as HTTP Headers. :raises: ``OpenStackCloudException`` on operation error. """ if not metadata: metadata = {} metadata_headers = {} for (k, v) in metadata.items(): metadata_headers['x-object-meta-' + k] = v headers = dict(headers, **metadata_headers) return self._object_store_client.post( '{container}/{object}'.format( container=container, object=name), headers=headers) def list_objects(self, container, full_listing=True): """List objects. :param container: Name of the container to list objects in. :param full_listing: Ignored. Present for backwards compat :returns: list of Munch of the objects :raises: OpenStackCloudException on operation error. 
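Example (a minimal sketch; assumes a container named ``backups`` already exists)::

    for obj in cloud.list_objects('backups'):
        # Each entry is a Munch with the swift listing fields.
        print(obj['name'], obj['bytes'])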
""" return self._object_store_client.get( container, params=dict(format='json')) def delete_object(self, container, name, meta=None): """Delete an object from a container. :param string container: Name of the container holding the object. :param string name: Name of the object to delete. :param dict meta: Metadata for the object in question. (optional, will be fetched if not provided) :returns: True if delete succeeded, False if the object was not found. :raises: OpenStackCloudException on operation error. """ # TODO(mordred) DELETE for swift returns status in text/plain format # like so: # Number Deleted: 15 # Number Not Found: 0 # Response Body: # Response Status: 200 OK # Errors: # We should ultimately do something with that try: if not meta: meta = self.get_object_metadata(container, name) if not meta: return False params = {} if meta.get('X-Static-Large-Object', None) == 'True': params['multipart-manifest'] = 'delete' self._object_store_client.delete( '{container}/{object}'.format( container=container, object=name), params=params) return True except exc.OpenStackCloudHTTPError: return False def delete_autocreated_image_objects( self, container=OBJECT_AUTOCREATE_CONTAINER): """Delete all objects autocreated for image uploads. This method should generally not be needed, as shade should clean up the objects it uses for object-based image creation. If something goes wrong and it is found that there are leaked objects, this method can be used to delete any objects that shade has created on the user's behalf in service of image uploads. """ # This method only makes sense on clouds that use tasks if not self.image_api_use_tasks: return False deleted = False for obj in self.list_objects(container): meta = self.get_object_metadata(container, obj['name']) if meta.get(OBJECT_AUTOCREATE_KEY) == 'true': if self.delete_object(container, obj['name'], meta): deleted = True return deleted def get_object_metadata(self, container, name): try: return self._object_store_client.head( '{container}/{object}'.format( container=container, object=name)).headers except exc.OpenStackCloudException as e: if e.response.status_code == 404: return None raise def get_object(self, container, obj, query_string=None, resp_chunk_size=1024, outfile=None): """Get the headers and body of an object :param string container: name of the container. :param string obj: name of the object. :param string query_string: query args for uri. (delimiter, prefix, etc.) :param int resp_chunk_size: chunk size of data to read. Only used if the results are being written to a file. (optional, defaults to 1k) :param outfile: Write the object to a file instead of returning the contents. If this option is given, body in the return tuple will be None. outfile can either be a file path given as a string, or a File like object. :returns: Tuple (headers, body) of the object, or None if the object is not found (404) :raises: OpenStackCloudException on operation error. 
""" # TODO(mordred) implement resp_chunk_size try: endpoint = '{container}/{object}'.format( container=container, object=obj) if query_string: endpoint = '{endpoint}?{query_string}'.format( endpoint=endpoint, query_string=query_string) response = self._object_store_client.get( endpoint, stream=True) response_headers = { k.lower(): v for k, v in response.headers.items()} if outfile: if isinstance(outfile, six.string_types): outfile_handle = open(outfile, 'wb') else: outfile_handle = outfile for chunk in response.iter_content( resp_chunk_size, decode_unicode=False): outfile_handle.write(chunk) if isinstance(outfile, six.string_types): outfile_handle.close() else: outfile_handle.flush() return (response_headers, None) else: return (response_headers, response.text) except exc.OpenStackCloudHTTPError as e: if e.response.status_code == 404: return None raise def create_subnet(self, network_name_or_id, cidr=None, ip_version=4, enable_dhcp=False, subnet_name=None, tenant_id=None, allocation_pools=None, gateway_ip=None, disable_gateway_ip=False, dns_nameservers=None, host_routes=None, ipv6_ra_mode=None, ipv6_address_mode=None, use_default_subnetpool=False): """Create a subnet on a specified network. :param string network_name_or_id: The unique name or ID of the attached network. If a non-unique name is supplied, an exception is raised. :param string cidr: The CIDR. :param int ip_version: The IP version, which is 4 or 6. :param bool enable_dhcp: Set to ``True`` if DHCP is enabled and ``False`` if disabled. Default is ``False``. :param string subnet_name: The name of the subnet. :param string tenant_id: The ID of the tenant who owns the network. Only administrative users can specify a tenant ID other than their own. :param list allocation_pools: A list of dictionaries of the start and end addresses for the allocation pools. For example:: [ { "start": "192.168.199.2", "end": "192.168.199.254" } ] :param string gateway_ip: The gateway IP address. When you specify both allocation_pools and gateway_ip, you must ensure that the gateway IP does not overlap with the specified allocation pools. :param bool disable_gateway_ip: Set to ``True`` if gateway IP address is disabled and ``False`` if enabled. It is not allowed with gateway_ip. Default is ``False``. :param list dns_nameservers: A list of DNS name servers for the subnet. For example:: [ "8.8.8.7", "8.8.8.8" ] :param list host_routes: A list of host route dictionaries for the subnet. For example:: [ { "destination": "0.0.0.0/0", "nexthop": "123.456.78.9" }, { "destination": "192.168.0.0/24", "nexthop": "192.168.0.1" } ] :param string ipv6_ra_mode: IPv6 Router Advertisement mode. Valid values are: 'dhcpv6-stateful', 'dhcpv6-stateless', or 'slaac'. :param string ipv6_address_mode: IPv6 address mode. Valid values are: 'dhcpv6-stateful', 'dhcpv6-stateless', or 'slaac'. :param bool use_default_subnetpool: Use the default subnetpool for ``ip_version`` to obtain a CIDR. It is required to pass ``None`` to the ``cidr`` argument when enabling this option. :returns: The new subnet object. :raises: OpenStackCloudException on operation error. """ if tenant_id is not None: filters = {'tenant_id': tenant_id} else: filters = None network = self.get_network(network_name_or_id, filters) if not network: raise exc.OpenStackCloudException( "Network %s not found." 
% network_name_or_id) if disable_gateway_ip and gateway_ip: raise exc.OpenStackCloudException( 'arg:disable_gateway_ip is not allowed with arg:gateway_ip') if not cidr and not use_default_subnetpool: raise exc.OpenStackCloudException( 'arg:cidr is required when a subnetpool is not used') if cidr and use_default_subnetpool: raise exc.OpenStackCloudException( 'arg:cidr must be set to None when use_default_subnetpool == ' 'True') # Be friendly on ip_version and allow strings if isinstance(ip_version, six.string_types): try: ip_version = int(ip_version) except ValueError: raise exc.OpenStackCloudException( 'ip_version must be an integer') # The body of the neutron message for the subnet we wish to create. # This includes attributes that are required or have defaults. subnet = { 'network_id': network['id'], 'ip_version': ip_version, 'enable_dhcp': enable_dhcp } # Add optional attributes to the message. if cidr: subnet['cidr'] = cidr if subnet_name: subnet['name'] = subnet_name if tenant_id: subnet['tenant_id'] = tenant_id if allocation_pools: subnet['allocation_pools'] = allocation_pools if gateway_ip: subnet['gateway_ip'] = gateway_ip if disable_gateway_ip: subnet['gateway_ip'] = None if dns_nameservers: subnet['dns_nameservers'] = dns_nameservers if host_routes: subnet['host_routes'] = host_routes if ipv6_ra_mode: subnet['ipv6_ra_mode'] = ipv6_ra_mode if ipv6_address_mode: subnet['ipv6_address_mode'] = ipv6_address_mode if use_default_subnetpool: subnet['use_default_subnetpool'] = True data = self._network_client.post("/subnets.json", json={"subnet": subnet}) return self._get_and_munchify('subnet', data) def delete_subnet(self, name_or_id): """Delete a subnet. If a name, instead of a unique UUID, is supplied, it is possible that we could find more than one matching subnet since names are not required to be unique. An error will be raised in this case. :param name_or_id: Name or ID of the subnet being deleted. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ subnet = self.get_subnet(name_or_id) if not subnet: self.log.debug("Subnet %s not found for deleting", name_or_id) return False self._network_client.delete( "/subnets/{subnet_id}.json".format(subnet_id=subnet['id'])) return True def update_subnet(self, name_or_id, subnet_name=None, enable_dhcp=None, gateway_ip=None, disable_gateway_ip=None, allocation_pools=None, dns_nameservers=None, host_routes=None): """Update an existing subnet. :param string name_or_id: Name or ID of the subnet to update. :param string subnet_name: The new name of the subnet. :param bool enable_dhcp: Set to ``True`` if DHCP is enabled and ``False`` if disabled. :param string gateway_ip: The gateway IP address. When you specify both allocation_pools and gateway_ip, you must ensure that the gateway IP does not overlap with the specified allocation pools. :param bool disable_gateway_ip: Set to ``True`` if gateway IP address is disabled and ``False`` if enabled. It is not allowed with gateway_ip. Default is ``False``. :param list allocation_pools: A list of dictionaries of the start and end addresses for the allocation pools. For example:: [ { "start": "192.168.199.2", "end": "192.168.199.254" } ] :param list dns_nameservers: A list of DNS name servers for the subnet. For example:: [ "8.8.8.7", "8.8.8.8" ] :param list host_routes: A list of host route dictionaries for the subnet. 
For example:: [ { "destination": "0.0.0.0/0", "nexthop": "123.456.78.9" }, { "destination": "192.168.0.0/24", "nexthop": "192.168.0.1" } ] :returns: The updated subnet object. :raises: OpenStackCloudException on operation error. """ subnet = {} if subnet_name: subnet['name'] = subnet_name if enable_dhcp is not None: subnet['enable_dhcp'] = enable_dhcp if gateway_ip: subnet['gateway_ip'] = gateway_ip if disable_gateway_ip: subnet['gateway_ip'] = None if allocation_pools: subnet['allocation_pools'] = allocation_pools if dns_nameservers: subnet['dns_nameservers'] = dns_nameservers if host_routes: subnet['host_routes'] = host_routes if not subnet: self.log.debug("No subnet data to update") return if disable_gateway_ip and gateway_ip: raise exc.OpenStackCloudException( 'arg:disable_gateway_ip is not allowed with arg:gateway_ip') curr_subnet = self.get_subnet(name_or_id) if not curr_subnet: raise exc.OpenStackCloudException( "Subnet %s not found." % name_or_id) data = self._network_client.put( "/subnets/{subnet_id}.json".format(subnet_id=curr_subnet['id']), json={"subnet": subnet}) return self._get_and_munchify('subnet', data) @_utils.valid_kwargs('name', 'admin_state_up', 'mac_address', 'fixed_ips', 'subnet_id', 'ip_address', 'security_groups', 'allowed_address_pairs', 'extra_dhcp_opts', 'device_owner', 'device_id') def create_port(self, network_id, **kwargs): """Create a port :param network_id: The ID of the network. (Required) :param name: A symbolic name for the port. (Optional) :param admin_state_up: The administrative status of the port, which is up (true, default) or down (false). (Optional) :param mac_address: The MAC address. (Optional) :param fixed_ips: List of ip_addresses and subnet_ids. See subnet_id and ip_address. (Optional) For example:: [ { "ip_address": "10.29.29.13", "subnet_id": "a78484c4-c380-4b47-85aa-21c51a2d8cbd" }, ... ] :param subnet_id: If you specify only a subnet ID, OpenStack Networking allocates an available IP from that subnet to the port. (Optional) If you specify both a subnet ID and an IP address, OpenStack Networking tries to allocate the specified address to the port. :param ip_address: If you specify both a subnet ID and an IP address, OpenStack Networking tries to allocate the specified address to the port. :param security_groups: List of security group UUIDs. (Optional) :param allowed_address_pairs: Allowed address pairs list (Optional) For example:: [ { "ip_address": "23.23.23.1", "mac_address": "fa:16:3e:c4:cd:3f" }, ... ] :param extra_dhcp_opts: Extra DHCP options. (Optional). For example:: [ { "opt_name": "opt name1", "opt_value": "value1" }, ... ] :param device_owner: The ID of the entity that uses this port. For example, a DHCP agent. (Optional) :param device_id: The ID of the device that uses this port. For example, a virtual server. (Optional) :returns: a ``munch.Munch`` describing the created port. :raises: ``OpenStackCloudException`` on operation error. """ kwargs['network_id'] = network_id data = self._network_client.post( "/ports.json", json={'port': kwargs}, error_message="Error creating port for network {0}".format( network_id)) return self._get_and_munchify('port', data) @_utils.valid_kwargs('name', 'admin_state_up', 'fixed_ips', 'security_groups', 'allowed_address_pairs', 'extra_dhcp_opts', 'device_owner', 'device_id') def update_port(self, name_or_id, **kwargs): """Update a port Note: to unset an attribute use None value. To leave an attribute untouched just omit it. :param name_or_id: name or ID of the port to update. 
(Required) :param name: A symbolic name for the port. (Optional) :param admin_state_up: The administrative status of the port, which is up (true) or down (false). (Optional) :param fixed_ips: List of ip_addresses and subnet_ids. (Optional) If you specify only a subnet ID, OpenStack Networking allocates an available IP from that subnet to the port. If you specify both a subnet ID and an IP address, OpenStack Networking tries to allocate the specified address to the port. For example:: [ { "ip_address": "10.29.29.13", "subnet_id": "a78484c4-c380-4b47-85aa-21c51a2d8cbd" }, ... ] :param security_groups: List of security group UUIDs. (Optional) :param allowed_address_pairs: Allowed address pairs list (Optional) For example:: [ { "ip_address": "23.23.23.1", "mac_address": "fa:16:3e:c4:cd:3f" }, ... ] :param extra_dhcp_opts: Extra DHCP options. (Optional). For example:: [ { "opt_name": "opt name1", "opt_value": "value1" }, ... ] :param device_owner: The ID of the entity that uses this port. For example, a DHCP agent. (Optional) :param device_id: The ID of the resource this port is attached to. :returns: a ``munch.Munch`` describing the updated port. :raises: OpenStackCloudException on operation error. """ port = self.get_port(name_or_id=name_or_id) if port is None: raise exc.OpenStackCloudException( "failed to find port '{port}'".format(port=name_or_id)) data = self._network_client.put( "/ports/{port_id}.json".format(port_id=port['id']), json={"port": kwargs}, error_message="Error updating port {0}".format(name_or_id)) return self._get_and_munchify('port', data) def delete_port(self, name_or_id): """Delete a port :param name_or_id: ID or name of the port to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ port = self.get_port(name_or_id=name_or_id) if port is None: self.log.debug("Port %s not found for deleting", name_or_id) return False self._network_client.delete( "/ports/{port_id}.json".format(port_id=port['id']), error_message="Error deleting port {0}".format(name_or_id)) return True def create_security_group(self, name, description, project_id=None): """Create a new security group :param string name: A name for the security group. :param string description: Describes the security group. :param string project_id: Specify the project ID this security group will be created on (admin-only). :returns: A ``munch.Munch`` representing the new security group. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudUnavailableFeature if security groups are not supported on this cloud. """ # Security groups not supported if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) data = [] security_group_json = { 'security_group': { 'name': name, 'description': description }} if project_id is not None: security_group_json['security_group']['tenant_id'] = project_id if self._use_neutron_secgroups(): data = self._network_client.post( '/security-groups.json', json=security_group_json, error_message="Error creating security group {0}".format(name)) else: data = self._compute_client.post( '/os-security-groups', json=security_group_json) return self._normalize_secgroup( self._get_and_munchify('security_group', data)) def delete_security_group(self, name_or_id): """Delete a security group :param string name_or_id: The name or unique ID of the security group. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. 
:raises: OpenStackCloudUnavailableFeature if security groups are not supported on this cloud. """ # Security groups not supported if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) # TODO(mordred): Let's come back and stop doing a GET before we do # the delete. secgroup = self.get_security_group(name_or_id) if secgroup is None: self.log.debug('Security group %s not found for deleting', name_or_id) return False if self._use_neutron_secgroups(): self._network_client.delete( '/security-groups/{sg_id}.json'.format(sg_id=secgroup['id']), error_message="Error deleting security group {0}".format( name_or_id) ) return True else: self._compute_client.delete( '/os-security-groups/{id}'.format(id=secgroup['id'])) return True @_utils.valid_kwargs('name', 'description') def update_security_group(self, name_or_id, **kwargs): """Update a security group :param string name_or_id: Name or ID of the security group to update. :param string name: New name for the security group. :param string description: New description for the security group. :returns: A ``munch.Munch`` describing the updated security group. :raises: OpenStackCloudException on operation error. """ # Security groups not supported if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) group = self.get_security_group(name_or_id) if group is None: raise exc.OpenStackCloudException( "Security group %s not found." % name_or_id) if self._use_neutron_secgroups(): data = self._network_client.put( '/security-groups/{sg_id}.json'.format(sg_id=group['id']), json={'security_group': kwargs}, error_message="Error updating security group {0}".format( name_or_id)) else: for key in ('name', 'description'): kwargs.setdefault(key, group[key]) data = self._compute_client.put( '/os-security-groups/{id}'.format(id=group['id']), json={'security-group': kwargs}) return self._normalize_secgroup( self._get_and_munchify('security_group', data)) def create_security_group_rule(self, secgroup_name_or_id, port_range_min=None, port_range_max=None, protocol=None, remote_ip_prefix=None, remote_group_id=None, direction='ingress', ethertype='IPv4', project_id=None): """Create a new security group rule :param string secgroup_name_or_id: The security group name or ID to associate with this security group rule. If a non-unique group name is given, an exception is raised. :param int port_range_min: The minimum port number in the range that is matched by the security group rule. If the protocol is TCP or UDP, this value must be less than or equal to the port_range_max attribute value. If nova is used by the cloud provider for security groups, then a value of None will be transformed to -1. :param int port_range_max: The maximum port number in the range that is matched by the security group rule. The port_range_min attribute constrains the port_range_max attribute. If nova is used by the cloud provider for security groups, then a value of None will be transformed to -1. :param string protocol: The protocol that is matched by the security group rule. Valid values are None, tcp, udp, and icmp. :param string remote_ip_prefix: The remote IP prefix to be associated with this security group rule. This attribute matches the specified IP prefix as the source IP address of the IP packet. :param string remote_group_id: The remote group ID to be associated with this security group rule. 
:param string direction: Ingress or egress: The direction in which the security group rule is applied. For a compute instance, an ingress security group rule is applied to incoming (ingress) traffic for that instance. An egress rule is applied to traffic leaving the instance. :param string ethertype: Must be IPv4 or IPv6, and addresses represented in CIDR must match the ingress or egress rules. :param string project_id: Specify the project ID this security group will be created on (admin-only). :returns: A ``munch.Munch`` representing the new security group rule. :raises: OpenStackCloudException on operation error. """ # Security groups not supported if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) secgroup = self.get_security_group(secgroup_name_or_id) if not secgroup: raise exc.OpenStackCloudException( "Security group %s not found." % secgroup_name_or_id) if self._use_neutron_secgroups(): # NOTE: Nova accepts -1 port numbers, but Neutron accepts None # as the equivalent value. rule_def = { 'security_group_id': secgroup['id'], 'port_range_min': None if port_range_min == -1 else port_range_min, 'port_range_max': None if port_range_max == -1 else port_range_max, 'protocol': protocol, 'remote_ip_prefix': remote_ip_prefix, 'remote_group_id': remote_group_id, 'direction': direction, 'ethertype': ethertype } if project_id is not None: rule_def['tenant_id'] = project_id data = self._network_client.post( '/security-group-rules.json', json={'security_group_rule': rule_def}, error_message="Error creating security group rule") else: # NOTE: Neutron accepts None for protocol. Nova does not. if protocol is None: raise exc.OpenStackCloudException('Protocol must be specified') if direction == 'egress': self.log.debug( 'Rule creation failed: Nova does not support egress rules' ) raise exc.OpenStackCloudException( 'No support for egress rules') # NOTE: Neutron accepts None for ports, but Nova requires -1 # as the equivalent value for ICMP. # # For TCP/UDP, if both are None, Neutron allows this and Nova # represents this as all ports (1-65535). Nova does not accept # None values, so to hide this difference, we will automatically # convert to the full port range. If only a single port value is # specified, it will error as normal. if protocol == 'icmp': if port_range_min is None: port_range_min = -1 if port_range_max is None: port_range_max = -1 elif protocol in ['tcp', 'udp']: if port_range_min is None and port_range_max is None: port_range_min = 1 port_range_max = 65535 security_group_rule_dict = dict(security_group_rule=dict( parent_group_id=secgroup['id'], ip_protocol=protocol, from_port=port_range_min, to_port=port_range_max, cidr=remote_ip_prefix, group_id=remote_group_id )) if project_id is not None: security_group_rule_dict[ 'security_group_rule']['tenant_id'] = project_id data = self._compute_client.post( '/os-security-group-rules', json=security_group_rule_dict ) return self._normalize_secgroup_rule( self._get_and_munchify('security_group_rule', data)) def delete_security_group_rule(self, rule_id): """Delete a security group rule :param string rule_id: The unique ID of the security group rule. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudUnavailableFeature if security groups are not supported on this cloud. 
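Example (a minimal sketch; assumes a security group named ``web`` already exists)::

    rule = cloud.create_security_group_rule(
        'web', port_range_min=443, port_range_max=443,
        protocol='tcp', remote_ip_prefix='0.0.0.0/0')
    # ... later, remove it by ID:
    cloud.delete_security_group_rule(rule['id'])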
""" # Security groups not supported if not self._has_secgroups(): raise exc.OpenStackCloudUnavailableFeature( "Unavailable feature: security groups" ) if self._use_neutron_secgroups(): try: self._network_client.delete( '/security-group-rules/{sg_id}.json'.format(sg_id=rule_id), error_message="Error deleting security group rule " "{0}".format(rule_id)) except exc.OpenStackCloudResourceNotFound: return False return True else: self._compute_client.delete( '/os-security-group-rules/{id}'.format(id=rule_id)) return True def list_zones(self): """List all available zones. :returns: A list of zones dicts. """ data = self._dns_client.get( "/zones", error_message="Error fetching zones list") return self._get_and_munchify('zones', data) def get_zone(self, name_or_id, filters=None): """Get a zone by name or ID. :param name_or_id: Name or ID of the zone :param filters: A dictionary of meta data to use for further filtering OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A zone dict or None if no matching zone is found. """ return _utils._get_entity(self, 'zone', name_or_id, filters) def search_zones(self, name_or_id=None, filters=None): zones = self.list_zones() return _utils._filter_list(zones, name_or_id, filters) def create_zone(self, name, zone_type=None, email=None, description=None, ttl=None, masters=None): """Create a new zone. :param name: Name of the zone being created. :param zone_type: Type of the zone (primary/secondary) :param email: Email of the zone owner (only applies if zone_type is primary) :param description: Description of the zone :param ttl: TTL (Time to live) value in seconds :param masters: Master nameservers (only applies if zone_type is secondary) :returns: a dict representing the created zone. :raises: OpenStackCloudException on operation error. """ # We capitalize in case the user passes time in lowercase, as # designate call expects PRIMARY/SECONDARY if zone_type is not None: zone_type = zone_type.upper() if zone_type not in ('PRIMARY', 'SECONDARY'): raise exc.OpenStackCloudException( "Invalid type %s, valid choices are PRIMARY or SECONDARY" % zone_type) zone = { "name": name, "email": email, "description": description, } if ttl is not None: zone["ttl"] = ttl if zone_type is not None: zone["type"] = zone_type if masters is not None: zone["masters"] = masters data = self._dns_client.post( "/zones", json=zone, error_message="Unable to create zone {name}".format(name=name)) return self._get_and_munchify(key=None, data=data) @_utils.valid_kwargs('email', 'description', 'ttl', 'masters') def update_zone(self, name_or_id, **kwargs): """Update a zone. :param name_or_id: Name or ID of the zone being updated. :param email: Email of the zone owner (only applies if zone_type is primary) :param description: Description of the zone :param ttl: TTL (Time to live) value in seconds :param masters: Master nameservers (only applies if zone_type is secondary) :returns: a dict representing the updated zone. :raises: OpenStackCloudException on operation error. """ zone = self.get_zone(name_or_id) if not zone: raise exc.OpenStackCloudException( "Zone %s not found." % name_or_id) data = self._dns_client.patch( "/zones/{zone_id}".format(zone_id=zone['id']), json=kwargs, error_message="Error updating zone {0}".format(name_or_id)) return self._get_and_munchify(key=None, data=data) def delete_zone(self, name_or_id): """Delete a zone. :param name_or_id: Name or ID of the zone being deleted. 
:returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ zone = self.get_zone(name_or_id) if zone is None: self.log.debug("Zone %s not found for deleting", name_or_id) return False self._dns_client.delete( "/zones/{zone_id}".format(zone_id=zone['id']), error_message="Error deleting zone {0}".format(name_or_id)) return True def list_recordsets(self, zone): """List all available recordsets. :param zone: Name or ID of the zone managing the recordset :returns: A list of recordsets. """ return self._dns_client.get( "/zones/{zone_id}/recordsets".format(zone_id=zone), error_message="Error fetching recordsets list") def get_recordset(self, zone, name_or_id): """Get a recordset by name or ID. :param zone: Name or ID of the zone managing the recordset :param name_or_id: Name or ID of the recordset :returns: A recordset dict or None if no matching recordset is found. """ try: return self._dns_client.get( "/zones/{zone_id}/recordsets/{recordset_id}".format( zone_id=zone, recordset_id=name_or_id), error_message="Error fetching recordset") except Exception: return None def search_recordsets(self, zone, name_or_id=None, filters=None): recordsets = self.list_recordsets(zone=zone) return _utils._filter_list(recordsets, name_or_id, filters) def create_recordset(self, zone, name, recordset_type, records, description=None, ttl=None): """Create a recordset. :param zone: Name or ID of the zone managing the recordset :param name: Name of the recordset :param recordset_type: Type of the recordset :param records: List of the recordset definitions :param description: Description of the recordset :param ttl: TTL value of the recordset :returns: a dict representing the created recordset. :raises: OpenStackCloudException on operation error. """ if self.get_zone(zone) is None: raise exc.OpenStackCloudException( "Zone %s not found." % zone) # We capitalize the type in case the user sends in lowercase recordset_type = recordset_type.upper() body = { 'name': name, 'type': recordset_type, 'records': records } if description: body['description'] = description if ttl: body['ttl'] = ttl return self._dns_client.post( "/zones/{zone_id}/recordsets".format(zone_id=zone), json=body, error_message="Error creating recordset {name}".format(name=name)) @_utils.valid_kwargs('description', 'ttl', 'records') def update_recordset(self, zone, name_or_id, **kwargs): """Update a recordset. :param zone: Name or ID of the zone managing the recordset :param name_or_id: Name or ID of the recordset being updated. :param records: List of the recordset definitions :param description: Description of the recordset :param ttl: TTL (Time to live) value in seconds of the recordset :returns: a dict representing the updated recordset. :raises: OpenStackCloudException on operation error. """ zone_obj = self.get_zone(zone) if zone_obj is None: raise exc.OpenStackCloudException( "Zone %s not found." % zone) recordset_obj = self.get_recordset(zone, name_or_id) if recordset_obj is None: raise exc.OpenStackCloudException( "Recordset %s not found." % name_or_id) new_recordset = self._dns_client.put( "/zones/{zone_id}/recordsets/{recordset_id}".format( zone_id=zone_obj['id'], recordset_id=name_or_id), json=kwargs, error_message="Error updating recordset {0}".format(name_or_id)) return new_recordset def delete_recordset(self, zone, name_or_id): """Delete a recordset. :param zone: Name or ID of the zone managing the recordset. :param name_or_id: Name or ID of the recordset being deleted.
:returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ zone_obj = self.get_zone(zone) if zone_obj is None: self.log.debug("Zone %s not found for deleting", zone) return False recordset = self.get_recordset(zone_obj['id'], name_or_id) if recordset is None: self.log.debug("Recordset %s not found for deleting", name_or_id) return False self._dns_client.delete( "/zones/{zone_id}/recordsets/{recordset_id}".format( zone_id=zone_obj['id'], recordset_id=name_or_id), error_message="Error deleting recordset {0}".format(name_or_id)) return True @_utils.cache_on_arguments() def list_cluster_templates(self, detail=False): """List cluster templates. :param bool detail: Ignored. Included for backwards compat. ClusterTemplates are always returned with full details. :returns: a list of dicts containing the cluster template details. :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ with _utils.shade_exceptions("Error fetching cluster template list"): try: data = self._container_infra_client.get('/clustertemplates') # NOTE(flwang): Magnum adds /clustertemplates and /cluster # to deprecate /baymodels and /bay since Newton release. So # we're using a small tag to indicate if current # cloud has those two new API endpoints. self._container_infra_client._has_magnum_after_newton = True return self._normalize_cluster_templates( self._get_and_munchify('clustertemplates', data)) except exc.OpenStackCloudURINotFound: data = self._container_infra_client.get('/baymodels/detail') return self._normalize_cluster_templates( self._get_and_munchify('baymodels', data)) list_baymodels = list_cluster_templates list_coe_cluster_templates = list_cluster_templates def search_cluster_templates( self, name_or_id=None, filters=None, detail=False): """Search cluster templates. :param name_or_id: cluster template name or ID. :param filters: a dict containing additional filters to use. :param detail: a boolean to control if we need summarized or detailed output. :returns: a list of dict containing the cluster templates :raises: ``OpenStackCloudException``: if something goes wrong during the OpenStack API call. """ cluster_templates = self.list_cluster_templates(detail=detail) return _utils._filter_list( cluster_templates, name_or_id, filters) search_baymodels = search_cluster_templates search_coe_cluster_templates = search_cluster_templates def get_cluster_template(self, name_or_id, filters=None, detail=False): """Get a cluster template by name or ID. :param name_or_id: Name or ID of the cluster template. :param filters: A dictionary of meta data to use for further filtering. Elements of this dictionary may, themselves, be dictionaries. Example:: { 'last_name': 'Smith', 'other': { 'gender': 'Female' } } OR A string containing a jmespath expression for further filtering. Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]" :returns: A cluster template dict or None if no matching cluster template is found. """ return _utils._get_entity(self, 'cluster_template', name_or_id, filters=filters, detail=detail) get_baymodel = get_cluster_template get_coe_cluster_template = get_cluster_template def create_cluster_template( self, name, image_id=None, keypair_id=None, coe=None, **kwargs): """Create a cluster template. :param string name: Name of the cluster template. :param string image_id: Name or ID of the image to use. :param string keypair_id: Name or ID of the keypair to use. :param string coe: Name of the coe for the cluster template.
Other arguments will be passed in kwargs. :returns: a dict containing the cluster template description :raises: ``OpenStackCloudException`` if something goes wrong during the OpenStack API call """ error_message = ("Error creating cluster template of name" " {cluster_template_name}".format( cluster_template_name=name)) with _utils.shade_exceptions(error_message): body = kwargs.copy() body['name'] = name body['image_id'] = image_id body['keypair_id'] = keypair_id body['coe'] = coe try: cluster_template = self._container_infra_client.post( '/clustertemplates', json=body) self._container_infra_client._has_magnum_after_newton = True except exc.OpenStackCloudURINotFound: cluster_template = self._container_infra_client.post( '/baymodels', json=body) self.list_cluster_templates.invalidate(self) return cluster_template create_baymodel = create_cluster_template create_coe_cluster_template = create_cluster_template def delete_cluster_template(self, name_or_id): """Delete a cluster template. :param name_or_id: Name or unique ID of the cluster template. :returns: True if the delete succeeded, False if the cluster template was not found. :raises: OpenStackCloudException on operation error. """ cluster_template = self.get_cluster_template(name_or_id) if not cluster_template: self.log.debug( "Cluster template %(name_or_id)s does not exist", {'name_or_id': name_or_id}, exc_info=True) return False with _utils.shade_exceptions("Error in deleting cluster template"): if getattr(self._container_infra_client, '_has_magnum_after_newton', False): self._container_infra_client.delete( '/clustertemplates/{id}'.format(id=cluster_template['id'])) else: self._container_infra_client.delete( '/baymodels/{id}'.format(id=cluster_template['id'])) self.list_cluster_templates.invalidate(self) return True delete_baymodel = delete_cluster_template delete_coe_cluster_template = delete_cluster_template @_utils.valid_kwargs('name', 'image_id', 'flavor_id', 'master_flavor_id', 'keypair_id', 'external_network_id', 'fixed_network', 'dns_nameserver', 'docker_volume_size', 'labels', 'coe', 'http_proxy', 'https_proxy', 'no_proxy', 'network_driver', 'tls_disabled', 'public', 'registry_enabled', 'volume_driver') def update_cluster_template(self, name_or_id, operation, **kwargs): """Update a cluster template. :param name_or_id: Name or ID of the cluster template being updated. :param operation: Operation to perform - add, remove, replace. Other arguments will be passed with kwargs. :returns: a dict representing the updated cluster template. :raises: OpenStackCloudException on operation error. """ self.list_cluster_templates.invalidate(self) cluster_template = self.get_cluster_template(name_or_id) if not cluster_template: raise exc.OpenStackCloudException( "Cluster template %s not found." 
% name_or_id) if operation not in ['add', 'replace', 'remove']: raise TypeError( "%s operation not in 'add', 'replace', 'remove'" % operation) patches = _utils.generate_patches_from_kwargs(operation, **kwargs) # No need to fire an API call if there is an empty patch if not patches: return cluster_template with _utils.shade_exceptions( "Error updating cluster template {0}".format(name_or_id)): if getattr(self._container_infra_client, '_has_magnum_after_newton', False): self._container_infra_client.patch( '/clustertemplates/{id}'.format(id=cluster_template['id']), json=patches) else: self._container_infra_client.patch( '/baymodels/{id}'.format(id=cluster_template['id']), json=patches) new_cluster_template = self.get_cluster_template(name_or_id) return new_cluster_template update_baymodel = update_cluster_template update_coe_cluster_template = update_cluster_template def list_nics(self): msg = "Error fetching machine port list" data = self._baremetal_client.get("/ports", microversion="1.6", error_message=msg) return data['ports'] def list_nics_for_machine(self, uuid): """Returns a list of ports present on the machine node. :param uuid: String representing machine UUID value in order to identify the machine. :returns: A list of ports. """ msg = "Error fetching port list for node {node_id}".format( node_id=uuid) url = "/nodes/{node_id}/ports".format(node_id=uuid) data = self._baremetal_client.get(url, microversion="1.6", error_message=msg) return data['ports'] def get_nic_by_mac(self, mac): try: url = '/ports/detail?address=%s' % mac data = self._baremetal_client.get(url) if len(data['ports']) == 1: return data['ports'][0] except Exception: pass return None def list_machines(self): msg = "Error fetching machine node list" data = self._baremetal_client.get("/nodes", microversion="1.6", error_message=msg) return self._get_and_munchify('nodes', data) def get_machine(self, name_or_id): """Get Machine by name or uuid Search for the baremetal host using the supplied id value, which can be either a name or UUID. :param name_or_id: A node name or UUID that will be looked up. :returns: ``munch.Munch`` representing the node found or None if no nodes are found. """ # NOTE(TheJulia): This is the initial microversion that shade support # for ironic was created around. Ironic's default behavior for newer # versions is to expose the field, but with a value of None for # calls by a supported, yet older microversion. # Consensus for moving forward with microversion handling in shade # seems to be to take the same approach, although ironic's API # does it for the user. version = "1.6" try: url = '/nodes/{node_id}'.format(node_id=name_or_id) return self._normalize_machine( self._baremetal_client.get(url, microversion=version)) except Exception: return None def get_machine_by_mac(self, mac): """Get machine by port MAC address :param mac: Port MAC address to query in order to return a node. :returns: ``munch.Munch`` representing the node found or None if the node is not found. """ try: port_url = '/ports/detail?address={mac}'.format(mac=mac) port = self._baremetal_client.get(port_url, microversion="1.6") machine_url = '/nodes/{machine}'.format( machine=port['ports'][0]['node_uuid']) return self._baremetal_client.get(machine_url, microversion="1.6") except Exception: return None def inspect_machine(self, name_or_id, wait=False, timeout=3600): """Inspect a Baremetal machine Engages the Ironic node inspection behavior in order to collect metadata about the baremetal machine.
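Example (illustrative; ``compute-0`` is a placeholder node name, and the node must already be in the 'manageable' or 'available' state)::

    machine = cloud.inspect_machine('compute-0', wait=True, timeout=1800)
    print(machine['provision_state'])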
:param name_or_id: String representing machine name or UUID value in order to identify the machine. :param wait: Boolean value controlling if the method is to wait for the desired state to be reached or a failure to occur. :param timeout: Integer value, defaulting to 3600 seconds, for the wait state to reach completion. :returns: ``munch.Munch`` representing the current state of the machine upon exit of the method. """ return_to_available = False machine = self.get_machine(name_or_id) if not machine: raise exc.OpenStackCloudException( "Machine inspection failed to find: %s." % name_or_id) # NOTE(TheJulia): If in available state, we can do this, however we # need to move the host back to available once inspection completes. if "available" in machine['provision_state']: return_to_available = True # NOTE(TheJulia): Changing an available machine to manageable state, # and due to state transitions we need to wait until that transition # has completed. self.node_set_provision_state(machine['uuid'], 'manage', wait=True, timeout=timeout) elif ("manage" not in machine['provision_state'] and "inspect failed" not in machine['provision_state']): raise exc.OpenStackCloudException( "Machine must be in 'manage' or 'available' state to " "engage inspection: Machine: %s State: %s" % (machine['uuid'], machine['provision_state'])) with _utils.shade_exceptions("Error inspecting machine"): machine = self.node_set_provision_state(machine['uuid'], 'inspect') if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for node transition to " "target state of 'inspect'"): machine = self.get_machine(name_or_id) if "inspect failed" in machine['provision_state']: raise exc.OpenStackCloudException( "Inspection of node %s failed, last error: %s" % (machine['uuid'], machine['last_error'])) if "manageable" in machine['provision_state']: break if return_to_available: machine = self.node_set_provision_state( machine['uuid'], 'provide', wait=wait, timeout=timeout) return machine def register_machine(self, nics, wait=False, timeout=3600, lock_timeout=600, **kwargs): """Register Baremetal with Ironic Allows for the registration of Baremetal nodes with Ironic and population of pertinent node information or configuration to be passed to the Ironic API for the node. This method also creates ports for a list of MAC addresses passed in to be utilized for boot and potentially network configuration. If a failure is detected creating the network ports, any ports created are deleted, and the node is removed from Ironic. :param list nics: An array of MAC addresses that represent the network interfaces for the node to be created. Example:: [ {'mac': 'aa:bb:cc:dd:ee:01'}, {'mac': 'aa:bb:cc:dd:ee:02'} ] :param wait: Boolean value, defaulting to false, to wait for the node to reach the available state where the node can be provisioned. It must be noted that, when set to false, the method will still wait for locks to clear before sending the next required command. :param timeout: Integer value, defaulting to 3600 seconds, for the wait state to reach completion. :param lock_timeout: Integer value, defaulting to 600 seconds, for locks to clear. :param kwargs: Key value pairs to be passed to the Ironic API, including uuid, name, chassis_uuid, driver_info, parameters. :raises: OpenStackCloudException on operation error. :returns: Returns a ``munch.Munch`` representing the new baremetal node.
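        Example (an illustrative sketch; assumes a configured
        ``shade.OpenStackCloud`` instance named ``cloud``. The MAC address,
        driver name, and IPMI details are placeholders, and the exact
        kwargs accepted depend on the Ironic deployment)::

            node = cloud.register_machine(
                nics=[{'mac': 'aa:bb:cc:dd:ee:01'}],
                wait=True,
                name='compute-0',
                driver='agent_ipmitool',
                driver_info={
                    'ipmi_address': '192.0.2.10',
                    'ipmi_username': 'admin',
                    'ipmi_password': 'secret',
                })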
""" msg = ("Baremetal machine node failed to be created.") port_msg = ("Baremetal machine port failed to be created.") url = '/nodes' # TODO(TheJulia): At some point we need to figure out how to # handle data across when the requestor is defining newer items # with the older api. machine = self._baremetal_client.post(url, json=kwargs, error_message=msg, microversion="1.6") created_nics = [] try: for row in nics: payload = {'address': row['mac'], 'node_uuid': machine['uuid']} nic = self._baremetal_client.post('/ports', json=payload, error_message=port_msg) created_nics.append(nic['uuid']) except Exception as e: self.log.debug("ironic NIC registration failed", exc_info=True) # TODO(mordred) Handle failures here try: for uuid in created_nics: try: port_url = '/ports/{uuid}'.format(uuid=uuid) # NOTE(TheJulia): Added in hope that it is logged. port_msg = ('Failed to delete port {port} for node' '{node}').format(port=uuid, node=machine['uuid']) self._baremetal_client.delete(port_url, error_message=port_msg) except Exception: pass finally: version = "1.6" msg = "Baremetal machine failed to be deleted." url = '/nodes/{node_id}'.format( node_id=machine['uuid']) self._baremetal_client.delete(url, error_message=msg, microversion=version) raise exc.OpenStackCloudException( "Error registering NICs with the baremetal service: %s" % str(e)) with _utils.shade_exceptions( "Error transitioning node to available state"): if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for node transition to " "available state"): machine = self.get_machine(machine['uuid']) # Note(TheJulia): Per the Ironic state code, a node # that fails returns to enroll state, which means a failed # node cannot be determined at this point in time. if machine['provision_state'] in ['enroll']: self.node_set_provision_state( machine['uuid'], 'manage') elif machine['provision_state'] in ['manageable']: self.node_set_provision_state( machine['uuid'], 'provide') elif machine['last_error'] is not None: raise exc.OpenStackCloudException( "Machine encountered a failure: %s" % machine['last_error']) # Note(TheJulia): Earlier versions of Ironic default to # None and later versions default to available up until # the introduction of enroll state. # Note(TheJulia): The node will transition through # cleaning if it is enabled, and we will wait for # completion. elif machine['provision_state'] in ['available', None]: break else: if machine['provision_state'] in ['enroll']: self.node_set_provision_state(machine['uuid'], 'manage') # Note(TheJulia): We need to wait for the lock to clear # before we attempt to set the machine into provide state # which allows for the transition to available. for count in _utils._iterate_timeout( lock_timeout, "Timeout waiting for reservation to clear " "before setting provide state"): machine = self.get_machine(machine['uuid']) if (machine['reservation'] is None and machine['provision_state'] is not 'enroll'): # NOTE(TheJulia): In this case, the node has # has moved on from the previous state and is # likely not being verified, as no lock is # present on the node. 
self.node_set_provision_state( machine['uuid'], 'provide') machine = self.get_machine(machine['uuid']) break elif machine['provision_state'] in [ 'cleaning', 'available']: break elif machine['last_error'] is not None: raise exc.OpenStackCloudException( "Machine encountered a failure: %s" % machine['last_error']) if not isinstance(machine, str): return self._normalize_machine(machine) else: return machine def unregister_machine(self, nics, uuid, wait=False, timeout=600): """Unregister Baremetal from Ironic Removes entries for Network Interfaces and baremetal nodes from the Ironic API :param list nics: An array of strings that consist of MAC addresses to be removed. :param string uuid: The UUID of the node to be deleted. :param wait: Boolean value, defaults to false, controlling whether to block the method upon the final step of unregistering the machine. :param timeout: Integer value, representing seconds with a default value of 600, which controls the maximum amount of time to block on the method's completion. :raises: OpenStackCloudException on operation failure. """ machine = self.get_machine(uuid) invalid_states = ['active', 'cleaning', 'clean wait', 'clean failed'] if machine['provision_state'] in invalid_states: raise exc.OpenStackCloudException( "Error unregistering node '%s' due to current provision " "state '%s'" % (uuid, machine['provision_state'])) # NOTE(TheJulia) There is a high possibility of a lock being present # if the machine was just moved through the state machine. This was # previously concealed by exception retry logic that detected the # failure, and resubmitted the request in python-ironicclient. try: self.wait_for_baremetal_node_lock(machine, timeout=timeout) except exc.OpenStackCloudException as e: raise exc.OpenStackCloudException( "Error unregistering node '%s': Exception occurred while" " waiting to be able to proceed: %s" % (machine['uuid'], e)) for nic in nics: port_msg = ("Error removing NIC {nic} from baremetal API for " "node {uuid}").format(nic=nic, uuid=uuid) port_url = '/ports/detail?address={mac}'.format(mac=nic['mac']) port = self._baremetal_client.get(port_url, microversion="1.6", error_message=port_msg) port_url = '/ports/{uuid}'.format(uuid=port['ports'][0]['uuid']) _utils._call_client_and_retry(self._baremetal_client.delete, port_url, retry_on=[409, 503], error_message=port_msg) with _utils.shade_exceptions( "Error unregistering machine {node_id} from the baremetal " "API".format(node_id=uuid)): # NOTE(TheJulia): While this should not matter microversion wise, # ironic assumes all calls without an explicit microversion to be # version 1.0. Ironic expects to deprecate support for older # microversions in future releases, as such, we explicitly set # the version to what we have been using with the client library. version = "1.6" msg = "Baremetal machine failed to be deleted" url = '/nodes/{node_id}'.format( node_id=uuid) _utils._call_client_and_retry(self._baremetal_client.delete, url, retry_on=[409, 503], error_message=msg, microversion=version) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for machine to be deleted"): if not self.get_machine(uuid): break def patch_machine(self, name_or_id, patch): """Patch Machine Information This method allows for an interface to manipulate node entries within Ironic. :param name_or_id: Name or UUID of the node to be updated. :param patch: The JSON Patch document is a list of dictionary objects that comply with RFC 6902 which can be found at https://tools.ietf.org/html/rfc6902.
Example patch construction:: patch=[] patch.append({ 'op': 'remove', 'path': '/instance_info' }) patch.append({ 'op': 'replace', 'path': '/name', 'value': 'newname' }) patch.append({ 'op': 'add', 'path': '/driver_info/username', 'value': 'administrator' }) :raises: OpenStackCloudException on operation error. :returns: ``munch.Munch`` representing the newly updated node. """ msg = ("Error updating machine via patch operation on node " "{node}".format(node=name_or_id)) url = '/nodes/{node_id}'.format(node_id=name_or_id) return self._normalize_machine( self._baremetal_client.patch(url, json=patch, error_message=msg)) def update_machine(self, name_or_id, chassis_uuid=None, driver=None, driver_info=None, name=None, instance_info=None, instance_uuid=None, properties=None): """Update a machine with new configuration information A user-friendly method to perform updates of a machine, in whole or part. :param string name_or_id: A machine name or UUID to be updated. :param string chassis_uuid: Assign a chassis UUID to the machine. NOTE: As of the Kilo release, this value cannot be changed once set. If a user attempts to change this value, then the Ironic API, as of Kilo, will reject the request. :param string driver: The driver name for controlling the machine. :param dict driver_info: The dictionary defining the configuration that the driver will utilize to control the machine. Permutations of this are dependent upon the specific driver utilized. :param string name: A human-readable name to represent the machine. :param dict instance_info: A dictionary of configuration information that conveys to the driver how the host is to be configured when deployed to the machine. :param string instance_uuid: A UUID value representing the instance that the deployed machine represents. :param dict properties: A dictionary defining the properties of a machine. :raises: OpenStackCloudException on operation error. :returns: ``munch.Munch`` containing a machine sub-dictionary consisting of the updated data returned from the API update operation, and a list named changes which contains all of the API paths that received updates. """ machine = self.get_machine(name_or_id) if not machine: raise exc.OpenStackCloudException( "Machine update failed to find Machine: %s. " % name_or_id) machine_config = {} new_config = {} try: if chassis_uuid: machine_config['chassis_uuid'] = machine['chassis_uuid'] new_config['chassis_uuid'] = chassis_uuid if driver: machine_config['driver'] = machine['driver'] new_config['driver'] = driver if driver_info: machine_config['driver_info'] = machine['driver_info'] new_config['driver_info'] = driver_info if name: machine_config['name'] = machine['name'] new_config['name'] = name if instance_info: machine_config['instance_info'] = machine['instance_info'] new_config['instance_info'] = instance_info if instance_uuid: machine_config['instance_uuid'] = machine['instance_uuid'] new_config['instance_uuid'] = instance_uuid if properties: machine_config['properties'] = machine['properties'] new_config['properties'] = properties except KeyError as e: self.log.debug( "Unexpected machine response missing key %s [%s]", e.args[0], name_or_id) raise exc.OpenStackCloudException( "Machine update failed - machine [%s] missing key %s. " "Potential API issue." % (name_or_id, e.args[0])) try: patch = jsonpatch.JsonPatch.from_diff(machine_config, new_config) except Exception as e: raise exc.OpenStackCloudException( "Machine update failed - Error generating JSON patch object " "for submission to the API.
Machine: %s Error: %s" % (name_or_id, str(e))) with _utils.shade_exceptions( "Machine update failed - patch operation failed on Machine " "{node}".format(node=name_or_id) ): if not patch: return dict( node=machine, changes=None ) else: machine = self.patch_machine(machine['uuid'], list(patch)) change_list = [] for change in list(patch): change_list.append(change['path']) return dict( node=machine, changes=change_list ) def validate_node(self, uuid): """Returns node validation information :param string uuid: A UUID value representing the baremetal node. :raises: OpenStackCloudException on operation error or if deploy and power informations are not present. :returns: dict containing validation information for each interface: boot, console, deploy, inspect, management, network, power, raid, rescue, storage, ... """ msg = ("Failed to query the API for validation status of " "node {node_id}").format(node_id=uuid) url = '/nodes/{node_id}/validate'.format(node_id=uuid) validate_resp = self._baremetal_client.get(url, error_message=msg) is_deploy_valid = validate_resp.get( 'deploy', {'result': False}).get('result', False) is_power_valid = validate_resp.get( 'power', {'result': False}).get('result', False) if not is_deploy_valid or not is_power_valid: raise exc.OpenStackCloudException( "ironic node {} failed to validate. " "(deploy: {}, power: {})".format( uuid, validate_resp.get('deploy'), validate_resp.get('power'))) return validate_resp def node_set_provision_state(self, name_or_id, state, configdrive=None, wait=False, timeout=3600): """Set Node Provision State Enables a user to provision a Machine and optionally define a config drive to be utilized. :param string name_or_id: The Name or UUID value representing the baremetal node. :param string state: The desired provision state for the baremetal node. :param string configdrive: An optional URL or file or path representing the configdrive. In the case of a directory, the client API will create a properly formatted configuration drive file and post the file contents to the API for deployment. :param boolean wait: A boolean value, defaulted to false, to control if the method will wait for the desire end state to be reached before returning. :param integer timeout: Integer value, defaulting to 3600 seconds, representing the amount of time to wait for the desire end state to be reached. :raises: OpenStackCloudException on operation error. :returns: ``munch.Munch`` representing the current state of the machine upon exit of the method. """ # NOTE(TheJulia): Default microversion for this call is 1.6. # Setting locally until we have determined our master plan regarding # microversion handling. version = "1.6" msg = ("Baremetal machine node failed change provision state to " "{state}".format(state=state)) url = '/nodes/{node_id}/states/provision'.format( node_id=name_or_id) payload = {'target': state} if configdrive: payload['configdrive'] = configdrive machine = _utils._call_client_and_retry(self._baremetal_client.put, url, retry_on=[409, 503], json=payload, error_message=msg, microversion=version) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for node transition to " "target state of '%s'" % state): machine = self.get_machine(name_or_id) if 'failed' in machine['provision_state']: raise exc.OpenStackCloudException( "Machine encountered a failure.") # NOTE(TheJulia): This performs matching if the requested # end state matches the state the node has reached. 
if state in machine['provision_state']: break # NOTE(TheJulia): This performs matching for cases where # the requested state action ends in the available state. if ("available" in machine['provision_state'] and state in ["provide", "deleted"]): break else: machine = self.get_machine(name_or_id) return machine def set_machine_maintenance_state( self, name_or_id, state=True, reason=None): """Set Baremetal Machine Maintenance State Sets Baremetal maintenance state and maintenance reason. :param string name_or_id: The Name or UUID value representing the baremetal node. :param boolean state: The desired state of the node. True being in maintenance, whereas False means the machine is not in maintenance mode. This value defaults to True if not explicitly set. :param string reason: An optional freeform string that is supplied to the baremetal API to allow for notation as to why the node is in maintenance state. :raises: OpenStackCloudException on operation error. :returns: None """ msg = ("Error setting machine maintenance state to {state} on node " "{node}").format(state=state, node=name_or_id) url = '/nodes/{name_or_id}/maintenance'.format(name_or_id=name_or_id) if state: payload = {'reason': reason} self._baremetal_client.put(url, json=payload, error_message=msg) else: self._baremetal_client.delete(url, error_message=msg) return None def remove_machine_from_maintenance(self, name_or_id): """Remove Baremetal Machine from Maintenance State Similarly to set_machine_maintenance_state, this method removes a machine from maintenance state. It must be noted that this method simply calls set_machine_maintenance_state for the name_or_id requested and sets the state to False. :param string name_or_id: The Name or UUID value representing the baremetal node. :raises: OpenStackCloudException on operation error. :returns: None """ self.set_machine_maintenance_state(name_or_id, False) def _set_machine_power_state(self, name_or_id, state): """Set machine power state to on, off, or reboot This private method allows a user to turn power on or off to a node, or to reboot it, via the Baremetal API. :param string name_or_id: A string representing the baremetal node to have its power state changed. :param string state: A value of "on", "off", or "reboot" that is passed to the baremetal API to be asserted to the machine. In the case of the "reboot" state, Ironic will return the host to the "on" state. :raises: OpenStackCloudException on operation error. :returns: None """ msg = ("Error setting machine power state to {state} on node " "{node}").format(state=state, node=name_or_id) url = '/nodes/{name_or_id}/states/power'.format(name_or_id=name_or_id) if 'reboot' in state: desired_state = 'rebooting' else: desired_state = 'power {state}'.format(state=state) payload = {'target': desired_state} _utils._call_client_and_retry(self._baremetal_client.put, url, retry_on=[409, 503], json=payload, error_message=msg, microversion="1.6") return None def set_machine_power_on(self, name_or_id): """Activate baremetal machine power This is a method that sets the node power state to "on". :param string name_or_id: A string representing the baremetal node to have power turned to an "on" state. :raises: OpenStackCloudException on operation error. :returns: None """ self._set_machine_power_state(name_or_id, 'on') def set_machine_power_off(self, name_or_id): """De-activate baremetal machine power This is a method that sets the node power state to "off".
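        Example (illustrative; assumes a configured ``shade.OpenStackCloud``
        instance named ``cloud``, with the node UUID being a placeholder)::

            cloud.set_machine_power_off(
                'aaaaaaaa-bbbb-4ccc-8ddd-eeeeeeeeeeee')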
:param string name_or_id: A string representing the baremetal node to have power turned to an "off" state. :raises: OpenStackCloudException on operation error. :returns: None """ self._set_machine_power_state(name_or_id, 'off') def set_machine_power_reboot(self, name_or_id): """Reboot baremetal machine power This is a method that sets the node power state to "reboot", which in essence changes the machine power state to "off", and then back to "on". :param string name_or_id: A string representing the baremetal node to be rebooted. :raises: OpenStackCloudException on operation error. :returns: None """ self._set_machine_power_state(name_or_id, 'reboot') def activate_node(self, uuid, configdrive=None, wait=False, timeout=1200): """Deploy a node by setting its provision state to 'active'.""" self.node_set_provision_state( uuid, 'active', configdrive, wait=wait, timeout=timeout) def deactivate_node(self, uuid, wait=False, timeout=1200): """Tear down a node by setting its provision state to 'deleted'.""" self.node_set_provision_state( uuid, 'deleted', wait=wait, timeout=timeout) def set_node_instance_info(self, uuid, patch): """Apply a JSON patch to a node, typically to update instance_info.""" msg = ("Error updating machine via patch operation on node " "{node}".format(node=uuid)) url = '/nodes/{node_id}'.format(node_id=uuid) return self._baremetal_client.patch(url, json=patch, error_message=msg) def purge_node_instance_info(self, uuid): """Remove the instance_info field from a node.""" patch = [] patch.append({'op': 'remove', 'path': '/instance_info'}) msg = ("Error updating machine via patch operation on node " "{node}".format(node=uuid)) url = '/nodes/{node_id}'.format(node_id=uuid) return self._baremetal_client.patch(url, json=patch, error_message=msg) def wait_for_baremetal_node_lock(self, node, timeout=30): """Wait for a baremetal node to have no lock. Baremetal nodes in ironic have a reservation lock that is used to represent that a conductor has locked the node while performing some sort of action, such as changing configuration as a result of a machine state change. This lock can occur during power synchronization, and prevents updates to objects attached to the node, such as ports. In the vast majority of cases, locks should clear in a few seconds, and as such this method will only wait for 30 seconds. The default wait is two seconds between checking if the lock has cleared. This method is intended for use by methods that need to gracefully block without generating errors, however this method does not prevent another client or a timer from triggering a lock immediately after we see the lock as having cleared. :param node: The json representation of the node, specifically looking for the node 'uuid' and 'reservation' fields. :param timeout: Integer in seconds to wait for the lock to clear. Default: 30 :raises: OpenStackCloudException upon client failure. :returns: None """ # TODO(TheJulia): This _can_ still fail with a race # condition in that, between us checking the status and the caller # acting upon it, a conductor could still obtain a lock. Callers # should be prepared to handle that case. if node['reservation'] is None: return else: msg = 'Waiting for lock to be released for node {node}'.format( node=node['uuid']) for count in _utils._iterate_timeout(timeout, msg, 2): current_node = self.get_machine(node['uuid']) if current_node['reservation'] is None: return @_utils.valid_kwargs('type', 'service_type', 'description') def create_service(self, name, enabled=True, **kwargs): """Create a service. :param name: Service name. :param type: Service type. (type or service_type required.) :param service_type: Service type. (type or service_type required.)
:param description: Service description (optional). :param enabled: Whether the service is enabled (v3 only) :returns: a ``munch.Munch`` containing the service description, i.e. the following attributes:: - id: - name: - type: - service_type: - description: :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ type_ = kwargs.pop('type', None) service_type = kwargs.pop('service_type', None) kwargs['type'] = type_ or service_type # TODO(mordred) When this changes to REST, force interface=admin # in the adapter call if self._is_client_version('identity', 2): url, key = '/OS-KSADM/services', 'OS-KSADM:service' else: url, key = '/services', 'service' kwargs['enabled'] = enabled kwargs['name'] = name msg = 'Failed to create service {name}'.format(name=name) data = self._identity_client.post( url, json={key: kwargs}, error_message=msg) service = self._get_and_munchify(key, data) return _utils.normalize_keystone_services([service])[0] @_utils.valid_kwargs('name', 'enabled', 'type', 'service_type', 'description') def update_service(self, name_or_id, **kwargs): # NOTE(SamYaple): Service updates are only available on v3 api if self._is_client_version('identity', 2): raise exc.OpenStackCloudUnavailableFeature( 'Unavailable Feature: Service update requires Identity v3' ) # NOTE(SamYaple): Keystone v3 only accepts 'type' but shade accepts # both 'type' and 'service_type' with a preference # towards 'type' type_ = kwargs.pop('type', None) service_type = kwargs.pop('service_type', None) if type_ or service_type: kwargs['type'] = type_ or service_type # Identity v2 was rejected above, so only the v3 endpoint is needed. url, key = '/services', 'service' service = self.get_service(name_or_id) msg = 'Error in updating service {service}'.format(service=name_or_id) data = self._identity_client.patch( '{url}/{id}'.format(url=url, id=service['id']), json={key: kwargs}, endpoint_filter={'interface': 'admin'}, error_message=msg) service = self._get_and_munchify(key, data) return _utils.normalize_keystone_services([service])[0] def list_services(self): """List all Keystone services. :returns: a list of ``munch.Munch`` containing the services description :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ if self._is_client_version('identity', 2): url, key = '/OS-KSADM/services', 'OS-KSADM:services' else: url, key = '/services', 'services' data = self._identity_client.get( url, endpoint_filter={'interface': 'admin'}, error_message="Failed to list services") services = self._get_and_munchify(key, data) return _utils.normalize_keystone_services(services) def search_services(self, name_or_id=None, filters=None): """Search Keystone services. :param name_or_id: Name or id of the desired service. :param filters: a dict containing additional filters to use. e.g. {'type': 'network'}. :returns: a list of ``munch.Munch`` containing the services description :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ services = self.list_services() return _utils._filter_list(services, name_or_id, filters) def get_service(self, name_or_id, filters=None): """Get exactly one Keystone service. :param name_or_id: Name or id of the desired service. :param filters: a dict containing additional filters to use. e.g. {'type': 'network'} :returns: a ``munch.Munch`` containing the service description, i.e.
the following attributes:: - id: - name: - type: - description: :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call or if multiple matches are found. """ return _utils._get_entity(self, 'service', name_or_id, filters) def delete_service(self, name_or_id): """Delete a Keystone service. :param name_or_id: Service name or id. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call """ service = self.get_service(name_or_id=name_or_id) if service is None: self.log.debug("Service %s not found for deleting", name_or_id) return False if self._is_client_version('identity', 2): url = '/OS-KSADM/services' else: url = '/services' error_msg = 'Failed to delete service {id}'.format(id=service['id']) self._identity_client.delete( '{url}/{id}'.format(url=url, id=service['id']), endpoint_filter={'interface': 'admin'}, error_message=error_msg) return True @_utils.valid_kwargs('public_url', 'internal_url', 'admin_url') def create_endpoint(self, service_name_or_id, url=None, interface=None, region=None, enabled=True, **kwargs): """Create a Keystone endpoint. :param service_name_or_id: Service name or id for this endpoint. :param url: URL of the endpoint :param interface: Interface type of the endpoint :param public_url: Endpoint public URL. :param internal_url: Endpoint internal URL. :param admin_url: Endpoint admin URL. :param region: Endpoint region. :param enabled: Whether the endpoint is enabled NOTE: Both v2 (public_url, internal_url, admin_url) and v3 (url, interface) calling semantics are supported. But you can only use one of them at a time. :returns: a list of ``munch.Munch`` containing the endpoint description :raises: OpenStackCloudException if the service cannot be found or if something goes wrong during the openstack API call. """ public_url = kwargs.pop('public_url', None) internal_url = kwargs.pop('internal_url', None) admin_url = kwargs.pop('admin_url', None) if (url or interface) and (public_url or internal_url or admin_url): raise exc.OpenStackCloudException( "create_endpoint takes either url and interface OR" " public_url, internal_url, admin_url") service = self.get_service(name_or_id=service_name_or_id) if service is None: raise exc.OpenStackCloudException( "service {service} not found".format( service=service_name_or_id)) if self._is_client_version('identity', 2): if url: # v2.0 in use, v3-like arguments, one endpoint created if interface != 'public': raise exc.OpenStackCloudException( "Error adding endpoint for service {service}." " On a v2 cloud the url/interface API may only be" " used for public url. 
Try using the public_url," " internal_url, admin_url parameters instead of" " url and interface".format( service=service_name_or_id)) endpoint_args = {'publicurl': url} else: # v2.0 in use, v2.0-like arguments, one endpoint created endpoint_args = {} if public_url: endpoint_args.update({'publicurl': public_url}) if internal_url: endpoint_args.update({'internalurl': internal_url}) if admin_url: endpoint_args.update({'adminurl': admin_url}) # keystone v2.0 requires 'region' arg even if it is None endpoint_args.update( {'service_id': service['id'], 'region': region}) data = self._identity_client.post( '/endpoints', json={'endpoint': endpoint_args}, endpoint_filter={'interface': 'admin'}, error_message=("Failed to create endpoint for service" " {service}".format(service=service['name']))) return [self._get_and_munchify('endpoint', data)] else: endpoints_args = [] if url: # v3 in use, v3-like arguments, one endpoint created endpoints_args.append( {'url': url, 'interface': interface, 'service_id': service['id'], 'enabled': enabled, 'region': region}) else: # v3 in use, v2.0-like arguments, one endpoint created for each # interface url provided endpoint_args = {'region': region, 'enabled': enabled, 'service_id': service['id']} if public_url: endpoint_args.update({'url': public_url, 'interface': 'public'}) endpoints_args.append(endpoint_args.copy()) if internal_url: endpoint_args.update({'url': internal_url, 'interface': 'internal'}) endpoints_args.append(endpoint_args.copy()) if admin_url: endpoint_args.update({'url': admin_url, 'interface': 'admin'}) endpoints_args.append(endpoint_args.copy()) endpoints = [] error_msg = ("Failed to create endpoint for service" " {service}".format(service=service['name'])) for args in endpoints_args: data = self._identity_client.post( '/endpoints', json={'endpoint': args}, error_message=error_msg) endpoints.append(self._get_and_munchify('endpoint', data)) return endpoints @_utils.valid_kwargs('enabled', 'service_name_or_id', 'url', 'interface', 'region') def update_endpoint(self, endpoint_id, **kwargs): # NOTE(SamYaple): Endpoint updates are only available on v3 api if self._is_client_version('identity', 2): raise exc.OpenStackCloudUnavailableFeature( 'Unavailable Feature: Endpoint update' ) service_name_or_id = kwargs.pop('service_name_or_id', None) if service_name_or_id is not None: kwargs['service_id'] = service_name_or_id data = self._identity_client.patch( '/endpoints/{}'.format(endpoint_id), json={'endpoint': kwargs}, error_message="Failed to update endpoint {}".format(endpoint_id)) return self._get_and_munchify('endpoint', data) def list_endpoints(self): """List Keystone endpoints. :returns: a list of ``munch.Munch`` containing the endpoint description :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ # Force admin interface if v2.0 is in use v2 = self._is_client_version('identity', 2) kwargs = {'endpoint_filter': {'interface': 'admin'}} if v2 else {} data = self._identity_client.get( '/endpoints', error_message="Failed to list endpoints", **kwargs) endpoints = self._get_and_munchify('endpoints', data) return endpoints def search_endpoints(self, id=None, filters=None): """List Keystone endpoints. :param id: endpoint id. :param filters: a dict containing additional filters to use. e.g. {'region': 'region-a.geo-1'} :returns: a list of ``munch.Munch`` containing the endpoint description. 
Each dict contains the following attributes:: - id: - region: - public_url: - internal_url: (optional) - admin_url: (optional) :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ # NOTE(SamYaple): With keystone v3 we can filter directly via the # keystone api, but since the return of all the endpoints even in # large environments is small, we can continue to filter in shade just # like the v2 api. endpoints = self.list_endpoints() return _utils._filter_list(endpoints, id, filters) def get_endpoint(self, id, filters=None): """Get exactly one Keystone endpoint. :param id: endpoint id. :param filters: a dict containing additional filters to use. e.g. {'region': 'region-a.geo-1'} :returns: a ``munch.Munch`` containing the endpoint description. i.e. a ``munch.Munch`` containing the following attributes:: - id: - region: - public_url: - internal_url: (optional) - admin_url: (optional) """ return _utils._get_entity(self, 'endpoint', id, filters) def delete_endpoint(self, id): """Delete a Keystone endpoint. :param id: Id of the endpoint to delete. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ endpoint = self.get_endpoint(id=id) if endpoint is None: self.log.debug("Endpoint %s not found for deleting", id) return False # Force admin interface if v2.0 is in use v2 = self._is_client_version('identity', 2) kwargs = {'endpoint_filter': {'interface': 'admin'}} if v2 else {} error_msg = "Failed to delete endpoint {id}".format(id=id) self._identity_client.delete('/endpoints/{id}'.format(id=id), error_message=error_msg, **kwargs) return True def create_domain(self, name, description=None, enabled=True): """Create a domain. :param name: The name of the domain. :param description: A description of the domain. :param enabled: Is the domain enabled or not (default True). :returns: a ``munch.Munch`` containing the domain representation. :raise exc.OpenStackCloudException: if the domain cannot be created. """ domain_ref = {'name': name, 'enabled': enabled} if description is not None: domain_ref['description'] = description msg = 'Failed to create domain {name}'.format(name=name) data = self._identity_client.post( '/domains', json={'domain': domain_ref}, error_message=msg) domain = self._get_and_munchify('domain', data) return _utils.normalize_domains([domain])[0] def update_domain( self, domain_id=None, name=None, description=None, enabled=None, name_or_id=None): """Update a Keystone domain. :param domain_id: ID of the domain to update. :param name: The new name of the domain. :param description: The new description of the domain. :param enabled: Whether the domain should be enabled. :param name_or_id: Name or ID of the domain to update, used when domain_id is not provided. :returns: a ``munch.Munch`` containing the updated domain representation. :raises: ``OpenStackCloudException`` if the domain cannot be found or updated. """ if domain_id is None: if name_or_id is None: raise exc.OpenStackCloudException( "You must pass either domain_id or name_or_id value" ) dom = self.get_domain(None, name_or_id) if dom is None: raise exc.OpenStackCloudException( "Domain {0} not found for updating".format(name_or_id) ) domain_id = dom['id'] domain_ref = {} domain_ref.update({'name': name} if name else {}) domain_ref.update({'description': description} if description else {}) domain_ref.update({'enabled': enabled} if enabled is not None else {}) error_msg = "Error in updating domain {id}".format(id=domain_id) data = self._identity_client.patch( '/domains/{id}'.format(id=domain_id), json={'domain': domain_ref}, error_message=error_msg) domain = self._get_and_munchify('domain', data) return _utils.normalize_domains([domain])[0] def delete_domain(self, domain_id=None, name_or_id=None): """Delete a domain. :param domain_id: ID of the domain to delete. :param name_or_id: Name or ID of the domain to delete.
:returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ if domain_id is None: if name_or_id is None: raise exc.OpenStackCloudException( "You must pass either domain_id or name_or_id value" ) dom = self.get_domain(name_or_id=name_or_id) if dom is None: self.log.debug( "Domain %s not found for deleting", name_or_id) return False domain_id = dom['id'] # A domain must be disabled before deleting self.update_domain(domain_id, enabled=False) error_msg = "Failed to delete domain {id}".format(id=domain_id) self._identity_client.delete('/domains/{id}'.format(id=domain_id), error_message=error_msg) return True def list_domains(self, **filters): """List Keystone domains. :returns: a list of ``munch.Munch`` containing the domain description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ data = self._identity_client.get( '/domains', params=filters, error_message="Failed to list domains") domains = self._get_and_munchify('domains', data) return _utils.normalize_domains(domains) def search_domains(self, filters=None, name_or_id=None): """Search Keystone domains. :param name_or_id: domain name or id :param dict filters: A dict containing additional filters to use. Keys to search on are id, name, enabled and description. :returns: a list of ``munch.Munch`` containing the domain description. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ if filters is None: filters = {} if name_or_id is not None: domains = self.list_domains() return _utils._filter_list(domains, name_or_id, filters) else: return self.list_domains(**filters) def get_domain(self, domain_id=None, name_or_id=None, filters=None): """Get exactly one Keystone domain. :param domain_id: domain id. :param name_or_id: domain name or id. :param dict filters: A dict containing additional filters to use. Keys to search on are id, name, enabled and description. :returns: a ``munch.Munch`` containing the domain description, or None if not found. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ if domain_id is None: # NOTE(SamYaple): search_domains() has filters and name_or_id # in the wrong positional order which prevents _get_entity from # being able to return quickly if passing a domain object so we # duplicate that logic here if hasattr(name_or_id, 'id'): return name_or_id return _utils._get_entity(self, 'domain', filters, name_or_id) else: error_msg = 'Failed to get domain {id}'.format(id=domain_id) data = self._identity_client.get( '/domains/{id}'.format(id=domain_id), error_message=error_msg) domain = self._get_and_munchify('domain', data) return _utils.normalize_domains([domain])[0] @_utils.valid_kwargs('domain_id') @_utils.cache_on_arguments() def list_groups(self, **kwargs): """List Keystone Groups. :param domain_id: domain id. :returns: A list of ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
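        Example (illustrative; assumes a configured ``shade.OpenStackCloud``
        instance named ``cloud``; the domain ID is a placeholder)::

            for group in cloud.list_groups(domain_id='default'):
                print(group['id'], group['name'])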
""" data = self._identity_client.get( '/groups', params=kwargs, error_message="Failed to list groups") return _utils.normalize_groups(self._get_and_munchify('groups', data)) @_utils.valid_kwargs('domain_id') def search_groups(self, name_or_id=None, filters=None, **kwargs): """Search Keystone groups. :param name: Group name or id. :param filters: A dict containing additional filters to use. :param domain_id: domain id. :returns: A list of ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ groups = self.list_groups(**kwargs) return _utils._filter_list(groups, name_or_id, filters) @_utils.valid_kwargs('domain_id') def get_group(self, name_or_id, filters=None, **kwargs): """Get exactly one Keystone group. :param id: Group name or id. :param filters: A dict containing additional filters to use. :param domain_id: domain id. :returns: A ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ return _utils._get_entity(self, 'group', name_or_id, filters, **kwargs) def create_group(self, name, description, domain=None): """Create a group. :param string name: Group name. :param string description: Group description. :param string domain: Domain name or ID for the group. :returns: A ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ group_ref = {'name': name} if description: group_ref['description'] = description if domain: dom = self.get_domain(domain) if not dom: raise exc.OpenStackCloudException( "Creating group {group} failed: Invalid domain " "{domain}".format(group=name, domain=domain) ) group_ref['domain_id'] = dom['id'] error_msg = "Error creating group {group}".format(group=name) data = self._identity_client.post( '/groups', json={'group': group_ref}, error_message=error_msg) group = self._get_and_munchify('group', data) self.list_groups.invalidate(self) return _utils.normalize_groups([group])[0] @_utils.valid_kwargs('domain_id') def update_group(self, name_or_id, name=None, description=None, **kwargs): """Update an existing group :param string name: New group name. :param string description: New group description. :param domain_id: domain id. :returns: A ``munch.Munch`` containing the group description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ self.list_groups.invalidate(self) group = self.get_group(name_or_id, **kwargs) if group is None: raise exc.OpenStackCloudException( "Group {0} not found for updating".format(name_or_id) ) group_ref = {} if name: group_ref['name'] = name if description: group_ref['description'] = description error_msg = "Unable to update group {name}".format(name=name_or_id) data = self._identity_client.patch( '/groups/{id}'.format(id=group['id']), json={'group': group_ref}, error_message=error_msg) group = self._get_and_munchify('group', data) self.list_groups.invalidate(self) return _utils.normalize_groups([group])[0] @_utils.valid_kwargs('domain_id') def delete_group(self, name_or_id, **kwargs): """Delete a group :param name_or_id: ID or name of the group to delete. :param domain_id: domain id. :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. 
""" group = self.get_group(name_or_id, **kwargs) if group is None: self.log.debug( "Group %s not found for deleting", name_or_id) return False error_msg = "Unable to delete group {name}".format(name=name_or_id) self._identity_client.delete('/groups/{id}'.format(id=group['id']), error_message=error_msg) self.list_groups.invalidate(self) return True @_utils.valid_kwargs('domain_id') def list_roles(self, **kwargs): """List Keystone roles. :param domain_id: domain id for listing roles (v3) :returns: a list of ``munch.Munch`` containing the role description. :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ v2 = self._is_client_version('identity', 2) url = '/OS-KSADM/roles' if v2 else '/roles' data = self._identity_client.get( url, params=kwargs, error_message="Failed to list roles") return self._normalize_roles(self._get_and_munchify('roles', data)) @_utils.valid_kwargs('domain_id') def search_roles(self, name_or_id=None, filters=None, **kwargs): """Seach Keystone roles. :param string name: role name or id. :param dict filters: a dict containing additional filters to use. :param domain_id: domain id (v3) :returns: a list of ``munch.Munch`` containing the role description. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ roles = self.list_roles(**kwargs) return _utils._filter_list(roles, name_or_id, filters) @_utils.valid_kwargs('domain_id') def get_role(self, name_or_id, filters=None, **kwargs): """Get exactly one Keystone role. :param id: role name or id. :param filters: a dict containing additional filters to use. :param domain_id: domain id (v3) :returns: a single ``munch.Munch`` containing the role description. Each ``munch.Munch`` contains the following attributes:: - id: - name: - description: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ return _utils._get_entity(self, 'role', name_or_id, filters, **kwargs) def _keystone_v2_role_assignments(self, user, project=None, role=None, **kwargs): data = self._identity_client.get( "/tenants/{tenant}/users/{user}/roles".format( tenant=project, user=user), error_message="Failed to list role assignments") roles = self._get_and_munchify('roles', data) ret = [] for tmprole in roles: if role is not None and role != tmprole.id: continue ret.append({ 'role': { 'id': tmprole.id }, 'scope': { 'project': { 'id': project, } }, 'user': { 'id': user, } }) return ret def _keystone_v3_role_assignments(self, **filters): # NOTE(samueldmq): different parameters have different representation # patterns as query parameters in the call to the list role assignments # API. The code below handles each set of patterns separately and # renames the parameters names accordingly, ignoring 'effective', # 'include_names' and 'include_subtree' whose do not need any renaming. for k in ('group', 'role', 'user'): if k in filters: filters[k + '.id'] = filters[k] del filters[k] for k in ('project', 'domain'): if k in filters: filters['scope.' 
+ k + '.id'] = filters[k] del filters[k] if 'os_inherit_extension_inherited_to' in filters: filters['scope.OS-INHERIT:inherited_to'] = ( filters['os_inherit_extension_inherited_to']) del filters['os_inherit_extension_inherited_to'] data = self._identity_client.get( '/role_assignments', params=filters, error_message="Failed to list role assignments") return self._get_and_munchify('role_assignments', data) def list_role_assignments(self, filters=None): """List Keystone role assignments :param dict filters: Dict of filter conditions. Acceptable keys are: * 'user' (string) - User ID to be used as query filter. * 'group' (string) - Group ID to be used as query filter. * 'project' (string) - Project ID to be used as query filter. * 'domain' (string) - Domain ID to be used as query filter. * 'role' (string) - Role ID to be used as query filter. * 'os_inherit_extension_inherited_to' (string) - Return inherited role assignments for either 'projects' or 'domains' * 'effective' (boolean) - Return effective role assignments. * 'include_subtree' (boolean) - Include subtree 'user' and 'group' are mutually exclusive, as are 'domain' and 'project'. NOTE: For keystone v2, only user, project, and role are used. Project and user are both required in filters. :returns: a list of ``munch.Munch`` containing the role assignment description. Contains the following attributes:: - id: - user|group: - project|domain: :raises: ``OpenStackCloudException``: if something goes wrong during the openstack API call. """ # NOTE(samueldmq): although 'include_names' is a valid query parameter # in the keystone v3 list role assignments API, it would have NO effect # on shade due to normalization. It is not documented as an acceptable # filter in the docs above per design! if not filters: filters = {} # NOTE(samueldmq): the docs above say filters are *IDs*, though if # munch.Munch objects are passed, this still works for backwards # compatibility as keystoneclient allows either IDs or objects to be # passed in. # TODO(samueldmq): fix the docs above to advertise munch.Munch objects # can be provided as parameters too for k, v in filters.items(): if isinstance(v, munch.Munch): filters[k] = v['id'] if self._is_client_version('identity', 2): if filters.get('project') is None or filters.get('user') is None: raise exc.OpenStackCloudException( "Must provide project and user for keystone v2" ) assignments = self._keystone_v2_role_assignments(**filters) else: assignments = self._keystone_v3_role_assignments(**filters) return _utils.normalize_role_assignments(assignments) def create_flavor(self, name, ram, vcpus, disk, flavorid="auto", ephemeral=0, swap=0, rxtx_factor=1.0, is_public=True): """Create a new flavor. :param name: Descriptive name of the flavor :param ram: Memory in MB for the flavor :param vcpus: Number of VCPUs for the flavor :param disk: Size of local disk in GB :param flavorid: ID for the flavor (optional) :param ephemeral: Ephemeral space size in GB :param swap: Swap space in MB :param rxtx_factor: RX/TX factor :param is_public: Make flavor accessible to the public :returns: A ``munch.Munch`` describing the new flavor. :raises: OpenStackCloudException on operation error. 
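        Example (illustrative; assumes a configured ``shade.OpenStackCloud``
        instance named ``cloud`` with permission to create flavors)::

            flavor = cloud.create_flavor('m1.tiny', ram=512, vcpus=1, disk=1)
            print(flavor['id'])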
""" with _utils.shade_exceptions("Failed to create flavor {name}".format( name=name)): payload = { 'disk': disk, 'OS-FLV-EXT-DATA:ephemeral': ephemeral, 'id': flavorid, 'os-flavor-access:is_public': is_public, 'name': name, 'ram': ram, 'rxtx_factor': rxtx_factor, 'swap': swap, 'vcpus': vcpus, } if flavorid == 'auto': payload['id'] = None data = self._compute_client.post( '/flavors', json=dict(flavor=payload)) return self._normalize_flavor( self._get_and_munchify('flavor', data)) def delete_flavor(self, name_or_id): """Delete a flavor :param name_or_id: ID or name of the flavor to delete. :returns: True if delete succeeded, False otherwise. :raises: OpenStackCloudException on operation error. """ flavor = self.get_flavor(name_or_id, get_extra=False) if flavor is None: self.log.debug( "Flavor %s not found for deleting", name_or_id) return False with _utils.shade_exceptions("Unable to delete flavor {name}".format( name=name_or_id)): self._compute_client.delete( '/flavors/{id}'.format(id=flavor['id'])) return True def set_flavor_specs(self, flavor_id, extra_specs): """Add extra specs to a flavor :param string flavor_id: ID of the flavor to update. :param dict extra_specs: Dictionary of key-value pairs. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudResourceNotFound if flavor ID is not found. """ try: self._compute_client.post( "/flavors/{id}/os-extra_specs".format(id=flavor_id), json=dict(extra_specs=extra_specs)) except Exception as e: raise exc.OpenStackCloudException( "Unable to set flavor specs: {0}".format(str(e)) ) def unset_flavor_specs(self, flavor_id, keys): """Delete extra specs from a flavor :param string flavor_id: ID of the flavor to update. :param list keys: List of spec keys to delete. :raises: OpenStackCloudException on operation error. :raises: OpenStackCloudResourceNotFound if flavor ID is not found. """ for key in keys: try: self._compute_client.delete( "/flavors/{id}/os-extra_specs/{key}".format( id=flavor_id, key=key)) except Exception as e: raise exc.OpenStackCloudException( "Unable to delete flavor spec {0}: {1}".format( key, str(e))) def _mod_flavor_access(self, action, flavor_id, project_id): """Common method for adding and removing flavor access """ with _utils.shade_exceptions("Error trying to {action} access from " "flavor ID {flavor}".format( action=action, flavor=flavor_id)): endpoint = '/flavors/{id}/action'.format(id=flavor_id) access = {'tenant': project_id} access_key = '{action}TenantAccess'.format(action=action) self._compute_client.post(endpoint, json={access_key: access}) def add_flavor_access(self, flavor_id, project_id): """Grant access to a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. :param string project_id: ID of the project/tenant. :raises: OpenStackCloudException on operation error. """ self._mod_flavor_access('add', flavor_id, project_id) def remove_flavor_access(self, flavor_id, project_id): """Revoke access from a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. :param string project_id: ID of the project/tenant. :raises: OpenStackCloudException on operation error. """ self._mod_flavor_access('remove', flavor_id, project_id) def list_flavor_access(self, flavor_id): """List access from a private flavor for a project/tenant. :param string flavor_id: ID of the private flavor. :returns: a list of ``munch.Munch`` containing the access description :raises: OpenStackCloudException on operation error. 
""" with _utils.shade_exceptions("Error trying to list access from " "flavor ID {flavor}".format( flavor=flavor_id)): data = self._compute_client.get( '/flavors/{id}/os-flavor-access'.format(id=flavor_id)) return _utils.normalize_flavor_accesses( self._get_and_munchify('flavor_access', data)) @_utils.valid_kwargs('domain_id') def create_role(self, name, **kwargs): """Create a Keystone role. :param string name: The name of the role. :param domain_id: domain id (v3) :returns: a ``munch.Munch`` containing the role description :raise exc.OpenStackCloudException: if the role cannot be created """ v2 = self._is_client_version('identity', 2) url = '/OS-KSADM/roles' if v2 else '/roles' kwargs['name'] = name msg = 'Failed to create role {name}'.format(name=name) data = self._identity_client.post( url, json={'role': kwargs}, error_message=msg) role = self._get_and_munchify('role', data) return self._normalize_role(role) @_utils.valid_kwargs('domain_id') def update_role(self, name_or_id, name, **kwargs): """Update a Keystone role. :param name_or_id: Name or id of the role to update :param string name: The new role name :param domain_id: domain id :returns: a ``munch.Munch`` containing the role description :raise exc.OpenStackCloudException: if the role cannot be created """ if self._is_client_version('identity', 2): raise exc.OpenStackCloudUnavailableFeature( 'Unavailable Feature: Role update requires Identity v3' ) kwargs['name_or_id'] = name_or_id role = self.get_role(**kwargs) if role is None: self.log.debug( "Role %s not found for updating", name_or_id) return False msg = 'Failed to update role {name}'.format(name=name_or_id) json_kwargs = {'role_id': role.id, 'role': {'name': name}} data = self._identity_client.patch('/roles', error_message=msg, json=json_kwargs) role = self._get_and_munchify('role', data) return self._normalize_role(role) @_utils.valid_kwargs('domain_id') def delete_role(self, name_or_id, **kwargs): """Delete a Keystone role. :param string id: Name or id of the role to delete. :param domain_id: domain id (v3) :returns: True if delete succeeded, False otherwise. :raises: ``OpenStackCloudException`` if something goes wrong during the openstack API call. """ role = self.get_role(name_or_id, **kwargs) if role is None: self.log.debug( "Role %s not found for deleting", name_or_id) return False v2 = self._is_client_version('identity', 2) url = '{preffix}/{id}'.format( preffix='/OS-KSADM/roles' if v2 else '/roles', id=role['id']) error_msg = "Unable to delete role {name}".format(name=name_or_id) self._identity_client.delete(url, error_message=error_msg) return True def _get_grant_revoke_params(self, role, user=None, group=None, project=None, domain=None): role = self.get_role(role) if role is None: return {} data = {'role': role.id} # domain and group not available in keystone v2.0 is_keystone_v2 = self._is_client_version('identity', 2) filters = {} if not is_keystone_v2 and domain: filters['domain_id'] = data['domain'] = \ self.get_domain(domain)['id'] if user: if domain: data['user'] = self.get_user(user, domain_id=filters['domain_id'], filters=filters) else: data['user'] = self.get_user(user, filters=filters) if project: # drop domain in favor of project data.pop('domain', None) data['project'] = self.get_project(project, filters=filters) if not is_keystone_v2 and group: data['group'] = self.get_group(group, filters=filters) return data def grant_role(self, name_or_id, user=None, group=None, project=None, domain=None, wait=False, timeout=60): """Grant a role to a user. 
:param string name_or_id: The name or id of the role. :param string user: The name or id of the user. :param string group: The name or id of the group. (v3) :param string project: The name or id of the project. :param string domain: The id of the domain. (v3) :param bool wait: Wait for role to be granted :param int timeout: Timeout to wait for role to be granted NOTE: domain is a required argument when the grant is on a project, user or group specified by name. In that situation, they are all considered to be in that domain. If different domains are in use in the same role grant, it is required to specify those by ID. NOTE: for wait and timeout, sometimes granting roles is not instantaneous. NOTE: project is required for keystone v2 :returns: True if the role is assigned, otherwise False :raise exc.OpenStackCloudException: if the role cannot be granted """ data = self._get_grant_revoke_params(name_or_id, user, group, project, domain) filters = data.copy() if not data: raise exc.OpenStackCloudException( 'Role {0} not found.'.format(name_or_id)) if data.get('user') is not None and data.get('group') is not None: raise exc.OpenStackCloudException( 'Specify either a group or a user, not both') if data.get('user') is None and data.get('group') is None: raise exc.OpenStackCloudException( 'Must specify either a user or a group') if self._is_client_version('identity', 2) and \ data.get('project') is None: raise exc.OpenStackCloudException( 'Must specify project for keystone v2') if self.list_role_assignments(filters=filters): self.log.debug('Assignment already exists') return False error_msg = "Error granting access to role: {0}".format(data) if self._is_client_version('identity', 2): # For v2.0, only tenant/project assignment is supported url = "/tenants/{t}/users/{u}/roles/OS-KSADM/{r}".format( t=data['project']['id'], u=data['user']['id'], r=data['role']) self._identity_client.put(url, error_message=error_msg, endpoint_filter={'interface': 'admin'}) else: if data.get('project') is None and data.get('domain') is None: raise exc.OpenStackCloudException( 'Must specify either a domain or project') # For v3, figure out the assignment type and build the URL if data.get('domain'): url = "/domains/{}".format(data['domain']) else: url = "/projects/{}".format(data['project']['id']) if data.get('group'): url += "/groups/{}".format(data['group']['id']) else: url += "/users/{}".format(data['user']['id']) url += "/roles/{}".format(data.get('role')) self._identity_client.put(url, error_message=error_msg) if wait: for count in _utils._iterate_timeout( timeout, "Timeout waiting for role to be granted"): if self.list_role_assignments(filters=filters): break return True def revoke_role(self, name_or_id, user=None, group=None, project=None, domain=None, wait=False, timeout=60): """Revoke a role from a user. :param string name_or_id: The name or id of the role. :param string user: The name or id of the user. :param string group: The name or id of the group. (v3) :param string project: The name or id of the project. :param string domain: The id of the domain. (v3) :param bool wait: Wait for role to be revoked :param int timeout: Timeout to wait for role to be revoked NOTE: for wait and timeout, sometimes revoking roles is not instantaneous. 
        NOTE: project is required for keystone v2

        :returns: True if the role is revoked, otherwise False

        :raise exc.OpenStackCloudException: if the role cannot be removed
        """
        data = self._get_grant_revoke_params(name_or_id, user, group,
                                             project, domain)
        filters = data.copy()

        if not data:
            raise exc.OpenStackCloudException(
                'Role {0} not found.'.format(name_or_id))

        if data.get('user') is not None and data.get('group') is not None:
            raise exc.OpenStackCloudException(
                'Specify either a group or a user, not both')

        if data.get('user') is None and data.get('group') is None:
            raise exc.OpenStackCloudException(
                'Must specify either a user or a group')

        if self._is_client_version('identity', 2) and \
                data.get('project') is None:
            raise exc.OpenStackCloudException(
                'Must specify project for keystone v2')

        if not self.list_role_assignments(filters=filters):
            self.log.debug('Assignment does not exist')
            return False

        error_msg = "Error revoking access to role: {0}".format(data)
        if self._is_client_version('identity', 2):
            # For v2.0, only tenant/project assignment is supported
            url = "/tenants/{t}/users/{u}/roles/OS-KSADM/{r}".format(
                t=data['project']['id'], u=data['user']['id'],
                r=data['role'])

            self._identity_client.delete(
                url, error_message=error_msg,
                endpoint_filter={'interface': 'admin'})
        else:
            if data.get('project') is None and data.get('domain') is None:
                raise exc.OpenStackCloudException(
                    'Must specify either a domain or project')

            # For v3, figure out the assignment type and build the URL
            if data.get('domain'):
                url = "/domains/{}".format(data['domain'])
            else:
                url = "/projects/{}".format(data['project']['id'])
            if data.get('group'):
                url += "/groups/{}".format(data['group']['id'])
            else:
                url += "/users/{}".format(data['user']['id'])
            url += "/roles/{}".format(data.get('role'))

            self._identity_client.delete(url, error_message=error_msg)

        if wait:
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for role to be revoked"):
                if not self.list_role_assignments(filters=filters):
                    break
        return True

    def list_hypervisors(self):
        """List all hypervisors

        :returns: A list of hypervisor ``munch.Munch``.
        """

        data = self._compute_client.get(
            '/os-hypervisors/detail',
            error_message="Error fetching hypervisor list")
        return self._get_and_munchify('hypervisors', data)

    def search_aggregates(self, name_or_id=None, filters=None):
        """Search host aggregates.

        :param name_or_id: aggregate name or id.
        :param filters: a dict containing additional filters to use.

        :returns: a list of dicts containing the aggregates

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the openstack API call.
        """
        aggregates = self.list_aggregates()
        return _utils._filter_list(aggregates, name_or_id, filters)

    def list_aggregates(self):
        """List all available host aggregates.

        :returns: A list of aggregate dicts.
        """
        data = self._compute_client.get(
            '/os-aggregates',
            error_message="Error fetching aggregate list")
        return self._get_and_munchify('aggregates', data)

    def get_aggregate(self, name_or_id, filters=None):
        """Get an aggregate by name or ID.

        :param name_or_id: Name or ID of the aggregate.
        :param dict filters:
            A dictionary of meta data to use for further filtering. Elements
            of this dictionary may, themselves, be dictionaries. Example::

                {
                  'availability_zone': 'nova',
                  'metadata': {
                      'cpu_allocation_ratio': '1.0'
                  }
                }

        :returns: An aggregate dict or None if no matching aggregate is
                  found.
        """
        return _utils._get_entity(self, 'aggregate', name_or_id, filters)

    def create_aggregate(self, name, availability_zone=None):
        """Create a new host aggregate.
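
        Example (a minimal sketch; the aggregate name and availability zone
        are hypothetical)::

            aggregate = cloud.create_aggregate(
                'example-aggregate', availability_zone='example-az')
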
        :param name: Name of the host aggregate being created
        :param availability_zone: Availability zone to assign hosts

        :returns: a dict representing the new host aggregate.

        :raises: OpenStackCloudException on operation error.
        """
        data = self._compute_client.post(
            '/os-aggregates',
            json={'aggregate': {
                'name': name,
                'availability_zone': availability_zone
            }},
            error_message="Unable to create host aggregate {name}".format(
                name=name))
        return self._get_and_munchify('aggregate', data)

    @_utils.valid_kwargs('name', 'availability_zone')
    def update_aggregate(self, name_or_id, **kwargs):
        """Update a host aggregate.

        :param name_or_id: Name or ID of the aggregate being updated.
        :param name: New aggregate name
        :param availability_zone: Availability zone to assign to hosts

        :returns: a dict representing the updated host aggregate.

        :raises: OpenStackCloudException on operation error.
        """
        aggregate = self.get_aggregate(name_or_id)
        if not aggregate:
            raise exc.OpenStackCloudException(
                "Host aggregate %s not found." % name_or_id)

        data = self._compute_client.put(
            '/os-aggregates/{id}'.format(id=aggregate['id']),
            json={'aggregate': kwargs},
            error_message="Error updating aggregate {name}".format(
                name=name_or_id))
        return self._get_and_munchify('aggregate', data)

    def delete_aggregate(self, name_or_id):
        """Delete a host aggregate.

        :param name_or_id: Name or ID of the host aggregate to delete.

        :returns: True if delete succeeded, False otherwise.

        :raises: OpenStackCloudException on operation error.
        """
        aggregate = self.get_aggregate(name_or_id)
        if not aggregate:
            self.log.debug("Aggregate %s not found for deleting", name_or_id)
            return False

        self._compute_client.delete(
            '/os-aggregates/{id}'.format(id=aggregate['id']),
            error_message="Error deleting aggregate {name}".format(
                name=name_or_id))

        return True

    def set_aggregate_metadata(self, name_or_id, metadata):
        """Set aggregate metadata, replacing the existing metadata.

        :param name_or_id: Name of the host aggregate to update
        :param metadata: Dict containing metadata to replace (Use
                {'key': None} to remove a key)

        :returns: a dict representing the new host aggregate.

        :raises: OpenStackCloudException on operation error.
        """
        aggregate = self.get_aggregate(name_or_id)
        if not aggregate:
            raise exc.OpenStackCloudException(
                "Host aggregate %s not found." % name_or_id)

        err_msg = "Unable to set metadata for host aggregate {name}".format(
            name=name_or_id)

        data = self._compute_client.post(
            '/os-aggregates/{id}/action'.format(id=aggregate['id']),
            json={'set_metadata': {'metadata': metadata}},
            error_message=err_msg)
        return self._get_and_munchify('aggregate', data)

    def add_host_to_aggregate(self, name_or_id, host_name):
        """Add a host to an aggregate.

        :param name_or_id: Name or ID of the host aggregate.
        :param host_name: Host to add.

        :raises: OpenStackCloudException on operation error.
        """
        aggregate = self.get_aggregate(name_or_id)
        if not aggregate:
            raise exc.OpenStackCloudException(
                "Host aggregate %s not found." % name_or_id)

        err_msg = "Unable to add host {host} to aggregate {name}".format(
            host=host_name, name=name_or_id)

        return self._compute_client.post(
            '/os-aggregates/{id}/action'.format(id=aggregate['id']),
            json={'add_host': {'host': host_name}},
            error_message=err_msg)

    def remove_host_from_aggregate(self, name_or_id, host_name):
        """Remove a host from an aggregate.

        :param name_or_id: Name or ID of the host aggregate.
        :param host_name: Host to remove.

        :raises: OpenStackCloudException on operation error.
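
        Example (illustrative only; the aggregate and host names are
        hypothetical)::

            # The host must currently belong to the aggregate.
            cloud.remove_host_from_aggregate(
                'example-aggregate', 'example-compute-host')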
""" aggregate = self.get_aggregate(name_or_id) if not aggregate: raise exc.OpenStackCloudException( "Host aggregate %s not found." % name_or_id) err_msg = "Unable to remove host {host} to aggregate {name}".format( host=host_name, name=name_or_id) return self._compute_client.post( '/os-aggregates/{id}/action'.format(id=aggregate['id']), json={'remove_host': {'host': host_name}}, error_message=err_msg) def get_volume_type_access(self, name_or_id): """Return a list of volume_type_access. :param name_or_id: Name or ID of the volume type. :raises: OpenStackCloudException on operation error. """ volume_type = self.get_volume_type(name_or_id) if not volume_type: raise exc.OpenStackCloudException( "VolumeType not found: %s" % name_or_id) data = self._volume_client.get( '/types/{id}/os-volume-type-access'.format(id=volume_type.id), error_message="Unable to get volume type access" " {name}".format(name=name_or_id)) return self._normalize_volume_type_accesses( self._get_and_munchify('volume_type_access', data)) def add_volume_type_access(self, name_or_id, project_id): """Grant access on a volume_type to a project. :param name_or_id: ID or name of a volume_type :param project_id: A project id NOTE: the call works even if the project does not exist. :raises: OpenStackCloudException on operation error. """ volume_type = self.get_volume_type(name_or_id) if not volume_type: raise exc.OpenStackCloudException( "VolumeType not found: %s" % name_or_id) with _utils.shade_exceptions(): payload = {'project': project_id} self._volume_client.post( '/types/{id}/action'.format(id=volume_type.id), json=dict(addProjectAccess=payload), error_message="Unable to authorize {project} " "to use volume type {name}".format( name=name_or_id, project=project_id)) def remove_volume_type_access(self, name_or_id, project_id): """Revoke access on a volume_type to a project. :param name_or_id: ID or name of a volume_type :param project_id: A project id :raises: OpenStackCloudException on operation error. """ volume_type = self.get_volume_type(name_or_id) if not volume_type: raise exc.OpenStackCloudException( "VolumeType not found: %s" % name_or_id) with _utils.shade_exceptions(): payload = {'project': project_id} self._volume_client.post( '/types/{id}/action'.format(id=volume_type.id), json=dict(removeProjectAccess=payload), error_message="Unable to revoke {project} " "to use volume type {name}".format( name=name_or_id, project=project_id)) def set_compute_quotas(self, name_or_id, **kwargs): """ Set a quota in a project :param name_or_id: project name or id :param kwargs: key/value pairs of quota name and quota value :raises: OpenStackCloudException if the resource to set the quota does not exist. 
""" proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") # compute_quotas = {key: val for key, val in kwargs.items() # if key in quota.COMPUTE_QUOTAS} # TODO(ghe): Manage volume and network quotas # network_quotas = {key: val for key, val in kwargs.items() # if key in quota.NETWORK_QUOTAS} # volume_quotas = {key: val for key, val in kwargs.items() # if key in quota.VOLUME_QUOTAS} kwargs['force'] = True self._compute_client.put( '/os-quota-sets/{project}'.format(project=proj.id), json={'quota_set': kwargs}, error_message="No valid quota or resource") def get_compute_quotas(self, name_or_id): """ Get quota for a project :param name_or_id: project name or id :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the quotas """ proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") data = self._compute_client.get( '/os-quota-sets/{project}'.format(project=proj.id)) return self._get_and_munchify('quota_set', data) def delete_compute_quotas(self, name_or_id): """ Delete quota for a project :param name_or_id: project name or id :raises: OpenStackCloudException if it's not a valid project or the nova client call failed :returns: dict with the quotas """ proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") return self._compute_client.delete( '/os-quota-sets/{project}'.format(project=proj.id)) def get_compute_usage(self, name_or_id, start=None, end=None): """ Get usage for a specific project :param name_or_id: project name or id :param start: :class:`datetime.datetime` or string. Start date in UTC Defaults to 2010-07-06T12:00:00Z (the date the OpenStack project was started) :param end: :class:`datetime.datetime` or string. End date in UTC. Defaults to now :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the usage """ def parse_date(date): try: return iso8601.parse_date(date) except iso8601.iso8601.ParseError: # Yes. This is an exception mask. However,iso8601 is an # implementation detail - and the error message is actually # less informative. raise exc.OpenStackCloudException( "Date given, {date}, is invalid. Please pass in a date" " string in ISO 8601 format -" " YYYY-MM-DDTHH:MM:SS".format( date=date)) def parse_datetime_for_nova(date): # Must strip tzinfo from the date- it breaks Nova. Also, # Nova is expecting this in UTC. If someone passes in an # ISO8601 date string or a datetime with timzeone data attached, # strip the timezone data but apply offset math first so that # the user's well formed perfectly valid date will be used # correctly. 
            offset = date.utcoffset()
            if offset:
                # utcoffset() already returns a timedelta, so subtract it
                # directly to convert the date to UTC.
                date = date - offset
            return date.replace(tzinfo=None)

        if not start:
            start = parse_date('2010-07-06')
        elif not isinstance(start, datetime.datetime):
            start = parse_date(start)
        if not end:
            end = datetime.datetime.utcnow()
        elif not isinstance(end, datetime.datetime):
            end = parse_date(end)

        start = parse_datetime_for_nova(start)
        end = parse_datetime_for_nova(end)

        proj = self.get_project(name_or_id)
        if not proj:
            raise exc.OpenStackCloudException(
                "project does not exist: {name}".format(name=name_or_id))

        data = self._compute_client.get(
            '/os-simple-tenant-usage/{project}'.format(project=proj.id),
            params=dict(start=start.isoformat(), end=end.isoformat()),
            error_message="Unable to get usage for project: {name}".format(
                name=proj.id))
        return self._normalize_compute_usage(
            self._get_and_munchify('tenant_usage', data))

    def set_volume_quotas(self, name_or_id, **kwargs):
        """ Set a volume quota in a project

        :param name_or_id: project name or id
        :param kwargs: key/value pairs of quota name and quota value

        :raises: OpenStackCloudException if the resource to set the
            quota does not exist.
        """
        proj = self.get_project(name_or_id)
        if not proj:
            raise exc.OpenStackCloudException("project does not exist")

        kwargs['tenant_id'] = proj.id
        self._volume_client.put(
            '/os-quota-sets/{tenant_id}'.format(tenant_id=proj.id),
            json={'quota_set': kwargs},
            error_message="No valid quota or resource")

    def get_volume_quotas(self, name_or_id):
        """ Get volume quotas for a project

        :param name_or_id: project name or id
        :raises: OpenStackCloudException if it's not a valid project

        :returns: Munch object with the quotas
        """
        proj = self.get_project(name_or_id)
        if not proj:
            raise exc.OpenStackCloudException("project does not exist")

        data = self._volume_client.get(
            '/os-quota-sets/{tenant_id}'.format(tenant_id=proj.id),
            error_message="Error fetching volume quotas for project "
            "{0}".format(proj.id))
        return self._get_and_munchify('quota_set', data)

    def delete_volume_quotas(self, name_or_id):
        """ Delete volume quotas for a project

        :param name_or_id: project name or id
        :raises: OpenStackCloudException if it's not a valid project or the
                 cinder client call failed

        :returns: dict with the quotas
        """
        proj = self.get_project(name_or_id)
        if not proj:
            raise exc.OpenStackCloudException("project does not exist")

        return self._volume_client.delete(
            '/os-quota-sets/{tenant_id}'.format(tenant_id=proj.id),
            error_message="Error deleting volume quotas for project "
            "{0}".format(proj.id))

    def set_network_quotas(self, name_or_id, **kwargs):
        """ Set a network quota in a project

        :param name_or_id: project name or id
        :param kwargs: key/value pairs of quota name and quota value

        :raises: OpenStackCloudException if the resource to set the
            quota does not exist.
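
        Example (an illustrative sketch; ``network`` and ``port`` are
        standard Neutron quota keys and the project name is made up)::

            cloud.set_network_quotas(
                'example-project', network=20, port=100)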
""" proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") self._network_client.put( '/quotas/{project_id}.json'.format(project_id=proj.id), json={'quota': kwargs}, error_message=("Error setting Neutron's quota for " "project {0}".format(proj.id))) def get_network_quotas(self, name_or_id, details=False): """ Get network quotas for a project :param name_or_id: project name or id :param details: if set to True it will return details about usage of quotas by given project :raises: OpenStackCloudException if it's not a valid project :returns: Munch object with the quotas """ proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") url = '/quotas/{project_id}'.format(project_id=proj.id) if details: url = url + "/details" url = url + ".json" data = self._network_client.get( url, error_message=("Error fetching Neutron's quota for " "project {0}".format(proj.id))) return self._get_and_munchify('quota', data) def get_network_extensions(self): """Get Cloud provided network extensions :returns: set of Neutron extension aliases """ return self._neutron_extensions() def delete_network_quotas(self, name_or_id): """ Delete network quotas for a project :param name_or_id: project name or id :raises: OpenStackCloudException if it's not a valid project or the network client call failed :returns: dict with the quotas """ proj = self.get_project(name_or_id) if not proj: raise exc.OpenStackCloudException("project does not exist") self._network_client.delete( '/quotas/{project_id}.json'.format(project_id=proj.id), error_message=("Error deleting Neutron's quota for " "project {0}".format(proj.id))) def list_magnum_services(self): """List all Magnum services. :returns: a list of dicts containing the service details. :raises: OpenStackCloudException on operation error. """ with _utils.shade_exceptions("Error fetching Magnum services list"): data = self._container_infra_client.get('/mservices') return self._normalize_magnum_services( self._get_and_munchify('mservices', data)) shade-1.31.0/setup.py0000666000175000017500000000200613440327640014417 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) shade-1.31.0/test-requirements.txt0000666000175000017500000000074213440327640017153 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
coverage!=4.4,>=4.0 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD mock>=2.0.0 # BSD python-subunit>=1.0.0 # Apache-2.0/BSD oslotest>=3.2.0 # Apache-2.0 requests-mock>=1.2.0 # Apache-2.0 stestr>=1.0.0 # Apache-2.0 testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT shade-1.31.0/requirements.txt0000666000175000017500000000070213440327640016172 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. pbr!=2.1.0,>=2.0.0 # Apache-2.0 # shade depends on os-client-config in addition to openstacksdk so that it # can continue to provide the make_legacy_client functions. os-client-config>=1.28.0 # Apache-2.0 openstacksdk>=0.15.0 # Apache-2.0 shade-1.31.0/AUTHORS0000664000175000017500000001050313440330010013735 0ustar zuulzuul00000000000000Abhijeet Kasurde Adam Gandelman Akihiro Motoki Alberto Gireud Alvaro Aleman Andreas Jaeger Andrey Shestakov Antoni Segura Puimedon Arie Arie Bregman Atsushi SAKAI Béla Vancsics Caleb Boylan Cedric Brandily Christian Groschupp Christian Zunker Chuck Short Clark Boylan Clayton O'Neill Clint Byrum Daniel Mellado Daniel Speichert Daniel Wallace David Rabel David Shrewsbury Davide Guerri Devananda van der Veen Donovan Jones Doug Hellmann Eric Lafontaine Feilong Wang Ghe Rivero Gregory Haynes Haikel Guemar Hideki Saito Hongbin Lu Hoolio Wobbits Ian Wienand Iswarya_Vakati JP Sullivan Jakub Jursa James E. Blair James E. Blair Jamie Lennox Jens Harbott Jeremy Stanley Jon Schlueter Jordan Pittier Joshua Harlow Joshua Harlow Joshua Hesketh Julia Kreger Kim Bao Long Kyle Mestery Lars Kellogg-Stedman Lee Yarwood Mark Goddard Markus Zoeller Mathieu Bultel Matt Fischer Matthew Treinish Matthew Wagoner Mike Perez Mohammed Naser Monty Taylor Morgan Fainberg Mário Santos Nguyen Hai Truong Olivier Bourdon OpenStack Release Bot Paul Belanger Paulo Matias Pavlo Shchelokovskyy Rarm Nagalingam Ricardo Carrillo Cruz Ricardo Carrillo Cruz Roberto Polli Romain Acciari Rosario Di Somma Rosario Di Somma Rui Chen Ryan Brady Sam Yaple Sam Yaple SamYaple Samuel de Medeiros Queiroz Sean McGinnis Sorin Sbarnea Spencer Krum Stefan Andres Steve Baker Steve Leon Swapnil Kulkarni (coolsvap) Sylvain Baubeau Sławek Kapłoński Thanh Ha Thomas Herve Tim Laszlo Timothy Chavez Tobias Brox Tony Breeds Toure Dunnon Tristan Cacqueray Trygve Vea Valentin Boucher Vieri <15050873171@163.com> Yolanda Robla baiwenteng deepakmourya mariojmdavid matthew wagoner melissaml qingszhao rajat29 tengqm wacuuu shade-1.31.0/.zuul.yaml0000666000175000017500000001672613440327640014664 0ustar zuulzuul00000000000000- job: name: shade-tox-py27-tips parent: openstack-tox-py27 description: | Run tox python 27 unittests against master of important libs vars: tox_install_siblings: true zuul_work_dir: src/git.openstack.org/openstack-infra/shade # shade in required-projects so that os-client-config and keystoneauth # can add the job as well required-projects: - openstack-infra/shade - openstack/keystoneauth - openstack/openstacksdk - openstack/os-client-config - job: name: shade-tox-py35-tips parent: openstack-tox-py35 description: | Run tox python 35 unittests against master of important libs vars: tox_install_siblings: true zuul_work_dir: src/git.openstack.org/openstack-infra/shade # shade in required-projects so that os-client-config and keystoneauth # can add the job as well required-projects: - openstack-infra/shade - openstack/keystoneauth - 
openstack/openstacksdk - openstack/os-client-config - project-template: name: shade-tox-tips check: jobs: - shade-tox-py27-tips - shade-tox-py35-tips gate: jobs: - shade-tox-py27-tips - shade-tox-py35-tips - job: name: shade-functional-devstack-base parent: devstack-tox-functional-consumer description: | Base job for shade devstack-based functional tests post-run: playbooks/devstack/post.yaml required-projects: # These jobs will DTRT when shade triggers them, but we want to make # sure stable branches of shade never get cloned by other people, # since stable branches of shade are, well, not actually things. - name: openstack-infra/shade override-checkout: master - name: openstack/heat - name: openstack/swift timeout: 9000 vars: devstack_local_conf: post-config: $CINDER_CONF: DEFAULT: osapi_max_limit: 6 devstack_services: ceilometer-acentral: false ceilometer-acompute: false ceilometer-alarm-evaluator: false ceilometer-alarm-notifier: false ceilometer-anotification: false ceilometer-api: false ceilometer-collector: false s-account: true s-container: true s-object: true s-proxy: true devstack_plugins: heat: https://git.openstack.org/openstack/heat tox_environment: # Do we really need to set this? It's cargo culted PYTHONUNBUFFERED: 'true' # Is there a way we can query the localconf variable to get these # rather than setting them explicitly? SHADE_HAS_DESIGNATE: 0 SHADE_HAS_HEAT: 1 SHADE_HAS_MAGNUM: 0 SHADE_HAS_NEUTRON: 1 SHADE_HAS_SWIFT: 1 tox_install_siblings: false tox_envlist: functional zuul_work_dir: src/git.openstack.org/openstack-infra/shade - job: name: shade-functional-devstack-legacy parent: shade-functional-devstack-base description: | Run shade functional tests against a legacy devstack # TODO(mordred): This does not seem to work voting: false vars: devstack_localrc: ENABLE_IDENTITY_V2: true FLAT_INTERFACE: br_flat PUBLIC_INTERFACE: br_pub tox_environment: SHADE_USE_KEYSTONE_V2: 1 SHADE_HAS_NEUTRON: 0 override-checkout: stable/newton - job: name: shade-functional-devstack parent: shade-functional-devstack-base description: | Run shade functional tests against a master devstack vars: devstack_localrc: Q_SERVICE_PLUGIN_CLASSES: qos Q_ML2_PLUGIN_EXT_DRIVERS: qos,port_security - job: name: shade-functional-devstack-python3 parent: shade-functional-devstack description: | Run shade functional tests using python3 against a master devstack vars: shade_environment: SHADE_TOX_PYTHON: python3 - job: name: shade-functional-devstack-tips parent: shade-functional-devstack description: | Run shade functional tests with tips of library dependencies against a master devstack. required-projects: - name: openstack/keystoneauth - name: openstack/openstacksdk - name: openstack/os-client-config vars: tox_install_siblings: true - job: name: shade-functional-devstack-tips-python3 parent: shade-functional-devstack-tips description: | Run shade functional tests with tips of library dependencies using python3 against a master devstack. 
vars: tox_environment: SHADE_TOX_PYTHON: python3 - job: name: shade-functional-devstack-magnum parent: shade-functional-devstack description: | Run shade functional tests against a master devstack with magnum required-projects: - openstack/magnum - openstack/python-magnumclient vars: devstack_plugins: magnum: https://git.openstack.org/openstack/magnum devstack_localrc: MAGNUM_GUEST_IMAGE_URL: https://tarballs.openstack.org/magnum/images/fedora-atomic-f23-dib.qcow2 MAGNUM_IMAGE_NAME: fedora-atomic-f23-dib devstack_services: s-account: false s-container: false s-object: false s-proxy: false tox_environment: SHADE_HAS_SWIFT: 0 SHADE_HAS_MAGNUM: 1 voting: false - job: name: shade-ansible-functional-devstack parent: shade-functional-devstack description: | Run shade ansible functional tests against a master devstack using released version of ansible. vars: tox_envlist: ansible - job: name: shade-ansible-stable-2.5-functional-devstack parent: shade-ansible-functional-devstack description: | Run shade ansible functional tests against a master devstack using git devel branch version of ansible. branches: ^(devel|master)$ required-projects: - name: github.com/ansible/ansible override-checkout: stable-2.5 - name: openstack-infra/shade override-checkout: master - name: openstack-dev/devstack override-checkout: master vars: # test-matrix grabs branch from the zuul branch setting. If the job # is triggered by ansible, that branch will be stable-2.5 which doesn't # make sense to devstack. Override so that we run the right thing. test_matrix_branch: master tox_install_siblings: true - project-template: name: shade-functional-tips check: jobs: - shade-functional-devstack-tips - shade-functional-devstack-tips-python3 gate: jobs: - shade-functional-devstack-tips - shade-functional-devstack-tips-python3 - project: templates: - check-requirements - openstack-lower-constraints-jobs - openstack-python-jobs - openstack-python36-jobs - publish-openstack-docs-pti - publish-to-pypi - release-notes-jobs-python3 - shade-functional-tips - shade-tox-tips check: jobs: - bifrost-integration-tinyipa: voting: false - bifrost-integration-tinyipa-opensuse-150: voting: false - shade-ansible-stable-2.5-functional-devstack: voting: false - shade-ansible-functional-devstack - shade-functional-devstack - shade-functional-devstack-magnum - shade-functional-devstack-python3 gate: jobs: - shade-ansible-functional-devstack - shade-functional-devstack - shade-functional-devstack-python3 shade-1.31.0/bindep.txt0000666000175000017500000000045113440327640014711 0ustar zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed by tests; # see http://docs.openstack.org/infra/bindep/ for additional information. build-essential [platform:dpkg] python-dev [platform:dpkg] python-devel [platform:rpm] libffi-dev [platform:dpkg] libffi-devel [platform:rpm] shade-1.31.0/extras/0000775000175000017500000000000013440330010014174 5ustar zuulzuul00000000000000shade-1.31.0/extras/install-tips.sh0000666000175000017500000000203113440327640017170 0ustar zuulzuul00000000000000#!/bin/bash # Copyright (c) 2017 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. for lib in \ python-keystoneclient \ os-client-config \ keystoneauth do egg=$(echo $lib | tr '-' '_' | sed 's/python-//') if [ -d /opt/stack/new/$lib ] ; then tip_location="git+file:///opt/stack/new/$lib#egg=$egg" echo "$(which pip) install -U -e $tip_location" pip uninstall -y $lib pip install -U -e $tip_location else echo "$lib not found in /opt/stack/new/$lib" fi done shade-1.31.0/extras/run-ansible-tests.sh0000777000175000017500000000357513440327640020145 0ustar zuulzuul00000000000000#!/bin/bash ############################################################################# # run-ansible-tests.sh # # Script used to setup a tox environment for running Ansible. This is meant # to be called by tox (via tox.ini). To run the Ansible tests, use: # # tox -e ansible [TAG ...] # or # tox -e ansible -- -c cloudX [TAG ...] # # USAGE: # run-ansible-tests.sh -e ENVDIR [-c CLOUD] [TAG ...] # # PARAMETERS: # -e ENVDIR Directory of the tox environment to use for testing. # -c CLOUD Name of the cloud to use for testing. # Defaults to "devstack-admin". # [TAG ...] Optional list of space-separated tags to control which # modules are tested. # # EXAMPLES: # # Run all Ansible tests # run-ansible-tests.sh -e ansible # # # Run auth, keypair, and network tests against cloudX # run-ansible-tests.sh -e ansible -c cloudX auth keypair network ############################################################################# CLOUD="devstack-admin" ENVDIR= while getopts "c:de:" opt do case $opt in c) CLOUD=${OPTARG} ;; e) ENVDIR=${OPTARG} ;; ?) echo "Invalid option: -${OPTARG}" exit 1;; esac done if [ -z ${ENVDIR} ] then echo "Option -e is required" exit 1 fi shift $((OPTIND-1)) TAGS=$( echo "$*" | tr ' ' , ) # Run the shade Ansible tests tag_opt="" if [ ! -z ${TAGS} ] then tag_opt="--tags ${TAGS}" fi # Until we have a module that lets us determine the image we want from # within a playbook, we have to find the image here and pass it in. # We use the openstack client instead of nova client since it can use clouds.yaml. IMAGE=`openstack --os-cloud=${CLOUD} image list -f value -c Name | grep cirros | grep -v -e ramdisk -e kernel` if [ $? 
-ne 0 ] then echo "Failed to find Cirros image" exit 1 fi ansible-playbook -vvv ./shade/tests/ansible/run.yml -e "cloud=${CLOUD} image=${IMAGE}" ${tag_opt} shade-1.31.0/extras/delete-network.sh0000666000175000017500000000107413440327640017504 0ustar zuulzuul00000000000000neutron router-gateway-clear router1 neutron router-interface-delete router1 for subnet in private-subnet ipv6-private-subnet ; do neutron router-interface-delete router1 $subnet subnet_id=$(neutron subnet-show $subnet -f value -c id) neutron port-list | grep $subnet_id | awk '{print $2}' | xargs -n1 neutron port-delete neutron subnet-delete $subnet done neutron router-delete router1 neutron net-delete private # Make the public network directly consumable neutron subnet-update public-subnet --enable-dhcp=True neutron net-update public --shared=True shade-1.31.0/shade.egg-info/0000775000175000017500000000000013440330010015444 5ustar zuulzuul00000000000000shade-1.31.0/shade.egg-info/SOURCES.txt0000664000175000017500000003546113440330010017341 0ustar zuulzuul00000000000000.coveragerc .mailmap .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE MANIFEST.in README.rst bindep.txt lower-constraints.txt requirements.txt setup.cfg setup.py test-requirements.txt tox.ini devstack/plugin.sh doc/requirements.txt doc/source/conf.py doc/source/index.rst doc/source/contributor/coding.rst doc/source/contributor/contributing.rst doc/source/contributor/index.rst doc/source/install/index.rst doc/source/releasenotes/index.rst doc/source/user/index.rst doc/source/user/logging.rst doc/source/user/microversions.rst doc/source/user/model.rst doc/source/user/multi-cloud-demo.rst doc/source/user/usage.rst doc/source/user/examples/cleanup-servers.py doc/source/user/examples/create-server-dict.py doc/source/user/examples/create-server-name-or-id.py doc/source/user/examples/debug-logging.py doc/source/user/examples/find-an-image.py doc/source/user/examples/http-debug-logging.py doc/source/user/examples/munch-dict-object.py doc/source/user/examples/normalization.py doc/source/user/examples/server-information.py doc/source/user/examples/service-conditional-overrides.py doc/source/user/examples/service-conditionals.py doc/source/user/examples/strict-mode.py doc/source/user/examples/upload-large-object.py doc/source/user/examples/upload-object.py doc/source/user/examples/user-agent.py extras/delete-network.sh extras/install-tips.sh extras/run-ansible-tests.sh playbooks/devstack/legacy-git.yaml playbooks/devstack/post.yaml releasenotes/notes/add-current-user-id-49b6463e6bcc3b31.yaml releasenotes/notes/add-jmespath-support-f47b7a503dbbfda1.yaml releasenotes/notes/add-list_flavor_access-e038253e953e6586.yaml releasenotes/notes/add-server-console-078ed2696e5b04d9.yaml releasenotes/notes/add-show-all-images-flag-352748b6c3d99f3f.yaml releasenotes/notes/add-support-for-setting-static-routes-b3ce6cac2c5e9e51.yaml releasenotes/notes/add_description_create_user-0ddc9a0ef4da840d.yaml releasenotes/notes/add_designate_recordsets_support-69af0a6b317073e7.yaml releasenotes/notes/add_designate_zones_support-35fa9b8b09995b43.yaml releasenotes/notes/add_heat_tag_support-0668933506135082.yaml releasenotes/notes/add_host_aggregate_support-471623faf45ec3c3.yaml releasenotes/notes/add_magnum_baymodel_support-e35e5aab0b14ff75.yaml releasenotes/notes/add_magnum_services_support-3d95f9dcc60b5573.yaml releasenotes/notes/add_server_group_support-dfa472e3dae7d34d.yaml releasenotes/notes/add_update_server-8761059d6de7e68b.yaml 
releasenotes/notes/add_update_service-28e590a7a7524053.yaml releasenotes/notes/alternate-auth-context-3939f1492a0e1355.yaml releasenotes/notes/always-detail-cluster-templates-3eb4b5744ba327ac.yaml releasenotes/notes/boot-on-server-group-a80e51850db24b3d.yaml releasenotes/notes/bug-2001080-de52ead3c5466792.yaml releasenotes/notes/cache-in-use-volumes-c7fa8bb378106fe3.yaml releasenotes/notes/change-attach-vol-return-value-4834a1f78392abb1.yaml releasenotes/notes/cinder_volume_backups_support-6f7ceab440853833.yaml releasenotes/notes/cinderv2-norm-fix-037189c60b43089f.yaml releasenotes/notes/cleanup-objects-f99aeecf22ac13dd.yaml releasenotes/notes/compute-quotas-b07a0f24dfac8444.yaml releasenotes/notes/compute-usage-defaults-5f5b2936f17ff400.yaml releasenotes/notes/config-flavor-specs-ca712e17971482b6.yaml releasenotes/notes/create-stack-fix-12dbb59a48ac7442.yaml releasenotes/notes/create_server_network_fix-c4a56b31d2850a4b.yaml releasenotes/notes/create_service_norm-319a97433d68fa6a.yaml releasenotes/notes/data-model-cf50d86982646370.yaml releasenotes/notes/delete-autocreated-1839187b0aa35022.yaml releasenotes/notes/delete-image-objects-9d4b4e0fff36a23f.yaml releasenotes/notes/delete-obj-return-a3ecf0415b7a2989.yaml releasenotes/notes/delete_project-399f9b3107014dde.yaml releasenotes/notes/domain_operations_name_or_id-baba4cac5b67234d.yaml releasenotes/notes/dual-stack-networks-8a81941c97d28deb.yaml releasenotes/notes/endpoint-from-catalog-bad36cb0409a4e6a.yaml releasenotes/notes/false-not-attribute-error-49484d0fdc61f75d.yaml releasenotes/notes/feature-server-metadata-50caf18cec532160.yaml releasenotes/notes/fip_timeout-035c4bb3ff92fa1f.yaml releasenotes/notes/fix-config-drive-a148b7589f7e1022.yaml releasenotes/notes/fix-delete-ips-1d4eebf7bc4d4733.yaml releasenotes/notes/fix-list-networks-a592725df64c306e.yaml releasenotes/notes/fix-missing-futures-a0617a1c1ce6e659.yaml releasenotes/notes/fix-properties-key-conflict-2161ca1faaad6731.yaml releasenotes/notes/fix-supplemental-fips-c9cd58aac12eb30e.yaml releasenotes/notes/fix-update-domain-af47b066ac52eb7f.yaml releasenotes/notes/fixed-magnum-type-7406f0a60525f858.yaml releasenotes/notes/flavor_fix-a53c6b326dc34a2c.yaml releasenotes/notes/fnmatch-name-or-id-f658fe26f84086c8.yaml releasenotes/notes/get-limits-c383c512f8e01873.yaml releasenotes/notes/get-usage-72d249ff790d1b8f.yaml releasenotes/notes/get_object_api-968483adb016bce1.yaml releasenotes/notes/glance-image-pagination-0b4dfef22b25852b.yaml releasenotes/notes/grant-revoke-assignments-231d3f9596a1ae75.yaml releasenotes/notes/image-flavor-by-name-54865b00ebbf1004.yaml releasenotes/notes/image-from-volume-9acf7379f5995b5b.yaml releasenotes/notes/infer-secgroup-source-58d840aaf1a1f485.yaml releasenotes/notes/less-file-hashing-d2497337da5acbef.yaml releasenotes/notes/lifesupport-d6e700c3226e35d6.yaml releasenotes/notes/list-az-names-a38c277d1192471b.yaml releasenotes/notes/list-role-assignments-keystone-v2-b127b12b4860f50c.yaml releasenotes/notes/list-servers-all-projects-349e6dc665ba2e8d.yaml releasenotes/notes/log-request-ids-37507cb6eed9a7da.yaml releasenotes/notes/make_object_metadata_easier.yaml-e9751723e002e06f.yaml releasenotes/notes/meta-passthrough-d695bff4f9366b65.yaml releasenotes/notes/mtu-settings-8ce8b54d096580a2.yaml releasenotes/notes/multiple-updates-b48cc2f6db2e526d.yaml releasenotes/notes/net_provider-dd64b697476b7094.yaml releasenotes/notes/network-quotas-b98cce9ffeffdbf4.yaml releasenotes/notes/neutron_availability_zone_extension-675c2460ebb50a09.yaml 
releasenotes/notes/new-floating-attributes-213cdf5681d337e1.yaml releasenotes/notes/no-more-troveclient-0a4739c21432ac63.yaml releasenotes/notes/norm_role_assignments-a13f41768e62d40c.yaml releasenotes/notes/normalize-images-1331bea7bfffa36a.yaml releasenotes/notes/nova-flavor-to-rest-0a5757e35714a690.yaml releasenotes/notes/nova-old-microversion-5e4b8e239ba44096.yaml releasenotes/notes/orch_timeout-a3953376a9a96343.yaml releasenotes/notes/remove-magnumclient-875b3e513f98f57c.yaml releasenotes/notes/remove-novaclient-3f8d4db20d5f9582.yaml releasenotes/notes/removed-glanceclient-105c7fba9481b9be.yaml releasenotes/notes/removed-swiftclient-aff22bfaeee5f59f.yaml releasenotes/notes/router_ext_gw-b86582317bca8b39.yaml releasenotes/notes/server-create-error-id-66c698c7e633fb8b.yaml releasenotes/notes/server-security-groups-840ab28c04f359de.yaml releasenotes/notes/service_enabled_flag-c917b305d3f2e8fd.yaml releasenotes/notes/set-bootable-volume-454a7a41e7e77d08.yaml releasenotes/notes/stack-update-5886e91fd6e423bf.yaml releasenotes/notes/started-using-reno-242e2b0cd27f9480.yaml releasenotes/notes/stream-to-file-91f48d6dcea399c6.yaml releasenotes/notes/strict-mode-d493abc0c3e87945.yaml releasenotes/notes/swift-upload-lock-d18f3d42b3a0719a.yaml releasenotes/notes/switch-nova-to-created_at-45b7b50af6a2d59e.yaml releasenotes/notes/toggle-port-security-f5bc606e82141feb.yaml releasenotes/notes/update_endpoint-f87c1f42d0c0d1ef.yaml releasenotes/notes/use-interface-ip-c5cb3e7c91150096.yaml releasenotes/notes/v4-fixed-ip-325740fdae85ffa9.yaml releasenotes/notes/version-discovery-a501c4e9e9869f77.yaml releasenotes/notes/volume-quotas-5b674ee8c1f71eb6.yaml releasenotes/notes/volume-types-a07a14ae668e7dd2.yaml releasenotes/notes/wait-on-image-snapshot-27cd2eacab2fabd8.yaml releasenotes/notes/wait_for_server-8dc8446b7c673d36.yaml releasenotes/notes/workaround-transitive-deps-1e7a214f3256b77e.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder shade/__init__.py shade/_adapter.py shade/_legacy_clients.py shade/_log.py shade/_normalize.py shade/_utils.py shade/exc.py shade/inventory.py shade/meta.py shade/openstackcloud.py shade/operatorcloud.py shade/task_manager.py shade.egg-info/PKG-INFO shade.egg-info/SOURCES.txt shade.egg-info/dependency_links.txt shade.egg-info/entry_points.txt shade.egg-info/not-zip-safe shade.egg-info/pbr.json shade.egg-info/requires.txt shade.egg-info/top_level.txt shade/_heat/__init__.py shade/_heat/environment_format.py shade/_heat/event_utils.py shade/_heat/template_format.py shade/_heat/template_utils.py shade/_heat/utils.py shade/cmd/__init__.py shade/cmd/inventory.py shade/tests/__init__.py shade/tests/base.py shade/tests/fakes.py shade/tests/ansible/README.txt shade/tests/ansible/run.yml shade/tests/ansible/hooks/post_test_hook.sh shade/tests/ansible/roles/auth/tasks/main.yml shade/tests/ansible/roles/client_config/tasks/main.yml shade/tests/ansible/roles/group/tasks/main.yml shade/tests/ansible/roles/group/vars/main.yml shade/tests/ansible/roles/image/tasks/main.yml shade/tests/ansible/roles/image/vars/main.yml shade/tests/ansible/roles/keypair/tasks/main.yml shade/tests/ansible/roles/keypair/vars/main.yml shade/tests/ansible/roles/keystone_domain/tasks/main.yml shade/tests/ansible/roles/keystone_domain/vars/main.yml 
shade/tests/ansible/roles/keystone_role/tasks/main.yml shade/tests/ansible/roles/keystone_role/vars/main.yml shade/tests/ansible/roles/network/tasks/main.yml shade/tests/ansible/roles/network/vars/main.yml shade/tests/ansible/roles/nova_flavor/tasks/main.yml shade/tests/ansible/roles/object/tasks/main.yml shade/tests/ansible/roles/port/tasks/main.yml shade/tests/ansible/roles/port/vars/main.yml shade/tests/ansible/roles/router/tasks/main.yml shade/tests/ansible/roles/router/vars/main.yml shade/tests/ansible/roles/security_group/tasks/main.yml shade/tests/ansible/roles/security_group/vars/main.yml shade/tests/ansible/roles/server/tasks/main.yml shade/tests/ansible/roles/server/vars/main.yaml shade/tests/ansible/roles/subnet/tasks/main.yml shade/tests/ansible/roles/subnet/vars/main.yml shade/tests/ansible/roles/user/tasks/main.yml shade/tests/ansible/roles/user_group/tasks/main.yml shade/tests/ansible/roles/volume/tasks/main.yml shade/tests/functional/__init__.py shade/tests/functional/base.py shade/tests/functional/test_aggregate.py shade/tests/functional/test_cluster_templates.py shade/tests/functional/test_compute.py shade/tests/functional/test_devstack.py shade/tests/functional/test_domain.py shade/tests/functional/test_endpoints.py shade/tests/functional/test_flavor.py shade/tests/functional/test_floating_ip.py shade/tests/functional/test_floating_ip_pool.py shade/tests/functional/test_groups.py shade/tests/functional/test_identity.py shade/tests/functional/test_image.py shade/tests/functional/test_inventory.py shade/tests/functional/test_keypairs.py shade/tests/functional/test_limits.py shade/tests/functional/test_magnum_services.py shade/tests/functional/test_network.py shade/tests/functional/test_object.py shade/tests/functional/test_port.py shade/tests/functional/test_project.py shade/tests/functional/test_qos_bandwidth_limit_rule.py shade/tests/functional/test_qos_dscp_marking_rule.py shade/tests/functional/test_qos_minimum_bandwidth_rule.py shade/tests/functional/test_qos_policy.py shade/tests/functional/test_quotas.py shade/tests/functional/test_range_search.py shade/tests/functional/test_recordset.py shade/tests/functional/test_router.py shade/tests/functional/test_security_groups.py shade/tests/functional/test_server_group.py shade/tests/functional/test_services.py shade/tests/functional/test_stack.py shade/tests/functional/test_usage.py shade/tests/functional/test_users.py shade/tests/functional/test_volume.py shade/tests/functional/test_volume_backup.py shade/tests/functional/test_volume_type.py shade/tests/functional/test_zone.py shade/tests/functional/util.py shade/tests/unit/__init__.py shade/tests/unit/base.py shade/tests/unit/test__adapter.py shade/tests/unit/test__utils.py shade/tests/unit/test_aggregate.py shade/tests/unit/test_availability_zones.py shade/tests/unit/test_baremetal_node.py shade/tests/unit/test_baremetal_ports.py shade/tests/unit/test_caching.py shade/tests/unit/test_cluster_templates.py shade/tests/unit/test_create_server.py shade/tests/unit/test_create_volume_snapshot.py shade/tests/unit/test_delete_server.py shade/tests/unit/test_delete_volume_snapshot.py shade/tests/unit/test_domain_params.py shade/tests/unit/test_domains.py shade/tests/unit/test_endpoints.py shade/tests/unit/test_flavors.py shade/tests/unit/test_floating_ip_common.py shade/tests/unit/test_floating_ip_neutron.py shade/tests/unit/test_floating_ip_nova.py shade/tests/unit/test_floating_ip_pool.py shade/tests/unit/test_groups.py shade/tests/unit/test_identity_roles.py 
shade/tests/unit/test_image.py
shade/tests/unit/test_image_snapshot.py
shade/tests/unit/test_inventory.py
shade/tests/unit/test_keypair.py
shade/tests/unit/test_limits.py
shade/tests/unit/test_magnum_services.py
shade/tests/unit/test_meta.py
shade/tests/unit/test_network.py
shade/tests/unit/test_normalize.py
shade/tests/unit/test_object.py
shade/tests/unit/test_operator_noauth.py
shade/tests/unit/test_port.py
shade/tests/unit/test_project.py
shade/tests/unit/test_qos_bandwidth_limit_rule.py
shade/tests/unit/test_qos_dscp_marking_rule.py
shade/tests/unit/test_qos_minimum_bandwidth_rule.py
shade/tests/unit/test_qos_policy.py
shade/tests/unit/test_qos_rule_type.py
shade/tests/unit/test_quotas.py
shade/tests/unit/test_rebuild_server.py
shade/tests/unit/test_recordset.py
shade/tests/unit/test_role_assignment.py
shade/tests/unit/test_router.py
shade/tests/unit/test_security_groups.py
shade/tests/unit/test_server_console.py
shade/tests/unit/test_server_delete_metadata.py
shade/tests/unit/test_server_group.py
shade/tests/unit/test_server_set_metadata.py
shade/tests/unit/test_services.py
shade/tests/unit/test_shade.py
shade/tests/unit/test_shade_operator.py
shade/tests/unit/test_stack.py
shade/tests/unit/test_subnet.py
shade/tests/unit/test_task_manager.py
shade/tests/unit/test_update_server.py
shade/tests/unit/test_usage.py
shade/tests/unit/test_users.py
shade/tests/unit/test_volume.py
shade/tests/unit/test_volume_access.py
shade/tests/unit/test_volume_backups.py
shade/tests/unit/test_zone.py
shade/tests/unit/fixtures/baremetal.json
shade/tests/unit/fixtures/catalog-v2.json
shade/tests/unit/fixtures/catalog-v3-suburl.json
shade/tests/unit/fixtures/catalog-v3.json
shade/tests/unit/fixtures/discovery.json
shade/tests/unit/fixtures/dns.json
shade/tests/unit/fixtures/image-version-broken.json
shade/tests/unit/fixtures/image-version-suburl.json
shade/tests/unit/fixtures/image-version-v1.json
shade/tests/unit/fixtures/image-version-v2.json
shade/tests/unit/fixtures/image-version.json
shade/tests/unit/fixtures/clouds/clouds.yaml
shade/tests/unit/fixtures/clouds/clouds_cache.yamlshade-1.31.0/shade.egg-info/dependency_links.txt0000664000175000017500000000000113440330010021512 0ustar zuulzuul00000000000000
shade-1.31.0/shade.egg-info/not-zip-safe0000664000175000017500000000000113440330010017672 0ustar zuulzuul00000000000000
shade-1.31.0/shade.egg-info/entry_points.txt0000664000175000017500000000007613440330010020745 0ustar zuulzuul00000000000000[console_scripts]
shade-inventory = shade.cmd.inventory:main

shade-1.31.0/shade.egg-info/PKG-INFO0000664000175000017500000001120013440330010016543 0ustar zuulzuul00000000000000Metadata-Version: 1.1
Name: shade
Version: 1.31.0
Summary: Simple client library for interacting with OpenStack clouds
Home-page: http://docs.openstack.org/shade/latest
Author: OpenStack
Author-email: openstack-discuss@lists.openstack.org
License: UNKNOWN
Description: Introduction
        ============
        
        .. warning::
          shade has been superseded by `openstacksdk`_ and no longer takes new
          features. The existing code will continue to be maintained
          indefinitely for bugfixes as necessary, but improvements will be
          deferred to `openstacksdk`_. Please update your applications to use
          `openstacksdk`_ directly.
        
        shade is a simple client library for interacting with OpenStack
        clouds. The key word here is *simple*. Clouds can do many many many
        things - but there are probably only about 10 of them that most
        people care about with any regularity.
If you want to do complicated things, you should probably use the lower level client libraries - or even the REST API directly. However, if what you want is to be able to write an application that talks to clouds no matter what crazy choices the deployer has made in an attempt to be more hipster than their self-entitled narcissist peers, then shade is for you. shade started its life as some code inside of ansible. ansible has a bunch of different OpenStack related modules, and there was a ton of duplicated code. Eventually, between refactoring that duplication into an internal library, and adding logic and features that the OpenStack Infra team had developed to run client applications at scale, it turned out that we'd written nine-tenths of what we'd need to have a standalone library. .. _usage_example: Example ======= Sometimes an example is nice. #. Create a ``clouds.yml`` file:: clouds: mordred: region_name: RegionOne auth: username: 'mordred' password: XXXXXXX project_name: 'shade' auth_url: 'https://montytaylor-sjc.openstack.blueboxgrid.com:5001/v2.0' Please note: *os-client-config* will look for a file called ``clouds.yaml`` in the following locations: * Current Directory * ``~/.config/openstack`` * ``/etc/openstack`` More information at https://pypi.org/project/os-client-config #. Create a server with *shade*, configured with the ``clouds.yml`` file:: import shade # Initialize and turn on debug logging shade.simple_logging(debug=True) # Initialize cloud # Cloud configs are read with os-client-config cloud = shade.openstack_cloud(cloud='mordred') # Upload an image to the cloud image = cloud.create_image( 'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True) # Find a flavor with at least 512M of RAM flavor = cloud.get_flavor_by_ram(512) # Boot a server, wait for it to boot, and then do whatever is needed # to get a public ip for it. cloud.create_server( 'my-server', image=image, flavor=flavor, wait=True, auto_ip=True) Links ===== * `Issue Tracker `_ * `Code Review `_ * `Documentation `_ * `PyPI `_ * `Mailing list `_ * `Release notes `_ .. _openstacksdk: https://docs.openstack.org/openstacksdk/latest/user/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 shade-1.31.0/shade.egg-info/requires.txt0000664000175000017500000000010113440330010020034 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 os-client-config>=1.28.0 openstacksdk>=0.15.0 shade-1.31.0/shade.egg-info/pbr.json0000664000175000017500000000005613440330010017123 0ustar zuulzuul00000000000000{"git_version": "b23000c", "is_release": true}shade-1.31.0/shade.egg-info/top_level.txt0000664000175000017500000000000613440330010020172 0ustar zuulzuul00000000000000shade shade-1.31.0/LICENSE0000666000175000017500000002363613440327640013726 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. 
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
      Subject to the terms and conditions of this License, each Contributor
      hereby grants to You a perpetual, worldwide, non-exclusive, no-charge,
      royalty-free, irrevocable (except as stated in this section) patent
      license to make, have made, use, offer to sell, sell, import, and
      otherwise transfer the Work, where such license applies only to those
      patent claims licensable by such Contributor that are necessarily
      infringed by their Contribution(s) alone or by combination of their
      Contribution(s) with the Work to which such Contribution(s) was
      submitted. If You institute patent litigation against any entity
      (including a cross-claim or counterclaim in a lawsuit) alleging that
      the Work or a Contribution incorporated within the Work constitutes
      direct or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate as
      of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks.
      This License does not grant permission to use the trade names,
      trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing
      the origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
shade-1.31.0/README.rst0000666000175000017500000000622613440327640014374 0ustar zuulzuul00000000000000Introduction
============

.. warning::
   shade has been superseded by `openstacksdk`_ and no longer takes new
   features. The existing code will continue to be maintained indefinitely
   for bugfixes as necessary, but improvements will be deferred to
   `openstacksdk`_. Please update your applications to use `openstacksdk`_
   directly.

shade is a simple client library for interacting with OpenStack clouds.
The key word here is *simple*. Clouds can do many many many things - but
there are probably only about 10 of them that most people care about with
any regularity. If you want to do complicated things, you should probably
use the lower-level client libraries - or even the REST API directly.
However, if what you want is to be able to write an application that talks
to clouds no matter what crazy choices the deployer has made in an attempt
to be more hipster than their self-entitled narcissist peers, then shade
is for you.

shade started its life as some code inside Ansible. Ansible has a bunch
of different OpenStack-related modules, and there was a ton of duplicated
code.
Eventually, between refactoring that duplication into an internal
library, and adding logic and features that the OpenStack Infra team had
developed to run client applications at scale, it turned out that we'd
written nine-tenths of what we'd need to have a standalone library.

.. _usage_example:

Example
=======

Sometimes an example is nice.

#. Create a ``clouds.yaml`` file::

     clouds:
       mordred:
         region_name: RegionOne
         auth:
           username: 'mordred'
           password: XXXXXXX
           project_name: 'shade'
           auth_url: 'https://montytaylor-sjc.openstack.blueboxgrid.com:5001/v2.0'

   Please note: *os-client-config* will look for a file called
   ``clouds.yaml`` in the following locations:

   * Current Directory
   * ``~/.config/openstack``
   * ``/etc/openstack``

   More information at https://pypi.org/project/os-client-config

#. Create a server with *shade*, configured with the ``clouds.yaml``
   file::

     import shade

     # Initialize and turn on debug logging
     shade.simple_logging(debug=True)

     # Initialize cloud
     # Cloud configs are read with os-client-config
     cloud = shade.openstack_cloud(cloud='mordred')

     # Upload an image to the cloud
     image = cloud.create_image(
         'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

     # Find a flavor with at least 512M of RAM
     flavor = cloud.get_flavor_by_ram(512)

     # Boot a server, wait for it to boot, and then do whatever is needed
     # to get a public IP for it.
     cloud.create_server(
         'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)

Links
=====

* `Issue Tracker `_
* `Code Review `_
* `Documentation `_
* `PyPI `_
* `Mailing list `_
* `Release notes `_

.. _openstacksdk: https://docs.openstack.org/openstacksdk/latest/user/
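
Migrating to openstacksdk
=========================

Since the warning above defers new work to `openstacksdk`_, a minimal
sketch of what the example looks like after migration may help. This is
an illustration rather than a guaranteed recipe: it assumes a recent
openstacksdk release and the same ``mordred`` entry in ``clouds.yaml``,
and relies on the fact that shade's calls were merged into openstacksdk's
``Connection``. Check the openstacksdk documentation for the
authoritative interface before relying on it::

    import openstack

    # openstacksdk's rough analogue of shade.simple_logging()
    openstack.enable_logging(debug=True)

    # Connect using the same clouds.yaml entry as before
    conn = openstack.connect(cloud='mordred')

    # The cloud-layer methods mirror shade's, so the example above
    # translates almost mechanically.
    image = conn.create_image(
        'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)
    flavor = conn.get_flavor_by_ram(512)
    conn.create_server(
        'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)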