glance_store-0.23.0/0000775000175100017510000000000013230237776014277 5ustar zuulzuul00000000000000glance_store-0.23.0/PKG-INFO0000664000175100017510000000471113230237776015377 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: glance_store Version: 0.23.0 Summary: OpenStack Image Service Store Library Home-page: http://docs.openstack.org/developer/glance_store Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: http://governance.openstack.org/badges/glance_store.svg :target: http://governance.openstack.org/reference/tags/index.html :alt: The following tags have been asserted for the Glance Store Library: "project:official", "stable:follows-policy", "vulnerability:managed", "team:diverse-affiliation". Follow the link for an explanation of these tags. .. NOTE(rosmaita): the alt text above will have to be updated when additional tags are asserted for glance_store. (The SVG in the governance repo is updated automatically.) .. Change things from this point on Glance Store Library ==================== Glance's stores library This library has been extracted from the Glance source code for the specific use of the Glance and Glare projects. The API it exposes is not stable, has some shortcomings, and is not a general purpose interface. We would eventually like to change this, but for now using this library outside of Glance or Glare will not be supported by the core team. * License: Apache License, Version 2.0 * Documentation: https://docs.openstack.org/glance_store/latest/ * Source: http://git.openstack.org/cgit/openstack/glance_store * Bugs: http://bugs.launchpad.net/glance-store Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: OpenStack Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 glance_store-0.23.0/LICENSE0000666000175100017510000002363713230237440015305 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. glance_store-0.23.0/releasenotes/0000775000175100017510000000000013230237776016770 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/notes/0000775000175100017510000000000013230237776020120 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/notes/start-using-reno-73ef709807e37b74.yaml0000666000175100017510000000007113230237440026306 0ustar zuulzuul00000000000000--- other: - Start using reno to manage release notes. glance_store-0.23.0/releasenotes/notes/move-rootwrap-config-f2cf435c548aab5c.yaml0000666000175100017510000000031113230237440027423 0ustar zuulzuul00000000000000--- upgrade: - Packagers should be aware that the rootwrap configuration files have been moved from etc/ to etc/glance/ in order to be consistent with where other projects place these files. glance_store-0.23.0/releasenotes/notes/pike-relnote-9f547df14184d18c.yaml0000666000175100017510000000476513230237440025554 0ustar zuulzuul00000000000000--- prelude: > This was a quiet development cycle for the ``glance_store`` library. No new features were added. Several bugs were fixed and some code changes were committed to increase stability. fixes: - | The following bugs were fixed during the Pike release cycle. * Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+ * Bug 1668848_: PBR 2.0.0 will break projects not using constraints * Bug 1657710_: Unit test passes only because is launched as non-root user * Bug 1686063_: RBD driver can't delete image with unprotected snapshot * Bug 1691132_: Fixed tests failing due to updated oslo.config * Bug 1693670_: Fix doc generation for Python3 * Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume * Bug 1620214_: Sheepdog: command execution failure .. _1618666: https://code.launchpad.net/bugs/1618666 .. _1668848: https://code.launchpad.net/bugs/1668848 .. _1657710: https://code.launchpad.net/bugs/1657710 .. _1686063: https://code.launchpad.net/bugs/1686063 .. _1691132: https://code.launchpad.net/bugs/1691132 .. _1693670: https://code.launchpad.net/bugs/1693670 .. _1643516: https://code.launchpad.net/bugs/1643516 .. _1620214: https://code.launchpad.net/bugs/1620214 other: - | The following improvements were made during the Pike release cycle. * `Fixed string formatting in log message `_ * `Correct error msg variable that could be unassigned `_ * `Use HostAddressOpt for store opts that accept IP and hostnames `_ * `Replace six.iteritems() with .items() `_ * `Add python 3.5 in classifier and envlist `_ * `Initialize privsep root_helper command `_ * `Documentation was reorganized according to the new standard layout `_ glance_store-0.23.0/releasenotes/notes/releasenote-0.17.0-efee3f557ea2096a.yaml0000666000175100017510000000116313230237440026406 0ustar zuulzuul00000000000000--- prelude: > Some deprecated exceptions have been removed. See upgrade section for more details. upgrade: - The following list of exceptions have been deprecated since 0.10.0 release -- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, ``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, ``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, ``ImageDataNotFound``, ``InvalidParameterValue``, ``InvalidImageStatusTransition``. This release removes these exceptions so any remnant consumption of the same must be avoided/removed. glance_store-0.23.0/releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml0000666000175100017510000000126413230237440026543 0ustar zuulzuul00000000000000--- prelude: > glance_store._drivers.s3 removed from tree. upgrade: - The S3 driver has been removed completely from the glance_store source tree. All environments running and (or) using this s3-driver piece of code and have not been migrated will stop working after the upgrade. We recommend you use a different storage backend that is still being supported by Glance. The standard deprecation path has been used to remove this. The proces requiring store driver maintainers was initiated at http://lists.openstack.org/pipermail/openstack-dev/2015-December/081966.html . Since, S3 driver did not get any maintainer, it was decided to remove it. glance_store-0.23.0/releasenotes/notes/queens-relnote-5fa2d009d9a9e458.yaml0000666000175100017510000000501713230237467026201 0ustar zuulzuul00000000000000--- prelude: > This was a quiet development cycle for the ``glance_store`` library. One new feature was added to the Swift store driver. Several bugs were fixed and some code changes were committed to increase stability. features: - | A `BufferedReader`_ has been added to the Swift store driver in order to enable better recovery from errors during uploads of large image files. Because this reader buffers image data, it could cause Glance to use a much larger amount of disk space, and so the Buffered Reader is *not* enabled by default. To use the new reader with the Swift store, you must do the following: * Set the ``glance_store`` configuration option ``swift_buffer_on_upload`` to ``True`` * Set the ``glance_store`` configuration option ``swift_upload_buffer_dir`` to a string value representing an absolute directory path. This directory will be used to hold the buffered data. The Buffered Reader works by taking advantage of the way Swift stores large objects by segmenting them into discrete chunks. Thus, the amount of disk space a Glance API node will require for buffering is a function of the ``swift_store_large_object_chunk_size`` setting and the number of worker threads (configured in **glance-api.conf** as the value of ``workers``). Disk utilization will cap at the following value swift_store_large_object_chunk_size * workers * 1000 Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the Glance image cache, which may affect overall performance. For more information, see the `Buffered Reader for Swift Driver`_ spec. .. _BufferedReader: http://git.openstack.org/cgit/openstack/glance_store/commit/?id=2e0024c85ca2ddf380014e44213be4fb876f680e .. _Buffered Reader for Swift Driver: http://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/buffered-reader-for-swift-driver.html fixes: - | * Bug 1738331_: Fix BufferedReader writing zero size chunks * Bug 1733502_: Use cached auth_ref instead of getting a new one each time .. _1738331: https://code.launchpad.net/bugs/1738331 .. _1733502: https://code.launchpad.net/bugs/1733502 upgrade: - | Two new configuration options, ``swift_buffer_on_upload`` and ``swift_upload_buffer_dir`` have been introduced. These apply only to users of the Swift store and their use is optional. See the New Features section for more information. glance_store-0.23.0/releasenotes/notes/remove-gridfs-driver-09286e27613b4353.yaml0000666000175100017510000000033213230237440026751 0ustar zuulzuul00000000000000--- prelude: > glance_store._drivers.gridfs deprecations: - The gridfs driver has been removed from the tree. The environments using this driver that were not migrated will stop working after the upgrade.glance_store-0.23.0/releasenotes/notes/multi-tenant-store-058b67ce5b7f3bd0.yaml0000666000175100017510000000060113230237440027040 0ustar zuulzuul00000000000000--- upgrade: - If using Swift in the multi-tenant mode for storing images in Glance, please note that the configuration options ``swift_store_multi_tenant`` and ``swift_store_config_file`` are now mutually exclusive and cannot be configured together. If you intend to use multi-tenant store, please make sure that you have not set a swift configuration file. glance_store-0.23.0/releasenotes/notes/sorted-drivers-for-configs-a905f07d3bf9c973.yaml0000666000175100017510000000116113230237440030406 0ustar zuulzuul00000000000000--- prelude: > Return list of store drivers in sorted order for generating configs. More info in ``Upgrade Notes`` and ``Bug Fixes`` section. upgrade: - This version of glance_store will result in Glance generating the configs in a sorted (deterministic) order. So, preferably store releases on or after this should be used for generating any new configs if the mismatched ordering of the configs results in an issue in your environment. fixes: - Bug 1619487 is fixed which was causing random order of the generation of configs in Glance. See ``upgrade`` section for more details. glance_store-0.23.0/releasenotes/notes/support-cinder-upload-c85849d9c88bbd7e.yaml0000666000175100017510000000063413230237440027563 0ustar zuulzuul00000000000000--- features: - Implemented image uploading, downloading and deletion for cinder store. It also supports new settings to put image volumes into a specific project to hide them from users and to control them based on ACL of the images. Note that cinder store is currently considered experimental, so current deployers should be aware that the use of it in production right now may be risky. ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000glance_store-0.23.0/releasenotes/notes/set-documented-default-directory-for-filesystem-9b417a29416d3a94.yamlglance_store-0.23.0/releasenotes/notes/set-documented-default-directory-for-filesystem-9b417a29416d30000666000175100017510000000036213230237440033106 0ustar zuulzuul00000000000000--- other: - For years, `/var/lib/glance/images` has been presented as the default dir for the filesystem store. It was not part of the default value until now. New deployments and ppl overriding config files should watch for this. glance_store-0.23.0/releasenotes/notes/vmware-store-requests-369485d2cfdb6175.yaml0000666000175100017510000000047413230237440027453 0ustar zuulzuul00000000000000--- security: - Previously the VMWare Datastore was using HTTPS Connections from httplib which do not verify the connection. By switching to using requests library the VMware storage backend now verifies HTTPS connection to vCenter server and thus addresses the vulnerabilities described in OSSN-0033. glance_store-0.23.0/releasenotes/notes/prevent-unauthorized-errors-ebb9cf2236595cd0.yaml0000666000175100017510000000111113230237440030777 0ustar zuulzuul00000000000000--- prelude: > Prevent Unauthorized errors during uploading or donwloading data to Swift store. features: - Allow glance_store to refresh token when upload or download data to Swift store. glance_store identifies if token is going to expire soon when executing request to Swift and refresh the token. For multi-tenant swift store glance_store uses trusts, for single-tenant swift store glance_store uses credentials from swift store configurations. Please also note that this feature is enabled if and only if Keystone V3 API is available and enabled.glance_store-0.23.0/releasenotes/notes/improved-configuration-options-3635b56aba3072c9.yaml0000666000175100017510000000247613230237440031315 0ustar zuulzuul00000000000000--- prelude: > Improved configuration options for glance_store. Please refer to the ``other`` section for more information. other: - The glance_store configuration options have been improved with detailed help texts, defaults for sample configuration files, explicit choices of values for operators to choose from, and a strict range defined with ``min`` and ``max`` boundaries. It is to be noted that the configuration options that take integer values now have a strict range defined with "min" and/or "max" boundaries where appropriate. This renders the configuration options incapable of taking certain values that may have been accepted before but were actually invalid. For example, configuration options specifying counts, where a negative value was undefined, would have still accepted the supplied negative value. Such options will no longer accept negative values. However, options where a negative value was previously defined (for example, -1 to mean unlimited) will remain unaffected by this change. Values that do not comply with the appropriate restrictions will prevent the service from starting. The logs will contain a message indicating the problematic configuration option and the reason why the supplied value has been rejected. glance_store-0.23.0/releasenotes/notes/.placeholder0000666000175100017510000000000013230237440022357 0ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/0000775000175100017510000000000013230237776020270 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/newton.rst0000666000175100017510000000023213230237440022317 0ustar zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. release-notes:: :branch: origin/stable/newton glance_store-0.23.0/releasenotes/source/index.rst0000666000175100017510000000026313230237440022120 0ustar zuulzuul00000000000000============================ Glance_store Release Notes ============================ .. toctree:: :maxdepth: 1 unreleased pike ocata newton mitaka liberty glance_store-0.23.0/releasenotes/source/mitaka.rst0000666000175100017510000000023213230237440022253 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka glance_store-0.23.0/releasenotes/source/ocata.rst0000666000175100017510000000023013230237440022072 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata glance_store-0.23.0/releasenotes/source/_templates/0000775000175100017510000000000013230237776022425 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/_templates/.placeholder0000666000175100017510000000000013230237440024664 0ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/conf.py0000666000175100017510000002137113230237440021561 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Glance_store Release Notes documentation build configuration file # # Modified from corresponding configuration file in Glance. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options repository_name = 'openstack/glance_store' bug_project = 'glance-store' bug_tag = '' html_last_updated_fmt = '%Y-%m-%d %H:%M' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Glance_store Release Notes' copyright = u'2015, Openstack Foundation' # Release notes are unversioned, so we don't need to set version or release version = '' release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'GlanceStoreReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'GlanceStoreReleaseNotes.tex', u'Glance_store Release Notes Documentation', u'Glance_store Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'glancestorereleasenotes', u'Glance_store Release Notes Documentation', [u'Glance_store Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'GlanceStoreReleaseNotes', u'Glance_store Release Notes Documentation', u'Glance_store Developers', 'GlanceStoreReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] glance_store-0.23.0/releasenotes/source/locale/0000775000175100017510000000000013230237776021527 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/locale/zh_CN/0000775000175100017510000000000013230237776022530 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/locale/zh_CN/LC_MESSAGES/0000775000175100017510000000000013230237776024315 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/locale/zh_CN/LC_MESSAGES/releasenotes.po0000666000175100017510000000457313230237440027345 0ustar zuulzuul00000000000000# zzxwill , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: Glance_store Release Notes 0.20.1\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-03-22 21:38+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-08-23 02:05+0000\n" "Last-Translator: zzxwill \n" "Language-Team: Chinese (China)\n" "Language: zh-CN\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=1; plural=0\n" msgid "0.11.0" msgstr "0.11.0" msgid "0.12.0" msgstr "0.12.0" msgid "0.16.0" msgstr "0.16.0" msgid "0.17.0" msgstr "0.17.0" msgid "Current Series Release Notes" msgstr "当前版本发布说明" msgid "Deprecation Notes" msgstr "弃用说明" msgid "Glance_store Release Notes" msgstr "Glance_store发布说明" msgid "Liberty Series Release Notes" msgstr "Liberty版本发布说明" msgid "Mitaka Series Release Notes" msgstr "Mitaka 版本发布说明" msgid "New Features" msgstr "新特性" msgid "Other Notes" msgstr "其他说明" msgid "Security Issues" msgstr "安全问题" msgid "Start using reno to manage release notes." msgstr "开始使用reno管理发布说明。" msgid "" "The following list of exceptions have been deprecated since 0.10.0 release " "-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``. This release removes these exceptions so " "any remnant consumption of the same must be avoided/removed." msgstr "" "以下的异常列表自0.10.0版本后已经弃用了 ——``Conflict``, " "``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``。该版本移除了这些异常,所以任何遗留的相同的" "使用方式必须避免或去掉。" msgid "Upgrade Notes" msgstr "升级说明" msgid "glance_store._drivers.gridfs" msgstr "glance_store._drivers.gridfs" msgid "glance_store._drivers.s3 removed from tree." msgstr "glance_store._drivers.s3从树上移除了。" glance_store-0.23.0/releasenotes/source/locale/en_GB/0000775000175100017510000000000013230237776022501 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/locale/en_GB/LC_MESSAGES/0000775000175100017510000000000013230237776024266 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po0000666000175100017510000004077313230237440027320 0ustar zuulzuul00000000000000# Andi Chandler , 2016. #zanata # Andi Chandler , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: Glance_store Release Notes 0.21.1\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2017-09-22 13:53+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-10-05 01:04+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en-GB\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "0.11.0" msgstr "0.11.0" msgid "0.12.0" msgstr "0.12.0" msgid "0.14.0" msgstr "0.14.0" msgid "0.16.0" msgstr "0.16.0" msgid "0.17.0" msgstr "0.17.0" msgid "0.18.0-5" msgstr "0.18.0-5" msgid "0.19.0" msgstr "0.19.0" msgid "0.21.0" msgstr "0.21.0" msgid "" "Allow glance_store to refresh token when upload or download data to Swift " "store. glance_store identifies if token is going to expire soon when " "executing request to Swift and refresh the token. For multi-tenant swift " "store glance_store uses trusts, for single-tenant swift store glance_store " "uses credentials from swift store configurations. Please also note that this " "feature is enabled if and only if Keystone V3 API is available and enabled." msgstr "" "Allow glance_store to refresh token when upload or download data to Swift " "store. glance_store identifies if token is going to expire soon when " "executing request to Swift and refresh the token. For multi-tenant swift " "store glance_store uses trusts, for single-tenant swift store glance_store " "uses credentials from swift store configurations. Please also note that this " "feature is enabled if and only if Keystone V3 API is available and enabled." msgid "Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+" msgstr "Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+" msgid "" "Bug 1619487 is fixed which was causing random order of the generation of " "configs in Glance. See ``upgrade`` section for more details." msgstr "" "Bug 1619487 is fixed which was causing random order of the generation of " "configs in Glance. See ``upgrade`` section for more details." msgid "Bug 1620214_: Sheepdog: command execution failure" msgstr "Bug 1620214_: Sheepdog: command execution failure" msgid "Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume" msgstr "Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume" msgid "" "Bug 1657710_: Unit test passes only because is launched as non-root user" msgstr "" "Bug 1657710_: Unit test passes only because is launched as non-root user" msgid "Bug 1668848_: PBR 2.0.0 will break projects not using constraints" msgstr "Bug 1668848_: PBR 2.0.0 will break projects not using constraints" msgid "Bug 1686063_: RBD driver can't delete image with unprotected snapshot" msgstr "Bug 1686063_: RBD driver can't delete image with unprotected snapshot" msgid "Bug 1691132_: Fixed tests failing due to updated oslo.config" msgstr "Bug 1691132_: Fixed tests failing due to updated oslo.config" msgid "Bug 1693670_: Fix doc generation for Python3" msgstr "Bug 1693670_: Fix doc generation for Python3" msgid "Bug Fixes" msgstr "Bug Fixes" msgid "Current Series Release Notes" msgstr "Current Series Release Notes" msgid "Deprecation Notes" msgstr "Deprecation Notes" msgid "" "For years, `/var/lib/glance/images` has been presented as the default dir " "for the filesystem store. It was not part of the default value until now. " "New deployments and ppl overriding config files should watch for this." msgstr "" "For years, `/var/lib/glance/images` has been presented as the default dir " "for the filesystem store. It was not part of the default value until now. " "New deployments and people overriding config files should watch for this." msgid "Glance_store Release Notes" msgstr "Glance_store Release Notes" msgid "" "If using Swift in the multi-tenant mode for storing images in Glance, please " "note that the configuration options ``swift_store_multi_tenant`` and " "``swift_store_config_file`` are now mutually exclusive and cannot be " "configured together. If you intend to use multi-tenant store, please make " "sure that you have not set a swift configuration file." msgstr "" "If using Swift in the multi-tenant mode for storing images in Glance, please " "note that the configuration options ``swift_store_multi_tenant`` and " "``swift_store_config_file`` are now mutually exclusive and cannot be " "configured together. If you intend to use multi-tenant store, please make " "sure that you have not set a swift configuration file." msgid "" "Implemented image uploading, downloading and deletion for cinder store. It " "also supports new settings to put image volumes into a specific project to " "hide them from users and to control them based on ACL of the images. Note " "that cinder store is currently considered experimental, so current deployers " "should be aware that the use of it in production right now may be risky." msgstr "" "Implemented image uploading, downloading and deletion for Cinder store. It " "also supports new settings to put image volumes into a specific project to " "hide them from users and to control them based on ACL of the images. Note " "that Cinder store is currently considered experimental, so current deployers " "should be aware that the use of it in production right now may be risky." msgid "" "Improved configuration options for glance_store. Please refer to the " "``other`` section for more information." msgstr "" "Improved configuration options for glance_store. Please refer to the " "``other`` section for more information." msgid "Liberty Series Release Notes" msgstr "Liberty Series Release Notes" msgid "Mitaka Series Release Notes" msgstr "Mitaka Series Release Notes" msgid "New Features" msgstr "New Features" msgid "Newton Series Release Notes" msgstr "Newton Series Release Notes" msgid "Ocata Series Release Notes" msgstr "Ocata Series Release Notes" msgid "Other Notes" msgstr "Other Notes" msgid "" "Packagers should be aware that the rootwrap configuration files have been " "moved from etc/ to etc/glance/ in order to be consistent with where other " "projects place these files." msgstr "" "Packagers should be aware that the rootwrap configuration files have been " "moved from etc/ to etc/glance/ in order to be consistent with where other " "projects place these files." msgid "Pike Series Release Notes" msgstr "Pike Series Release Notes" msgid "Prelude" msgstr "Prelude" msgid "" "Prevent Unauthorized errors during uploading or donwloading data to Swift " "store." msgstr "" "Prevent Unauthorised errors during uploading or downloading data to Swift " "store." msgid "" "Previously the VMWare Datastore was using HTTPS Connections from httplib " "which do not verify the connection. By switching to using requests library " "the VMware storage backend now verifies HTTPS connection to vCenter server " "and thus addresses the vulnerabilities described in OSSN-0033." msgstr "" "Previously the VMware Datastore was using HTTPS Connections from httplib " "which do not verify the connection. By switching to using requests library " "the VMware storage backend now verifies HTTPS connection to vCenter server " "and thus addresses the vulnerabilities described in OSSN-0033." msgid "" "Return list of store drivers in sorted order for generating configs. More " "info in ``Upgrade Notes`` and ``Bug Fixes`` section." msgstr "" "Return list of store drivers in sorted order for generating configs. More " "info in ``Upgrade Notes`` and ``Bug Fixes`` section." msgid "Security Issues" msgstr "Security Issues" msgid "" "Some deprecated exceptions have been removed. See upgrade section for more " "details." msgstr "" "Some deprecated exceptions have been removed. See upgrade section for more " "details." msgid "Start using reno to manage release notes." msgstr "Start using reno to manage release notes." msgid "" "The S3 driver has been removed completely from the glance_store source tree. " "All environments running and (or) using this s3-driver piece of code and " "have not been migrated will stop working after the upgrade. We recommend you " "use a different storage backend that is still being supported by Glance. The " "standard deprecation path has been used to remove this. The proces requiring " "store driver maintainers was initiated at http://lists.openstack.org/" "pipermail/openstack-dev/2015-December/081966.html . Since, S3 driver did not " "get any maintainer, it was decided to remove it." msgstr "" "The S3 driver has been removed completely from the glance_store source tree. " "All environments running and (or) using this s3-driver piece of code and " "have not been migrated will stop working after the upgrade. We recommend you " "use a different storage backend that is still being supported by Glance. The " "standard deprecation path has been used to remove this. The process " "requiring store driver maintainers was initiated at http://lists.openstack." "org/pipermail/openstack-dev/2015-December/081966.html . Since, S3 driver did " "not get any maintainer, it was decided to remove it." msgid "The following bugs were fixed during the Pike release cycle." msgstr "The following bugs were fixed during the Pike release cycle." msgid "The following improvements were made during the Pike release cycle." msgstr "The following improvements were made during the Pike release cycle." msgid "" "The following list of exceptions have been deprecated since 0.10.0 release " "-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``. This release removes these exceptions so " "any remnant consumption of the same must be avoided/removed." msgstr "" "The following list of exceptions have been deprecated since 0.10.0 release " "-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``. This release removes these exceptions so " "any remnant consumption of the same must be avoided/removed." msgid "" "The glance_store configuration options have been improved with detailed help " "texts, defaults for sample configuration files, explicit choices of values " "for operators to choose from, and a strict range defined with ``min`` and " "``max`` boundaries. It is to be noted that the configuration options that " "take integer values now have a strict range defined with \"min\" and/or \"max" "\" boundaries where appropriate. This renders the configuration options " "incapable of taking certain values that may have been accepted before but " "were actually invalid. For example, configuration options specifying counts, " "where a negative value was undefined, would have still accepted the supplied " "negative value. Such options will no longer accept negative values. However, " "options where a negative value was previously defined (for example, -1 to " "mean unlimited) will remain unaffected by this change. Values that do not " "comply with the appropriate restrictions will prevent the service from " "starting. The logs will contain a message indicating the problematic " "configuration option and the reason why the supplied value has been rejected." msgstr "" "The glance_store configuration options have been improved with detailed help " "texts, defaults for sample configuration files, explicit choices of values " "for operators to choose from, and a strict range defined with ``min`` and " "``max`` boundaries. It is to be noted that the configuration options that " "take integer values now have a strict range defined with \"min\" and/or \"max" "\" boundaries where appropriate. This renders the configuration options " "incapable of taking certain values that may have been accepted before but " "were actually invalid. For example, configuration options specifying counts, " "where a negative value was undefined, would have still accepted the supplied " "negative value. Such options will no longer accept negative values. However, " "options where a negative value was previously defined (for example, -1 to " "mean unlimited) will remain unaffected by this change. Values that do not " "comply with the appropriate restrictions will prevent the service from " "starting. The logs will contain a message indicating the problematic " "configuration option and the reason why the supplied value has been rejected." msgid "" "The gridfs driver has been removed from the tree. The environments using " "this driver that were not migrated will stop working after the upgrade." msgstr "" "The gridfs driver has been removed from the tree. The environments using " "this driver that were not migrated will stop working after the upgrade." msgid "" "This version of glance_store will result in Glance generating the configs in " "a sorted (deterministic) order. So, preferably store releases on or after " "this should be used for generating any new configs if the mismatched " "ordering of the configs results in an issue in your environment." msgstr "" "This version of glance_store will result in Glance generating the configs in " "a sorted (deterministic) order. So, preferably store releases on or after " "this should be used for generating any new configs if the mismatched " "ordering of the configs results in an issue in your environment." msgid "" "This was a quiet development cycle for the ``glance_store`` library. No new " "features were added. Several bugs were fixed and some code changes were " "committed to increase stability." msgstr "" "This was a quiet development cycle for the ``glance_store`` library. No new " "features were added. Several bugs were fixed and some code changes were " "committed to increase stability." msgid "Upgrade Notes" msgstr "Upgrade Notes" msgid "" "`Add python 3.5 in classifier and envlist `_" msgstr "" "`Add python 3.5 in classifier and envlist `_" msgid "" "`Correct error msg variable that could be unassigned `_" msgstr "" "`Correct error msg variable that could be unassigned `_" msgid "" "`Documentation was reorganized according to the new standard layout `_" msgstr "" "`Documentation was reorganised according to the new standard layout `_" msgid "" "`Fixed string formatting in log message `_" msgstr "" "`Fixed string formatting in log message `_" msgid "" "`Initialize privsep root_helper command `_" msgstr "" "`Initialize privsep root_helper command `_" msgid "" "`Replace six.iteritems() with .items() `_" msgstr "" "`Replace six.iteritems() with .items() `_" msgid "" "`Use HostAddressOpt for store opts that accept IP and hostnames `_" msgstr "" "`Use HostAddressOpt for store opts that accept IP and hostnames `_" msgid "glance_store._drivers.gridfs" msgstr "glance_store._drivers.gridfs" msgid "glance_store._drivers.s3 removed from tree." msgstr "glance_store._drivers.s3 removed from tree." glance_store-0.23.0/releasenotes/source/pike.rst0000666000175100017510000000021713230237440021740 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike glance_store-0.23.0/releasenotes/source/_static/0000775000175100017510000000000013230237776021716 5ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/_static/.placeholder0000666000175100017510000000000013230237440024155 0ustar zuulzuul00000000000000glance_store-0.23.0/releasenotes/source/liberty.rst0000666000175100017510000000022213230237440022456 0ustar zuulzuul00000000000000============================== Liberty Series Release Notes ============================== .. release-notes:: :branch: origin/stable/liberty glance_store-0.23.0/releasenotes/source/unreleased.rst0000666000175100017510000000016013230237440023134 0ustar zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: glance_store-0.23.0/setup.py0000666000175100017510000000200613230237440015775 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) glance_store-0.23.0/etc/0000775000175100017510000000000013230237776015052 5ustar zuulzuul00000000000000glance_store-0.23.0/etc/glance/0000775000175100017510000000000013230237776016303 5ustar zuulzuul00000000000000glance_store-0.23.0/etc/glance/rootwrap.d/0000775000175100017510000000000013230237776020402 5ustar zuulzuul00000000000000glance_store-0.23.0/etc/glance/rootwrap.d/glance_cinder_store.filters0000666000175100017510000000236013230237440025754 0ustar zuulzuul00000000000000# glance-rootwrap command filters for glance cinder store # This file should be owned by (and only-writable by) the root user [Filters] # cinder store driver disk_chown: RegExpFilter, chown, root, chown, \d+, /dev/(?!.*/\.\.).* # os-brick mount: CommandFilter, mount, root blockdev: RegExpFilter, blockdev, root, blockdev, (--getsize64|--flushbufs), /dev/.* tee: CommandFilter, tee, root mkdir: CommandFilter, mkdir, root chown: RegExpFilter, chown, root, chown root:root /etc/pstorage/clusters/(?!.*/\.\.).* ip: CommandFilter, ip, root dd: CommandFilter, dd, root iscsiadm: CommandFilter, iscsiadm, root aoe-revalidate: CommandFilter, aoe-revalidate, root aoe-discover: CommandFilter, aoe-discover, root aoe-flush: CommandFilter, aoe-flush, root read_initiator: ReadFileFilter, /etc/iscsi/initiatorname.iscsi multipath: CommandFilter, multipath, root multipathd: CommandFilter, multipathd, root systool: CommandFilter, systool, root sg_scan: CommandFilter, sg_scan, root cp: CommandFilter, cp, root drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root, /opt/emc/scaleio/sdc/bin/drv_cfg, --query_guid sds_cli: CommandFilter, /usr/local/bin/sds/sds_cli, root vgc-cluster: CommandFilter, vgc-cluster, root scsi_id: CommandFilter, /lib/udev/scsi_id, root glance_store-0.23.0/etc/glance/rootwrap.conf0000666000175100017510000000171413230237440021020 0ustar zuulzuul00000000000000# Configuration for glance-rootwrap # This file should be owned by (and only-writable by) the root user [DEFAULT] # List of directories to load filter definitions from (separated by ','). # These directories MUST all be only writeable by root ! filters_path=/etc/glance/rootwrap.d,/usr/share/glance/rootwrap # List of directories to search executables in, in case filters do not # explicitely specify a full path (separated by ',') # If not specified, defaults to system PATH environment variable. # These directories MUST all be only writeable by root ! exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin # Enable logging to syslog # Default value is False use_syslog=False # Which syslog facility to use. # Valid values include auth, authpriv, syslog, local0, local1... # Default value is 'syslog' syslog_log_facility=syslog # Which messages to log. # INFO means log all usage # ERROR means only log unsuccessful attempts syslog_log_level=ERROR glance_store-0.23.0/functional_testing.conf.sample0000666000175100017510000000023113230237440022307 0ustar zuulzuul00000000000000[tests] stores = file,swift [admin] user = admin:admin key = secretadmin auth_version = 2 auth_address = http://localhost:35357/v2.0 region = RegionOne glance_store-0.23.0/doc/0000775000175100017510000000000013230237776015044 5ustar zuulzuul00000000000000glance_store-0.23.0/doc/source/0000775000175100017510000000000013230237776016344 5ustar zuulzuul00000000000000glance_store-0.23.0/doc/source/index.rst0000666000175100017510000000052113230237440020171 0ustar zuulzuul00000000000000============== glance_store ============== The glance_store library supports the creation, deletion and gather of data assets from/to a set of several, different, storage technologies. .. toctree:: :maxdepth: 1 user/index reference/index .. rubric:: Indices and tables * :ref:`genindex` * :ref:`modindex` * :ref:`search` glance_store-0.23.0/doc/source/user/0000775000175100017510000000000013230237776017322 5ustar zuulzuul00000000000000glance_store-0.23.0/doc/source/user/index.rst0000666000175100017510000000017713230237440021156 0ustar zuulzuul00000000000000================================= glance-store User Documentation ================================= .. toctree:: drivers glance_store-0.23.0/doc/source/user/drivers.rst0000666000175100017510000000372213230237440021524 0ustar zuulzuul00000000000000 Glance Store Drivers ==================== Glance store supports several different drivers. These drivers live within the library's code base and they are maintained by either members of the Glance community or OpenStack in general. Please, find below the table of supported drivers and maintainers: +-------------------+---------------------+------------------------------------+------------------+ | Driver | Maintainer | Email | IRC Nick | +===================+=====================+====================================+==================+ | File System | Glance Team | openstack-dev@lists.openstack.org | openstack-glance | +-------------------+---------------------+------------------------------------+------------------+ | HTTP | Glance Team | openstack-dev@lists.openstack.org | openstack-glance | +-------------------+---------------------+------------------------------------+------------------+ | RBD | Fei Long Wang | flwang@catalyst.net.nz | flwang | +-------------------+---------------------+------------------------------------+------------------+ | Cinder | Tomoki Sekiyama | tomoki.sekiyama@gmail.com | | +-------------------+---------------------+------------------------------------+------------------+ | Swift | Matthew Oliver | matt@oliver.net.au | mattoliverau | +-------------------+---------------------+------------------------------------+------------------+ | VMware | Sabari Murugesan | smurugesan@vmware.com | sabari | +-------------------+---------------------+------------------------------------+------------------+ | Sheepdog | YAMADA Hideki | yamada.hideki@lab.ntt.co.jp | yamada-h | +-------------------+---------------------+------------------------------------+------------------+ glance_store-0.23.0/doc/source/conf.py0000666000175100017510000000574313230237440017642 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import subprocess import sys import warnings sys.path.insert(0, os.path.abspath('../..')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'openstackdocstheme'] # openstackdocstheme options repository_name = 'openstack/glance_store' bug_project = 'glance-store' bug_tag = '' html_last_updated_fmt = '%Y-%m-%d %H:%M' # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # Add any paths that contain templates here, relative to this directory. # templates_path = [] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'glance_store' copyright = u'2014, OpenStack Foundation' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' # html_static_path = ['static'] html_theme = 'openstackdocs' # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project modindex_common_prefix = ['glance_store.'] git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local", "-n1"] try: html_last_updated_fmt = subprocess.check_output(git_cmd).decode('utf-8') except Exception: warnings.warn('Cannot get last updated time from git repository. ' 'Not setting "html_last_updated_fmt".') # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', '%s.tex' % project, '%s Documentation' % project, 'OpenStack Foundation', 'manual'), ] glance_store-0.23.0/doc/source/reference/0000775000175100017510000000000013230237776020302 5ustar zuulzuul00000000000000glance_store-0.23.0/doc/source/reference/index.rst0000666000175100017510000000021413230237440022126 0ustar zuulzuul00000000000000============================== glance-store Reference Guide ============================== .. toctree:: :maxdepth: 1 api/autoindex glance_store-0.23.0/ChangeLog0000664000175100017510000004572613230237775016066 0ustar zuulzuul00000000000000CHANGES ======= 0.23.0 ------ * Add Queens release note * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Fix some wrong url and add license * Updated from global requirements * Updated from global requirements * Fix BufferedReader writing zero size chunks * Updated from global requirements * Updated from global requirements * Use cached auth\_ref instead of gettin a new one each time * Remove setting of version/release from releasenotes * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Revert "Remove team:diverse-affiliation from tags" * Expand sz to size * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements * Update reno for stable/pike * Updated from global requirements * Updated from global requirements * Updated from global requirements * Remove team:diverse-affiliation from tags 0.21.0 ------ * Updated from global requirements * Add release note for Pike * Cinder driver: TypeError in \_open\_cinder\_volume * Updated from global requirements * Updated from global requirements * Updated from global requirements * set warning-is-error for documentation build * switch from oslosphinx to openstackdocstheme * rearrange existing documentation according to the new standard layout * Updated from global requirements * Updated from global requirements * Fix html\_last\_updated\_fmt for Python3 * Fixed tests due to updated oslo.config * Initialize privsep root\_helper command * Don't fail when trying to unprotect unprotected snapshot on RBD * Updated from global requirements * Add python 3.5 in classifier and envlist * Imported Translations from Zanata * Update maintainer's email address * Updated from global requirements * Buffered reader: Upload recovery for swift store * Updated from global requirements * Replace six.iteritems() with .items() * Removes unnecessary utf-8 coding for glance\_store * Use HostAddressOpt for store opts that accept IP and hostnames * Updated from global requirements * An unit test passes because is launched as non-root user * Update test requirement * Updated from global requirements * Updated from global requirements * Fix SafeConfigParser DeprecationWarning in Python 3.2+ * Update reno for stable/ocata * Correct error msg variable that could be unassigned * Fixing string formatting bug in log message 0.20.0 ------ * Updated from global requirements * Remove debtcollector in requirements.txt * Log at error when we intend to reraise the exception * Suppress oslo-config DeprecationWarning during functional test 0.19.0 ------ * Raise exc when using multi-tenant and swift+config * Updated from global requirements * Use storage\_url in DB for multi-tenant swift store * Add alt text for badges * Fix a typo in help text * Show team and repo badges on README * take into consideration created volume size in cinder backend * Updated from global requirements * Move rootwrap config files from etc/\* into etc/glance/\* * Update README * Convert to keystoneauth * Updated from global requirements * Fix a typo in rootwrap.conf and glance\_cinder\_store.filters * Fix dbg msg when swift can't determine image size * Refactor get\_manager\_for\_store in an OO manner * Add cinder\_volume\_type to cinder store configuration * Enable release notes translation * Updated from global requirements * Do not require entry-point dependencies in tests * Updated from global requirements * Updated from global requirements * Updated from global requirements * Sheepdog: fix command execution failure * Update home-page url in setup.cfg * Do not call image.stat() when we only need the size * TrivialFix: Merge imports in code * standardize release note page ordering * Clean imports in code * Reason to return sorted list of drivers for opts * Updated from global requirements * Always return a sorted list of drivers for configs * Fix doc build if git is absent * Improve tools/tox\_install.sh * Update reno for stable/newton 0.18.0 ------ * Fix header passed to requests * Updated from global requirements 0.17.0 ------ * Add release notes for 0.17.0 * Release note for glance\_store configuration opts * Improving help text for Swift store opts * Improving help text for Swift store util opts * Improve help text of cinder driver opts * Fix help text of swift\_store\_config\_file * Improving help text for backend store opts * Remove "Services which consume this" section * Improve the help text for Swift driver opts * Updated from global requirements * Improving help text for Sheepdog opts * Use constraints for all tox environments * Improve help text of http driver opts * Improve help text of filesystem store opts * Improve help text of rbd driver opts * Improving help text for Glance store Swift opts * Remove deprecated exceptions * Improve the help text for vmware datastore driver opts * Updated from global requirements 0.16.0 ------ * Updated from global requirements * Updated from global requirements * Remove S3 driver 0.15.0 ------ * Fix cinder config string as per current i18n state * Sheepdog:modify default addr * Cleanup i18n marker functions to match Oslo usage * Updated from global requirements * Don't include openstack/common in flake8 exclude list 0.14.0 ------ * Add bandit to pep8 and bandit testenv * Remove unused variable in vmware store * Imported Translations from Zanata * Split functional tests apart * Updated from global requirements * Check that size is a number * Replace dict.iterkeys with six.iterkeys to make PY3 compatible * cinder: Fix get\_size return value * The function add calculation size\_gb need improve * Updated from global requirements * Updated from global requirements * Fix argument order for assertEqual to (expected, observed) * Updated from global requirements * Updated from global requirements * Remove -c from tox.ini * tox respects upper-constraints.txt * Updated from global requirements * Updated from global requirements * Updated from global requirements * Updated from global requirements * Fix minor misspellings affecting Config Reference Guide * Remove verbose option from glance\_store tests * Updated from global requirements * Updated from global requirements * Improve help text of swift driver opts * Updated from global requirements * Add functional tests for swift * Imported Translations from Zanata * Updated from global requirements * Updated from global requirements * Fix releasenotes to pass reno gates * Updated from global requirements * tox: use os-testr instead of testr * Fix swiftclient mocks * Deprecate swift driver options properly * Fix typos in config files * Setup defaults for swift driver authentication * Fix doc generation warnings and errors * trivial:fixing one W503 pep8 error * Module docs are not generated * Fix cinder store to support Cinder RemoteFS backends * Missing params in store\_add\_to\_backend docstring * Mock swiftclient's functions in tests * Update reno for stable/mitaka 0.13.0 ------ * Add https ca\_file and insecure options to VMware Store * swift: Do not search storage\_url for ks v2 0.12.0 ------ * Fix misspelling in the releasenote support-cinder-upload * Add new config options for HTTPS store * Implement get, add and delete for cinder store * Implement re-authentication for swift driver * Implement swift store connection manager * Updated from global requirements * test\_http\_get\_redirect is not testing redirects correctly * Switch VMWare Datastore to use Requests * Updated from global requirements * Add base for functional tests * Add small image verifier for swift backend * Switch HTTP store to using requests 0.11.0 ------ * Change approach to request storage url for multi-tenant store * Remove unused parameters from swift connection init * Sheepdog: fix image-download failure * LOG.warn is deprecated in python3 * Updated from global requirements * Updated from global requirements * Use url\_for from keystoneclient in swift store * Remove deprecated datastore\_name, datacenter\_path * Add backend tests from glance * Fix some inconsistency in docstrings * Updated from global requirements * Change Swift zero-size chunk behaviour * Sheepdog: fix upload failure in API v2 * Remove unnecessary re-raise of NotFound exception * Updated from global requirements * Add signature verifier to backend drivers * Use oslo\_utils.encodeutils.exception\_to\_unicode() * Fix default mutables for set\_acls * Deprecate unused Exceptions * Remove unnecessary auth module * Updated from global requirements * Deprecate the S3 driver * Document supported drivers and maintainers * Remove the gridfs driver * Set documented default directory for filesystem * Imported Translations from Zanata * Updated from global requirements * Swift store: do not send a 0 byte chunk * Store.get\_size: handle HTTPException * Replace deprecated library function os.popen() with subprocess * Updated from global requirements * Deprecated tox -downloadcache option removed * Add docs section to tox.ini * Replace assertEqual(None, \*) with assertIsNone in tests * Updated from global requirements * Remove duplicate keys from dictionary * Remove unreachable code * Sheepdog: Change storelocation format * Updated from global requirements * Add reno for release notes management in glance\_store * Put py34 first in the env order of tox * Updated from global requirements * Add list of supported stores to help * Add functional testing devstack gate hooks 0.10.0 ------ * Rel notes for 0.10.0 * Updated from global requirements * Remove useless config.py file * vmware: check for response body in error conditions * remove default=None for config options * Updated from global requirements * Imported Translations from Zanata * Updated from global requirements * Updated from global requirements * Remove deprecated glance\_store opts from default section * Updated from global requirements * Improving GlanceStoreException * Activate pep8 check that \_ is imported * '\_' is used by i18n * VMware: Fix missing space in error message * Handle swift store's optional dependency * Fix swift store tests for latest swiftclient 0.9.1 ----- * rbd: re-add the absolute\_import and with\_statement imports 0.9.0 ----- * Release notes 0.9.0 and corrected library version * Updated from global requirements * Catch InvalidURL when requesting store size * Imported Translations from Transifex * Add proxy support to S3 Store * Prevent glance-api hangups during connection to rbd * rbd driver cannot delete residual image from ceph in some cases 0.8.0 ----- * Imported Translations from Transifex * Add explicit dependencies for store dependencies * Support V3 authentication with swift 0.7.1 ----- * rbd: make sure features is an int when passed to librbd.create 0.7.0 ----- * setup.cfg: add Python 3 classifiers * Remove usage of assert\_called\_once in mocks * Add .eggs/\* to .gitignore * Imported Translations from Transifex * Updated from global requirements * Make cinderclient a more optional dependency * Port S3 driver to Python 3 * Do not used named args when using swiftclient * logging failed exception info for add image operation * Fix random test error in swift store delete * Port swift driver to Python 3 * Port vmware driver to Python 3 * RBD: Reading rbd\_default\_features from ceph.conf * Move glance\_store tests into the main package * Use six.moves to fix imports on Python 3 * Move python-cinderclient to test-requirements.txt * Updated from global requirements 0.6.0 ----- * Add release notes for 0.6.0 * Drop py26 support * Port remaining tests to Python 3 * Fix Python 3 issues * Close a file to fix a resource warning on Python 3 * Port exception\_to\_str() to Python 3 * Disable propagating BadStoreConfiguration * Sync up with global-requirements 0.5.0 ----- * Add release notes for 0.5.0 * Drop use of 'oslo' namespace package * Fix RBD delete image on creation failure * Use is\_valid\_ipv6() from oslo.utils * Properly instantiate Forbidden exception * Update README to work with release tools * Remove ordereddict from requirements * gridfs: add pymongo to test-requirements and update tests * Add release notes for 0.1.10-0.3.0 * Only warn on duplicate path on fs backend * Propagate BadStoreConfiguration to library user * Handle optional dependency in vmware store * Update oslo libraries * Initialize vmware session during store creation 0.4.0 ----- * Add release notes for 0.4.0 * Fix intermittent failure in test\_vmware\_store * Deprecate the gridfs store * Remove incubative openstack.common.context module * Update help text with sample conf * Use oslo\_config.cfg.ConfigOpts in glance\_store * Make dependency on boto entirely conditional * Move from oslo.utils to oslo\_utils (supplement) * Fix timeout during upload from slow resource 0.3.0 ----- * Throw NotFound exception when template is gone * Deprecate VMware store single datastore options * Use oslo\_utils.units where appropriate * VMware: Support Multiple Datastores 0.2.0 ----- * Correct such logic in store.get() when chunk\_size param provided * Support for deleting images stored as SLO in Swift * Enable DRIVER\_REUSABLE for vmware store 0.1.12 ------ * Show fully qualified store name in update\_capabilities() logging * Move to hacking 0.10 * Fix sorting query string keys for arbitrary url schemes * Unify using six.moves.range rename everywhere 0.1.11 ------ * Remove duplicate key * Add coverage report to run\_test.sh * Use a named enum for capability values * Check VMware session before uploading image * Add capabilities to storage driver * Fixing PEP8 E712 and E265 * Convert httpretty tests to requests-mock * Replace snet config with endpoint config * Rename oslo.concurrency to oslo\_concurrency * Remove retry on failed uploads to VMware datastore * Remove old dependencies * Validate metadata JSON file * Use default datacenter\_path from oslo.vmware * Remove unused exception StorageQuotaFull * Move from oslo.config to oslo\_config * Move from oslo.utils to oslo\_utils * Add needed extra space to error message * Define a new parameter to pass CA cert file * Use testr directly from tox * Raise appropriate exception if socket error occurs * Swift Store to use Multiple Containers * Use testr directly from tox * Remove deprecated options * Correct GlanceStoreException to provide valid message - glance\_store * Catch NotFound exception in http.Store get\_size * VMware store: Re-use api session token 0.1.10 ------ 0.1.9 ----- * Test swift multi-tenant store get context * Test swift multi-tenant store add context * Use oslo.concurrency * Move cinder store to use auth\_token * Swift Multi-tenant store: Fix image upload * Use statvfs instead of df to get available space * Fix public image ACL in multi-tenant Swift mode * Updated run\_tests.sh to run tests in debug mode * Remove validate\_location * Imported Translations from Transifex * Add coverage to test-requirements.txt * Imported Translations from Transifex * Switch to using oslo.utils * Remove network\_utils * Recover from errors while deleting image segments * VMware store: Use the Content-Length if available * Backporting S3 multi-part upload functionality to glace\_store * Make rbd store's pool handling more universal * s3\_store\_host parameter with port number * Enhance configuration handling * Enable F841 check * Portback part change of adding status field to image location * Mark glance\_store as being a universal wheel * Imported Translations from Transifex * Use oslo.serialization * Fix H402 * Portback part change of enabling F821 check * Adding common.utils.exception\_to\_str() to avoid encoding issue * Replace stubout with oslotest.moxstubout * Fix RBD store to use READ\_CHUNKSIZE and correct return of get() * Add a run\_tests.sh * Run tests parallel by default * Add ordereddict to reqs for py2.6 compatibility * rbd: fix chunk size units * Imported Translations from Transifex * Stop using intersphinx * Cleanup shebang in non-executable module * Correct Sheepdog store configuration * Correct base class of no\_conf driver * Handle session timeout in the VMware store * Add entry-point for oslo.config options and update registering logic * Configure the stores explicitly * Imported Translations from Transifex * Return the right filesize when chunk\_size != None * Allowing operator to configure a permission for image file in fs store * Align swift's store API 0.1.7 ----- * Add \`OPTIONS\` attribute to swift.Store function 0.1.5 ----- * Add missing stores to setup.cfg * Set group for DeprecatedOpts * Complete random\_access for the filesystem store * Work toward Python 3.4 support and testing 0.1.3 ----- * Register store's configs w/o creating instances 0.1.2 ----- * Add deprecated options support for storage drivers * Rename locale files for glance\_store rename * Update .gitreview for project rename 0.1.1 ----- * Rename glance.store to glance\_store * Port of 97882f796c0e8969c606ae723d14b6b443e2e2f9 * Port of 502be24afa122eef08186001e54c1e1180114ccf * Fix collection order issues and unit test failures 0.1.0 ----- 0.0.1a2 ------- * Fix development classifier * Imported Translations from Transifex * Package glance's package entirely 0.0.1a1 ------- * Split CHUNKSIZE into WRITE/READ\_CHUNKSIZE * Port swift store * Add validate\_location * Fix some Exceptions incompatibilities * Imported Translations from Transifex * Setup for glance.store for translation * Set the right classifiers in setup.cfg * Remove version string from setup.cfg * Add .gitreview to the repo * Fix flake8 errors * Adopt oslo.i18n * Pull multipath support from glance/master * Update from oslo-incubator * Pass offset and chunk\_size to the \`get\` method * Migrate vmware store * Move FakeHTTPResponse to a common utils module * Removed commented code * Remove deprecated \_schedule\_delayed\_delete\_from\_backend function * BugFix: Point to the exceptions module * BugFix: define scheme outside the \`try\` block * Add a way to register store options * Update functions signatures w/ optional context * Remove old scrubber options * Move exceptions out of common and add backends.py * Use exception * Remove dependency on oslo-log * Add offset and chunk\_size to the get method * Migrate the rbd store * Use register\_store\_schemes everywhere * Add missing context keyword to the s3 store * Migrate cinder store * Remove location\_strategy, it belongs to Glance * S3 store ported * Move options registration to \_\_init\_\_ * GridFS Store * Port sheepdog and its test suite * Update from oslo-inc and added processutils * Fix http store tests * Added fake driver, restored base tests, fixed load driver issue * Use context when needed * Add context=None to http store methods * Remove old exceptions * HTTP migrated * Accept a message keyword in exceptions * Filesystem driver restored * Move drivers under \_driver * Added testr * Config & Import fixes * Move base test to glance/store * Deprecate old options, make the list shorter * Add glance.store common * Add tests w/ some fixes, although they don't run yet * Update gitignore * Add requirements and testr * Add oslo-inc modules * Copying from glance glance_store-0.23.0/setup.cfg0000666000175100017510000000520713230237776016126 0ustar zuulzuul00000000000000[metadata] name = glance_store summary = OpenStack Image Service Store Library description-file = README.rst author = OpenStack author-email = openstack-dev@lists.openstack.org home-page = http://docs.openstack.org/developer/glance_store classifier = Development Status :: 5 - Production/Stable Environment :: OpenStack Intended Audience :: Developers Intended Audience :: Information Technology License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 [files] packages = glance_store [entry_points] glance_store.drivers = file = glance_store._drivers.filesystem:Store http = glance_store._drivers.http:Store swift = glance_store._drivers.swift:Store rbd = glance_store._drivers.rbd:Store sheepdog = glance_store._drivers.sheepdog:Store cinder = glance_store._drivers.cinder:Store vmware = glance_store._drivers.vmware_datastore:Store # TESTS ONLY no_conf = glance_store.tests.fakes:UnconfigurableStore # Backwards compatibility glance.store.filesystem.Store = glance_store._drivers.filesystem:Store glance.store.http.Store = glance_store._drivers.http:Store glance.store.swift.Store = glance_store._drivers.swift:Store glance.store.rbd.Store = glance_store._drivers.rbd:Store glance.store.sheepdog.Store = glance_store._drivers.sheepdog:Store glance.store.cinder.Store = glance_store._drivers.cinder:Store glance.store.vmware_datastore.Store = glance_store._drivers.vmware_datastore:Store oslo.config.opts = glance.store = glance_store.backend:_list_opts console_scripts = glance-rootwrap = oslo_rootwrap.cmd:main [extras] vmware = oslo.vmware>=2.17.0 # Apache-2.0 swift = httplib2>=0.9.1 # MIT python-swiftclient>=3.2.0 # Apache-2.0 cinder = python-cinderclient>=3.3.0 # Apache-2.0 os-brick>=2.2.0 # Apache-2.0 oslo.rootwrap>=5.8.0 # Apache-2.0 oslo.privsep>=1.23.0 # Apache-2.0 [build_sphinx] source-dir = doc/source build-dir = doc/build all_files = 1 warning-is-error = 1 [pbr] autodoc_index_modules = True api_doc_dir = reference/api autodoc_exclude_modules = glance_store.tests.* [upload_sphinx] upload-dir = doc/build/html [compile_catalog] directory = glance_store/locale domain = glance_store [update_catalog] domain = glance_store output_dir = glance_store/locale input_file = glance_store/locale/glance_store.pot [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = glance_store/locale/glance_store.pot [wheel] universal = 1 [egg_info] tag_build = tag_date = 0 glance_store-0.23.0/README.rst0000666000175100017510000000251113230237440015753 0ustar zuulzuul00000000000000======================== Team and repository tags ======================== .. image:: http://governance.openstack.org/badges/glance_store.svg :target: http://governance.openstack.org/reference/tags/index.html :alt: The following tags have been asserted for the Glance Store Library: "project:official", "stable:follows-policy", "vulnerability:managed", "team:diverse-affiliation". Follow the link for an explanation of these tags. .. NOTE(rosmaita): the alt text above will have to be updated when additional tags are asserted for glance_store. (The SVG in the governance repo is updated automatically.) .. Change things from this point on Glance Store Library ==================== Glance's stores library This library has been extracted from the Glance source code for the specific use of the Glance and Glare projects. The API it exposes is not stable, has some shortcomings, and is not a general purpose interface. We would eventually like to change this, but for now using this library outside of Glance or Glare will not be supported by the core team. * License: Apache License, Version 2.0 * Documentation: https://docs.openstack.org/glance_store/latest/ * Source: http://git.openstack.org/cgit/openstack/glance_store * Bugs: http://bugs.launchpad.net/glance-store glance_store-0.23.0/babel.cfg0000666000175100017510000000002013230237440016003 0ustar zuulzuul00000000000000[python: **.py] glance_store-0.23.0/glance_store.egg-info/0000775000175100017510000000000013230237776020436 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store.egg-info/PKG-INFO0000664000175100017510000000471113230237775021535 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: glance-store Version: 0.23.0 Summary: OpenStack Image Service Store Library Home-page: http://docs.openstack.org/developer/glance_store Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: http://governance.openstack.org/badges/glance_store.svg :target: http://governance.openstack.org/reference/tags/index.html :alt: The following tags have been asserted for the Glance Store Library: "project:official", "stable:follows-policy", "vulnerability:managed", "team:diverse-affiliation". Follow the link for an explanation of these tags. .. NOTE(rosmaita): the alt text above will have to be updated when additional tags are asserted for glance_store. (The SVG in the governance repo is updated automatically.) .. Change things from this point on Glance Store Library ==================== Glance's stores library This library has been extracted from the Glance source code for the specific use of the Glance and Glare projects. The API it exposes is not stable, has some shortcomings, and is not a general purpose interface. We would eventually like to change this, but for now using this library outside of Glance or Glare will not be supported by the core team. * License: Apache License, Version 2.0 * Documentation: https://docs.openstack.org/glance_store/latest/ * Source: http://git.openstack.org/cgit/openstack/glance_store * Bugs: http://bugs.launchpad.net/glance-store Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: OpenStack Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 glance_store-0.23.0/glance_store.egg-info/entry_points.txt0000664000175100017510000000172313230237775023736 0ustar zuulzuul00000000000000[console_scripts] glance-rootwrap = oslo_rootwrap.cmd:main [glance_store.drivers] cinder = glance_store._drivers.cinder:Store file = glance_store._drivers.filesystem:Store glance.store.cinder.Store = glance_store._drivers.cinder:Store glance.store.filesystem.Store = glance_store._drivers.filesystem:Store glance.store.http.Store = glance_store._drivers.http:Store glance.store.rbd.Store = glance_store._drivers.rbd:Store glance.store.sheepdog.Store = glance_store._drivers.sheepdog:Store glance.store.swift.Store = glance_store._drivers.swift:Store glance.store.vmware_datastore.Store = glance_store._drivers.vmware_datastore:Store http = glance_store._drivers.http:Store no_conf = glance_store.tests.fakes:UnconfigurableStore rbd = glance_store._drivers.rbd:Store sheepdog = glance_store._drivers.sheepdog:Store swift = glance_store._drivers.swift:Store vmware = glance_store._drivers.vmware_datastore:Store [oslo.config.opts] glance.store = glance_store.backend:_list_opts glance_store-0.23.0/glance_store.egg-info/pbr.json0000664000175100017510000000005613230237775022114 0ustar zuulzuul00000000000000{"git_version": "38d9bcb", "is_release": true}glance_store-0.23.0/glance_store.egg-info/requires.txt0000664000175100017510000000073013230237775023035 0ustar zuulzuul00000000000000oslo.config>=5.1.0 oslo.i18n>=3.15.3 oslo.serialization!=2.19.1,>=2.18.0 oslo.utils>=3.33.0 oslo.concurrency>=3.25.0 stevedore>=1.20.0 enum34>=1.0.4 eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 six>=1.10.0 jsonschema<3.0.0,>=2.6.0 keystoneauth1>=3.3.0 python-keystoneclient>=3.8.0 requests>=2.14.2 [cinder] python-cinderclient>=3.3.0 os-brick>=2.2.0 oslo.rootwrap>=5.8.0 oslo.privsep>=1.23.0 [swift] httplib2>=0.9.1 python-swiftclient>=3.2.0 [vmware] oslo.vmware>=2.17.0 glance_store-0.23.0/glance_store.egg-info/dependency_links.txt0000664000175100017510000000000113230237775024503 0ustar zuulzuul00000000000000 glance_store-0.23.0/glance_store.egg-info/SOURCES.txt0000664000175100017510000001010213230237776022314 0ustar zuulzuul00000000000000.testr.conf AUTHORS ChangeLog LICENSE README.rst babel.cfg functional_testing.conf.sample requirements.txt run_tests.sh setup.cfg setup.py test-requirements.txt tox.ini doc/source/conf.py doc/source/index.rst doc/source/reference/index.rst doc/source/user/drivers.rst doc/source/user/index.rst etc/glance/rootwrap.conf etc/glance/rootwrap.d/glance_cinder_store.filters glance_store/__init__.py glance_store/backend.py glance_store/capabilities.py glance_store/driver.py glance_store/exceptions.py glance_store/i18n.py glance_store/location.py glance_store.egg-info/PKG-INFO glance_store.egg-info/SOURCES.txt glance_store.egg-info/dependency_links.txt glance_store.egg-info/entry_points.txt glance_store.egg-info/not-zip-safe glance_store.egg-info/pbr.json glance_store.egg-info/requires.txt glance_store.egg-info/top_level.txt glance_store/_drivers/__init__.py glance_store/_drivers/cinder.py glance_store/_drivers/filesystem.py glance_store/_drivers/http.py glance_store/_drivers/rbd.py glance_store/_drivers/sheepdog.py glance_store/_drivers/vmware_datastore.py glance_store/_drivers/swift/__init__.py glance_store/_drivers/swift/buffered.py glance_store/_drivers/swift/connection_manager.py glance_store/_drivers/swift/store.py glance_store/_drivers/swift/utils.py glance_store/common/__init__.py glance_store/common/utils.py glance_store/locale/en_GB/LC_MESSAGES/glance_store.po glance_store/locale/ko_KR/LC_MESSAGES/glance_store.po glance_store/tests/__init__.py glance_store/tests/base.py glance_store/tests/fakes.py glance_store/tests/utils.py glance_store/tests/etc/glance-swift.conf glance_store/tests/functional/__init__.py glance_store/tests/functional/base.py glance_store/tests/functional/filesystem/__init__.py glance_store/tests/functional/filesystem/test_functional_filesystem.py glance_store/tests/functional/hooks/gate_hook.sh glance_store/tests/functional/hooks/post_test_hook.sh glance_store/tests/functional/swift/__init__.py glance_store/tests/functional/swift/test_functional_swift.py glance_store/tests/unit/__init__.py glance_store/tests/unit/test_backend.py glance_store/tests/unit/test_cinder_store.py glance_store/tests/unit/test_connection_manager.py glance_store/tests/unit/test_exceptions.py glance_store/tests/unit/test_filesystem_store.py glance_store/tests/unit/test_http_store.py glance_store/tests/unit/test_opts.py glance_store/tests/unit/test_rbd_store.py glance_store/tests/unit/test_sheepdog_store.py glance_store/tests/unit/test_store_base.py glance_store/tests/unit/test_store_capabilities.py glance_store/tests/unit/test_swift_store.py glance_store/tests/unit/test_swift_store_utils.py glance_store/tests/unit/test_vmware_store.py releasenotes/notes/.placeholder releasenotes/notes/improved-configuration-options-3635b56aba3072c9.yaml releasenotes/notes/move-rootwrap-config-f2cf435c548aab5c.yaml releasenotes/notes/multi-tenant-store-058b67ce5b7f3bd0.yaml releasenotes/notes/pike-relnote-9f547df14184d18c.yaml releasenotes/notes/prevent-unauthorized-errors-ebb9cf2236595cd0.yaml releasenotes/notes/queens-relnote-5fa2d009d9a9e458.yaml releasenotes/notes/releasenote-0.17.0-efee3f557ea2096a.yaml releasenotes/notes/remove-gridfs-driver-09286e27613b4353.yaml releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml releasenotes/notes/set-documented-default-directory-for-filesystem-9b417a29416d3a94.yaml releasenotes/notes/sorted-drivers-for-configs-a905f07d3bf9c973.yaml releasenotes/notes/start-using-reno-73ef709807e37b74.yaml releasenotes/notes/support-cinder-upload-c85849d9c88bbd7e.yaml releasenotes/notes/vmware-store-requests-369485d2cfdb6175.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po releasenotes/source/locale/zh_CN/LC_MESSAGES/releasenotes.po tools/colorizer.py tools/install_venv.py tools/install_venv_common.py tools/tox_install.sh tools/with_venv.shglance_store-0.23.0/glance_store.egg-info/top_level.txt0000664000175100017510000000001513230237775023163 0ustar zuulzuul00000000000000glance_store glance_store-0.23.0/glance_store.egg-info/not-zip-safe0000664000175100017510000000000113230237701022650 0ustar zuulzuul00000000000000 glance_store-0.23.0/tools/0000775000175100017510000000000013230237776015437 5ustar zuulzuul00000000000000glance_store-0.23.0/tools/install_venv_common.py0000666000175100017510000001350613230237440022060 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Provides methods needed by installation script for OpenStack development virtual environments. Since this script is used to bootstrap a virtualenv from the system's Python environment, it should be kept strictly compatible with Python 2.6. Synced in from openstack-common """ from __future__ import print_function import optparse import os import subprocess import sys class InstallVenv(object): def __init__(self, root, venv, requirements, test_requirements, py_version, project): self.root = root self.venv = venv self.requirements = requirements self.test_requirements = test_requirements self.py_version = py_version self.project = project def die(self, message, *args): print(message % args, file=sys.stderr) sys.exit(1) def check_python_version(self): if sys.version_info < (2, 6): self.die("Need Python Version >= 2.6") def run_command_with_code(self, cmd, redirect_output=True, check_exit_code=True): """Runs a command in an out-of-process shell. Returns the output of that command. Working directory is self.root. """ if redirect_output: stdout = subprocess.PIPE else: stdout = None proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout) output = proc.communicate()[0] if check_exit_code and proc.returncode != 0: self.die('Command "%s" failed.\n%s', ' '.join(cmd), output) return (output, proc.returncode) def run_command(self, cmd, redirect_output=True, check_exit_code=True): return self.run_command_with_code(cmd, redirect_output, check_exit_code)[0] def get_distro(self): if (os.path.exists('/etc/fedora-release') or os.path.exists('/etc/redhat-release')): return Fedora( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) else: return Distro( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) def check_dependencies(self): self.get_distro().install_virtualenv() def create_virtualenv(self, no_site_packages=True): """Creates the virtual environment and installs PIP. Creates the virtual environment and installs PIP only into the virtual environment. """ if not os.path.isdir(self.venv): print('Creating venv...', end=' ') if no_site_packages: self.run_command(['virtualenv', '-q', '--no-site-packages', self.venv]) else: self.run_command(['virtualenv', '-q', self.venv]) print('done.') else: print("venv already exists...") pass def pip_install(self, *args): self.run_command(['tools/with_venv.sh', 'pip', 'install', '--upgrade'] + list(args), redirect_output=False) def install_dependencies(self): print('Installing dependencies with pip (this can take a while)...') # First things first, make sure our venv has the latest pip and # setuptools and pbr self.pip_install('pip>=1.4') self.pip_install('setuptools') self.pip_install('pbr') self.pip_install('-r', self.requirements, '-r', self.test_requirements) def parse_args(self, argv): """Parses command-line arguments.""" parser = optparse.OptionParser() parser.add_option('-n', '--no-site-packages', action='store_true', help="Do not inherit packages from global Python " "install") return parser.parse_args(argv[1:])[0] class Distro(InstallVenv): def check_cmd(self, cmd): return bool(self.run_command(['which', cmd], check_exit_code=False).strip()) def install_virtualenv(self): if self.check_cmd('virtualenv'): return if self.check_cmd('easy_install'): print('Installing virtualenv via easy_install...', end=' ') if self.run_command(['easy_install', 'virtualenv']): print('Succeeded') return else: print('Failed') self.die('ERROR: virtualenv not found.\n\n%s development' ' requires virtualenv, please install it using your' ' favorite package management tool' % self.project) class Fedora(Distro): """This covers all Fedora-based distributions. Includes: Fedora, RHEL, CentOS, Scientific Linux """ def check_pkg(self, pkg): return self.run_command_with_code(['rpm', '-q', pkg], check_exit_code=False)[1] == 0 def install_virtualenv(self): if self.check_cmd('virtualenv'): return if not self.check_pkg('python-virtualenv'): self.die("Please install 'python-virtualenv'.") super(Fedora, self).install_virtualenv() glance_store-0.23.0/tools/with_venv.sh0000777000175100017510000000033213230237440017773 0ustar zuulzuul00000000000000#!/bin/bash TOOLS_PATH=${TOOLS_PATH:-$(dirname $0)} VENV_PATH=${VENV_PATH:-${TOOLS_PATH}} VENV_DIR=${VENV_NAME:-/../.venv} TOOLS=${TOOLS_PATH} VENV=${VENV:-${VENV_PATH}/${VENV_DIR}} source ${VENV}/bin/activate && "$@" glance_store-0.23.0/tools/colorizer.py0000666000175100017510000002701013230237440020007 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013, Nebula, Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Colorizer Code is borrowed from Twisted: # Copyright (c) 2001-2010 Twisted Matrix Laboratories. # # Permission is hereby granted, free of charge, to any person obtaining # a copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, # distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so, subject to # the following conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """Display a subunit stream through a colorized unittest test runner.""" import heapq import six import subunit import sys import unittest import testtools class _AnsiColorizer(object): """ A colorizer is an object that loosely wraps around a stream, allowing callers to write text to the stream in a particular color. Colorizer classes must implement C{supported()} and C{write(text, color)}. """ _colors = dict(black=30, red=31, green=32, yellow=33, blue=34, magenta=35, cyan=36, white=37) def __init__(self, stream): self.stream = stream @staticmethod def supported(stream=sys.stdout): """ A method that returns True if the current platform supports coloring terminal output using this method. Returns False otherwise. """ if not stream.isatty(): return False # auto color only on TTYs try: import curses except ImportError: return False else: try: try: return curses.tigetnum("colors") > 2 except curses.error: curses.setupterm() return curses.tigetnum("colors") > 2 except Exception: # guess false in case of error return False def write(self, text, color): """ Write the given text to the stream in the given color. @param text: Text to be written to the stream. @param color: A string label for a color. e.g. 'red', 'white'. """ color = self._colors[color] self.stream.write('\x1b[%s;1m%s\x1b[0m' % (color, text)) class _Win32Colorizer(object): """ See _AnsiColorizer docstring. """ def __init__(self, stream): import win32console red, green, blue, bold = (win32console.FOREGROUND_RED, win32console.FOREGROUND_GREEN, win32console.FOREGROUND_BLUE, win32console.FOREGROUND_INTENSITY) self.stream = stream self.screenBuffer = win32console.GetStdHandle( win32console.STD_OUT_HANDLE) self._colors = { 'normal': red | green | blue, 'red': red | bold, 'green': green | bold, 'blue': blue | bold, 'yellow': red | green | bold, 'magenta': red | blue | bold, 'cyan': green | blue | bold, 'white': red | green | blue | bold } @staticmethod def supported(stream=sys.stdout): try: import win32console screenBuffer = win32console.GetStdHandle( win32console.STD_OUT_HANDLE) except ImportError: return False import pywintypes try: screenBuffer.SetConsoleTextAttribute( win32console.FOREGROUND_RED | win32console.FOREGROUND_GREEN | win32console.FOREGROUND_BLUE) except pywintypes.error: return False else: return True def write(self, text, color): color = self._colors[color] self.screenBuffer.SetConsoleTextAttribute(color) self.stream.write(text) self.screenBuffer.SetConsoleTextAttribute(self._colors['normal']) class _NullColorizer(object): """ See _AnsiColorizer docstring. """ def __init__(self, stream): self.stream = stream @staticmethod def supported(stream=sys.stdout): return True def write(self, text, color): self.stream.write(text) def get_elapsed_time_color(elapsed_time): if elapsed_time > 1.0: return 'red' elif elapsed_time > 0.25: return 'yellow' else: return 'green' class SubunitTestResult(testtools.TestResult): def __init__(self, stream, descriptions, verbosity): super(SubunitTestResult, self).__init__() self.stream = stream self.showAll = verbosity > 1 self.num_slow_tests = 10 self.slow_tests = [] # this is a fixed-sized heap self.colorizer = None # NOTE(vish): reset stdout for the terminal check stdout = sys.stdout sys.stdout = sys.__stdout__ for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]: if colorizer.supported(): self.colorizer = colorizer(self.stream) break sys.stdout = stdout self.start_time = None self.last_time = {} self.results = {} self.last_written = None def _writeElapsedTime(self, elapsed): color = get_elapsed_time_color(elapsed) self.colorizer.write(" %.2f" % elapsed, color) def _addResult(self, test, *args): try: name = test.id() except AttributeError: name = 'Unknown.unknown' test_class, test_name = name.rsplit('.', 1) elapsed = (self._now() - self.start_time).total_seconds() item = (elapsed, test_class, test_name) if len(self.slow_tests) >= self.num_slow_tests: heapq.heappushpop(self.slow_tests, item) else: heapq.heappush(self.slow_tests, item) self.results.setdefault(test_class, []) self.results[test_class].append((test_name, elapsed) + args) self.last_time[test_class] = self._now() self.writeTests() def _writeResult(self, test_name, elapsed, long_result, color, short_result, success): if self.showAll: self.stream.write(' %s' % str(test_name).ljust(66)) self.colorizer.write(long_result, color) if success: self._writeElapsedTime(elapsed) self.stream.writeln() else: self.colorizer.write(short_result, color) def addSuccess(self, test): super(SubunitTestResult, self).addSuccess(test) self._addResult(test, 'OK', 'green', '.', True) def addFailure(self, test, err): if test.id() == 'process-returncode': return super(SubunitTestResult, self).addFailure(test, err) self._addResult(test, 'FAIL', 'red', 'F', False) def addError(self, test, err): super(SubunitTestResult, self).addFailure(test, err) self._addResult(test, 'ERROR', 'red', 'E', False) def addSkip(self, test, reason=None, details=None): super(SubunitTestResult, self).addSkip(test, reason, details) self._addResult(test, 'SKIP', 'blue', 'S', True) def startTest(self, test): self.start_time = self._now() super(SubunitTestResult, self).startTest(test) def writeTestCase(self, cls): if not self.results.get(cls): return if cls != self.last_written: self.colorizer.write(cls, 'white') self.stream.writeln() for result in self.results[cls]: self._writeResult(*result) del self.results[cls] self.stream.flush() self.last_written = cls def writeTests(self): time = self.last_time.get(self.last_written, self._now()) if not self.last_written or (self._now() - time).total_seconds() > 2.0: diff = 3.0 while diff > 2.0: classes = self.results.keys() oldest = min(classes, key=lambda x: self.last_time[x]) diff = (self._now() - self.last_time[oldest]).total_seconds() self.writeTestCase(oldest) else: self.writeTestCase(self.last_written) def done(self): self.stopTestRun() def stopTestRun(self): for cls in list(six.iterkeys(self.results)): self.writeTestCase(cls) self.stream.writeln() self.writeSlowTests() def writeSlowTests(self): # Pare out 'fast' tests slow_tests = [item for item in self.slow_tests if get_elapsed_time_color(item[0]) != 'green'] if slow_tests: slow_total_time = sum(item[0] for item in slow_tests) slow = ("Slowest %i tests took %.2f secs:" % (len(slow_tests), slow_total_time)) self.colorizer.write(slow, 'yellow') self.stream.writeln() last_cls = None # sort by name for elapsed, cls, name in sorted(slow_tests, key=lambda x: x[1] + x[2]): if cls != last_cls: self.colorizer.write(cls, 'white') self.stream.writeln() last_cls = cls self.stream.write(' %s' % str(name).ljust(68)) self._writeElapsedTime(elapsed) self.stream.writeln() def printErrors(self): if self.showAll: self.stream.writeln() self.printErrorList('ERROR', self.errors) self.printErrorList('FAIL', self.failures) def printErrorList(self, flavor, errors): for test, err in errors: self.colorizer.write("=" * 70, 'red') self.stream.writeln() self.colorizer.write(flavor, 'red') self.stream.writeln(": %s" % test.id()) self.colorizer.write("-" * 70, 'red') self.stream.writeln() self.stream.writeln("%s" % err) test = subunit.ProtocolTestCase(sys.stdin, passthrough=None) if sys.version_info[0:2] <= (2, 6): runner = unittest.TextTestRunner(verbosity=2) else: runner = unittest.TextTestRunner( verbosity=2, resultclass=SubunitTestResult) if runner.run(test).wasSuccessful(): exit_code = 0 else: exit_code = 1 sys.exit(exit_code) glance_store-0.23.0/tools/install_venv.py0000666000175100017510000000456513230237440020515 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Copyright 2010 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Installation script for glance_store's development virtualenv """ from __future__ import print_function import os import sys import install_venv_common as install_venv # noqa def print_help(): help = """ glance_store development environment setup is complete. glance_store development uses virtualenv to track and manage Python dependencies while in development and testing. To activate the glance_store virtualenv for the extent of your current shell session you can run: $ source .venv/bin/activate Or, if you prefer, you can run commands in the virtualenv on a case by case basis by running: $ tools/with_venv.sh Also, make test will automatically use the virtualenv. """ print(help) def main(argv): root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) venv = os.path.join(root, '.venv') pip_requires = os.path.join(root, 'requirements.txt') test_requires = os.path.join(root, 'test-requirements.txt') py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1]) project = 'glance_store' install = install_venv.InstallVenv(root, venv, pip_requires, test_requires, py_version, project) options = install.parse_args(argv) install.check_python_version() install.check_dependencies() install.create_virtualenv(no_site_packages=options.no_site_packages) install.install_dependencies() install.run_command([os.path.join(venv, 'bin/python'), 'setup.py', 'develop']) print_help() if __name__ == '__main__': main(sys.argv) glance_store-0.23.0/tools/tox_install.sh0000777000175100017510000000342613230237440020331 0ustar zuulzuul00000000000000#!/usr/bin/env bash # Library constraint file contains version pin that is in conflict with # installing the library from source. We should replace the version pin in # the constraints file before applying it for from-source installation. ZUUL_CLONER=/usr/zuul-env/bin/zuul-cloner BRANCH_NAME=master LIB_NAME=glance_store requirements_installed=$(echo "import openstack_requirements" | python 2>/dev/null ; echo $?) set -e CONSTRAINTS_FILE=$1 shift install_cmd="pip install" mydir=$(mktemp -dt "$LIB_NAME-tox_install-XXXXXXX") trap "rm -rf $mydir" EXIT localfile=$mydir/upper-constraints.txt if [[ $CONSTRAINTS_FILE != http* ]]; then CONSTRAINTS_FILE=file://$CONSTRAINTS_FILE fi curl $CONSTRAINTS_FILE -k -o $localfile install_cmd="$install_cmd -c$localfile" if [ $requirements_installed -eq 0 ]; then echo "ALREADY INSTALLED" > /tmp/tox_install.txt echo "Requirements already installed; using existing package" elif [ -x "$ZUUL_CLONER" ]; then echo "ZUUL CLONER" > /tmp/tox_install.txt pushd $mydir $ZUUL_CLONER --cache-dir \ /opt/git \ --branch $BRANCH_NAME \ git://git.openstack.org \ openstack/requirements cd openstack/requirements $install_cmd -e . popd else echo "PIP HARDCODE" > /tmp/tox_install.txt if [ -z "$REQUIREMENTS_PIP_LOCATION" ]; then REQUIREMENTS_PIP_LOCATION="git+https://git.openstack.org/openstack/requirements@$BRANCH_NAME#egg=requirements" fi $install_cmd -U -e ${REQUIREMENTS_PIP_LOCATION} fi # This is the main purpose of the script: Allow local installation of # the current repo. It is listed in constraints file and thus any # install will be constrained and we need to unconstrain it. edit-constraints $localfile -- $LIB_NAME "-e file://$PWD#egg=$LIB_NAME" $install_cmd -U $* exit $? glance_store-0.23.0/requirements.txt0000666000175100017510000000132113230237440017546 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. oslo.config>=5.1.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 oslo.concurrency>=3.25.0 # Apache-2.0 stevedore>=1.20.0 # Apache-2.0 enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT six>=1.10.0 # MIT jsonschema<3.0.0,>=2.6.0 # MIT keystoneauth1>=3.3.0 # Apache-2.0 python-keystoneclient>=3.8.0 # Apache-2.0 requests>=2.14.2 # Apache-2.0 glance_store-0.23.0/run_tests.sh0000777000175100017510000001576513230237440016670 0ustar zuulzuul00000000000000#!/bin/bash set -eu function usage { echo "Usage: $0 [OPTION]..." echo "Run test suite(s)" echo "" echo " -V, --virtual-env Always use virtualenv. Install automatically if not present" echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment" echo " -s, --no-site-packages Isolate the virtualenv from the global Python environment" echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added." echo " -u, --update Update the virtual environment with any newer package versions" echo " -p, --pep8 Just run PEP8 and HACKING compliance check" echo " -P, --no-pep8 Don't run static code checks" echo " -c, --coverage Generate coverage report" echo " -d, --debug Run tests with testtools instead of testr. This allows you to use the debugger." echo " -h, --help Print this usage message" echo " --virtual-env-path Location of the virtualenv directory" echo " Default: \$(pwd)" echo " --virtual-env-name Name of the virtualenv directory" echo " Default: .venv" echo " --tools-path Location of the tools directory" echo " Default: \$(pwd)" echo " --concurrency How many processes to use when running the tests. A value of 0 autodetects concurrency from your CPU count" echo " Default: 0" echo "" echo "Note: with no options specified, the script will try to run the tests in a virtual environment," echo " If no virtualenv is found, the script will ask if you would like to create one. If you " echo " prefer to run tests NOT in a virtual environment, simply pass the -N option." exit } function process_options { i=1 while [ $i -le $# ]; do case "${!i}" in -h|--help) usage;; -V|--virtual-env) always_venv=1; never_venv=0;; -N|--no-virtual-env) always_venv=0; never_venv=1;; -s|--no-site-packages) no_site_packages=1;; -f|--force) force=1;; -u|--update) update=1;; -p|--pep8) just_pep8=1;; -P|--no-pep8) no_pep8=1;; -c|--coverage) coverage=1;; -d|--debug) debug=1;; --virtual-env-path) (( i++ )) venv_path=${!i} ;; --virtual-env-name) (( i++ )) venv_dir=${!i} ;; --tools-path) (( i++ )) tools_path=${!i} ;; --concurrency) (( i++ )) concurrency=${!i} ;; -*) testropts="$testropts ${!i}";; *) testrargs="$testrargs ${!i}" esac (( i++ )) done } tool_path=${tools_path:-$(pwd)} venv_path=${venv_path:-$(pwd)} venv_dir=${venv_name:-.venv} with_venv=tools/with_venv.sh always_venv=0 never_venv=0 force=0 no_site_packages=0 installvenvopts= testrargs= testropts= wrapper="" just_pep8=0 no_pep8=0 coverage=0 debug=0 update=0 concurrency=0 LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=C process_options $@ # Make our paths available to other scripts we call export venv_path export venv_dir export venv_name export tools_dir export venv=${venv_path}/${venv_dir} if [ $no_site_packages -eq 1 ]; then installvenvopts="--no-site-packages" fi function run_tests { # Cleanup *pyc ${wrapper} find . -type f -name "*.pyc" -delete if [ $debug -eq 1 ]; then if [ "$testropts" = "" ] && [ "$testrargs" = "" ]; then # Default to running all tests if specific test is not # provided. testrargs="discover ./tests" fi ${wrapper} python -m testtools.run $testropts $testrargs # Short circuit because all of the testr and coverage stuff # below does not make sense when running testtools.run for # debugging purposes. return $? fi if [ $coverage -eq 1 ]; then TESTRTESTS="$TESTRTESTS --coverage" else TESTRTESTS="$TESTRTESTS" fi # Just run the test suites in current environment set +e testrargs=`echo "$testrargs" | sed -e's/^\s*\(.*\)\s*$/\1/'` TESTRTESTS="$TESTRTESTS --testr-args='--subunit --concurrency $concurrency $testropts $testrargs'" if [ setup.cfg -nt glance_store.egg-info/entry_points.txt ] then ${wrapper} python setup.py egg_info fi echo "Running \`${wrapper} $TESTRTESTS\`" if ${wrapper} which subunit-2to1 2>&1 > /dev/null then # subunit-2to1 is present, testr subunit stream should be in version 2 # format. Convert to version one before colorizing. bash -c "${wrapper} $TESTRTESTS | ${wrapper} subunit-2to1 | ${wrapper} tools/colorizer.py" else bash -c "${wrapper} $TESTRTESTS | ${wrapper} tools/colorizer.py" fi RESULT=$? set -e copy_subunit_log if [ $coverage -eq 1 ]; then echo "Generating HTML coverage report in covhtml/" # Don't compute coverage for common code, which is tested elsewhere ${wrapper} coverage combine ${wrapper} coverage html --include='glance_store/*' -d covhtml -i ${wrapper} coverage report --include='glance_store/*' -i fi return $RESULT } function copy_subunit_log { LOGNAME=`cat .testrepository/next-stream` LOGNAME=$(($LOGNAME - 1)) LOGNAME=".testrepository/${LOGNAME}" cp $LOGNAME subunit.log } function run_pep8 { echo "Running flake8 ..." if [ $never_venv -eq 1 ]; then echo "**WARNING**:" echo "Running flake8 without virtual env may miss OpenStack HACKING detection" fi bash -c "${wrapper} flake8" echo "Testing translation files ..." bash -c "${wrapper} find glance_store -type f -regex '.*\.pot?' -print0|${wrapper} xargs --null -n 1 ${wrapper} msgfmt --check-format -o /dev/null" } TESTRTESTS="python setup.py testr" if [ $never_venv -eq 0 ] then # Remove the virtual environment if --force used if [ $force -eq 1 ]; then echo "Cleaning virtualenv..." rm -rf ${venv} fi if [ $update -eq 1 ]; then echo "Updating virtualenv..." python tools/install_venv.py $installvenvopts fi if [ -e ${venv} ]; then wrapper="${with_venv}" else if [ $always_venv -eq 1 ]; then # Automatically install the virtualenv python tools/install_venv.py $installvenvopts wrapper="${with_venv}" else echo -e "No virtual environment found...create one? (Y/n) \c" read use_ve if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then # Install the virtualenv and run the test suite in it python tools/install_venv.py $installvenvopts wrapper=${with_venv} fi fi fi fi # Delete old coverage data from previous runs if [ $coverage -eq 1 ]; then ${wrapper} coverage erase fi if [ $just_pep8 -eq 1 ]; then run_pep8 exit fi run_tests # NOTE(sirp): we only want to run pep8 when we're running the full-test suite, # not when we're running tests individually. To handle this, we need to # distinguish between options (testropts), which begin with a '-', and # arguments (testrargs). if [ -z "$testrargs" ]; then if [ $no_pep8 -eq 0 ]; then run_pep8 fi fi glance_store-0.23.0/tox.ini0000666000175100017510000000441613230237440015605 0ustar zuulzuul00000000000000[tox] minversion = 1.6 envlist = py35,py27,pep8 skipsdist = True [testenv] setenv = VIRTUAL_ENV={envdir} usedevelop = True install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} --allow-all-external --allow-insecure netaddr -U {opts} {packages} deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt .[vmware,swift,cinder] passenv = OS_TEST_* commands = ostestr --slowest {posargs} [testenv:docs] commands = python setup.py build_sphinx [testenv:releasenotes] commands = sphinx-build -a -E -W -d releasenotes/build/.doctrees -b html releasenotes/source releasenotes/build/html [testenv:pep8] commands = flake8 {posargs} # Run security linter # The following bandit tests are being skipped: # B101 - Use of assert detected. # B110 - Try, Except, Pass detected. # B303 - Use of insecure MD2, MD4, or MD5 hash function. bandit -r glance_store -x tests --skip B101,B110,B303 [testenv:bandit] # NOTE(browne): This is required for the integration test job of the bandit # project. Please do not remove. # The following bandit tests are being skipped: # B101 - Use of assert detected. # B110 - Try, Except, Pass detected. # B303 - Use of insecure MD2, MD4, or MD5 hash function. commands = bandit -r glance_store -x tests --skip B101,B110,B303 [testenv:cover] setenv = VIRTUAL_ENV={envdir} commands = python setup.py testr --coverage --testr-args='^(?!.*test.*coverage).*$' [testenv:venv] commands = {posargs} [testenv:functional-swift] sitepackages = True setenv = OS_TEST_PATH=./glance_store/tests/functional/swift commands = python setup.py testr --slowest --testr-args='glance_store.tests.functional.swift' [testenv:functional-filesystem] sitepackages = True setenv = OS_TEST_PATH=./glance_store/tests/functional/filesystem commands = python setup.py testr --slowest --testr-args='glance_store.tests.functional.filesystem' [flake8] # TODO(dmllr): Analyze or fix the warnings blacklisted below # H301 one import per line # H404 multi line docstring should start with a summary # H405 multi line docstring summary not separated with an empty line ignore = H301,H404,H405 exclude = .venv,.git,.tox,dist,doc,etc,*glance_store/locale*,*lib/python*,*egg,build glance_store-0.23.0/test-requirements.txt0000666000175100017510000000141513230237467020540 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. # Metrics and style hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0 # Packaging mock>=2.0.0 # BSD # Unit testing coverage!=4.4,>=4.0 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD python-subunit>=1.0.0 # Apache-2.0/BSD requests-mock>=1.1.0 # Apache-2.0 testrepository>=0.0.18 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT oslotest>=3.2.0 # Apache-2.0 os-testr>=1.0.0 # Apache-2.0 bandit>=1.1.0 # Apache-2.0 # this is required for the docs build jobs sphinx!=1.6.6,>=1.6.2 # BSD openstackdocstheme>=1.17.0 # Apache-2.0 reno>=2.5.0 # Apache-2.0 glance_store-0.23.0/glance_store/0000775000175100017510000000000013230237776016744 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/common/0000775000175100017510000000000013230237776020234 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/common/__init__.py0000666000175100017510000000000013230237440022321 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/common/utils.py0000666000175100017510000000773113230237440021744 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ System-level utilities and helper functions. """ import logging import uuid try: from eventlet import sleep except ImportError: from time import sleep from glance_store.i18n import _ LOG = logging.getLogger(__name__) def is_uuid_like(val): """Returns validation of a value as a UUID. For our purposes, a UUID is a canonical form string: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa """ try: return str(uuid.UUID(val)) == val except (TypeError, ValueError, AttributeError): return False def chunkreadable(iter, chunk_size=65536): """ Wrap a readable iterator with a reader yielding chunks of a preferred size, otherwise leave iterator unchanged. :param iter: an iter which may also be readable :param chunk_size: maximum size of chunk """ return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter def chunkiter(fp, chunk_size=65536): """ Return an iterator to a file-like obj which yields fixed size chunks :param fp: a file-like object :param chunk_size: maximum size of chunk """ while True: chunk = fp.read(chunk_size) if chunk: yield chunk else: break def cooperative_iter(iter): """ Return an iterator which schedules after each iteration. This can prevent eventlet thread starvation. :param iter: an iterator to wrap """ try: for chunk in iter: sleep(0) yield chunk except Exception as err: msg = _("Error: cooperative_iter exception %s") % err LOG.error(msg) raise def cooperative_read(fd): """ Wrap a file descriptor's read with a partial function which schedules after each read. This can prevent eventlet thread starvation. :param fd: a file descriptor to wrap """ def readfn(*args): result = fd.read(*args) sleep(0) return result return readfn class CooperativeReader(object): """ An eventlet thread friendly class for reading in image data. When accessing data either through the iterator or the read method we perform a sleep to allow a co-operative yield. When there is more than one image being uploaded/downloaded this prevents eventlet thread starvation, ie allows all threads to be scheduled periodically rather than having the same thread be continuously active. """ def __init__(self, fd): """ :param fd: Underlying image file object """ self.fd = fd self.iterator = None # NOTE(markwash): if the underlying supports read(), overwrite the # default iterator-based implementation with cooperative_read which # is more straightforward if hasattr(fd, 'read'): self.read = cooperative_read(fd) def read(self, length=None): """Return the next chunk of the underlying iterator. This is replaced with cooperative_read in __init__ if the underlying fd already supports read(). """ if self.iterator is None: self.iterator = self.__iter__() try: return next(self.iterator) except StopIteration: return '' def __iter__(self): return cooperative_iter(self.fd.__iter__()) glance_store-0.23.0/glance_store/__init__.py0000666000175100017510000000132413230237440021043 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from .backend import * # noqa from .driver import * # noqa from .exceptions import * # noqa glance_store-0.23.0/glance_store/capabilities.py0000666000175100017510000001713513230237440021744 0ustar zuulzuul00000000000000# Copyright (c) 2015 IBM, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Glance Store capability""" import logging import threading import time import enum from eventlet import tpool from oslo_utils import reflection from glance_store import exceptions from glance_store.i18n import _LW _STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK = {} _STORE_CAPABILITES_UPDATE_SCHEDULING_LOCK = threading.Lock() LOG = logging.getLogger(__name__) class BitMasks(enum.IntEnum): NONE = 0b00000000 ALL = 0b11111111 READ_ACCESS = 0b00000001 # Included READ_ACCESS READ_OFFSET = 0b00000011 # Included READ_ACCESS READ_CHUNK = 0b00000101 # READ_OFFSET | READ_CHUNK READ_RANDOM = 0b00000111 WRITE_ACCESS = 0b00001000 # Included WRITE_ACCESS WRITE_OFFSET = 0b00011000 # Included WRITE_ACCESS WRITE_CHUNK = 0b00101000 # WRITE_OFFSET | WRITE_CHUNK WRITE_RANDOM = 0b00111000 # READ_ACCESS | WRITE_ACCESS RW_ACCESS = 0b00001001 # READ_OFFSET | WRITE_OFFSET RW_OFFSET = 0b00011011 # READ_CHUNK | WRITE_CHUNK RW_CHUNK = 0b00101101 # RW_OFFSET | RW_CHUNK RW_RANDOM = 0b00111111 # driver is stateless and can be reused safely DRIVER_REUSABLE = 0b01000000 class StoreCapability(object): def __init__(self): # Set static store capabilities base on # current driver implementation. self._capabilities = getattr(self.__class__, "_CAPABILITIES", 0) @property def capabilities(self): return self._capabilities @staticmethod def contains(x, y): return x & y == y def update_capabilities(self): """ Update dynamic storage capabilities based on current driver configuration and backend status when needed. As a hook, the function will be triggered in two cases: calling once after store driver get configured, it was used to update dynamic storage capabilities based on current driver configuration, or calling when the capabilities checking of an operation failed every time, this was used to refresh dynamic storage capabilities based on backend status then. This function shouldn't raise any exception out. """ LOG.debug(("Store %s doesn't support updating dynamic " "storage capabilities. Please overwrite " "'update_capabilities' method of the store to " "implement updating logics if needed.") % reflection.get_class_name(self)) def is_capable(self, *capabilities): """ Check if requested capability(s) are supported by current driver instance. :param capabilities: required capability(s). """ caps = 0 for cap in capabilities: caps |= int(cap) return self.contains(self.capabilities, caps) def set_capabilities(self, *dynamic_capabilites): """ Set dynamic storage capabilities based on current driver configuration and backend status. :param dynamic_capabilites: dynamic storage capability(s). """ for cap in dynamic_capabilites: self._capabilities |= int(cap) def unset_capabilities(self, *dynamic_capabilites): """ Unset dynamic storage capabilities. :param dynamic_capabilites: dynamic storage capability(s). """ caps = 0 for cap in dynamic_capabilites: caps |= int(cap) # TODO(zhiyan): Cascaded capability removal is # skipped currently, we can add it back later # when a concrete requirement comes out. # For example, when removing READ_ACCESS, all # read related capabilities need to be removed # together, e.g. READ_RANDOM. self._capabilities &= ~caps def _schedule_capabilities_update(store): def _update_capabilities(store, context): with context['lock']: if context['updating']: return context['updating'] = True try: store.update_capabilities() except Exception: pass finally: context['updating'] = False # NOTE(zhiyan): Update 'latest_update' field # in anyway even an exception raised, to # prevent call problematic routine cyclically. context['latest_update'] = int(time.time()) global _STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK book = _STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK if store not in book: with _STORE_CAPABILITES_UPDATE_SCHEDULING_LOCK: if store not in book: book[store] = {'latest_update': int(time.time()), 'lock': threading.Lock(), 'updating': False} else: context = book[store] # NOTE(zhiyan): We don't need to lock 'latest_update' # field for check since time increased one-way only. sec = (int(time.time()) - context['latest_update'] - store.conf.glance_store.store_capabilities_update_min_interval) if sec >= 0: if not context['updating']: # NOTE(zhiyan): Using a real thread pool instead # of green pool due to store capabilities updating # probably calls some inevitably blocking code for # IO operation on remote or local storage. # Eventlet allows operator to uses environment var # EVENTLET_THREADPOOL_SIZE to desired pool size. tpool.execute(_update_capabilities, store, context) def check(store_op_fun): def op_checker(store, *args, **kwargs): # NOTE(zhiyan): Trigger the hook of updating store # dynamic capabilities based on current store status. if store.conf.glance_store.store_capabilities_update_min_interval > 0: _schedule_capabilities_update(store) get_capabilities = [ BitMasks.READ_ACCESS, BitMasks.READ_OFFSET if kwargs.get('offset') else BitMasks.NONE, BitMasks.READ_CHUNK if kwargs.get('chunk_size') else BitMasks.NONE ] op_cap_map = { 'get': get_capabilities, 'add': [BitMasks.WRITE_ACCESS], 'delete': [BitMasks.WRITE_ACCESS]} op_exec_map = { 'get': (exceptions.StoreRandomGetNotSupported if kwargs.get('offset') or kwargs.get('chunk_size') else exceptions.StoreGetNotSupported), 'add': exceptions.StoreAddDisabled, 'delete': exceptions.StoreDeleteNotSupported} op = store_op_fun.__name__.lower() try: req_cap = op_cap_map[op] except KeyError: LOG.warning(_LW('The capability of operation "%s" ' 'could not be checked.'), op) else: if not store.is_capable(*req_cap): kwargs.setdefault('offset', 0) kwargs.setdefault('chunk_size', None) raise op_exec_map[op](**kwargs) return store_op_fun(store, *args, **kwargs) return op_checker glance_store-0.23.0/glance_store/exceptions.py0000666000175100017510000001247013230237440021471 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Glance Store exception subclasses""" import six import six.moves.urllib.parse as urlparse import warnings from glance_store.i18n import _ warnings.simplefilter('always') class BackendException(Exception): pass class UnsupportedBackend(BackendException): pass class RedirectException(Exception): def __init__(self, url): self.url = urlparse.urlparse(url) class GlanceStoreException(Exception): """ Base Glance Store Exception To correctly use this class, inherit from it and define a 'message' property. That message will get printf'd with the keyword arguments provided to the constructor. """ message = _("An unknown exception occurred") def __init__(self, message=None, **kwargs): if not message: message = self.message try: if kwargs: message = message % kwargs except Exception: pass self.msg = message super(GlanceStoreException, self).__init__(message) def __unicode__(self): # NOTE(flwang): By default, self.msg is an instance of Message, which # can't be converted by str(). Based on the definition of # __unicode__, it should return unicode always. return six.text_type(self.msg) class MissingCredentialError(GlanceStoreException): message = _("Missing required credential: %(required)s") class BadAuthStrategy(GlanceStoreException): message = _("Incorrect auth strategy, expected \"%(expected)s\" but " "received \"%(received)s\"") class AuthorizationRedirect(GlanceStoreException): message = _("Redirecting to %(uri)s for authorization.") class NotFound(GlanceStoreException): message = _("Image %(image)s not found") class UnknownScheme(GlanceStoreException): message = _("Unknown scheme '%(scheme)s' found in URI") class BadStoreUri(GlanceStoreException): message = _("The Store URI was malformed: %(uri)s") class Duplicate(GlanceStoreException): message = _("Image %(image)s already exists") class StorageFull(GlanceStoreException): message = _("There is not enough disk space on the image storage media.") class StorageWriteDenied(GlanceStoreException): message = _("Permission to write image storage media denied.") class AuthBadRequest(GlanceStoreException): message = _("Connect error/bad request to Auth service at URL %(url)s.") class AuthUrlNotFound(GlanceStoreException): message = _("Auth service at URL %(url)s not found.") class AuthorizationFailure(GlanceStoreException): message = _("Authorization failed.") class NotAuthenticated(GlanceStoreException): message = _("You are not authenticated.") class Forbidden(GlanceStoreException): message = _("You are not authorized to complete this action.") class Invalid(GlanceStoreException): # NOTE(NiallBunting) This could be deprecated however the debtcollector # seems to have problems deprecating this as well as the subclasses. message = _("Data supplied was not valid.") class BadStoreConfiguration(GlanceStoreException): message = _("Store %(store_name)s could not be configured correctly. " "Reason: %(reason)s") class DriverLoadFailure(GlanceStoreException): message = _("Driver %(driver_name)s could not be loaded.") class StoreDeleteNotSupported(GlanceStoreException): message = _("Deleting images from this store is not supported.") class StoreGetNotSupported(GlanceStoreException): message = _("Getting images from this store is not supported.") class StoreRandomGetNotSupported(StoreGetNotSupported): message = _("Getting images randomly from this store is not supported. " "Offset: %(offset)s, length: %(chunk_size)s") class StoreAddDisabled(GlanceStoreException): message = _("Configuration for store failed. Adding images to this " "store is disabled.") class MaxRedirectsExceeded(GlanceStoreException): message = _("Maximum redirects (%(redirects)s) was exceeded.") class NoServiceEndpoint(GlanceStoreException): message = _("Response from Keystone does not contain a Glance endpoint.") class RegionAmbiguity(GlanceStoreException): message = _("Multiple 'image' service matches for region %(region)s. This " "generally means that a region is required and you have not " "supplied one.") class RemoteServiceUnavailable(GlanceStoreException): message = _("Remote server where the image is present is unavailable.") class HasSnapshot(GlanceStoreException): message = _("The image cannot be deleted because it has snapshot(s).") class InUseByStore(GlanceStoreException): message = _("The image cannot be deleted because it is in use through " "the backend store outside of Glance.") glance_store-0.23.0/glance_store/location.py0000666000175100017510000001341513230237440021120 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A class that describes the location of an image in Glance. In Glance, an image can either be **stored** in Glance, or it can be **registered** in Glance but actually be stored somewhere else. We needed a class that could support the various ways that Glance describes where exactly an image is stored. An image in Glance has two location properties: the image URI and the image storage URI. The image URI is essentially the permalink identifier for the image. It is displayed in the output of various Glance API calls and, while read-only, is entirely user-facing. It shall **not** contain any security credential information at all. The Glance image URI shall be the host:port of that Glance API server along with /images/. The Glance storage URI is an internal URI structure that Glance uses to maintain critical information about how to access the images that it stores in its storage backends. It **may contain** security credentials and is **not** user-facing. """ import logging from oslo_config import cfg from six.moves import urllib from glance_store import exceptions CONF = cfg.CONF LOG = logging.getLogger(__name__) SCHEME_TO_CLS_MAP = {} def get_location_from_uri(uri, conf=CONF): """ Given a URI, return a Location object that has had an appropriate store parse the URI. :param uri: A URI that could come from the end-user in the Location attribute/header. :param conf: The global configuration. Example URIs: https://user:pass@example.com:80/images/some-id http://images.oracle.com/123456 swift://example.com/container/obj-id swift://user:account:pass@authurl.com/container/obj-id swift+http://user:account:pass@authurl.com/container/obj-id file:///var/lib/glance/images/1 cinder://volume-id """ pieces = urllib.parse.urlparse(uri) if pieces.scheme not in SCHEME_TO_CLS_MAP.keys(): raise exceptions.UnknownScheme(scheme=pieces.scheme) scheme_info = SCHEME_TO_CLS_MAP[pieces.scheme] return Location(pieces.scheme, scheme_info['location_class'], conf, uri=uri) def register_scheme_map(scheme_map): """ Given a mapping of 'scheme' to store_name, adds the mapping to the known list of schemes. This function overrides existing stores. """ for (k, v) in scheme_map.items(): LOG.debug("Registering scheme %s with %s", k, v) SCHEME_TO_CLS_MAP[k] = v class Location(object): """ Class describing the location of an image that Glance knows about """ def __init__(self, store_name, store_location_class, conf, uri=None, image_id=None, store_specs=None): """ Create a new Location object. :param store_name: The string identifier/scheme of the storage backend :param store_location_class: The store location class to use for this location instance. :param image_id: The identifier of the image in whatever storage backend is used. :param uri: Optional URI to construct location from :param store_specs: Dictionary of information about the location of the image that is dependent on the backend store """ self.store_name = store_name self.image_id = image_id self.store_specs = store_specs or {} self.conf = conf self.store_location = store_location_class(self.store_specs, conf) if uri: self.store_location.parse_uri(uri) def get_store_uri(self): """ Returns the Glance image URI, which is the host:port of the API server along with /images/ """ return self.store_location.get_uri() def get_uri(self): return None class StoreLocation(object): """ Base class that must be implemented by each store """ def __init__(self, store_specs, conf): self.conf = conf self.specs = store_specs if self.specs: self.process_specs() def process_specs(self): """ Subclasses should implement any processing of the self.specs collection such as storing credentials and possibly establishing connections. """ pass def get_uri(self): """ Subclasses should implement a method that returns an internal URI that, when supplied to the StoreLocation instance, can be interpreted by the StoreLocation's parse_uri() method. The URI returned from this method shall never be public and only used internally within Glance, so it is fine to encode credentials in this URI. """ raise NotImplementedError("StoreLocation subclass must implement " "get_uri()") def parse_uri(self, uri): """ Subclasses should implement a method that accepts a string URI and sets appropriate internal fields such that a call to get_uri() will return a proper internal URI """ raise NotImplementedError("StoreLocation subclass must implement " "parse_uri()") glance_store-0.23.0/glance_store/_drivers/0000775000175100017510000000000013230237776020561 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/_drivers/cinder.py0000666000175100017510000007025713230237440022400 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for Cinder""" import contextlib import errno import hashlib import logging import math import os import shlex import socket import time from keystoneauth1.access import service_catalog as keystone_sc from keystoneauth1 import exceptions as keystone_exc from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import units from glance_store import capabilities from glance_store.common import utils import glance_store.driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LW, _LI import glance_store.location try: from cinderclient import exceptions as cinder_exception from cinderclient.v2 import client as cinderclient from os_brick.initiator import connector from oslo_privsep import priv_context except ImportError: cinder_exception = None cinderclient = None connector = None priv_context = None CONF = cfg.CONF LOG = logging.getLogger(__name__) _CINDER_OPTS = [ cfg.StrOpt('cinder_catalog_info', default='volumev2::publicURL', help=_(""" Information to match when looking for cinder in the service catalog. When the ``cinder_endpoint_template`` is not set and any of ``cinder_store_auth_address``, ``cinder_store_user_name``, ``cinder_store_project_name``, ``cinder_store_password`` is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. ``cinder_os_region_name``, if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the ``openstack catalog list`` command. Possible values: * A string of of the following form: ``::`` At least ``service_type`` and ``interface`` should be specified. ``service_name`` can be omitted. Related options: * cinder_os_region_name * cinder_endpoint_template * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password """)), cfg.StrOpt('cinder_endpoint_template', default=None, help=_(""" Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if ``cinder_store_auth_address``, ``cinder_store_user_name``, ``cinder_store_project_name``, and ``cinder_store_password`` are specified. If this configuration option is set, ``cinder_catalog_info`` will be ignored. Possible values: * URL template string for cinder endpoint, where ``%%(tenant)s`` is replaced with the current tenant (project) name. For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s`` Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password * cinder_catalog_info """)), cfg.StrOpt('cinder_os_region_name', deprecated_name='os_region_name', default=None, help=_(""" Region name to lookup cinder service from the service catalog. This is used only when ``cinder_catalog_info`` is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: * A string that is a valid region name. Related options: * cinder_catalog_info """)), cfg.StrOpt('cinder_ca_certificates_file', help=_(""" Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. ``cinder_api_insecure`` must be set to ``True`` to enable the verification. Possible values: * Path to a ca certificates file Related options: * cinder_api_insecure """)), cfg.IntOpt('cinder_http_retries', min=0, default=3, help=_(""" Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: * A positive integer Related options: * None """)), cfg.IntOpt('cinder_state_transition_timeout', min=0, default=300, help=_(""" Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from ``creating`` to ``available`` after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. `error``), the image creation fails. Possible values: * A positive integer Related options: * None """)), cfg.BoolOpt('cinder_api_insecure', default=False, help=_(""" Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by ``cinder_ca_certificates_file`` option. Possible values: * True * False Related options: * cinder_ca_certificates_file """)), cfg.StrOpt('cinder_store_auth_address', default=None, help=_(""" The address where the cinder authentication service is listening. When all of ``cinder_store_auth_address``, ``cinder_store_user_name``, ``cinder_store_project_name``, and ``cinder_store_password`` options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: * A valid authentication service address, for example: ``http://openstack.example.org/identity/v2.0`` Related options: * cinder_store_user_name * cinder_store_password * cinder_store_project_name """)), cfg.StrOpt('cinder_store_user_name', default=None, help=_(""" User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: * A valid user name Related options: * cinder_store_auth_address * cinder_store_password * cinder_store_project_name """)), cfg.StrOpt('cinder_store_password', secret=True, help=_(""" Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: * A valid password for the user specified by ``cinder_store_user_name`` Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name """)), cfg.StrOpt('cinder_store_project_name', default=None, help=_(""" Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: * A valid project name Related options: * ``cinder_store_auth_address`` * ``cinder_store_user_name`` * ``cinder_store_password`` """)), cfg.StrOpt('rootwrap_config', default='/etc/glance/rootwrap.conf', help=_(""" Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: * Path to the rootwrap config file Related options: * None """)), cfg.StrOpt('cinder_volume_type', default=None, help=_(""" Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: * A valid volume type from cinder Related options: * None """)), ] def get_root_helper(): return 'sudo glance-rootwrap %s' % CONF.glance_store.rootwrap_config def is_user_overriden(conf): return all([conf.glance_store.get('cinder_store_' + key) for key in ['user_name', 'password', 'project_name', 'auth_address']]) def get_cinderclient(conf, context=None): glance_store = conf.glance_store user_overriden = is_user_overriden(conf) if user_overriden: username = glance_store.cinder_store_user_name password = glance_store.cinder_store_password project = glance_store.cinder_store_project_name url = glance_store.cinder_store_auth_address else: username = context.user password = context.auth_token project = context.tenant if glance_store.cinder_endpoint_template: url = glance_store.cinder_endpoint_template % context.to_dict() else: info = glance_store.cinder_catalog_info service_type, service_name, interface = info.split(':') try: catalog = keystone_sc.ServiceCatalogV2(context.service_catalog) url = catalog.url_for( region_name=glance_store.cinder_os_region_name, service_type=service_type, service_name=service_name, interface=interface) except keystone_exc.EndpointNotFound: reason = _("Failed to find Cinder from a service catalog.") raise exceptions.BadStoreConfiguration(store_name="cinder", reason=reason) c = cinderclient.Client(username, password, project, auth_url=url, insecure=glance_store.cinder_api_insecure, retries=glance_store.cinder_http_retries, cacert=glance_store.cinder_ca_certificates_file) LOG.debug('Cinderclient connection created for user %(user)s using URL: ' '%(url)s.', {'user': username, 'url': url}) # noauth extracts user_id:project_id from auth_token if not user_overriden: c.client.auth_token = context.auth_token or '%s:%s' % (username, project) c.client.management_url = url return c class StoreLocation(glance_store.location.StoreLocation): """Class describing a Cinder URI.""" def process_specs(self): self.scheme = self.specs.get('scheme', 'cinder') self.volume_id = self.specs.get('volume_id') def get_uri(self): return "cinder://%s" % self.volume_id def parse_uri(self, uri): if not uri.startswith('cinder://'): reason = _("URI must start with 'cinder://'") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) self.scheme = 'cinder' self.volume_id = uri[9:] if not utils.is_uuid_like(self.volume_id): reason = _("URI contains invalid volume ID") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) @contextlib.contextmanager def temporary_chown(path): owner_uid = os.getuid() orig_uid = os.stat(path).st_uid if orig_uid != owner_uid: processutils.execute('chown', owner_uid, path, run_as_root=True, root_helper=get_root_helper()) try: yield finally: if orig_uid != owner_uid: processutils.execute('chown', orig_uid, path, run_as_root=True, root_helper=get_root_helper()) class Store(glance_store.driver.Store): """Cinder backend store adapter.""" _CAPABILITIES = (capabilities.BitMasks.READ_RANDOM | capabilities.BitMasks.WRITE_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _CINDER_OPTS EXAMPLE_URL = "cinder://" def __init__(self, *args, **kargs): super(Store, self).__init__(*args, **kargs) LOG.warning(_LW("Cinder store is considered experimental. " "Current deployers should be aware that the use " "of it in production right now may be risky.")) def get_schemes(self): return ('cinder',) def _check_context(self, context, require_tenant=False): user_overriden = is_user_overriden(self.conf) if user_overriden and not require_tenant: return if context is None: reason = _("Cinder storage requires a context.") raise exceptions.BadStoreConfiguration(store_name="cinder", reason=reason) if not user_overriden and context.service_catalog is None: reason = _("Cinder storage requires a service catalog.") raise exceptions.BadStoreConfiguration(store_name="cinder", reason=reason) def _wait_volume_status(self, volume, status_transition, status_expected): max_recheck_wait = 15 timeout = self.conf.glance_store.cinder_state_transition_timeout volume = volume.manager.get(volume.id) tries = 0 elapsed = 0 while volume.status == status_transition: if elapsed >= timeout: msg = (_('Timeout while waiting while volume %(volume_id)s ' 'status is %(status)s.') % {'volume_id': volume.id, 'status': status_transition}) LOG.error(msg) raise exceptions.BackendException(msg) wait = min(0.5 * 2 ** tries, max_recheck_wait) time.sleep(wait) tries += 1 elapsed += wait volume = volume.manager.get(volume.id) if volume.status != status_expected: msg = (_('The status of volume %(volume_id)s is unexpected: ' 'status = %(status)s, expected = %(expected)s.') % {'volume_id': volume.id, 'status': volume.status, 'expected': status_expected}) LOG.error(msg) raise exceptions.BackendException(msg) return volume @contextlib.contextmanager def _open_cinder_volume(self, client, volume, mode): attach_mode = 'rw' if mode == 'wb' else 'ro' device = None root_helper = get_root_helper() priv_context.init(root_helper=shlex.split(root_helper)) host = socket.gethostname() properties = connector.get_connector_properties(root_helper, host, False, False) try: volume.reserve(volume) except cinder_exception.ClientException as e: msg = (_('Failed to reserve volume %(volume_id)s: %(error)s') % {'volume_id': volume.id, 'error': e}) LOG.error(msg) raise exceptions.BackendException(msg) try: connection_info = volume.initialize_connection(volume, properties) conn = connector.InitiatorConnector.factory( connection_info['driver_volume_type'], root_helper, conn=connection_info) device = conn.connect_volume(connection_info['data']) volume.attach(None, None, attach_mode, host_name=host) volume = self._wait_volume_status(volume, 'attaching', 'in-use') if (connection_info['driver_volume_type'] == 'rbd' and not conn.do_local_attach): yield device['path'] else: with temporary_chown(device['path']), \ open(device['path'], mode) as f: yield f except Exception: LOG.exception(_LE('Exception while accessing to cinder volume ' '%(volume_id)s.'), {'volume_id': volume.id}) raise finally: if volume.status == 'in-use': volume.begin_detaching(volume) elif volume.status == 'attaching': volume.unreserve(volume) if device: try: conn.disconnect_volume(connection_info['data'], device) except Exception: LOG.exception(_LE('Failed to disconnect volume ' '%(volume_id)s.'), {'volume_id': volume.id}) try: volume.terminate_connection(volume, properties) except Exception: LOG.exception(_LE('Failed to terminate connection of volume ' '%(volume_id)s.'), {'volume_id': volume.id}) try: client.volumes.detach(volume) except Exception: LOG.exception(_LE('Failed to detach volume %(volume_id)s.'), {'volume_id': volume.id}) def _cinder_volume_data_iterator(self, client, volume, max_size, offset=0, chunk_size=None, partial_length=None): chunk_size = chunk_size if chunk_size else self.READ_CHUNKSIZE partial = partial_length is not None with self._open_cinder_volume(client, volume, 'rb') as fp: if offset: fp.seek(offset) max_size -= offset while True: if partial: size = min(chunk_size, partial_length, max_size) else: size = min(chunk_size, max_size) chunk = fp.read(size) if chunk: yield chunk max_size -= len(chunk) if max_size <= 0: break if partial: partial_length -= len(chunk) if partial_length <= 0: break else: break @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :param offset: offset to start reading :param chunk_size: size to read, or None to get all the image :param context: Request context :raises `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location self._check_context(context) try: client = get_cinderclient(self.conf, context) volume = client.volumes.get(loc.volume_id) size = int(volume.metadata.get('image_size', volume.size * units.Gi)) iterator = self._cinder_volume_data_iterator( client, volume, size, offset=offset, chunk_size=self.READ_CHUNKSIZE, partial_length=chunk_size) return (iterator, chunk_size or size) except cinder_exception.NotFound: reason = _("Failed to get image size due to " "volume can not be found: %s") % loc.volume_id LOG.error(reason) raise exceptions.NotFound(reason) except cinder_exception.ClientException as e: msg = (_('Failed to get image volume %(volume_id)s: %(error)s') % {'volume_id': loc.volume_id, 'error': e}) LOG.error(msg) raise exceptions.BackendException(msg) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file and returns the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist :rtype: int """ loc = location.store_location try: self._check_context(context) volume = get_cinderclient(self.conf, context).volumes.get(loc.volume_id) return int(volume.metadata.get('image_size', volume.size * units.Gi)) except cinder_exception.NotFound: raise exceptions.NotFound(image=loc.volume_id) except Exception: LOG.exception(_LE("Failed to get image size due to " "internal error.")) return 0 @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param context: The request context :param verifier: An object used to verify signatures for images :retval tuple of URL in backing store, bytes written, checksum and a dictionary with storage system specific information :raises `glance_store.exceptions.Duplicate` if the image already existed """ self._check_context(context, require_tenant=True) client = get_cinderclient(self.conf, context) checksum = hashlib.md5() bytes_written = 0 size_gb = int(math.ceil(float(image_size) / units.Gi)) if size_gb == 0: size_gb = 1 name = "image-%s" % image_id owner = context.tenant metadata = {'glance_image_id': image_id, 'image_size': str(image_size), 'image_owner': owner} volume_type = self.conf.glance_store.cinder_volume_type LOG.debug('Creating a new volume: image_size=%d size_gb=%d type=%s', image_size, size_gb, volume_type or 'None') if image_size == 0: LOG.info(_LI("Since image size is zero, we will be doing " "resize-before-write for each GB which " "will be considerably slower than normal.")) volume = client.volumes.create(size_gb, name=name, metadata=metadata, volume_type=volume_type) volume = self._wait_volume_status(volume, 'creating', 'available') size_gb = volume.size failed = True need_extend = True buf = None try: while need_extend: with self._open_cinder_volume(client, volume, 'wb') as f: f.seek(bytes_written) if buf: f.write(buf) bytes_written += len(buf) while True: buf = image_file.read(self.WRITE_CHUNKSIZE) if not buf: need_extend = False break checksum.update(buf) if verifier: verifier.update(buf) if (bytes_written + len(buf) > size_gb * units.Gi and image_size == 0): break f.write(buf) bytes_written += len(buf) if need_extend: size_gb += 1 LOG.debug("Extending volume %(volume_id)s to %(size)s GB.", {'volume_id': volume.id, 'size': size_gb}) volume.extend(volume, size_gb) try: volume = self._wait_volume_status(volume, 'extending', 'available') size_gb = volume.size except exceptions.BackendException: raise exceptions.StorageFull() failed = False except IOError as e: # Convert IOError reasons to Glance Store exceptions errors = {errno.EFBIG: exceptions.StorageFull(), errno.ENOSPC: exceptions.StorageFull(), errno.EACCES: exceptions.StorageWriteDenied()} raise errors.get(e.errno, e) finally: if failed: LOG.error(_LE("Failed to write to volume %(volume_id)s."), {'volume_id': volume.id}) try: volume.delete() except Exception: LOG.exception(_LE('Failed to delete of volume ' '%(volume_id)s.'), {'volume_id': volume.id}) if image_size == 0: metadata.update({'image_size': str(bytes_written)}) volume.update_all_metadata(metadata) volume.update_readonly_flag(volume, True) checksum_hex = checksum.hexdigest() LOG.debug("Wrote %(bytes_written)d bytes to volume %(volume_id)s " "with checksum %(checksum_hex)s.", {'bytes_written': bytes_written, 'volume_id': volume.id, 'checksum_hex': checksum_hex}) return ('cinder://%s' % volume.id, bytes_written, checksum_hex, {}) @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :location `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises NotFound if image does not exist :raises Forbidden if cannot delete because of permissions """ loc = location.store_location self._check_context(context) try: volume = get_cinderclient(self.conf, context).volumes.get(loc.volume_id) volume.delete() except cinder_exception.NotFound: raise exceptions.NotFound(image=loc.volume_id) except cinder_exception.ClientException as e: msg = (_('Failed to delete volume %(volume_id)s: %(error)s') % {'volume_id': loc.volume_id, 'error': e}) raise exceptions.BackendException(msg) glance_store-0.23.0/glance_store/_drivers/__init__.py0000666000175100017510000000000013230237440022646 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/_drivers/rbd.py0000666000175100017510000005167613230237440021707 0ustar zuulzuul00000000000000# Copyright 2010-2011 Josh Durgin # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for RBD (RADOS (Reliable Autonomic Distributed Object Store) Block Device)""" from __future__ import absolute_import from __future__ import with_statement import contextlib import hashlib import logging import math from oslo_config import cfg from oslo_utils import units from six.moves import urllib from glance_store import capabilities from glance_store.common import utils from glance_store import driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LI from glance_store import location try: import rados import rbd except ImportError: rados = None rbd = None DEFAULT_POOL = 'images' DEFAULT_CONFFILE = '/etc/ceph/ceph.conf' DEFAULT_USER = None # let librados decide based on the Ceph conf file DEFAULT_CHUNKSIZE = 8 # in MiB DEFAULT_SNAPNAME = 'snap' LOG = logging.getLogger(__name__) _RBD_OPTS = [ cfg.IntOpt('rbd_store_chunk_size', default=DEFAULT_CHUNKSIZE, min=1, help=_(""" Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: * Any positive integer value Related options: * None """)), cfg.StrOpt('rbd_store_pool', default=DEFAULT_POOL, help=_(""" RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a ``pool``. Each pool is defined with the number of placement groups it can contain. The default pool that is used is 'images'. More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: * A valid pool name Related options: * None """)), cfg.StrOpt('rbd_store_user', default=DEFAULT_USER, help=_(""" RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: * A valid RADOS user Related options: * rbd_store_ceph_conf """)), cfg.StrOpt('rbd_store_ceph_conf', default=DEFAULT_CONFFILE, help=_(""" Ceph configuration file path. This configuration option takes in the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to None, librados will locate the default configuration file which is located at /etc/ceph/ceph.conf. If using Cephx authentication, this file should include a reference to the right keyring in a client. section Possible Values: * A valid path to a configuration file Related options: * rbd_store_user """)), cfg.IntOpt('rados_connect_timeout', default=0, help=_(""" Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: * Any integer value Related options: * None """)) ] class StoreLocation(location.StoreLocation): """ Class describing a RBD URI. This is of the form: rbd://image or rbd://fsid/pool/image/snapshot """ def process_specs(self): # convert to ascii since librbd doesn't handle unicode for key, value in self.specs.items(): self.specs[key] = str(value) self.fsid = self.specs.get('fsid') self.pool = self.specs.get('pool') self.image = self.specs.get('image') self.snapshot = self.specs.get('snapshot') def get_uri(self): if self.fsid and self.pool and self.snapshot: # ensure nothing contains / or any other url-unsafe character safe_fsid = urllib.parse.quote(self.fsid, '') safe_pool = urllib.parse.quote(self.pool, '') safe_image = urllib.parse.quote(self.image, '') safe_snapshot = urllib.parse.quote(self.snapshot, '') return "rbd://%s/%s/%s/%s" % (safe_fsid, safe_pool, safe_image, safe_snapshot) else: return "rbd://%s" % self.image def parse_uri(self, uri): prefix = 'rbd://' if not uri.startswith(prefix): reason = _('URI must start with rbd://') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) # convert to ascii since librbd doesn't handle unicode try: ascii_uri = str(uri) except UnicodeError: reason = _('URI contains non-ascii characters') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) pieces = ascii_uri[len(prefix):].split('/') if len(pieces) == 1: self.fsid, self.pool, self.image, self.snapshot = \ (None, None, pieces[0], None) elif len(pieces) == 4: self.fsid, self.pool, self.image, self.snapshot = \ map(urllib.parse.unquote, pieces) else: reason = _('URI must have exactly 1 or 4 components') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) if any(map(lambda p: p == '', pieces)): reason = _('URI cannot contain empty components') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) class ImageIterator(object): """ Reads data from an RBD image, one chunk at a time. """ def __init__(self, pool, name, snapshot, store, chunk_size=None): self.pool = pool or store.pool self.name = name self.snapshot = snapshot self.user = store.user self.conf_file = store.conf_file self.chunk_size = chunk_size or store.READ_CHUNKSIZE self.store = store def __iter__(self): try: with self.store.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: with conn.open_ioctx(self.pool) as ioctx: with rbd.Image(ioctx, self.name, snapshot=self.snapshot) as image: size = image.size() bytes_left = size while bytes_left > 0: length = min(self.chunk_size, bytes_left) data = image.read(size - bytes_left, length) bytes_left -= len(data) yield data raise StopIteration() except rbd.ImageNotFound: raise exceptions.NotFound( _('RBD image %s does not exist') % self.name) class Store(driver.Store): """An implementation of the RBD backend adapter.""" _CAPABILITIES = capabilities.BitMasks.RW_ACCESS OPTIONS = _RBD_OPTS EXAMPLE_URL = "rbd://///" def get_schemes(self): return ('rbd',) @contextlib.contextmanager def get_connection(self, conffile, rados_id): client = rados.Rados(conffile=conffile, rados_id=rados_id) try: client.connect(timeout=self.connect_timeout) except rados.Error: msg = _LE("Error connecting to ceph cluster.") LOG.exception(msg) raise exceptions.BackendException() try: yield client finally: client.shutdown() def configure_add(self): """ Configure the Store to use the stored configuration options Any store that needs special configuration should implement this method. If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration` """ try: chunk = self.conf.glance_store.rbd_store_chunk_size self.chunk_size = chunk * units.Mi self.READ_CHUNKSIZE = self.chunk_size self.WRITE_CHUNKSIZE = self.READ_CHUNKSIZE # these must not be unicode since they will be passed to a # non-unicode-aware C library self.pool = str(self.conf.glance_store.rbd_store_pool) self.user = str(self.conf.glance_store.rbd_store_user) self.conf_file = str(self.conf.glance_store.rbd_store_ceph_conf) self.connect_timeout = self.conf.glance_store.rados_connect_timeout except cfg.ConfigFileValueError as e: reason = _("Error in store configuration: %s") % e LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name='rbd', reason=reason) @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location return (ImageIterator(loc.pool, loc.image, loc.snapshot, self), self.get_size(location)) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location # if there is a pool specific in the location, use it; otherwise # we fall back to the default pool specified in the config target_pool = loc.pool or self.pool with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: with conn.open_ioctx(target_pool) as ioctx: try: with rbd.Image(ioctx, loc.image, snapshot=loc.snapshot) as image: img_info = image.stat() return img_info['size'] except rbd.ImageNotFound: msg = _('RBD image %s does not exist') % loc.get_uri() LOG.debug(msg) raise exceptions.NotFound(msg) def _create_image(self, fsid, conn, ioctx, image_name, size, order, context=None): """ Create an rbd image. If librbd supports it, make it a cloneable snapshot, so that copy-on-write volumes can be created from it. :param image_name: Image's name :retval: `glance_store.rbd.StoreLocation` object """ librbd = rbd.RBD() features = conn.conf_get('rbd_default_features') if ((features is None) or (int(features) == 0)): features = rbd.RBD_FEATURE_LAYERING librbd.create(ioctx, image_name, size, order, old_format=False, features=int(features)) return StoreLocation({ 'fsid': fsid, 'pool': self.pool, 'image': image_name, 'snapshot': DEFAULT_SNAPNAME, }, self.conf) def _delete_image(self, target_pool, image_name, snapshot_name=None, context=None): """ Delete RBD image and snapshot. :param image_name: Image's name :param snapshot_name: Image snapshot's name :raises: NotFound if image does not exist; InUseByStore if image is in use or snapshot unprotect failed """ with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: with conn.open_ioctx(target_pool) as ioctx: try: # First remove snapshot. if snapshot_name is not None: with rbd.Image(ioctx, image_name) as image: try: self._unprotect_snapshot(image, snapshot_name) image.remove_snap(snapshot_name) except rbd.ImageNotFound as exc: msg = (_("Snap Operating Exception " "%(snap_exc)s " "Snapshot does not exist.") % {'snap_exc': exc}) LOG.debug(msg) except rbd.ImageBusy as exc: log_msg = (_LE("Snap Operating Exception " "%(snap_exc)s " "Snapshot is in use.") % {'snap_exc': exc}) LOG.error(log_msg) raise exceptions.InUseByStore() # Then delete image. rbd.RBD().remove(ioctx, image_name) except rbd.ImageHasSnapshots: log_msg = (_LE("Remove image %(img_name)s failed. " "It has snapshot(s) left.") % {'img_name': image_name}) LOG.error(log_msg) raise exceptions.HasSnapshot() except rbd.ImageBusy: log_msg = (_LE("Remove image %(img_name)s failed. " "It is in use.") % {'img_name': image_name}) LOG.error(log_msg) raise exceptions.InUseByStore() except rbd.ImageNotFound: msg = _("RBD image %s does not exist") % image_name raise exceptions.NotFound(message=msg) def _unprotect_snapshot(self, image, snap_name): try: image.unprotect_snap(snap_name) except rbd.InvalidArgument: # NOTE(slaweq): if snapshot was unprotected already, rbd library # raises InvalidArgument exception without any "clear" message. # Such exception is not dangerous for us so it will be just logged LOG.debug("Snapshot %s is unprotected already" % snap_name) @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param verifier: An object used to verify signatures for images :retval: tuple of URL in backing store, bytes written, checksum and a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already existed """ checksum = hashlib.md5() image_name = str(image_id) with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: fsid = None if hasattr(conn, 'get_fsid'): fsid = conn.get_fsid() with conn.open_ioctx(self.pool) as ioctx: order = int(math.log(self.WRITE_CHUNKSIZE, 2)) LOG.debug('creating image %s with order %d and size %d', image_name, order, image_size) if image_size == 0: LOG.warning(_("since image size is zero we will be doing " "resize-before-write for each chunk which " "will be considerably slower than normal")) try: loc = self._create_image(fsid, conn, ioctx, image_name, image_size, order) except rbd.ImageExists: msg = _('RBD image %s already exists') % image_id raise exceptions.Duplicate(message=msg) try: with rbd.Image(ioctx, image_name) as image: bytes_written = 0 offset = 0 chunks = utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE) for chunk in chunks: # If the image size provided is zero we need to do # a resize for the amount we are writing. This will # be slower so setting a higher chunk size may # speed things up a bit. if image_size == 0: chunk_length = len(chunk) length = offset + chunk_length bytes_written += chunk_length LOG.debug(_("resizing image to %s KiB") % (length / units.Ki)) image.resize(length) LOG.debug(_("writing chunk at offset %s") % (offset)) offset += image.write(chunk, offset) checksum.update(chunk) if verifier: verifier.update(chunk) if loc.snapshot: image.create_snap(loc.snapshot) image.protect_snap(loc.snapshot) except Exception as exc: log_msg = (_LE("Failed to store image %(img_name)s " "Store Exception %(store_exc)s") % {'img_name': image_name, 'store_exc': exc}) LOG.error(log_msg) # Delete image if one was created try: target_pool = loc.pool or self.pool self._delete_image(target_pool, loc.image, loc.snapshot) except exceptions.NotFound: pass raise exc # Make sure we send back the image size whether provided or inferred. if image_size == 0: image_size = bytes_written return (loc.get_uri(), image_size, checksum.hexdigest(), {}) @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete. :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist; InUseByStore if image is in use or snapshot unprotect failed """ loc = location.store_location target_pool = loc.pool or self.pool self._delete_image(target_pool, loc.image, loc.snapshot) glance_store-0.23.0/glance_store/_drivers/swift/0000775000175100017510000000000013230237776021715 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/_drivers/swift/__init__.py0000666000175100017510000000134313230237440024015 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from glance_store._drivers.swift import utils # noqa from glance_store._drivers.swift.store import * # noqa glance_store-0.23.0/glance_store/_drivers/swift/connection_manager.py0000666000175100017510000002103213230237440026104 0ustar zuulzuul00000000000000# Copyright 2010-2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Connection Manager for Swift connections that responsible for providing connection with valid credentials and updated token""" import logging from oslo_utils import encodeutils from glance_store import exceptions from glance_store.i18n import _, _LI LOG = logging.getLogger(__name__) class SwiftConnectionManager(object): """Connection Manager class responsible for initializing and managing swiftclient connections in store. The instance of that class can provide swift connections with a valid(and refreshed) user token if the token is going to expire soon. """ AUTH_HEADER_NAME = 'X-Auth-Token' def __init__(self, store, store_location, context=None, allow_reauth=False): """Initialize manager with parameters required to establish connection. Initialize store and prepare it for interacting with swift. Also initialize keystone client that need to be used for authentication if allow_reauth is True. The method invariant is the following: if method was executed successfully and self.allow_reauth is True users can safely request valid(no expiration) swift connections any time. Otherwise, connection manager initialize a connection once and always returns that connection to users. :param store: store that provides connections :param store_location: image location in store :param context: user context to access data in Swift :param allow_reauth: defines if re-authentication need to be executed when a user request the connection """ self._client = None self.store = store self.location = store_location self.context = context self.allow_reauth = allow_reauth self.storage_url = self._get_storage_url() self.connection = self._init_connection() def get_connection(self): """Get swift client connection. Returns swift client connection. If allow_reauth is True and connection token is going to expire soon then the method returns updated connection. The method invariant is the following: if self.allow_reauth is False then the method returns the same connection for every call. So the connection may expire. If self.allow_reauth is True the returned swift connection is always valid and cannot expire at least for swift_store_expire_soon_interval. """ if self.allow_reauth: # we are refreshing token only and if only connection manager # re-authentication is allowed. Token refreshing is setup by # connection manager users. Also we disable re-authentication # if there is not way to execute it (cannot initialize trusts for # multi-tenant or auth_version is not 3) auth_ref = self.client.session.auth.auth_ref # if connection token is going to expire soon (keystone checks # is token is going to expire or expired already) if auth_ref.will_expire_soon( self.store.conf.glance_store.swift_store_expire_soon_interval ): LOG.info(_LI("Requesting new token for swift connection.")) # request new token with session and client provided by store auth_token = self.client.session.get_auth_headers().get( self.AUTH_HEADER_NAME) LOG.info(_LI("Token has been successfully requested. " "Refreshing swift connection.")) # initialize new switclient connection with fresh token self.connection = self.store.get_store_connection( auth_token, self.storage_url) return self.connection @property def client(self): """Return keystone client to request a new token. Initialize a client lazily from the method provided by glance_store. The method invariant is the following: if client cannot be initialized raise exception otherwise return initialized client that can be used for re-authentication any time. """ if self._client is None: self._client = self._init_client() return self._client def _init_connection(self): """Initialize and return valid Swift connection.""" auth_token = self.client.session.get_auth_headers().get( self.AUTH_HEADER_NAME) return self.store.get_store_connection( auth_token, self.storage_url) def _init_client(self): """Initialize Keystone client.""" return self.store.init_client(location=self.location, context=self.context) def _get_storage_url(self): """Request swift storage url.""" raise NotImplementedError() def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): pass class SingleTenantConnectionManager(SwiftConnectionManager): def _get_storage_url(self): """Get swift endpoint from keystone Return endpoint for swift from service catalog. The method works only Keystone v3. If you are using different version (1 or 2) it returns None. :return: swift endpoint """ if self.store.auth_version == '3': try: return self.client.session.get_endpoint( service_type=self.store.service_type, interface=self.store.endpoint_type, region_name=self.store.region ) except Exception as e: # do the same that swift driver does # when catching ClientException msg = _("Cannot find swift service endpoint : " "%s") % encodeutils.exception_to_unicode(e) raise exceptions.BackendException(msg) def _init_connection(self): if self.store.auth_version == '3': return super(SingleTenantConnectionManager, self)._init_connection() else: # no re-authentication for v1 and v2 self.allow_reauth = False # use good old connection initialization return self.store.get_connection(self.location, self.context) class MultiTenantConnectionManager(SwiftConnectionManager): def __init__(self, store, store_location, context=None, allow_reauth=False): # no context - no party if context is None: reason = _("Multi-tenant Swift storage requires a user context.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) super(MultiTenantConnectionManager, self).__init__( store, store_location, context, allow_reauth) def __exit__(self, exc_type, exc_val, exc_tb): if self._client and self.client.trust_id: # client has been initialized - need to cleanup resources LOG.info(_LI("Revoking trust %s"), self.client.trust_id) self.client.trusts.delete(self.client.trust_id) def _get_storage_url(self): return self.location.swift_url def _init_connection(self): if self.allow_reauth: try: return super(MultiTenantConnectionManager, self)._init_connection() except Exception as e: LOG.debug("Cannot initialize swift connection for multi-tenant" " store with trustee token: %s. Using user token for" " connection initialization.", e) # for multi-tenant store we have a token, so we can use it # for connection initialization but we cannot fetch new token # with client self.allow_reauth = False return self.store.get_store_connection( self.context.auth_token, self.storage_url) glance_store-0.23.0/glance_store/_drivers/swift/buffered.py0000666000175100017510000001374113230237440024045 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import socket import tempfile from oslo_config import cfg from oslo_utils import encodeutils from glance_store import exceptions from glance_store.i18n import _ LOG = logging.getLogger(__name__) READ_SIZE = 65536 BUFFERING_OPTS = [ cfg.StrOpt('swift_upload_buffer_dir', help=_(""" Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option ``swift_buffer_on_upload`` is set to True. * This directory should be provisioned keeping in mind the ``swift_store_large_object_chunk_size`` and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: * String value representing an absolute directory path Related options: * swift_buffer_on_upload * swift_store_large_object_chunk_size """)), ] CONF = cfg.CONF def validate_buffering(buffer_dir): if buffer_dir is None: msg = _('Configuration option "swift_upload_buffer_dir" is ' 'not set. Please set it to a valid path to buffer ' 'during Swift uploads.') raise exceptions.BadStoreConfiguration(store_name='swift', reason=msg) # NOTE(dharinic): Ensure that the provided directory path for # buffering is valid try: _tmpfile = tempfile.TemporaryFile(dir=buffer_dir) except OSError as err: msg = (_('Unable to use buffer directory set with ' '"swift_upload_buffer_dir". Error: %s') % encodeutils.exception_to_unicode(err)) raise exceptions.BadStoreConfiguration(store_name='swift', reason=msg) else: _tmpfile.close() return True class BufferedReader(object): """Buffer a chunk (segment) worth of data to disk before sending it swift. This creates the ability to back the input stream up and re-try put object requests. (Swiftclient will try to reset the file pointer on any upload failure if seek and tell methods are provided on the input file.) Chunks are temporarily buffered to disk. Disk space consumed will be roughly (segment size * number of in-flight upload requests). There exists a possibility where the disk space consumed for buffering MAY eat into the disk space available for glance cache. This may affect image download performance. So, extra care should be taken while deploying this to ensure there is enough disk space available. """ def __init__(self, fd, checksum, total, verifier=None): self.fd = fd self.total = total self.checksum = checksum self.verifier = verifier # maintain a pointer to use to update checksum and verifier self.update_position = 0 buffer_dir = CONF.glance_store.swift_upload_buffer_dir self._tmpfile = tempfile.TemporaryFile(dir=buffer_dir) self._buffered = False self.is_zero_size = False self._buffer() # Setting the file pointer back to the beginning of file self._tmpfile.seek(0) def read(self, size): """Read up to a chunk's worth of data from the input stream into a file buffer. Then return data out of that buffer. """ remaining = self.total - self._tmpfile.tell() read_size = min(remaining, size) # read out of the buffered chunk result = self._tmpfile.read(read_size) # update the checksum and verifier with only the bytes # they have not seen update = self.update_position - self._tmpfile.tell() if update < 0: self.checksum.update(result[update:]) if self.verifier: self.verifier.update(result[update:]) self.update_position += abs(update) return result def _buffer(self): to_buffer = self.total LOG.debug("Buffering %s bytes of image segment" % to_buffer) while not self._buffered: read_size = min(to_buffer, READ_SIZE) try: buf = self.fd.read(read_size) except IOError as e: # We actually don't know what exactly self.fd is. And as a # result we don't know which exception it may raise. To pass # the retry mechanism inside swift client we must limit the # possible set of errors. raise socket.error(*e.args) if len(buf) == 0: self._tmpfile.seek(0) self._buffered = True self.is_zero_size = True break self._tmpfile.write(buf) to_buffer -= len(buf) # NOTE(belliott) seek and tell get used by python-swiftclient to "reset" # if there is a put_object error def seek(self, offset): LOG.debug("Seek from %s to %s" % (self._tmpfile.tell(), offset)) self._tmpfile.seek(offset) def tell(self): return self._tmpfile.tell() @property def bytes_read(self): return self.tell() def __enter__(self): self._tmpfile.__enter__() return self def __exit__(self, type, value, traceback): # close and delete the temporary file used to buffer data self._tmpfile.__exit__(type, value, traceback) glance_store-0.23.0/glance_store/_drivers/swift/store.py0000666000175100017510000017321013230237440023415 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for SWIFT""" import hashlib import logging import math from keystoneauth1.access import service_catalog as keystone_sc from keystoneauth1 import identity as ks_identity from keystoneauth1 import session as ks_session from keystoneclient.v3 import client as ks_client from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import units import six from six.moves import http_client from six.moves import urllib try: import swiftclient except ImportError: swiftclient = None import glance_store from glance_store._drivers.swift import buffered from glance_store._drivers.swift import connection_manager from glance_store._drivers.swift import utils as sutils from glance_store import capabilities from glance_store import driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LI from glance_store import location LOG = logging.getLogger(__name__) DEFAULT_CONTAINER = 'glance' DEFAULT_LARGE_OBJECT_SIZE = 5 * units.Ki # 5GB DEFAULT_LARGE_OBJECT_CHUNK_SIZE = 200 # 200M ONE_MB = units.k * units.Ki # Here we used the mixed meaning of MB _SWIFT_OPTS = [ cfg.BoolOpt('swift_store_auth_insecure', default=False, help=_(""" Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: * True * False Related options: * swift_store_cacert """)), cfg.StrOpt('swift_store_cacert', sample_default='/etc/ssl/certs/ca-certificates.crt', help=_(""" Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: * A valid path to a CA file Related options: * swift_store_auth_insecure """)), cfg.StrOpt('swift_store_region', sample_default='RegionTwo', help=_(""" The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with ``swift_store_region`` allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. NOTE: Setting the region with ``swift_store_region`` is tenant-specific and is necessary ``only if`` the tenant has multiple endpoints across different regions. Possible values: * A string value representing a valid Swift region. Related Options: * None """)), cfg.StrOpt('swift_store_endpoint', sample_default="""\ https://swift.openstack.example.org/v1/path_not_including_container\ _name\ """, help=_(""" The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by ``auth`` is used. Setting an endpoint with ``swift_store_endpoint`` overrides the storage URL and is used for Glance image storage. NOTE: The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: * String value representing a valid URL path up to a Swift container Related Options: * None """)), cfg.StrOpt('swift_store_endpoint_type', default='publicURL', choices=('publicURL', 'adminURL', 'internalURL'), help=_(""" Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: * publicURL * adminURL * internalURL Related options: * swift_store_endpoint """)), cfg.StrOpt('swift_store_service_type', default='object-store', help=_(""" Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to ``object-store``. NOTE: If ``swift_store_auth_version`` is set to 2, the value for this configuration option needs to be ``object-store``. If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: * A string representing a valid service type for Swift storage. Related Options: * None """)), cfg.StrOpt('swift_store_container', default=DEFAULT_CONTAINER, help=_(""" Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option ``swift_store_multiple_containers_seed``. When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by ``swift_store_multiple_containers_seed``). Example: if the seed is set to 3 and swift_store_container = ``glance``, then an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in the container ``glance_fda``. All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be ``glance_fdae39a1-ba.`` Possible values: * If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account * If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of ``swift_store_multiple_containers_seed`` should be taken into account as well. Related options: * ``swift_store_multiple_containers_seed`` * ``swift_store_multi_tenant`` * ``swift_store_create_container_on_put`` """)), cfg.IntOpt('swift_store_large_object_size', default=DEFAULT_LARGE_OBJECT_SIZE, min=1, help=_(""" The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to http://docs.openstack.org/developer/swift/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. NOTE: This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: * A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: * ``swift_store_large_object_chunk_size`` """)), cfg.IntOpt('swift_store_large_object_chunk_size', default=DEFAULT_LARGE_OBJECT_CHUNK_SIZE, min=1, help=_(""" The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to ``swift_store_large_object_size`` for more detail. For example: if ``swift_store_large_object_size`` is 5GB and ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: * A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: * ``swift_store_large_object_size`` """)), cfg.BoolOpt('swift_store_create_container_on_put', default=False, help=_(""" Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: * True * False Related options: * None """)), cfg.BoolOpt('swift_store_multi_tenant', default=False, help=_(""" Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage NOTE: If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the 'swift_store_config_file' option. Possible values: * True * False Related options: * swift_store_config_file """)), cfg.IntOpt('swift_store_multiple_containers_seed', default=0, min=0, max=32, help=_(""" Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to ``swift_store_container`` for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html NOTE: This is used only when swift_store_multi_tenant is disabled. Possible values: * A non-negative integer less than or equal to 32 Related options: * ``swift_store_container`` * ``swift_store_multi_tenant`` * ``swift_store_create_container_on_put`` """)), cfg.ListOpt('swift_store_admin_tenants', default=[], help=_(""" List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: * A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: * None """)), cfg.BoolOpt('swift_store_ssl_compression', default=True, help=_(""" SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: * True * False Related Options: * None """)), cfg.IntOpt('swift_store_retry_get_count', default=0, min=0, help=_(""" The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, ``swift_store_retry_get_count`` ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: * Zero * Positive integer value Related Options: * None """)), cfg.IntOpt('swift_store_expire_soon_interval', min=0, default=60, help=_(""" Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: * Zero * Positive integer value Related Options: * None """)), cfg.BoolOpt('swift_store_use_trusts', default=True, help=_(""" Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, ``swift_store_use_trusts`` is set to ``True``(use of trusts is enabled). If set to ``False``, a user token is used for the Swift connection instead, eliminating the overhead of trust creation. NOTE: This option is considered only when ``swift_store_multi_tenant`` is set to ``True`` Possible values: * True * False Related options: * swift_store_multi_tenant """)), cfg.BoolOpt('swift_buffer_on_upload', default=False, help=_(""" Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: (``swift_store_large_object_chunk_size`` * ``workers`` * 1000) Possible values: * True * False Related options: * swift_upload_buffer_dir """)) ] def swift_retry_iter(resp_iter, length, store, location, manager): if not length and isinstance(resp_iter, six.BytesIO): if six.PY3: # On Python 3, io.BytesIO does not have a len attribute, instead # go the end using seek to get the size of the file pos = resp_iter.tell() resp_iter.seek(0, 2) length = resp_iter.tell() resp_iter.seek(pos) else: # On Python 2, StringIO has a len attribute length = resp_iter.len length = length if length else (resp_iter.len if hasattr(resp_iter, 'len') else 0) retries = 0 bytes_read = 0 while retries <= store.conf.glance_store.swift_store_retry_get_count: try: for chunk in resp_iter: yield chunk bytes_read += len(chunk) except swiftclient.ClientException as e: LOG.warning(_("Swift exception raised %s") % encodeutils.exception_to_unicode(e)) if bytes_read != length: if retries == store.conf.glance_store.swift_store_retry_get_count: # terminate silently and let higher level decide LOG.error(_LE("Stopping Swift retries after %d " "attempts") % retries) break else: retries += 1 glance_conf = store.conf.glance_store retry_count = glance_conf.swift_store_retry_get_count LOG.info(_LI("Retrying Swift connection " "(%(retries)d/%(max_retries)d) with " "range=%(start)d-%(end)d"), {'retries': retries, 'max_retries': retry_count, 'start': bytes_read, 'end': length}) (_resp_headers, resp_iter) = store._get_object(location, manager, bytes_read) else: break class StoreLocation(location.StoreLocation): """ Class describing a Swift URI. A Swift URI can look like any of the following: swift://user:pass@authurl.com/container/obj-id swift://account:user:pass@authurl.com/container/obj-id swift+http://user:pass@authurl.com/container/obj-id swift+https://user:pass@authurl.com/container/obj-id When using multi-tenant a URI might look like this (a storage URL): swift+https://example.com/container/obj-id The swift+http:// URIs indicate there is an HTTP authentication URL. The default for Swift is an HTTPS authentication URL, so swift:// and swift+https:// are the same... """ def process_specs(self): self.scheme = self.specs.get('scheme', 'swift+https') self.user = self.specs.get('user') self.key = self.specs.get('key') self.auth_or_store_url = self.specs.get('auth_or_store_url') self.container = self.specs.get('container') self.obj = self.specs.get('obj') def _get_credstring(self): if self.user and self.key: return '%s:%s' % (urllib.parse.quote(self.user), urllib.parse.quote(self.key)) return '' def get_uri(self, credentials_included=True): auth_or_store_url = self.auth_or_store_url if auth_or_store_url.startswith('http://'): auth_or_store_url = auth_or_store_url[len('http://'):] elif auth_or_store_url.startswith('https://'): auth_or_store_url = auth_or_store_url[len('https://'):] credstring = self._get_credstring() auth_or_store_url = auth_or_store_url.strip('/') container = self.container.strip('/') obj = self.obj.strip('/') if not credentials_included: # Used only in case of an add # Get the current store from config store = self.conf.glance_store.default_swift_reference return '%s://%s/%s/%s' % ('swift+config', store, container, obj) if self.scheme == 'swift+config': if self.ssl_enabled: self.scheme = 'swift+https' else: self.scheme = 'swift+http' if credstring != '': credstring = "%s@" % credstring return '%s://%s%s/%s/%s' % (self.scheme, credstring, auth_or_store_url, container, obj) def _get_conf_value_from_account_ref(self, netloc): try: ref_params = sutils.SwiftParams(self.conf).params self.user = ref_params[netloc]['user'] self.key = ref_params[netloc]['key'] netloc = ref_params[netloc]['auth_address'] self.ssl_enabled = True if netloc != '': if netloc.startswith('http://'): self.ssl_enabled = False netloc = netloc[len('http://'):] elif netloc.startswith('https://'): netloc = netloc[len('https://'):] except KeyError: reason = _("Badly formed Swift URI. Credentials not found for " "account reference") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) return netloc def _form_uri_parts(self, netloc, path): if netloc != '': # > Python 2.6.1 if '@' in netloc: creds, netloc = netloc.split('@') else: creds = None else: # Python 2.6.1 compat # see lp659445 and Python issue7904 if '@' in path: creds, path = path.split('@') else: creds = None netloc = path[0:path.find('/')].strip('/') path = path[path.find('/'):].strip('/') if creds: cred_parts = creds.split(':') if len(cred_parts) < 2: reason = _("Badly formed credentials in Swift URI.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) key = cred_parts.pop() user = ':'.join(cred_parts) creds = urllib.parse.unquote(creds) try: self.user, self.key = creds.rsplit(':', 1) except exceptions.BadStoreConfiguration: self.user = urllib.parse.unquote(user) self.key = urllib.parse.unquote(key) else: self.user = None self.key = None return netloc, path def _form_auth_or_store_url(self, netloc, path): path_parts = path.split('/') try: self.obj = path_parts.pop() self.container = path_parts.pop() if not netloc.startswith('http'): # push hostname back into the remaining to build full authurl path_parts.insert(0, netloc) self.auth_or_store_url = '/'.join(path_parts) except IndexError: reason = _("Badly formed Swift URI.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) def parse_uri(self, uri): """ Parse URLs. This method fixes an issue where credentials specified in the URL are interpreted differently in Python 2.6.1+ than prior versions of Python. It also deals with the peculiarity that new-style Swift URIs have where a username can contain a ':', like so: swift://account:user:pass@authurl.com/container/obj and for system created locations with account reference swift+config://account_reference/container/obj """ # Make sure that URIs that contain multiple schemes, such as: # swift://user:pass@http://authurl.com/v1/container/obj # are immediately rejected. if uri.count('://') != 1: reason = _("URI cannot contain more than one occurrence " "of a scheme. If you have specified a URI like " "swift://user:pass@http://authurl.com/v1/container/obj" ", you need to change it to use the " "swift+http:// scheme, like so: " "swift+http://user:pass@authurl.com/v1/container/obj") LOG.info(_LI("Invalid store URI: %(reason)s"), {'reason': reason}) raise exceptions.BadStoreUri(message=reason) pieces = urllib.parse.urlparse(uri) assert pieces.scheme in ('swift', 'swift+http', 'swift+https', 'swift+config') self.scheme = pieces.scheme netloc = pieces.netloc path = pieces.path.lstrip('/') # NOTE(Sridevi): Fix to map the account reference to the # corresponding configuration value if self.scheme == 'swift+config': netloc = self._get_conf_value_from_account_ref(netloc) else: netloc, path = self._form_uri_parts(netloc, path) self._form_auth_or_store_url(netloc, path) @property def swift_url(self): """ Creates a fully-qualified auth address that the Swift client library can use. The scheme for the auth_address is determined using the scheme included in the `location` field. HTTPS is assumed, unless 'swift+http' is specified. """ if self.auth_or_store_url.startswith('http'): return self.auth_or_store_url else: if self.scheme == 'swift+config': if self.ssl_enabled: self.scheme = 'swift+https' else: self.scheme = 'swift+http' if self.scheme in ('swift+https', 'swift'): auth_scheme = 'https://' else: auth_scheme = 'http://' return ''.join([auth_scheme, self.auth_or_store_url]) def Store(conf): # NOTE(dharinic): Multi-tenant store cannot work with swift config if conf.glance_store.swift_store_multi_tenant: if (conf.glance_store.default_store == 'swift+config' or sutils.is_multiple_swift_store_accounts_enabled(conf)): msg = _("Swift multi-tenant store cannot be configured to " "work with swift+config. The options " "'swift_store_multi_tenant' and " "'swift_store_config_file' are mutually exclusive. " "If you inted to use multi-tenant swift store, please " "make sure that you have not set a swift configuration " "file with the 'swift_store_config_file' option.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=msg) try: conf.register_opts(_SWIFT_OPTS + sutils.swift_opts + buffered.BUFFERING_OPTS, group='glance_store') except cfg.DuplicateOptError: pass if conf.glance_store.swift_store_multi_tenant: return MultiTenantStore(conf) return SingleTenantStore(conf) Store.OPTIONS = _SWIFT_OPTS + sutils.swift_opts + buffered.BUFFERING_OPTS def _is_slo(slo_header): if (slo_header is not None and isinstance(slo_header, six.string_types) and slo_header.lower() == 'true'): return True return False class BaseStore(driver.Store): _CAPABILITIES = capabilities.BitMasks.RW_ACCESS CHUNKSIZE = 65536 OPTIONS = _SWIFT_OPTS + sutils.swift_opts def get_schemes(self): return ('swift+https', 'swift', 'swift+http', 'swift+config') def configure(self, re_raise_bsc=False): glance_conf = self.conf.glance_store _obj_size = self._option_get('swift_store_large_object_size') self.large_object_size = _obj_size * ONE_MB _chunk_size = self._option_get('swift_store_large_object_chunk_size') self.large_object_chunk_size = _chunk_size * ONE_MB self.admin_tenants = glance_conf.swift_store_admin_tenants self.region = glance_conf.swift_store_region self.service_type = glance_conf.swift_store_service_type self.conf_endpoint = glance_conf.swift_store_endpoint self.endpoint_type = glance_conf.swift_store_endpoint_type self.insecure = glance_conf.swift_store_auth_insecure self.ssl_compression = glance_conf.swift_store_ssl_compression self.cacert = glance_conf.swift_store_cacert if swiftclient is None: msg = _("Missing dependency python_swiftclient.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=msg) if glance_conf.swift_buffer_on_upload: buffer_dir = glance_conf.swift_upload_buffer_dir if buffered.validate_buffering(buffer_dir): self.reader_class = buffered.BufferedReader else: self.reader_class = ChunkReader super(BaseStore, self).configure(re_raise_bsc=re_raise_bsc) def _get_object(self, location, manager, start=None): headers = {} if start is not None: bytes_range = 'bytes=%d-' % start headers = {'Range': bytes_range} try: resp_headers, resp_body = manager.get_connection().get_object( location.container, location.obj, resp_chunk_size=self.CHUNKSIZE, headers=headers) except swiftclient.ClientException as e: if e.http_status == http_client.NOT_FOUND: msg = _("Swift could not find object %s.") % location.obj LOG.warning(msg) raise exceptions.NotFound(message=msg) else: raise return (resp_headers, resp_body) @capabilities.check def get(self, location, connection=None, offset=0, chunk_size=None, context=None): location = location.store_location # initialize manager to receive valid connections allow_retry = \ self.conf.glance_store.swift_store_retry_get_count > 0 with self.get_manager(location, context, allow_reauth=allow_retry) as manager: (resp_headers, resp_body) = self._get_object(location, manager=manager) class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return '' length = int(resp_headers.get('content-length', 0)) if allow_retry: resp_body = swift_retry_iter(resp_body, length, self, location, manager=manager) return ResponseIndexable(resp_body, length), length def get_size(self, location, connection=None, context=None): location = location.store_location if not connection: connection = self.get_connection(location, context=context) try: resp_headers = connection.head_object( location.container, location.obj) return int(resp_headers.get('content-length', 0)) except Exception: return 0 def _option_get(self, param): result = getattr(self.conf.glance_store, param) if not result: reason = (_("Could not find %(param)s in configuration options.") % param) LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) return result def _delete_stale_chunks(self, connection, container, chunk_list): for chunk in chunk_list: LOG.debug("Deleting chunk %s" % chunk) try: connection.delete_object(container, chunk) except Exception: msg = _("Failed to delete orphaned chunk " "%(container)s/%(chunk)s") LOG.exception(msg % {'container': container, 'chunk': chunk}) @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): location = self.create_location(image_id, context=context) # initialize a manager with re-auth if image need to be splitted need_chunks = (image_size == 0) or ( image_size >= self.large_object_size) with self.get_manager(location, context, allow_reauth=need_chunks) as manager: self._create_container_if_missing(location.container, manager.get_connection()) LOG.debug("Adding image object '%(obj_name)s' " "to Swift" % dict(obj_name=location.obj)) try: if not need_chunks: # Image size is known, and is less than large_object_size. # Send to Swift with regular PUT. if verifier: checksum = hashlib.md5() reader = ChunkReader(image_file, checksum, image_size, verifier) obj_etag = manager.get_connection().put_object( location.container, location.obj, reader, content_length=image_size) else: obj_etag = manager.get_connection().put_object( location.container, location.obj, image_file, content_length=image_size) else: # Write the image into Swift in chunks. chunk_id = 1 if image_size > 0: total_chunks = str(int( math.ceil(float(image_size) / float(self.large_object_chunk_size)))) else: # image_size == 0 is when we don't know the size # of the image. This can occur with older clients # that don't inspect the payload size. LOG.debug("Cannot determine image size because it is " "either not provided in the request or " "chunked-transfer encoding is used. " "Adding image as a segmented object to " "Swift.") total_chunks = '?' checksum = hashlib.md5() written_chunks = [] combined_chunks_size = 0 while True: chunk_size = self.large_object_chunk_size if image_size == 0: content_length = None else: left = image_size - combined_chunks_size if left == 0: break if chunk_size > left: chunk_size = left content_length = chunk_size chunk_name = "%s-%05d" % (location.obj, chunk_id) with self.reader_class(image_file, checksum, chunk_size, verifier) as reader: if reader.is_zero_size is True: LOG.debug('Not writing zero-length chunk.') break try: chunk_etag = \ manager.get_connection().put_object( location.container, chunk_name, reader, content_length=content_length) written_chunks.append(chunk_name) except Exception: # Delete orphaned segments from swift backend with excutils.save_and_reraise_exception(): LOG.error(_("Error during chunked upload " "to backend, deleting stale " "chunks.")) self._delete_stale_chunks( manager.get_connection(), location.container, written_chunks) bytes_read = reader.bytes_read msg = ("Wrote chunk %(chunk_name)s (%(chunk_id)d/" "%(total_chunks)s) of length %(bytes_read)" "d to Swift returning MD5 of content: " "%(chunk_etag)s" % {'chunk_name': chunk_name, 'chunk_id': chunk_id, 'total_chunks': total_chunks, 'bytes_read': bytes_read, 'chunk_etag': chunk_etag}) LOG.debug(msg) chunk_id += 1 combined_chunks_size += bytes_read # In the case we have been given an unknown image size, # set the size to the total size of the combined chunks. if image_size == 0: image_size = combined_chunks_size # Now we write the object manifest and return the # manifest's etag... manifest = "%s/%s-" % (location.container, location.obj) headers = {'ETag': hashlib.md5(b"").hexdigest(), 'X-Object-Manifest': manifest} # The ETag returned for the manifest is actually the # MD5 hash of the concatenated checksums of the strings # of each chunk...so we ignore this result in favour of # the MD5 of the entire image file contents, so that # users can verify the image file contents accordingly manager.get_connection().put_object(location.container, location.obj, None, headers=headers) obj_etag = checksum.hexdigest() # NOTE: We return the user and key here! Have to because # location is used by the API server to return the actual # image data. We *really* should consider NOT returning # the location attribute from GET /images/ and # GET /images/details if sutils.is_multiple_swift_store_accounts_enabled(self.conf): include_creds = False else: include_creds = True return (location.get_uri(credentials_included=include_creds), image_size, obj_etag, {}) except swiftclient.ClientException as e: if e.http_status == http_client.CONFLICT: msg = _("Swift already has an image at this location") raise exceptions.Duplicate(message=msg) msg = (_(u"Failed to add object to Swift.\n" "Got error from Swift: %s.") % encodeutils.exception_to_unicode(e)) LOG.error(msg) raise glance_store.BackendException(msg) @capabilities.check def delete(self, location, connection=None, context=None): location = location.store_location if not connection: connection = self.get_connection(location, context=context) try: # We request the manifest for the object. If one exists, # that means the object was uploaded in chunks/segments, # and we need to delete all the chunks as well as the # manifest. dlo_manifest = None slo_manifest = None try: headers = connection.head_object( location.container, location.obj) dlo_manifest = headers.get('x-object-manifest') slo_manifest = headers.get('x-static-large-object') except swiftclient.ClientException as e: if e.http_status != http_client.NOT_FOUND: raise if _is_slo(slo_manifest): # Delete the manifest as well as the segments query_string = 'multipart-manifest=delete' connection.delete_object(location.container, location.obj, query_string=query_string) return if dlo_manifest: # Delete all the chunks before the object manifest itself obj_container, obj_prefix = dlo_manifest.split('/', 1) segments = connection.get_container( obj_container, prefix=obj_prefix)[1] for segment in segments: # TODO(jaypipes): This would be an easy area to parallelize # since we're simply sending off parallelizable requests # to Swift to delete stuff. It's not like we're going to # be hogging up network or file I/O here... try: connection.delete_object(obj_container, segment['name']) except swiftclient.ClientException as e: msg = _('Unable to delete segment %(segment_name)s') msg = msg % {'segment_name': segment['name']} LOG.exception(msg) # Delete object (or, in segmented case, the manifest) connection.delete_object(location.container, location.obj) except swiftclient.ClientException as e: if e.http_status == http_client.NOT_FOUND: msg = _("Swift could not find image at URI.") raise exceptions.NotFound(message=msg) else: raise def _create_container_if_missing(self, container, connection): """ Creates a missing container in Swift if the ``swift_store_create_container_on_put`` option is set. :param container: Name of container to create :param connection: Connection to swift service """ try: connection.head_container(container) except swiftclient.ClientException as e: if e.http_status == http_client.NOT_FOUND: if self.conf.glance_store.swift_store_create_container_on_put: try: msg = (_LI("Creating swift container %(container)s") % {'container': container}) LOG.info(msg) connection.put_container(container) except swiftclient.ClientException as e: msg = (_("Failed to add container to Swift.\n" "Got error from Swift: %s.") % encodeutils.exception_to_unicode(e)) raise glance_store.BackendException(msg) else: msg = (_("The container %(container)s does not exist in " "Swift. Please set the " "swift_store_create_container_on_put option " "to add container to Swift automatically.") % {'container': container}) raise glance_store.BackendException(msg) else: raise def get_connection(self, location, context=None): raise NotImplementedError() def create_location(self, image_id, context=None): raise NotImplementedError() def init_client(self, location, context=None): """Initialize and return client to authorize against keystone The method invariant is the following: it always returns Keystone client that can be used to receive fresh token in any time. Otherwise it raises appropriate exception. :param location: swift location data :param context: user context (it is not required if user grants are specified for single tenant store) :return correctly initialized keystone client """ raise NotImplementedError() def get_store_connection(self, auth_token, storage_url): """Get initialized swift connection :param auth_token: auth token :param storage_url: swift storage url :return: swiftclient connection that allows to request container and others """ # initialize a connection return swiftclient.Connection( preauthurl=storage_url, preauthtoken=auth_token, insecure=self.insecure, ssl_compression=self.ssl_compression, cacert=self.cacert) def get_manager(self, store_location, context=None, allow_reauth=False): """Return appropriate connection manager for store The method detects store type (singletenant or multitenant) and returns appropriate connection manager (singletenant or multitenant) that allows to request swiftclient connections. :param store_location: StoreLocation object that define image location :param context: user context :param allow_reauth: defines if we allow re-authentication when user token is expired and refresh swift connection :return: connection manager for store """ msg = _("There is no Connection Manager implemented for %s class.") raise NotImplementedError(msg % self.__class__.__name__) class SingleTenantStore(BaseStore): EXAMPLE_URL = "swift://:@//" def __init__(self, conf): super(SingleTenantStore, self).__init__(conf) self.ref_params = sutils.SwiftParams(self.conf).params def configure(self, re_raise_bsc=False): # set configuration before super so configure_add can override self.auth_version = self._option_get('swift_store_auth_version') self.user_domain_id = None self.user_domain_name = None self.project_domain_id = None self.project_domain_name = None super(SingleTenantStore, self).configure(re_raise_bsc=re_raise_bsc) def configure_add(self): default_ref = self.conf.glance_store.default_swift_reference default_swift_reference = self.ref_params.get(default_ref) if default_swift_reference: self.auth_address = default_swift_reference.get('auth_address') if (not default_swift_reference) or (not self.auth_address): reason = _("A value for swift_store_auth_address is required.") LOG.error(reason) raise exceptions.BadStoreConfiguration(message=reason) if self.auth_address.startswith('http://'): self.scheme = 'swift+http' else: self.scheme = 'swift+https' self.container = self.conf.glance_store.swift_store_container self.auth_version = default_swift_reference.get('auth_version') self.user = default_swift_reference.get('user') self.key = default_swift_reference.get('key') self.user_domain_id = default_swift_reference.get('user_domain_id') self.user_domain_name = default_swift_reference.get('user_domain_name') self.project_domain_id = default_swift_reference.get( 'project_domain_id') self.project_domain_name = default_swift_reference.get( 'project_domain_name') if not (self.user or self.key): reason = _("A value for swift_store_ref_params is required.") LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) def create_location(self, image_id, context=None): container_name = self.get_container_name(image_id, self.container) specs = {'scheme': self.scheme, 'container': container_name, 'obj': str(image_id), 'auth_or_store_url': self.auth_address, 'user': self.user, 'key': self.key} return StoreLocation(specs, self.conf) def get_container_name(self, image_id, default_image_container): """ Returns appropriate container name depending upon value of ``swift_store_multiple_containers_seed``. In single-container mode, which is a seed value of 0, simply returns default_image_container. In multiple-container mode, returns default_image_container as the prefix plus a suffix determined by the multiple container seed examples: single-container mode: 'glance' multiple-container mode: 'glance_3a1' for image uuid 3A1xxxxxxx... :param image_id: UUID of image :param default_image_container: container name from ``swift_store_container`` """ seed_num_chars = \ self.conf.glance_store.swift_store_multiple_containers_seed if seed_num_chars is None \ or seed_num_chars < 0 or seed_num_chars > 32: reason = _("An integer value between 0 and 32 is required for" " swift_store_multiple_containers_seed.") LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) elif seed_num_chars > 0: image_id = str(image_id).lower() num_dashes = image_id[:seed_num_chars].count('-') num_chars = seed_num_chars + num_dashes name_suffix = image_id[:num_chars] new_container_name = default_image_container + '_' + name_suffix return new_container_name else: return default_image_container def get_connection(self, location, context=None): if not location.user: reason = _("Location is missing user:password information.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) auth_url = location.swift_url if not auth_url.endswith('/'): auth_url += '/' if self.auth_version in ('2', '3'): try: tenant_name, user = location.user.split(':') except ValueError: reason = (_("Badly formed tenant:user '%(user)s' in " "Swift URI") % {'user': location.user}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) else: tenant_name = None user = location.user os_options = {} if self.region: os_options['region_name'] = self.region os_options['endpoint_type'] = self.endpoint_type os_options['service_type'] = self.service_type if self.user_domain_id: os_options['user_domain_id'] = self.user_domain_id if self.user_domain_name: os_options['user_domain_name'] = self.user_domain_name if self.project_domain_id: os_options['project_domain_id'] = self.project_domain_id if self.project_domain_name: os_options['project_domain_name'] = self.project_domain_name return swiftclient.Connection( auth_url, user, location.key, preauthurl=self.conf_endpoint, insecure=self.insecure, tenant_name=tenant_name, auth_version=self.auth_version, os_options=os_options, ssl_compression=self.ssl_compression, cacert=self.cacert) def init_client(self, location, context=None): """Initialize keystone client with swift service user credentials""" # prepare swift admin credentials if not location.user: reason = _("Location is missing user:password information.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) auth_url = location.swift_url if not auth_url.endswith('/'): auth_url += '/' try: tenant_name, user = location.user.split(':') except ValueError: reason = (_("Badly formed tenant:user '%(user)s' in " "Swift URI") % {'user': location.user}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) # initialize a keystone plugin for swift admin with creds password = ks_identity.V3Password( auth_url=auth_url, username=user, password=location.key, project_name=tenant_name, user_domain_id=self.user_domain_id, user_domain_name=self.user_domain_name, project_domain_id=self.project_domain_id, project_domain_name=self.project_domain_name) sess = ks_session.Session(auth=password) return ks_client.Client(session=sess) def get_manager(self, store_location, context=None, allow_reauth=False): return connection_manager.SingleTenantConnectionManager(self, store_location, context, allow_reauth) class MultiTenantStore(BaseStore): EXAMPLE_URL = "swift:////" def _get_endpoint(self, context): self.container = self.conf.glance_store.swift_store_container if context is None: reason = _("Multi-tenant Swift storage requires a context.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) if context.service_catalog is None: reason = _("Multi-tenant Swift storage requires " "a service catalog.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) self.storage_url = self.conf_endpoint if not self.storage_url: catalog = keystone_sc.ServiceCatalogV2(context.service_catalog) self.storage_url = catalog.url_for(service_type=self.service_type, region_name=self.region, interface=self.endpoint_type) if self.storage_url.startswith('http://'): self.scheme = 'swift+http' else: self.scheme = 'swift+https' return self.storage_url def delete(self, location, connection=None, context=None): if not connection: connection = self.get_connection(location.store_location, context=context) super(MultiTenantStore, self).delete(location, connection) connection.delete_container(location.store_location.container) def set_acls(self, location, public=False, read_tenants=None, write_tenants=None, connection=None, context=None): location = location.store_location if not connection: connection = self.get_connection(location, context=context) if read_tenants is None: read_tenants = [] if write_tenants is None: write_tenants = [] headers = {} if public: headers['X-Container-Read'] = "*:*" elif read_tenants: headers['X-Container-Read'] = ','.join('%s:*' % i for i in read_tenants) else: headers['X-Container-Read'] = '' write_tenants.extend(self.admin_tenants) if write_tenants: headers['X-Container-Write'] = ','.join('%s:*' % i for i in write_tenants) else: headers['X-Container-Write'] = '' try: connection.post_container(location.container, headers=headers) except swiftclient.ClientException as e: if e.http_status == http_client.NOT_FOUND: msg = _("Swift could not find image at URI.") raise exceptions.NotFound(message=msg) else: raise def create_location(self, image_id, context=None): ep = self._get_endpoint(context) specs = {'scheme': self.scheme, 'container': self.container + '_' + str(image_id), 'obj': str(image_id), 'auth_or_store_url': ep} return StoreLocation(specs, self.conf) def get_connection(self, location, context=None): return swiftclient.Connection( preauthurl=location.swift_url, preauthtoken=context.auth_token, insecure=self.insecure, ssl_compression=self.ssl_compression, cacert=self.cacert) def init_client(self, location, context=None): # read client parameters from config files ref_params = sutils.SwiftParams(self.conf).params default_ref = self.conf.glance_store.default_swift_reference default_swift_reference = ref_params.get(default_ref) if not default_swift_reference: reason = _("default_swift_reference %s is " "required."), default_ref LOG.error(reason) raise exceptions.BadStoreConfiguration(message=reason) auth_address = default_swift_reference.get('auth_address') user = default_swift_reference.get('user') key = default_swift_reference.get('key') user_domain_id = default_swift_reference.get('user_domain_id') user_domain_name = default_swift_reference.get('user_domain_name') project_domain_id = default_swift_reference.get('project_domain_id') project_domain_name = default_swift_reference.get( 'project_domain_name') # create client for multitenant user(trustor) trustor_auth = ks_identity.V3Token(auth_url=auth_address, token=context.auth_token, project_id=context.tenant) trustor_sess = ks_session.Session(auth=trustor_auth) trustor_client = ks_client.Client(session=trustor_sess) auth_ref = trustor_client.session.auth.get_auth_ref(trustor_sess) roles = [t['name'] for t in auth_ref['roles']] # create client for trustee - glance user specified in swift config tenant_name, user = user.split(':') password = ks_identity.V3Password( auth_url=auth_address, username=user, password=key, project_name=tenant_name, user_domain_id=user_domain_id, user_domain_name=user_domain_name, project_domain_id=project_domain_id, project_domain_name=project_domain_name) trustee_sess = ks_session.Session(auth=password) trustee_client = ks_client.Client(session=trustee_sess) # request glance user id - we will use it as trustee user trustee_user_id = trustee_client.session.get_user_id() # create trust for trustee user trust_id = trustor_client.trusts.create( trustee_user=trustee_user_id, trustor_user=context.user, project=context.tenant, impersonation=True, role_names=roles ).id # initialize a new client with trust and trustee credentials # create client for glance trustee user client_password = ks_identity.V3Password( auth_url=auth_address, username=user, password=key, trust_id=trust_id, user_domain_id=user_domain_id, user_domain_name=user_domain_name, project_domain_id=project_domain_id, project_domain_name=project_domain_name ) # now we can authenticate against KS # as trustee of user who provided token client_sess = ks_session.Session(auth=client_password) return ks_client.Client(session=client_sess) def get_manager(self, store_location, context=None, allow_reauth=False): # if global toggle is turned off then do not allow re-authentication # with trusts if not self.conf.glance_store.swift_store_use_trusts: allow_reauth = False return connection_manager.MultiTenantConnectionManager(self, store_location, context, allow_reauth) class ChunkReader(object): def __init__(self, fd, checksum, total, verifier=None): self.fd = fd self.checksum = checksum self.total = total self.verifier = verifier self.bytes_read = 0 self.is_zero_size = False self.byteone = fd.read(1) if len(self.byteone) == 0: self.is_zero_size = True def do_read(self, i): if self.bytes_read == 0 and i > 0 and self.byteone is not None: return self.byteone + self.fd.read(i - 1) else: return self.fd.read(i) def read(self, i): left = self.total - self.bytes_read if i > left: i = left result = self.do_read(i) self.bytes_read += len(result) self.checksum.update(result) if self.verifier: self.verifier.update(result) return result def __enter__(self): return self def __exit__(self, type, value, traceback): pass glance_store-0.23.0/glance_store/_drivers/swift/utils.py0000666000175100017510000001551613230237440023425 0ustar zuulzuul00000000000000# Copyright 2014 Rackspace # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import sys from oslo_config import cfg from six.moves import configparser from glance_store import exceptions from glance_store.i18n import _, _LE swift_opts = [ cfg.StrOpt('default_swift_reference', default="ref1", help=_(""" Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is 'ref1'. This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: * A valid string value Related options: * None """)), cfg.StrOpt('swift_store_auth_version', default='2', help=_('Version of the authentication service to use. ' 'Valid versions are 2 and 3 for keystone and 1 ' '(deprecated) for swauth and rackspace.'), deprecated_for_removal=True, deprecated_reason=_(""" The option 'auth_version' in the Swift back-end configuration file is used instead. """)), cfg.StrOpt('swift_store_auth_address', help=_('The address where the Swift authentication ' 'service is listening.'), deprecated_for_removal=True, deprecated_reason=_(""" The option 'auth_address' in the Swift back-end configuration file is used instead. """)), cfg.StrOpt('swift_store_user', secret=True, help=_('The user to authenticate against the Swift ' 'authentication service.'), deprecated_for_removal=True, deprecated_reason=_(""" The option 'user' in the Swift back-end configuration file is set instead. """)), cfg.StrOpt('swift_store_key', secret=True, help=_('Auth key for the user authenticating against the ' 'Swift authentication service.'), deprecated_for_removal=True, deprecated_reason=_(""" The option 'key' in the Swift back-end configuration file is used to set the authentication key instead. """)), cfg.StrOpt('swift_store_config_file', default=None, help=_(""" Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. NOTE: Please do not configure this option if you have set ``swift_store_multi_tenant`` to ``True``. Possible values: * String value representing an absolute path on the glance-api node Related options: * swift_store_multi_tenant """)), ] _config_defaults = {'user_domain_id': 'default', 'user_domain_name': None, 'project_domain_id': 'default', 'project_domain_name': None} if sys.version_info >= (3, 2): CONFIG = configparser.ConfigParser(defaults=_config_defaults) else: CONFIG = configparser.SafeConfigParser(defaults=_config_defaults) LOG = logging.getLogger(__name__) def is_multiple_swift_store_accounts_enabled(conf): if conf.glance_store.swift_store_config_file is None: return False return True class SwiftParams(object): def __init__(self, conf): self.conf = conf if is_multiple_swift_store_accounts_enabled(self.conf): self.params = self._load_config() else: self.params = self._form_default_params() def _form_default_params(self): default = {} if ( self.conf.glance_store.swift_store_user and self.conf.glance_store.swift_store_key and self.conf.glance_store.swift_store_auth_address ): glance_store = self.conf.glance_store default['user'] = glance_store.swift_store_user default['key'] = glance_store.swift_store_key default['auth_address'] = glance_store.swift_store_auth_address default['project_domain_id'] = 'default' default['project_domain_name'] = None default['user_domain_id'] = 'default' default['user_domain_name'] = None default['auth_version'] = glance_store.swift_store_auth_version return {glance_store.default_swift_reference: default} return {} def _load_config(self): try: scf = self.conf.glance_store.swift_store_config_file conf_file = self.conf.find_file(scf) CONFIG.read(conf_file) except Exception as e: msg = (_("swift config file " "%(conf)s:%(exc)s not found"), {'conf': self.conf.glance_store.swift_store_config_file, 'exc': e}) LOG.error(msg) raise exceptions.BadStoreConfiguration(store_name='swift', reason=msg) account_params = {} account_references = CONFIG.sections() for ref in account_references: reference = {} try: for param in ('auth_address', 'user', 'key', 'project_domain_id', 'project_domain_name', 'user_domain_id', 'user_domain_name'): reference[param] = CONFIG.get(ref, param) try: reference['auth_version'] = CONFIG.get(ref, 'auth_version') except configparser.NoOptionError: av = self.conf.glance_store.swift_store_auth_version reference['auth_version'] = av account_params[ref] = reference except (ValueError, SyntaxError, configparser.NoOptionError) as e: LOG.exception(_LE("Invalid format of swift store config cfg")) return account_params glance_store-0.23.0/glance_store/_drivers/sheepdog.py0000666000175100017510000003346413230237440022731 0ustar zuulzuul00000000000000# Copyright 2013 Taobao Inc. # Copyright (C) 2016 Nippon Telegraph and Telephone Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for Sheepdog storage system""" import hashlib import logging import six from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import excutils from oslo_utils import units import glance_store from glance_store import capabilities from glance_store.common import utils import glance_store.driver from glance_store import exceptions from glance_store.i18n import _ import glance_store.location LOG = logging.getLogger(__name__) DEFAULT_ADDR = '127.0.0.1' DEFAULT_PORT = 7000 DEFAULT_CHUNKSIZE = 64 # in MiB _SHEEPDOG_OPTS = [ cfg.IntOpt('sheepdog_store_chunk_size', min=1, default=DEFAULT_CHUNKSIZE, help=_(""" Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: * Positive integer value representing size in mebibytes. Related Options: * None """)), cfg.PortOpt('sheepdog_store_port', default=DEFAULT_PORT, help=_(""" Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called 'sheep', manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using ``sheepdog_store_port`` option to store chunks of Glance images. Possible values: * A valid port number (0 to 65535) Related Options: * sheepdog_store_address """)), cfg.HostAddressOpt('sheepdog_store_address', default=DEFAULT_ADDR, help=_(""" Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the 'sheep' is 127.0.0.1. The Sheepdog daemon, also called 'sheep', manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using ``sheepdog_store_address`` option to store chunks of Glance images. Possible values: * A valid IPv4 address * A valid IPv6 address * A valid hostname Related Options: * sheepdog_store_port """)) ] class SheepdogImage(object): """Class describing an image stored in Sheepdog storage.""" def __init__(self, addr, port, name, chunk_size): self.addr = addr self.port = port self.name = name self.chunk_size = chunk_size def _run_command(self, command, data, *params): cmd = ['collie', 'vdi'] cmd.extend(command.split(' ')) cmd.extend(['-a', self.addr, '-p', self.port, self.name]) cmd.extend(params) try: return processutils.execute( *cmd, process_input=data)[0] except processutils.ProcessExecutionError as exc: LOG.error(exc) raise glance_store.BackendException(exc) def get_size(self): """ Return the size of the this image Sheepdog Usage: collie vdi list -r -a address -p port image """ out = self._run_command("list -r", None) return int(out.split(' ')[3]) def read(self, offset, count): """ Read up to 'count' bytes from this image starting at 'offset' and return the data. Sheepdog Usage: collie vdi read -a address -p port image offset len """ return self._run_command("read", None, str(offset), str(count)) def write(self, data, offset, count): """ Write up to 'count' bytes from the data to this image starting at 'offset' Sheepdog Usage: collie vdi write -a address -p port image offset len """ self._run_command("write", data, str(offset), str(count)) def create(self, size): """ Create this image in the Sheepdog cluster with size 'size'. Sheepdog Usage: collie vdi create -a address -p port image size """ if not isinstance(size, (six.integer_types, float)): raise exceptions.Forbidden("Size is not a number") self._run_command("create", None, str(size)) def resize(self, size): """Resize this image in the Sheepdog cluster with size 'size'. Sheepdog Usage: collie vdi create -a address -p port image size """ self._run_command("resize", None, str(size)) def delete(self): """ Delete this image in the Sheepdog cluster Sheepdog Usage: collie vdi delete -a address -p port image """ self._run_command("delete", None) def exist(self): """ Check if this image exists in the Sheepdog cluster via 'list' command Sheepdog Usage: collie vdi list -r -a address -p port image """ out = self._run_command("list -r", None) if not out: return False else: return True class StoreLocation(glance_store.location.StoreLocation): """ Class describing a Sheepdog URI. This is of the form: sheepdog://addr:port:image """ def process_specs(self): self.image = self.specs.get('image') self.addr = self.specs.get('addr') self.port = self.specs.get('port') def get_uri(self): return "sheepdog://%(addr)s:%(port)d:%(image)s" % { 'addr': self.addr, 'port': self.port, 'image': self.image} def parse_uri(self, uri): valid_schema = 'sheepdog://' if not uri.startswith(valid_schema): reason = _("URI must start with '%s'") % valid_schema raise exceptions.BadStoreUri(message=reason) pieces = uri[len(valid_schema):].split(':') if len(pieces) == 3: self.image = pieces[2] self.port = int(pieces[1]) self.addr = pieces[0] # This is used for backwards compatibility. else: self.image = pieces[0] self.port = self.conf.glance_store.sheepdog_store_port self.addr = self.conf.glance_store.sheepdog_store_address class ImageIterator(object): """ Reads data from an Sheepdog image, one chunk at a time. """ def __init__(self, image): self.image = image def __iter__(self): image = self.image total = left = image.get_size() while left > 0: length = min(image.chunk_size, left) data = image.read(total - left, length) left -= len(data) yield data raise StopIteration() class Store(glance_store.driver.Store): """Sheepdog backend adapter.""" _CAPABILITIES = (capabilities.BitMasks.RW_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _SHEEPDOG_OPTS EXAMPLE_URL = "sheepdog://addr:port:image" def get_schemes(self): return ('sheepdog',) def configure_add(self): """ Configure the Store to use the stored configuration options Any store that needs special configuration should implement this method. If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration` """ try: chunk_size = self.conf.glance_store.sheepdog_store_chunk_size self.chunk_size = chunk_size * units.Mi self.READ_CHUNKSIZE = self.chunk_size self.WRITE_CHUNKSIZE = self.READ_CHUNKSIZE self.addr = self.conf.glance_store.sheepdog_store_address self.port = self.conf.glance_store.sheepdog_store_port except cfg.ConfigFileValueError as e: reason = _("Error in store configuration: %s") % e LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name='sheepdog', reason=reason) try: processutils.execute("collie") except processutils.ProcessExecutionError as exc: reason = _("Error in store configuration: %s") % exc LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name='sheepdog', reason=reason) @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a generator for reading the image file :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location image = SheepdogImage(loc.addr, loc.port, loc.image, self.READ_CHUNKSIZE) if not image.exist(): raise exceptions.NotFound(_("Sheepdog image %s does not exist") % image.name) return (ImageIterator(image), image.get_size()) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file and returns the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist :rtype: int """ loc = location.store_location image = SheepdogImage(loc.addr, loc.port, loc.image, self.READ_CHUNKSIZE) if not image.exist(): raise exceptions.NotFound(_("Sheepdog image %s does not exist") % image.name) return image.get_size() @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param verifier: An object used to verify signatures for images :retval: tuple of URL in backing store, bytes written, and checksum :raises: `glance_store.exceptions.Duplicate` if the image already existed """ image = SheepdogImage(self.addr, self.port, image_id, self.WRITE_CHUNKSIZE) if image.exist(): raise exceptions.Duplicate(_("Sheepdog image %s already exists") % image_id) location = StoreLocation({ 'image': image_id, 'addr': self.addr, 'port': self.port }, self.conf) image.create(image_size) try: offset = 0 checksum = hashlib.md5() chunks = utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE) for chunk in chunks: chunk_length = len(chunk) # If the image size provided is zero we need to do # a resize for the amount we are writing. This will # be slower so setting a higher chunk size may # speed things up a bit. if image_size == 0: image.resize(offset + chunk_length) image.write(chunk, offset, chunk_length) offset += chunk_length checksum.update(chunk) if verifier: verifier.update(chunk) except Exception: # Note(zhiyan): clean up already received data when # error occurs such as ImageSizeLimitExceeded exceptions. with excutils.save_and_reraise_exception(): image.delete() return (location.get_uri(), offset, checksum.hexdigest(), {}) @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist """ loc = location.store_location image = SheepdogImage(loc.addr, loc.port, loc.image, self.WRITE_CHUNKSIZE) if not image.exist(): raise exceptions.NotFound(_("Sheepdog image %s does not exist") % loc.image) image.delete() glance_store-0.23.0/glance_store/_drivers/vmware_datastore.py0000666000175100017510000007364613230237440024510 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack, LLC # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for VMware Datastore""" import hashlib import logging import os from oslo_config import cfg from oslo_utils import excutils from oslo_utils import netutils from oslo_utils import units try: from oslo_vmware import api import oslo_vmware.exceptions as vexc from oslo_vmware.objects import datacenter as oslo_datacenter from oslo_vmware.objects import datastore as oslo_datastore from oslo_vmware import vim_util except ImportError: api = None from six.moves import urllib import six.moves.urllib.parse as urlparse import requests from requests import adapters from requests.packages.urllib3.util import retry import six # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range import glance_store from glance_store import capabilities from glance_store.common import utils from glance_store import exceptions from glance_store.i18n import _, _LE from glance_store import location LOG = logging.getLogger(__name__) CHUNKSIZE = 1024 * 64 # 64kB MAX_REDIRECTS = 5 DEFAULT_STORE_IMAGE_DIR = '/openstack_glance' DS_URL_PREFIX = '/folder' STORE_SCHEME = 'vsphere' _VMWARE_OPTS = [ cfg.HostAddressOpt('vmware_server_host', sample_default='127.0.0.1', help=_(""" Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: * A valid IPv4 or IPv6 address * A valid DNS name Related options: * vmware_server_username * vmware_server_password """)), cfg.StrOpt('vmware_server_username', sample_default='root', help=_(""" Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: * Any string that is the username for a user with appropriate privileges Related options: * vmware_server_host * vmware_server_password """)), cfg.StrOpt('vmware_server_password', sample_default='vmware', help=_(""" Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: * Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: * vmware_server_host * vmware_server_username """), secret=True), cfg.IntOpt('vmware_api_retry_count', default=10, min=1, help=_(""" The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify 'retry forever'. Possible Values: * Any positive integer value Related options: * None """)), cfg.IntOpt('vmware_task_poll_interval', default=5, min=1, help=_(""" Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: * Any positive integer value Related options: * None """)), cfg.StrOpt('vmware_store_image_dir', default=DEFAULT_STORE_IMAGE_DIR, help=_(""" The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: * Any string that is a valid path to a directory Related options: * None """)), cfg.BoolOpt('vmware_insecure', default=False, deprecated_name='vmware_api_insecure', help=_(""" Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: * True * False Related options: * vmware_ca_file """)), cfg.StrOpt('vmware_ca_file', sample_default='/etc/ssl/certs/ca-certificates.crt', help=_(""" Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: * Any string that is a valid absolute path to a CA file Related options: * vmware_insecure """)), cfg.MultiStrOpt( 'vmware_datastores', help=_(""" The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes ::. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: * Any string of the format: :: Related options: * None """))] def http_response_iterator(conn, response, size): """Return an iterator for a file-like object. :param conn: HTTP(S) Connection :param response: http_client.HTTPResponse object :param size: Chunk size to iterate with """ try: chunk = response.read(size) while chunk: yield chunk chunk = response.read(size) finally: conn.close() class _Reader(object): def __init__(self, data, verifier=None): self._size = 0 self.data = data self.checksum = hashlib.md5() self.verifier = verifier def read(self, size=None): result = self.data.read(size) self._size += len(result) self.checksum.update(result) if self.verifier: self.verifier.update(result) return result @property def size(self): return self._size class StoreLocation(location.StoreLocation): """Class describing an VMware URI. An VMware URI can look like any of the following: vsphere://server_host/folder/file_path?dcPath=dc_path&dsName=ds_name """ def __init__(self, store_specs, conf): super(StoreLocation, self).__init__(store_specs, conf) self.datacenter_path = None self.datastore_name = None def process_specs(self): self.scheme = self.specs.get('scheme', STORE_SCHEME) self.server_host = self.specs.get('server_host') self.path = os.path.join(DS_URL_PREFIX, self.specs.get('image_dir').strip('/'), self.specs.get('image_id')) self.datacenter_path = self.specs.get('datacenter_path') self.datstore_name = self.specs.get('datastore_name') param_list = {'dsName': self.datstore_name} if self.datacenter_path: param_list['dcPath'] = self.datacenter_path self.query = urllib.parse.urlencode(param_list) def get_uri(self): if netutils.is_valid_ipv6(self.server_host): base_url = '%s://[%s]%s' % (self.scheme, self.server_host, self.path) else: base_url = '%s://%s%s' % (self.scheme, self.server_host, self.path) return '%s?%s' % (base_url, self.query) # NOTE(flaper87): Commenting out for now, it's probably better to do # it during image add/get. This validation relies on a config param # which doesn't make sense to have in the StoreLocation instance. # def _is_valid_path(self, path): # sdir = self.conf.glance_store.vmware_store_image_dir.strip('/') # return path.startswith(os.path.join(DS_URL_PREFIX, sdir)) def parse_uri(self, uri): if not uri.startswith('%s://' % STORE_SCHEME): reason = (_("URI %(uri)s must start with %(scheme)s://") % {'uri': uri, 'scheme': STORE_SCHEME}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) (self.scheme, self.server_host, path, params, query, fragment) = urllib.parse.urlparse(uri) if not query: path, query = path.split('?') self.path = path self.query = query # NOTE(flaper87): Read comment on `_is_valid_path` # reason = 'Badly formed VMware datastore URI %(uri)s.' % {'uri': uri} # LOG.debug(reason) # raise exceptions.BadStoreUri(reason) parts = urllib.parse.parse_qs(self.query) dc_path = parts.get('dcPath') if dc_path: self.datacenter_path = dc_path[0] ds_name = parts.get('dsName') if ds_name: self.datastore_name = ds_name[0] @property def https_url(self): """ Creates a https url that can be used to upload/download data from a vmware store. """ parsed_url = urlparse.urlparse(self.get_uri()) new_url = parsed_url._replace(scheme='https') return urlparse.urlunparse(new_url) class Store(glance_store.Store): """An implementation of the VMware datastore adapter.""" _CAPABILITIES = (capabilities.BitMasks.RW_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _VMWARE_OPTS WRITE_CHUNKSIZE = units.Mi def __init__(self, conf): super(Store, self).__init__(conf) self.datastores = {} def reset_session(self): self.session = api.VMwareAPISession( self.server_host, self.server_username, self.server_password, self.api_retry_count, self.tpoll_interval, cacert=self.ca_file, insecure=self.api_insecure) return self.session def get_schemes(self): return (STORE_SCHEME,) def _sanity_check(self): if self.conf.glance_store.vmware_api_retry_count <= 0: msg = _('vmware_api_retry_count should be greater than zero') LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) if self.conf.glance_store.vmware_task_poll_interval <= 0: msg = _('vmware_task_poll_interval should be greater than zero') LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) def configure(self, re_raise_bsc=False): self._sanity_check() self.scheme = STORE_SCHEME self.server_host = self._option_get('vmware_server_host') self.server_username = self._option_get('vmware_server_username') self.server_password = self._option_get('vmware_server_password') self.api_retry_count = self.conf.glance_store.vmware_api_retry_count self.tpoll_interval = self.conf.glance_store.vmware_task_poll_interval self.ca_file = self.conf.glance_store.vmware_ca_file self.api_insecure = self.conf.glance_store.vmware_insecure if api is None: msg = _("Missing dependencies: oslo_vmware") raise exceptions.BadStoreConfiguration( store_name="vmware_datastore", reason=msg) self.session = self.reset_session() super(Store, self).configure(re_raise_bsc=re_raise_bsc) def _get_datacenter(self, datacenter_path): search_index_moref = self.session.vim.service_content.searchIndex dc_moref = self.session.invoke_api( self.session.vim, 'FindByInventoryPath', search_index_moref, inventoryPath=datacenter_path) dc_name = datacenter_path.rsplit('/', 1)[-1] # TODO(sabari): Add datacenter_path attribute in oslo.vmware dc_obj = oslo_datacenter.Datacenter(ref=dc_moref, name=dc_name) dc_obj.path = datacenter_path return dc_obj def _get_datastore(self, datacenter_path, datastore_name): dc_obj = self._get_datacenter(datacenter_path) datastore_ret = self.session.invoke_api( vim_util, 'get_object_property', self.session.vim, dc_obj.ref, 'datastore') if datastore_ret: datastore_refs = datastore_ret.ManagedObjectReference for ds_ref in datastore_refs: ds_obj = oslo_datastore.get_datastore_by_ref(self.session, ds_ref) if ds_obj.name == datastore_name: ds_obj.datacenter = dc_obj return ds_obj def _get_freespace(self, ds_obj): # TODO(sabari): Move this function into oslo_vmware's datastore object. return self.session.invoke_api( vim_util, 'get_object_property', self.session.vim, ds_obj.ref, 'summary.freeSpace') def _parse_datastore_info_and_weight(self, datastore): weight = 0 parts = [part.strip() for part in datastore.rsplit(":", 2)] if len(parts) < 2: msg = _('vmware_datastores format must be ' 'datacenter_path:datastore_name:weight or ' 'datacenter_path:datastore_name') LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) if len(parts) == 3 and parts[2]: weight = parts[2] if not weight.isdigit(): msg = (_('Invalid weight value %(weight)s in ' 'vmware_datastores configuration') % {'weight': weight}) LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="vmware_datastore", reason=msg) datacenter_path, datastore_name = parts[0], parts[1] if not datacenter_path or not datastore_name: msg = _('Invalid datacenter_path or datastore_name specified ' 'in vmware_datastores configuration') LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="vmware_datastore", reason=msg) return datacenter_path, datastore_name, weight def _build_datastore_weighted_map(self, datastores): """Build an ordered map where the key is a weight and the value is a Datastore object. :param: a list of datastores in the format datacenter_path:datastore_name:weight :return: a map with key-value : """ ds_map = {} for ds in datastores: dc_path, name, weight = self._parse_datastore_info_and_weight(ds) # Fetch the server side reference. ds_obj = self._get_datastore(dc_path, name) if not ds_obj: msg = (_("Could not find datastore %(ds_name)s " "in datacenter %(dc_path)s") % {'ds_name': name, 'dc_path': dc_path}) LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) ds_map.setdefault(int(weight), []).append(ds_obj) return ds_map def configure_add(self): datastores = self._option_get('vmware_datastores') self.datastores = self._build_datastore_weighted_map(datastores) self.store_image_dir = self.conf.glance_store.vmware_store_image_dir def select_datastore(self, image_size): """Select a datastore with free space larger than image size.""" for k, v in sorted(self.datastores.items(), reverse=True): max_ds = None max_fs = 0 for ds in v: # Update with current freespace ds.freespace = self._get_freespace(ds) if ds.freespace > max_fs: max_ds = ds max_fs = ds.freespace if max_ds and max_ds.freespace >= image_size: return max_ds msg = _LE("No datastore found with enough free space to contain an " "image of size %d") % image_size LOG.error(msg) raise exceptions.StorageFull() def _option_get(self, param): result = getattr(self.conf.glance_store, param) if not result: reason = (_("Could not find %(param)s in configuration " "options.") % {'param': param}) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=reason) return result def _build_vim_cookie_header(self, verify_session=False): """Build ESX host session cookie header.""" if verify_session and not self.session.is_current_session_active(): self.reset_session() vim_cookies = self.session.vim.client.options.transport.cookiejar if len(list(vim_cookies)) > 0: cookie = list(vim_cookies)[0] return cookie.name + '=' + cookie.value @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): """Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param verifier: An object used to verify signatures for images :retval tuple of URL in backing store, bytes written, checksum and a dictionary with storage system specific information :raises: `glance.common.exceptions.Duplicate` if the image already existed `glance.common.exceptions.UnexpectedStatus` if the upload request returned an unexpected status. The expected responses are 201 Created and 200 OK. """ ds = self.select_datastore(image_size) image_file = _Reader(image_file, verifier) headers = {} if image_size > 0: headers.update({'Content-Length': six.text_type(image_size)}) data = image_file else: data = utils.chunkiter(image_file, CHUNKSIZE) loc = StoreLocation({'scheme': self.scheme, 'server_host': self.server_host, 'image_dir': self.store_image_dir, 'datacenter_path': ds.datacenter.path, 'datastore_name': ds.name, 'image_id': image_id}, self.conf) # NOTE(arnaud): use a decorator when the config is not tied to self cookie = self._build_vim_cookie_header(True) headers = dict(headers) headers.update({'Cookie': cookie}) session = new_session(self.api_insecure, self.ca_file) url = loc.https_url try: response = session.put(url, data=data, headers=headers) except IOError as e: # TODO(sigmavirus24): Figure out what the new exception type would # be in requests. # When a session is not authenticated, the socket is closed by # the server after sending the response. http_client has an open # issue with https that raises Broken Pipe # error instead of returning the response. # See http://bugs.python.org/issue16062. Here, we log the error # and continue to look into the response. msg = _LE('Communication error sending http %(method)s request ' 'to the url %(url)s.\n' 'Got IOError %(e)s') % {'method': 'PUT', 'url': url, 'e': e} LOG.error(msg) raise exceptions.BackendException(msg) except Exception: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Failed to upload content of image ' '%(image)s'), {'image': image_id}) res = response.raw if res.status == requests.codes.conflict: raise exceptions.Duplicate(_("Image file %(image_id)s already " "exists!") % {'image_id': image_id}) if res.status not in (requests.codes.created, requests.codes.ok): msg = (_LE('Failed to upload content of image %(image)s. ' 'The request returned an unexpected status: %(status)s.' '\nThe response body:\n%(body)s') % {'image': image_id, 'status': res.status, 'body': getattr(res, 'body', None)}) LOG.error(msg) raise exceptions.BackendException(msg) return (loc.get_uri(), image_file.size, image_file.checksum.hexdigest(), {}) @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ conn, resp, content_length = self._query(location, 'GET') iterator = http_response_iterator(conn, resp, self.READ_CHUNKSIZE) class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return '' return (ResponseIndexable(iterator, content_length), content_length) def get_size(self, location, context=None): """Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ conn = None try: conn, resp, size = self._query(location, 'HEAD') return size finally: # NOTE(sabari): Close the connection as the request was made with # stream=True. if conn is not None: conn.close() @capabilities.check def delete(self, location, context=None): """Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist """ file_path = '[%s] %s' % ( location.store_location.datastore_name, location.store_location.path[len(DS_URL_PREFIX):]) dc_obj = self._get_datacenter(location.store_location.datacenter_path) delete_task = self.session.invoke_api( self.session.vim, 'DeleteDatastoreFile_Task', self.session.vim.service_content.fileManager, name=file_path, datacenter=dc_obj.ref) try: self.session.wait_for_task(delete_task) except vexc.FileNotFoundException: msg = _('Image file %s not found') % file_path LOG.warning(msg) raise exceptions.NotFound(message=msg) except Exception: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Failed to delete image %(image)s ' 'content.') % {'image': location.image_id}) def _query(self, location, method): session = new_session(self.api_insecure, self.ca_file) loc = location.store_location redirects_followed = 0 # TODO(sabari): The redirect logic was added to handle cases when the # backend redirects http url's to https. But the store never makes a # http request and hence this can be safely removed. while redirects_followed < MAX_REDIRECTS: conn, resp = self._retry_request(session, method, location) # NOTE(sigmavirus24): _retry_request handles 4xx and 5xx errors so # if the response is not a redirect, we can return early. if not conn.is_redirect: break redirects_followed += 1 location_header = conn.headers.get('location') if location_header: if resp.status not in (301, 302): reason = (_("The HTTP URL %(path)s attempted to redirect " "with an invalid %(status)s status code.") % {'path': loc.path, 'status': resp.status}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) conn.close() location = self._new_location(location, location_header) else: # NOTE(sigmavirus24): We exceeded the maximum number of redirects msg = ("The HTTP URL exceeded %(max_redirects)s maximum " "redirects.", {'max_redirects': MAX_REDIRECTS}) LOG.debug(msg) raise exceptions.MaxRedirectsExceeded(redirects=MAX_REDIRECTS) content_length = int(resp.getheader('content-length', 0)) return (conn, resp, content_length) def _retry_request(self, session, method, location): loc = location.store_location # NOTE(arnaud): use a decorator when the config is not tied to self for i in range(self.api_retry_count + 1): cookie = self._build_vim_cookie_header() headers = {'Cookie': cookie} conn = session.request(method, loc.https_url, headers=headers, stream=True) resp = conn.raw if resp.status >= 400: if resp.status == requests.codes.unauthorized: self.reset_session() continue if resp.status == requests.codes.not_found: reason = _('VMware datastore could not find image at URI.') LOG.info(reason) raise exceptions.NotFound(message=reason) msg = ('HTTP request returned a %(status)s status code.' % {'status': resp.status}) LOG.debug(msg) raise exceptions.BadStoreUri(msg) break return conn, resp def _new_location(self, old_location, url): store_name = old_location.store_name store_class = old_location.store_location.__class__ image_id = old_location.image_id store_specs = old_location.store_specs # Note(sabari): The redirect url will have a scheme 'http(s)', but the # store only accepts url with scheme 'vsphere'. Thus, replacing with # store's scheme. parsed_url = urlparse.urlparse(url) new_url = parsed_url._replace(scheme='vsphere') vsphere_url = urlparse.urlunparse(new_url) return glance_store.location.Location(store_name, store_class, self.conf, uri=vsphere_url, image_id=image_id, store_specs=store_specs) def new_session(insecure=False, ca_file=None, total_retries=None): session = requests.Session() if total_retries is not None: http_adapter = adapters.HTTPAdapter( max_retries=retry.Retry(total=total_retries)) https_adapter = adapters.HTTPAdapter( max_retries=retry.Retry(total=total_retries)) session.mount('http://', http_adapter) session.mount('https://', https_adapter) session.verify = ca_file if ca_file else not insecure return session glance_store-0.23.0/glance_store/_drivers/filesystem.py0000666000175100017510000007002413230237440023310 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A simple filesystem-backed store """ import errno import hashlib import logging import os import stat import jsonschema from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import units from six.moves import urllib import glance_store from glance_store import capabilities from glance_store.common import utils import glance_store.driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LW import glance_store.location LOG = logging.getLogger(__name__) _FILESYSTEM_CONFIGS = [ cfg.StrOpt('filesystem_store_datadir', default='/var/lib/glance/images', help=_(""" Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which ``glance-api`` runs. If the write access isn't available, a ``BadStoreConfiguration`` exception is raised and the filesystem store may not be available for adding new images. NOTE: This directory is used only when filesystem store is used as a storage backend. Either ``filesystem_store_datadir`` or ``filesystem_store_datadirs`` option must be specified in ``glance-api.conf``. If both options are specified, a ``BadStoreConfiguration`` will be raised and the filesystem store may not be available for adding new images. Possible values: * A valid path to a directory Related options: * ``filesystem_store_datadirs`` * ``filesystem_store_file_perm`` """)), cfg.MultiStrOpt('filesystem_store_datadirs', help=_(""" List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the ``filesystem_store_datadir`` configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at http://docs.openstack.org/developer/glance/configuring.html NOTE: This directory is used only when filesystem store is used as a storage backend. Either ``filesystem_store_datadir`` or ``filesystem_store_datadirs`` option must be specified in ``glance-api.conf``. If both options are specified, a ``BadStoreConfiguration`` will be raised and the filesystem store may not be available for adding new images. Possible values: * List of strings of the following form: * ``:`` Related options: * ``filesystem_store_datadir`` * ``filesystem_store_file_perm`` """)), cfg.StrOpt('filesystem_store_metadata_file', help=_(""" Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys ``id`` and ``mountpoint``. The value for both keys should be a string. Possible values: * A valid path to the store metadata file Related options: * None """)), cfg.IntOpt('filesystem_store_file_perm', default=0, help=_(""" File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at http://docs.openstack.org/developer/glance/configuring.html Possible values: * A valid file access permission * Zero * Any negative integer Related options: * None """))] MULTI_FILESYSTEM_METADATA_SCHEMA = { "type": "array", "items": { "type": "object", "properties": { "id": {"type": "string"}, "mountpoint": {"type": "string"} }, "required": ["id", "mountpoint"], } } class StoreLocation(glance_store.location.StoreLocation): """Class describing a Filesystem URI.""" def process_specs(self): self.scheme = self.specs.get('scheme', 'file') self.path = self.specs.get('path') def get_uri(self): return "file://%s" % self.path def parse_uri(self, uri): """ Parse URLs. This method fixes an issue where credentials specified in the URL are interpreted differently in Python 2.6.1+ than prior versions of Python. """ pieces = urllib.parse.urlparse(uri) assert pieces.scheme in ('file', 'filesystem') self.scheme = pieces.scheme path = (pieces.netloc + pieces.path).strip() if path == '': reason = _("No path specified in URI") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) self.path = path class ChunkedFile(object): """ We send this back to the Glance API server as something that can iterate over a large file """ def __init__(self, filepath, offset=0, chunk_size=4096, partial_length=None): self.filepath = filepath self.chunk_size = chunk_size self.partial_length = partial_length self.partial = self.partial_length is not None self.fp = open(self.filepath, 'rb') if offset: self.fp.seek(offset) def __iter__(self): """Return an iterator over the image file.""" try: if self.fp: while True: if self.partial: size = min(self.chunk_size, self.partial_length) else: size = self.chunk_size chunk = self.fp.read(size) if chunk: yield chunk if self.partial: self.partial_length -= len(chunk) if self.partial_length <= 0: break else: break finally: self.close() def close(self): """Close the internal file pointer""" if self.fp: self.fp.close() self.fp = None class Store(glance_store.driver.Store): _CAPABILITIES = (capabilities.BitMasks.READ_RANDOM | capabilities.BitMasks.WRITE_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _FILESYSTEM_CONFIGS READ_CHUNKSIZE = 64 * units.Ki WRITE_CHUNKSIZE = READ_CHUNKSIZE FILESYSTEM_STORE_METADATA = None def get_schemes(self): return ('file', 'filesystem') def _check_write_permission(self, datadir): """ Checks if directory created to write image files has write permission. :datadir is a directory path in which glance wites image files. :raises: BadStoreConfiguration exception if datadir is read-only. """ if not os.access(datadir, os.W_OK): msg = (_("Permission to write in %s denied") % datadir) LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) def _set_exec_permission(self, datadir): """ Set the execution permission of owner-group and/or other-users to image directory if the image file which contained needs relevant access permissions. :datadir is a directory path in which glance writes image files. """ if self.conf.glance_store.filesystem_store_file_perm <= 0: return try: mode = os.stat(datadir)[stat.ST_MODE] perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8) if perm & stat.S_IRWXO > 0: if not mode & stat.S_IXOTH: # chmod o+x mode |= stat.S_IXOTH os.chmod(datadir, mode) if perm & stat.S_IRWXG > 0: if not mode & stat.S_IXGRP: # chmod g+x os.chmod(datadir, mode | stat.S_IXGRP) except (IOError, OSError): LOG.warning(_LW("Unable to set execution permission of " "owner-group and/or other-users to datadir: %s") % datadir) def _create_image_directories(self, directory_paths): """ Create directories to write image files if it does not exist. :directory_paths is a list of directories belonging to glance store. :raises: BadStoreConfiguration exception if creating a directory fails. """ for datadir in directory_paths: if os.path.exists(datadir): self._check_write_permission(datadir) self._set_exec_permission(datadir) else: msg = _("Directory to write image files does not exist " "(%s). Creating.") % datadir LOG.info(msg) try: os.makedirs(datadir) self._check_write_permission(datadir) self._set_exec_permission(datadir) except (IOError, OSError): if os.path.exists(datadir): # NOTE(markwash): If the path now exists, some other # process must have beat us in the race condition. # But it doesn't hurt, so we can safely ignore # the error. self._check_write_permission(datadir) self._set_exec_permission(datadir) continue reason = _("Unable to create datadir: %s") % datadir LOG.error(reason) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=reason) def _validate_metadata(self, metadata_file): """Validate metadata against json schema. If metadata is valid then cache metadata and use it when creating new image. :param metadata_file: JSON metadata file path :raises: BadStoreConfiguration exception if metadata is not valid. """ try: with open(metadata_file, 'rb') as fptr: metadata = jsonutils.load(fptr) if isinstance(metadata, dict): # If metadata is of type dictionary # i.e. - it contains only one mountpoint # then convert it to list of dictionary. metadata = [metadata] # Validate metadata against json schema jsonschema.validate(metadata, MULTI_FILESYSTEM_METADATA_SCHEMA) glance_store.check_location_metadata(metadata) self.FILESYSTEM_STORE_METADATA = metadata except (jsonschema.exceptions.ValidationError, exceptions.BackendException, ValueError) as vee: err_msg = encodeutils.exception_to_unicode(vee) reason = _('The JSON in the metadata file %(file)s is ' 'not valid and it can not be used: ' '%(vee)s.') % dict(file=metadata_file, vee=err_msg) LOG.error(reason) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=reason) except IOError as ioe: err_msg = encodeutils.exception_to_unicode(ioe) reason = _('The path for the metadata file %(file)s could ' 'not be accessed: ' '%(ioe)s.') % dict(file=metadata_file, ioe=err_msg) LOG.error(reason) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=reason) def configure_add(self): """ Configure the Store to use the stored configuration options Any store that needs special configuration should implement this method. If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration` """ if not (self.conf.glance_store.filesystem_store_datadir or self.conf.glance_store.filesystem_store_datadirs): reason = (_("Specify at least 'filesystem_store_datadir' or " "'filesystem_store_datadirs' option")) LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="filesystem", reason=reason) if (self.conf.glance_store.filesystem_store_datadir and self.conf.glance_store.filesystem_store_datadirs): reason = (_("Specify either 'filesystem_store_datadir' or " "'filesystem_store_datadirs' option")) LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="filesystem", reason=reason) if self.conf.glance_store.filesystem_store_file_perm > 0: perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8) if not perm & stat.S_IRUSR: reason = _LE("Specified an invalid " "'filesystem_store_file_perm' option which " "could make image file to be unaccessible by " "glance service.") LOG.error(reason) reason = _("Invalid 'filesystem_store_file_perm' option.") raise exceptions.BadStoreConfiguration(store_name="filesystem", reason=reason) self.multiple_datadirs = False directory_paths = set() if self.conf.glance_store.filesystem_store_datadir: self.datadir = self.conf.glance_store.filesystem_store_datadir directory_paths.add(self.datadir) else: self.multiple_datadirs = True self.priority_data_map = {} for datadir in self.conf.glance_store.filesystem_store_datadirs: (datadir_path, priority) = self._get_datadir_path_and_priority(datadir) priority_paths = self.priority_data_map.setdefault( int(priority), []) self._check_directory_paths(datadir_path, directory_paths, priority_paths) directory_paths.add(datadir_path) priority_paths.append(datadir_path) self.priority_list = sorted(self.priority_data_map, reverse=True) self._create_image_directories(directory_paths) metadata_file = self.conf.glance_store.filesystem_store_metadata_file if metadata_file: self._validate_metadata(metadata_file) def _check_directory_paths(self, datadir_path, directory_paths, priority_paths): """ Checks if directory_path is already present in directory_paths. :datadir_path is directory path. :datadir_paths is set of all directory paths. :raises: BadStoreConfiguration exception if same directory path is already present in directory_paths. """ if datadir_path in directory_paths: msg = (_("Directory %(datadir_path)s specified " "multiple times in filesystem_store_datadirs " "option of filesystem configuration") % {'datadir_path': datadir_path}) # If present with different priority it's a bad configuration if datadir_path not in priority_paths: LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) # Present with same prio (exact duplicate) only deserves a warning LOG.warning(msg) def _get_datadir_path_and_priority(self, datadir): """ Gets directory paths and its priority from filesystem_store_datadirs option in glance-api.conf. :param datadir: is directory path with its priority. :returns: datadir_path as directory path priority as priority associated with datadir_path :raises: BadStoreConfiguration exception if priority is invalid or empty directory path is specified. """ priority = 0 parts = [part.strip() for part in datadir.rsplit(":", 1)] datadir_path = parts[0] if len(parts) == 2 and parts[1]: priority = parts[1] if not priority.isdigit(): msg = (_("Invalid priority value %(priority)s in " "filesystem configuration") % {'priority': priority}) LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) if not datadir_path: msg = _("Invalid directory specified in filesystem configuration") LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) return datadir_path, priority @staticmethod def _resolve_location(location): filepath = location.store_location.path if not os.path.exists(filepath): raise exceptions.NotFound(image=filepath) filesize = os.path.getsize(filepath) return filepath, filesize def _get_metadata(self, filepath): """Return metadata dictionary. If metadata is provided as list of dictionaries then return metadata as dictionary containing 'id' and 'mountpoint'. If there are multiple nfs directories (mountpoints) configured for glance, then we need to create metadata JSON file as list of dictionaries containing all mountpoints with unique id. But Nova will not be able to find in which directory (mountpoint) image is present if we store list of dictionary(containing mountpoints) in glance image metadata. So if there are multiple mountpoints then we will return dict containing exact mountpoint where image is stored. If image path does not start with any of the 'mountpoint' provided in metadata JSON file then error is logged and empty dictionary is returned. :param filepath: Path of image on store :returns: metadata dictionary """ if self.FILESYSTEM_STORE_METADATA: for image_meta in self.FILESYSTEM_STORE_METADATA: if filepath.startswith(image_meta['mountpoint']): return image_meta reason = (_LE("The image path %(path)s does not match with " "any of the mountpoint defined in " "metadata: %(metadata)s. An empty dictionary " "will be returned to the client.") % dict(path=filepath, metadata=self.FILESYSTEM_STORE_METADATA)) LOG.error(reason) return {} @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ filepath, filesize = self._resolve_location(location) msg = _("Found image at %s. Returning in ChunkedFile.") % filepath LOG.debug(msg) return (ChunkedFile(filepath, offset=offset, chunk_size=self.READ_CHUNKSIZE, partial_length=chunk_size), chunk_size or filesize) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file and returns the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist :rtype: int """ filepath, filesize = self._resolve_location(location) msg = _("Found image at %s.") % filepath LOG.debug(msg) return filesize @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist :raises: Forbidden if cannot delete because of permissions """ loc = location.store_location fn = loc.path if os.path.exists(fn): try: LOG.debug(_("Deleting image at %(fn)s"), {'fn': fn}) os.unlink(fn) except OSError: raise exceptions.Forbidden( message=(_("You cannot delete file %s") % fn)) else: raise exceptions.NotFound(image=fn) def _get_capacity_info(self, mount_point): """Calculates total available space for given mount point. :mount_point is path of glance data directory """ # Calculate total available space stvfs_result = os.statvfs(mount_point) total_available_space = stvfs_result.f_bavail * stvfs_result.f_bsize return max(0, total_available_space) def _find_best_datadir(self, image_size): """Finds the best datadir by priority and free space. Traverse directories returning the first one that has sufficient free space, in priority order. If two suitable directories have the same priority, choose the one with the most free space available. :param image_size: size of image being uploaded. :returns: best_datadir as directory path of the best priority datadir. :raises: exceptions.StorageFull if there is no datadir in self.priority_data_map that can accommodate the image. """ if not self.multiple_datadirs: return self.datadir best_datadir = None max_free_space = 0 for priority in self.priority_list: for datadir in self.priority_data_map.get(priority): free_space = self._get_capacity_info(datadir) if free_space >= image_size and free_space > max_free_space: max_free_space = free_space best_datadir = datadir # If datadir is found which can accommodate image and has maximum # free space for the given priority then break the loop, # else continue to lookup further. if best_datadir: break else: msg = (_("There is no enough disk space left on the image " "storage media. requested=%s") % image_size) LOG.exception(msg) raise exceptions.StorageFull(message=msg) return best_datadir @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param verifier: An object used to verify signatures for images :retval: tuple of URL in backing store, bytes written, checksum and a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already existed :note:: By default, the backend writes the image data to a file `//`, where is the value of the filesystem_store_datadir configuration option and is the supplied image ID. """ datadir = self._find_best_datadir(image_size) filepath = os.path.join(datadir, str(image_id)) if os.path.exists(filepath): raise exceptions.Duplicate(image=filepath) checksum = hashlib.md5() bytes_written = 0 try: with open(filepath, 'wb') as f: for buf in utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE): bytes_written += len(buf) checksum.update(buf) if verifier: verifier.update(buf) f.write(buf) except IOError as e: if e.errno != errno.EACCES: self._delete_partial(filepath, image_id) errors = {errno.EFBIG: exceptions.StorageFull(), errno.ENOSPC: exceptions.StorageFull(), errno.EACCES: exceptions.StorageWriteDenied()} raise errors.get(e.errno, e) except Exception: with excutils.save_and_reraise_exception(): self._delete_partial(filepath, image_id) checksum_hex = checksum.hexdigest() metadata = self._get_metadata(filepath) LOG.debug(_("Wrote %(bytes_written)d bytes to %(filepath)s with " "checksum %(checksum_hex)s"), {'bytes_written': bytes_written, 'filepath': filepath, 'checksum_hex': checksum_hex}) if self.conf.glance_store.filesystem_store_file_perm > 0: perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8) try: os.chmod(filepath, perm) except (IOError, OSError): LOG.warning(_LW("Unable to set permission to image: %s") % filepath) return ('file://%s' % filepath, bytes_written, checksum_hex, metadata) @staticmethod def _delete_partial(filepath, iid): try: os.unlink(filepath) except Exception as e: msg = _('Unable to remove partial image ' 'data for image %(iid)s: %(e)s') LOG.error(msg % dict(iid=iid, e=encodeutils.exception_to_unicode(e))) glance_store-0.23.0/glance_store/_drivers/http.py0000666000175100017510000002627313230237440022112 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging from oslo_config import cfg from oslo_utils import encodeutils from six.moves import urllib import requests from glance_store import capabilities import glance_store.driver from glance_store import exceptions from glance_store.i18n import _, _LI import glance_store.location LOG = logging.getLogger(__name__) MAX_REDIRECTS = 5 _HTTP_OPTS = [ cfg.StrOpt('https_ca_certificates_file', help=_(""" Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the ``https_insecure`` option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: * A valid path to a CA file Related options: * https_insecure """)), cfg.BoolOpt('https_insecure', default=True, help=_(""" Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if ``https_ca_certificates_file`` is set. The remote server certificate will then be verified using the file specified using the ``https_ca_certificates_file`` option. Possible values: * True * False Related options: * https_ca_certificates_file """)), cfg.DictOpt('http_proxy_information', default={}, help=_(""" The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: * A comma separated list of scheme:proxy pairs as described above Related options: * None """))] class StoreLocation(glance_store.location.StoreLocation): """Class describing an HTTP(S) URI.""" def process_specs(self): self.scheme = self.specs.get('scheme', 'http') self.netloc = self.specs['netloc'] self.user = self.specs.get('user') self.password = self.specs.get('password') self.path = self.specs.get('path') def _get_credstring(self): if self.user: return '%s:%s@' % (self.user, self.password) return '' def get_uri(self): return "%s://%s%s%s" % ( self.scheme, self._get_credstring(), self.netloc, self.path) def parse_uri(self, uri): """ Parse URLs. This method fixes an issue where credentials specified in the URL are interpreted differently in Python 2.6.1+ than prior versions of Python. """ pieces = urllib.parse.urlparse(uri) assert pieces.scheme in ('https', 'http') self.scheme = pieces.scheme netloc = pieces.netloc path = pieces.path try: if '@' in netloc: creds, netloc = netloc.split('@') else: creds = None except ValueError: # Python 2.6.1 compat # see lp659445 and Python issue7904 if '@' in path: creds, path = path.split('@') else: creds = None if creds: try: self.user, self.password = creds.split(':') except ValueError: reason = _("Credentials are not well-formatted.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) else: self.user = None if netloc == '': LOG.info(_LI("No address specified in HTTP URL")) raise exceptions.BadStoreUri(uri=uri) else: # IPv6 address has the following format [1223:0:0:..]: # we need to be sure that we are validating port in both IPv4,IPv6 delimiter = "]:" if netloc.count(":") > 1 else ":" host, dlm, port = netloc.partition(delimiter) # if port is present in location then validate port format if port and not port.isdigit(): raise exceptions.BadStoreUri(uri=uri) self.netloc = netloc self.path = path def http_response_iterator(conn, response, size): """ Return an iterator for a file-like object. :param conn: HTTP(S) Connection :param response: urllib3.HTTPResponse object :param size: Chunk size to iterate with """ try: chunk = response.read(size) while chunk: yield chunk chunk = response.read(size) finally: conn.close() class Store(glance_store.driver.Store): """An implementation of the HTTP(S) Backend Adapter""" _CAPABILITIES = (capabilities.BitMasks.READ_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _HTTP_OPTS @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ try: conn, resp, content_length = self._query(location, 'GET') except requests.exceptions.ConnectionError: reason = _("Remote server where the image is present " "is unavailable.") LOG.exception(reason) raise exceptions.RemoteServiceUnavailable(message=reason) iterator = http_response_iterator(conn, resp, self.READ_CHUNKSIZE) class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return '' return (ResponseIndexable(iterator, content_length), content_length) def get_schemes(self): return ('http', 'https') def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ conn = None try: conn, resp, size = self._query(location, 'HEAD') except requests.exceptions.ConnectionError as exc: err_msg = encodeutils.exception_to_unicode(exc) reason = _("The HTTP URL is invalid: %s") % err_msg LOG.info(reason) raise exceptions.BadStoreUri(message=reason) finally: # NOTE(sabari): Close the connection as the request was made with # stream=True if conn is not None: conn.close() return size def _query(self, location, verb): redirects_followed = 0 while redirects_followed < MAX_REDIRECTS: loc = location.store_location conn = self._get_response(loc, verb) # NOTE(sigmavirus24): If it was generally successful, break early if conn.status_code < 300: break self._check_store_uri(conn, loc) redirects_followed += 1 # NOTE(sigmavirus24): Close the response so we don't leak sockets conn.close() location = self._new_location(location, conn.headers['location']) else: reason = (_("The HTTP URL exceeded %s maximum " "redirects.") % MAX_REDIRECTS) LOG.debug(reason) raise exceptions.MaxRedirectsExceeded(message=reason) resp = conn.raw content_length = int(resp.getheader('content-length', 0)) return (conn, resp, content_length) def _new_location(self, old_location, url): store_name = old_location.store_name store_class = old_location.store_location.__class__ image_id = old_location.image_id store_specs = old_location.store_specs return glance_store.location.Location(store_name, store_class, self.conf, uri=url, image_id=image_id, store_specs=store_specs) @staticmethod def _check_store_uri(conn, loc): # TODO(sigmavirus24): Make this a staticmethod # Check for bad status codes if conn.status_code >= 400: if conn.status_code == requests.codes.not_found: reason = _("HTTP datastore could not find image at URI.") LOG.debug(reason) raise exceptions.NotFound(message=reason) reason = (_("HTTP URL %(url)s returned a " "%(status)s status code. \nThe response body:\n" "%(body)s") % {'url': loc.path, 'status': conn.status_code, 'body': conn.text}) LOG.debug(reason) raise exceptions.BadStoreUri(message=reason) if conn.is_redirect and conn.status_code not in (301, 302): reason = (_("The HTTP URL %(url)s attempted to redirect " "with an invalid %(status)s status code."), {'url': loc.path, 'status': conn.status_code}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) def _get_response(self, location, verb): if not hasattr(self, 'session'): self.session = requests.Session() ca_bundle = self.conf.glance_store.https_ca_certificates_file disable_https = self.conf.glance_store.https_insecure self.session.verify = ca_bundle if ca_bundle else not disable_https self.session.proxies = self.conf.glance_store.http_proxy_information return self.session.request(verb, location.get_uri(), stream=True, allow_redirects=False) glance_store-0.23.0/glance_store/i18n.py0000666000175100017510000000213113230237440020060 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_i18n as i18n _translators = i18n.TranslatorFactory(domain='glance_store') # The primary translation function using the well-known name "_" _ = _translators.primary # Translators for log levels. # # The abbreviated names are meant to reflect the usual use of a short # name like '_'. The "L" is for "log" and the other letter comes from # the level. _LI = _translators.log_info _LW = _translators.log_warning _LE = _translators.log_error _LC = _translators.log_critical glance_store-0.23.0/glance_store/tests/0000775000175100017510000000000013230237776020106 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/__init__.py0000666000175100017510000000000013230237440022173 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/etc/0000775000175100017510000000000013230237776020661 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/etc/glance-swift.conf0000666000175100017510000000111213230237440024074 0ustar zuulzuul00000000000000[ref1] user = tenant:user1 key = key1 auth_address = example.com [ref2] user = user2 key = key2 user_domain_id = default project_domain_id = default auth_version = 3 auth_address = http://example.com [store_2] user = tenant:user1 key = key1 auth_address= https://localhost:8080 [store_3] user= tenant:user2 key= key2 auth_address= https://localhost:8080 [store_4] user = tenant:user1 key = key1 auth_address = http://localhost:80 [store_5] user = tenant:user1 key = key1 auth_address = http://localhost [store_6] user = tenant:user1 key = key1 auth_address = https://localhost/v1 glance_store-0.23.0/glance_store/tests/base.py0000666000175100017510000000526413230237440021367 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2014 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import shutil import fixtures from oslo_config import cfg from oslotest import base import glance_store as store from glance_store import location class StoreBaseTest(base.BaseTestCase): # NOTE(flaper87): temporary until we # can move to a fully-local lib. # (Swift store's fault) _CONF = cfg.ConfigOpts() def setUp(self): super(StoreBaseTest, self).setUp() self.conf = self._CONF self.conf(args=[]) store.register_opts(self.conf) self.config(stores=[]) # Ensure stores + locations cleared location.SCHEME_TO_CLS_MAP = {} store.create_stores(self.conf) self.addCleanup(setattr, location, 'SCHEME_TO_CLS_MAP', dict()) self.test_dir = self.useFixture(fixtures.TempDir()).path self.addCleanup(self.conf.reset) def copy_data_file(self, file_name, dst_dir): src_file_name = os.path.join('glance_store/tests/etc', file_name) shutil.copy(src_file_name, dst_dir) dst_file_name = os.path.join(dst_dir, file_name) return dst_file_name def config(self, **kw): """Override some configuration values. The keyword arguments are the names of configuration options to override and their values. If a group argument is supplied, the overrides are applied to the specified configuration option group. All overrides are automatically cleared at the end of the current test by the fixtures cleanup process. """ group = kw.pop('group', 'glance_store') for k, v in kw.items(): self.conf.set_override(k, v, group) def register_store_schemes(self, store, store_entry): schemes = store.get_schemes() scheme_map = {} loc_cls = store.get_store_location_class() for scheme in schemes: scheme_map[scheme] = { 'store': store, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_map(scheme_map) glance_store-0.23.0/glance_store/tests/unit/0000775000175100017510000000000013230237776021065 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/unit/test_swift_store.py0000666000175100017510000025675713230237440025062 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the Swift backend store""" import copy import fixtures import hashlib import mock import tempfile import uuid from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils import units from oslotest import moxstubout import requests_mock import six from six import moves from six.moves import http_client # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range import swiftclient from glance_store._drivers.swift import buffered from glance_store._drivers.swift import connection_manager as manager from glance_store._drivers.swift import store as swift from glance_store._drivers.swift import utils as sutils from glance_store import backend from glance_store import capabilities from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities CONF = cfg.CONF FAKE_UUID = lambda: str(uuid.uuid4()) FAKE_UUID2 = lambda: str(uuid.uuid4()) Store = swift.Store FIVE_KB = 5 * units.Ki FIVE_GB = 5 * units.Gi MAX_SWIFT_OBJECT_SIZE = FIVE_GB SWIFT_PUT_OBJECT_CALLS = 0 SWIFT_CONF = {'swift_store_auth_address': 'localhost:8080', 'swift_store_container': 'glance', 'swift_store_user': 'user', 'swift_store_key': 'key', 'swift_store_retry_get_count': 1, 'default_swift_reference': 'ref1' } # We stub out as little as possible to ensure that the code paths # between swift and swiftclient are tested # thoroughly def stub_out_swiftclient(stubs, swift_store_auth_version): fixture_containers = ['glance'] fixture_container_headers = {} fixture_headers = { 'glance/%s' % FAKE_UUID: { 'content-length': FIVE_KB, 'etag': 'c2e5db72bd7fd153f53ede5da5a06de3' }, 'glance/%s' % FAKE_UUID2: {'x-static-large-object': 'true', }, } fixture_objects = {'glance/%s' % FAKE_UUID: six.BytesIO(b"*" * FIVE_KB), 'glance/%s' % FAKE_UUID2: six.BytesIO(b"*" * FIVE_KB), } def fake_head_container(url, token, container, **kwargs): if container not in fixture_containers: msg = "No container %s found" % container status = http_client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) return fixture_container_headers def fake_put_container(url, token, container, **kwargs): fixture_containers.append(container) def fake_post_container(url, token, container, headers, **kwargs): for key, value in headers.items(): fixture_container_headers[key] = value def fake_put_object(url, token, container, name, contents, **kwargs): # PUT returns the ETag header for the newly-added object # Large object manifest... global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS += 1 CHUNKSIZE = 64 * units.Ki fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: if kwargs.get('headers'): etag = kwargs['headers']['ETag'] manifest = kwargs.get('headers').get('X-Object-Manifest') fixture_headers[fixture_key] = {'manifest': True, 'etag': etag, 'x-object-manifest': manifest} fixture_objects[fixture_key] = None return etag if hasattr(contents, 'read'): fixture_object = six.BytesIO() read_len = 0 chunk = contents.read(CHUNKSIZE) checksum = hashlib.md5() while chunk: fixture_object.write(chunk) read_len += len(chunk) checksum.update(chunk) chunk = contents.read(CHUNKSIZE) etag = checksum.hexdigest() else: fixture_object = six.BytesIO(contents) read_len = len(contents) etag = hashlib.md5(fixture_object.getvalue()).hexdigest() if read_len > MAX_SWIFT_OBJECT_SIZE: msg = ('Image size:%d exceeds Swift max:%d' % (read_len, MAX_SWIFT_OBJECT_SIZE)) raise swiftclient.ClientException( msg, http_status=http_client.REQUEST_ENTITY_TOO_LARGE) fixture_objects[fixture_key] = fixture_object fixture_headers[fixture_key] = { 'content-length': read_len, 'etag': etag} return etag else: msg = ("Object PUT failed - Object with key %s already exists" % fixture_key) raise swiftclient.ClientException(msg, http_status=http_client.CONFLICT) def fake_get_object(conn, container, name, **kwargs): # GET returns the tuple (list of headers, file object) fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: msg = "Object GET failed" status = http_client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) byte_range = None headers = kwargs.get('headers', dict()) if headers is not None: headers = dict((k.lower(), v) for k, v in headers.items()) if 'range' in headers: byte_range = headers.get('range') fixture = fixture_headers[fixture_key] if 'manifest' in fixture: # Large object manifest... we return a file containing # all objects with prefix of this fixture key chunk_keys = sorted([k for k in fixture_headers.keys() if k.startswith(fixture_key) and k != fixture_key]) result = six.BytesIO() for key in chunk_keys: result.write(fixture_objects[key].getvalue()) else: result = fixture_objects[fixture_key] if byte_range is not None: start = int(byte_range.split('=')[1].strip('-')) result = six.BytesIO(result.getvalue()[start:]) fixture_headers[fixture_key]['content-length'] = len( result.getvalue()) return fixture_headers[fixture_key], result def fake_head_object(url, token, container, name, **kwargs): # HEAD returns the list of headers for an object try: fixture_key = "%s/%s" % (container, name) return fixture_headers[fixture_key] except KeyError: msg = "Object HEAD failed - Object does not exist" status = http_client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) def fake_delete_object(url, token, container, name, **kwargs): # DELETE returns nothing fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: msg = "Object DELETE failed - Object does not exist" status = http_client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) else: del fixture_headers[fixture_key] del fixture_objects[fixture_key] def fake_http_connection(*args, **kwargs): return None def fake_get_auth(url, user, key, auth_version, **kwargs): if url is None: return None, None if 'http' in url and '://' not in url: raise ValueError('Invalid url %s' % url) # Check the auth version against the configured value if swift_store_auth_version != auth_version: msg = 'AUTHENTICATION failed (version mismatch)' raise swiftclient.ClientException(msg) return None, None stubs.Set(swiftclient.client, 'head_container', fake_head_container) stubs.Set(swiftclient.client, 'put_container', fake_put_container) stubs.Set(swiftclient.client, 'post_container', fake_post_container) stubs.Set(swiftclient.client, 'put_object', fake_put_object) stubs.Set(swiftclient.client, 'delete_object', fake_delete_object) stubs.Set(swiftclient.client, 'head_object', fake_head_object) stubs.Set(swiftclient.client.Connection, 'get_object', fake_get_object) stubs.Set(swiftclient.client, 'get_auth', fake_get_auth) stubs.Set(swiftclient.client, 'http_connection', fake_http_connection) class SwiftTests(object): def mock_keystone_client(self): # mock keystone client functions to avoid dependency errors swift.ks_v3 = mock.MagicMock() swift.ks_session = mock.MagicMock() swift.ks_client = mock.MagicMock() @property def swift_store_user(self): return 'tenant:user1' def test_get_size(self): """ Test that we can get the size of an object in the swift store """ uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) image_size = self.store.get_size(loc) self.assertEqual(5120, image_size) def test_get_size_with_multi_tenant_on(self): """Test that single tenant uris work with multi tenant on.""" uri = ("swift://%s:key@auth_address/glance/%s" % (self.swift_store_user, FAKE_UUID)) self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) # NOTE(markwash): ensure the image is found ctxt = mock.MagicMock() size = backend.get_size_from_backend(uri, context=ctxt) self.assertEqual(5120, size) def test_multi_tenant_with_swift_config(self): """ Test that Glance does not start when a config file is set on multi-tenant mode """ schemes = ['swift', 'swift+config'] for s in schemes: self.config(default_store=s, swift_store_config_file='not/none', swift_store_multi_tenant=True) self.assertRaises(exceptions.BadStoreConfiguration, Store, self.conf) def test_get(self): """Test a "normal" retrieval of an image in chunks.""" uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) (image_swift, image_size) = self.store.get(loc) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_with_retry(self): """ Test a retrieval where Swift does not get the full image in a single request. """ uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() (image_swift, image_size) = self.store.get(loc, context=ctxt) resp_full = b''.join([chunk for chunk in image_swift.wrapped]) resp_half = resp_full[:len(resp_full) // 2] resp_half = six.BytesIO(resp_half) manager = self.store.get_manager(loc.store_location, ctxt) image_swift.wrapped = swift.swift_retry_iter(resp_half, image_size, self.store, loc.store_location, manager) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_with_http_auth(self): """ Test a retrieval from Swift with an HTTP authurl. This is specified either via a Location header with swift+http:// or using http:// in the swift_store_auth_address config value """ loc = location.get_location_from_uri( "swift+http://%s:key@auth_address/glance/%s" % (self.swift_store_user, FAKE_UUID), conf=self.conf) ctxt = mock.MagicMock() (image_swift, image_size) = self.store.get(loc, context=ctxt) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_non_existing(self): """ Test that trying to retrieve a swift that doesn't exist raises an error """ loc = location.get_location_from_uri( "swift://%s:key@authurl/glance/noexist" % (self.swift_store_user), conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_buffered_reader_opts(self): self.config(swift_buffer_on_upload=True) self.config(swift_upload_buffer_dir=self.test_dir) try: self.store = Store(self.conf) except exceptions.BadStoreConfiguration: self.fail("Buffered Reader exception raised when it " "should not have been") def test_buffered_reader_with_invalid_path(self): self.config(swift_buffer_on_upload=True) self.config(swift_upload_buffer_dir="/some/path") self.store = Store(self.conf) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure) def test_buffered_reader_with_no_path_given(self): self.config(swift_buffer_on_upload=True) self.store = Store(self.conf) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_add(self): """Test that we can add an image via the swift backend.""" moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = hashlib.md5(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = "swift+https://tenant%%3Auser1:key@localhost:8080/glance/%s" expected_location = loc % (expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 loc, size, checksum, _ = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) # Expecting a single object to be created on Swift i.e. no chunking. self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_multi_store(self): conf = copy.deepcopy(SWIFT_CONF) conf['default_swift_reference'] = 'store_2' self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 loc = 'swift+config://store_2/glance/%s' expected_location = loc % (expected_image_id) location, size, checksum, arg = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual(expected_location, location) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_multi_tenant_image_add_uses_users_context(self): expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) expected_container = 'container_' + expected_image_id loc = 'swift+https://some_endpoint/%s/%s' expected_location = loc % (expected_container, expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.config(swift_store_container='container') self.config(swift_store_create_container_on_put=True) self.config(swift_store_multi_tenant=True) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] ctxt = mock.MagicMock( user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) store = swift.MultiTenantStore(self.conf) store.configure() loc, size, checksum, _ = store.add(expected_image_id, image_swift, expected_swift_size, context=ctxt) # ensure that image add uses user's context self.assertEqual(expected_location, loc) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_auth_url_variations(self): """ Test that we can add an image via the swift backend with a variety of different auth_address values """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) variations = { 'store_4': 'swift+config://store_4/glance/%s', 'store_5': 'swift+config://store_5/glance/%s', 'store_6': 'swift+config://store_6/glance/%s' } for variation, expected_location in variations.items(): image_id = str(uuid.uuid4()) expected_location = expected_location % image_id expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = \ hashlib.md5(expected_swift_contents).hexdigest() image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf['default_swift_reference'] = variation self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() loc, size, checksum, _ = self.store.add(image_id, image_swift, expected_swift_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_no_container_no_create(self): """ Tests that adding an image with a non-existing container raises an appropriate exception """ conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = False conf['swift_store_container'] = 'noexist' self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() image_swift = six.BytesIO(b"nevergonnamakeit") global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # We check the exception text to ensure the container # missing text is found in it, otherwise, we would have # simply used self.assertRaises here exception_caught = False try: self.store.add(str(uuid.uuid4()), image_swift, 0) except exceptions.BackendException as e: exception_caught = True self.assertIn("container noexist does not exist in Swift", encodeutils.exception_to_unicode(e)) self.assertTrue(exception_caught) self.assertEqual(0, SWIFT_PUT_OBJECT_CALLS) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_create(self): """ Tests that adding an image with a non-existing container creates the container automatically if flag is set """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = hashlib.md5(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/noexist/%s' expected_location = loc % (expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = True conf['swift_store_container'] = 'noexist' self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() loc, size, checksum, _ = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_multiple_containers_create(self): """ Tests that adding an image with a non-existing container while using multi containers will create the container automatically if flag is set """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = hashlib.md5(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) container = 'randomname_' + expected_image_id[:2] loc = 'swift+config://ref1/%s/%s' expected_location = loc % (container, expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = True conf['swift_store_container'] = 'randomname' conf['swift_store_multiple_containers_seed'] = 2 self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() loc, size, checksum, _ = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_multiple_containers_no_create(self): """ Tests that adding an image with a non-existing container while using multiple containers raises an appropriate exception """ conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = False conf['swift_store_container'] = 'randomname' conf['swift_store_multiple_containers_seed'] = 2 self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() expected_image_id = str(uuid.uuid4()) expected_container = 'randomname_' + expected_image_id[:2] self.store = Store(self.conf) self.store.configure() image_swift = six.BytesIO(b"nevergonnamakeit") global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # We check the exception text to ensure the container # missing text is found in it, otherwise, we would have # simply used self.assertRaises here exception_caught = False try: self.store.add(expected_image_id, image_swift, 0) except exceptions.BackendException as e: exception_caught = True expected_msg = "container %s does not exist in Swift" expected_msg = expected_msg % expected_container self.assertIn(expected_msg, encodeutils.exception_to_unicode(e)) self.assertTrue(exception_caught) self.assertEqual(0, SWIFT_PUT_OBJECT_CALLS) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_with_verifier(self): """Test that the verifier is updated when verifier is provided.""" swift_size = FIVE_KB base_byte = b"12345678" swift_contents = base_byte * (swift_size // 8) image_id = str(uuid.uuid4()) image_swift = six.BytesIO(swift_contents) self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size custom_size = units.Ki verifier = mock.MagicMock(name='mock_verifier') try: self.store.large_object_size = custom_size self.store.large_object_chunk_size = custom_size self.store.add(image_id, image_swift, swift_size, verifier=verifier) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size # Confirm verifier update called expected number of times self.assertEqual(2 * swift_size / custom_size, verifier.update.call_count) # define one chunk of the contents swift_contents_piece = base_byte * (custom_size // 8) # confirm all expected calls to update have occurred calls = [mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b'')] verifier.update.assert_has_calls(calls) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_with_verifier_small(self): """Test that the verifier is updated for smaller images.""" swift_size = FIVE_KB base_byte = b"12345678" swift_contents = base_byte * (swift_size // 8) image_id = str(uuid.uuid4()) image_swift = six.BytesIO(swift_contents) self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size custom_size = 6 * units.Ki verifier = mock.MagicMock(name='mock_verifier') try: self.store.large_object_size = custom_size self.store.large_object_chunk_size = custom_size self.store.add(image_id, image_swift, swift_size, verifier=verifier) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size # Confirm verifier update called expected number of times self.assertEqual(2, verifier.update.call_count) # define one chunk of the contents swift_contents_piece = base_byte * (swift_size // 8) # confirm all expected calls to update have occurred calls = [mock.call(swift_contents_piece), mock.call(b'')] verifier.update.assert_has_calls(calls) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_multi_container_doesnt_impact_multi_tenant_add(self): expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) expected_container = 'container_' + expected_image_id loc = 'swift+https://some_endpoint/%s/%s' expected_location = loc % (expected_container, expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.config(swift_store_container='container') self.config(swift_store_create_container_on_put=True) self.config(swift_store_multiple_containers_seed=2) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] ctxt = mock.MagicMock( user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) store = swift.MultiTenantStore(self.conf) store.configure() location, size, checksum, _ = store.add(expected_image_id, image_swift, expected_swift_size, context=ctxt) self.assertEqual(expected_location, location) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_large_object(self): """ Tests that adding a very large image. We simulate the large object by setting store.large_object_size to a small number and then verify that there have been a number of calls to put_object()... """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = hashlib.md5(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/glance/%s' expected_location = loc % (expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size try: self.store.large_object_size = units.Ki self.store.large_object_chunk_size = units.Ki loc, size, checksum, _ = self.store.add(expected_image_id, image_swift, expected_swift_size) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) # Expecting 6 objects to be created on Swift -- 5 chunks and 1 # manifest. self.assertEqual(6, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_contents) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_large_object_zero_size(self): """ Tests that adding an image to Swift which has both an unknown size and exceeds Swift's maximum limit of 5GB is correctly uploaded. We avoid the overhead of creating a 5GB object for this test by temporarily setting MAX_SWIFT_OBJECT_SIZE to 1KB, and then adding an object of 5KB. Bug lp:891738 """ # Set up a 'large' image of 5KB expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = hashlib.md5(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/glance/%s' expected_location = loc % (expected_image_id) image_swift = six.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # Temporarily set Swift MAX_SWIFT_OBJECT_SIZE to 1KB and add our image, # explicitly setting the image_length to 0 self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size global MAX_SWIFT_OBJECT_SIZE orig_max_swift_object_size = MAX_SWIFT_OBJECT_SIZE try: MAX_SWIFT_OBJECT_SIZE = units.Ki self.store.large_object_size = units.Ki self.store.large_object_chunk_size = units.Ki loc, size, checksum, _ = self.store.add(expected_image_id, image_swift, 0) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size MAX_SWIFT_OBJECT_SIZE = orig_max_swift_object_size self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) # Expecting 6 calls to put_object -- 5 chunks, and the manifest. self.assertEqual(6, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_contents) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_already_existing(self): """ Tests that adding an image with an existing identifier raises an appropriate exception """ self.store = Store(self.conf) self.store.configure() image_swift = six.BytesIO(b"nevergonnamakeit") self.assertRaises(exceptions.Duplicate, self.store.add, FAKE_UUID, image_swift, 0) def _option_required(self, key): conf = self.getConfig() conf[key] = None try: self.config(**conf) self.store = Store(self.conf) return not self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS) except Exception: return False def test_no_store_credentials(self): """ Tests that options without a valid credentials disables the add method """ self.store = Store(self.conf) self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'user': '', 'key': ''}} self.store.configure() self.assertFalse(self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS)) def test_no_auth_address(self): """ Tests that options without auth address disables the add method """ self.store = Store(self.conf) self.store.ref_params = {'ref1': {'auth_address': '', 'user': 'user1', 'key': 'key1'}} self.store.configure() self.assertFalse(self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS)) def test_delete(self): """ Test we can delete an existing image in the swift store """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch.object(swiftclient.client, 'delete_object') def test_delete_slo(self, mock_del_obj): """ Test we can delete an existing image stored as SLO, static large object """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) moves.reload_module(swift) self.store = Store(self.conf) self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % (self.swift_store_user, FAKE_UUID2) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertEqual(1, mock_del_obj.call_count) _, kwargs = mock_del_obj.call_args self.assertEqual('multipart-manifest=delete', kwargs.get('query_string')) @mock.patch.object(swiftclient.client, 'delete_object') def test_delete_nonslo_not_deleted_as_slo(self, mock_del_obj): """ Test that non-SLOs are not being deleted the SLO way """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) moves.reload_module(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % (self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertEqual(1, mock_del_obj.call_count) _, kwargs = mock_del_obj.call_args self.assertIsNone(kwargs.get('query_string')) def test_delete_with_reference_params(self): """ Test we can delete an existing image in the swift store """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) moves.reload_module(swift) # mock client because v3 uses it to receive auth_info self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() uri = "swift+config://ref1/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_delete_non_existing(self): """ Test that trying to delete a swift that doesn't exist raises an error """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) moves.reload_module(swift) self.store = Store(self.conf) self.store.configure() loc = location.get_location_from_uri( "swift://%s:key@authurl/glance/noexist" % (self.swift_store_user), conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.delete, loc) def test_delete_with_some_segments_failing(self): """ Tests that delete of a segmented object recovers from error(s) while deleting one or more segments. To test this we add a segmented object first and then delete it, while simulating errors on one or more segments. """ test_image_id = str(uuid.uuid4()) def fake_head_object(container, object_name): object_manifest = '/'.join([container, object_name]) + '-' return {'x-object-manifest': object_manifest} def fake_get_container(container, **kwargs): # Returning 5 fake segments return None, [{'name': '%s-%03d' % (test_image_id, x)} for x in range(1, 6)] def fake_delete_object(container, object_name): # Simulate error on 1st and 3rd segments global SWIFT_DELETE_OBJECT_CALLS SWIFT_DELETE_OBJECT_CALLS += 1 if object_name.endswith('-001') or object_name.endswith('-003'): raise swiftclient.ClientException('Object DELETE failed') else: pass conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) moves.reload_module(swift) self.store = Store(self.conf) self.store.configure() loc_uri = "swift+https://%s:key@localhost:8080/glance/%s" loc_uri = loc_uri % (self.swift_store_user, test_image_id) loc = location.get_location_from_uri(loc_uri) conn = self.store.get_connection(loc.store_location) conn.delete_object = fake_delete_object conn.head_object = fake_head_object conn.get_container = fake_get_container global SWIFT_DELETE_OBJECT_CALLS SWIFT_DELETE_OBJECT_CALLS = 0 self.store.delete(loc, connection=conn) # Expecting 6 delete calls, 5 for the segments and 1 for the manifest self.assertEqual(6, SWIFT_DELETE_OBJECT_CALLS) def test_read_acl_public(self): """ Test that we can set a public read acl. """ self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() store.set_acls(loc, public=True, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual("*:*", container_headers['X-Container-Read']) def test_read_acl_tenants(self): """ Test that we can set read acl for tenants. """ self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri(uri, conf=self.conf) read_tenants = ['matt', 'mark'] ctxt = mock.MagicMock() store.set_acls(loc, read_tenants=read_tenants, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual('matt:*,mark:*', container_headers[ 'X-Container-Read']) def test_write_acls(self): """ Test that we can set write acl for tenants. """ self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri(uri, conf=self.conf) read_tenants = ['frank', 'jim'] ctxt = mock.MagicMock() store.set_acls(loc, write_tenants=read_tenants, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual('frank:*,jim:*', container_headers[ 'X-Container-Write']) @mock.patch("glance_store._drivers.swift." "connection_manager.MultiTenantConnectionManager") def test_get_connection_manager_multi_tenant(self, manager_class): manager = mock.MagicMock() manager_class.return_value = manager self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() loc = mock.MagicMock() self.assertEqual(store.get_manager(loc), manager) @mock.patch("glance_store._drivers.swift." "connection_manager.SingleTenantConnectionManager") def test_get_connection_manager_single_tenant(self, manager_class): manager = mock.MagicMock() manager_class.return_value = manager store = Store(self.conf) store.configure() loc = mock.MagicMock() self.assertEqual(store.get_manager(loc), manager) def test_get_connection_manager_failed(self): store = swift.BaseStore(mock.MagicMock()) loc = mock.MagicMock() self.assertRaises(NotImplementedError, store.get_manager, loc) @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") @mock.patch("glance_store._drivers.swift.store.ks_client") def test_init_client_multi_tenant(self, mock_client, mock_session, mock_identity): """Test that keystone client was initialized correctly""" # initialize store and connection parameters self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() ref_params = sutils.SwiftParams(self.conf).params default_ref = self.conf.glance_store.default_swift_reference default_swift_reference = ref_params.get(default_ref) # prepare client and session trustee_session = mock.MagicMock() trustor_session = mock.MagicMock() main_session = mock.MagicMock() trustee_client = mock.MagicMock() trustee_client.session.get_user_id.return_value = 'fake_user' trustor_client = mock.MagicMock() trustor_client.session.auth.get_auth_ref.return_value = { 'roles': [{'name': 'fake_role'}] } trustor_client.trusts.create.return_value = mock.MagicMock( id='fake_trust') main_client = mock.MagicMock() mock_session.Session.side_effect = [trustor_session, trustee_session, main_session] mock_client.Client.side_effect = [trustor_client, trustee_client, main_client] # initialize client ctxt = mock.MagicMock() client = store.init_client(location=mock.MagicMock(), context=ctxt) # test trustor usage mock_identity.V3Token.assert_called_once_with( auth_url=default_swift_reference.get('auth_address'), token=ctxt.auth_token, project_id=ctxt.tenant ) mock_session.Session.assert_any_call(auth=mock_identity.V3Token()) mock_client.Client.assert_any_call(session=trustor_session) # test trustee usage and trust creation tenant_name, user = default_swift_reference.get('user').split(':') mock_identity.V3Password.assert_any_call( auth_url=default_swift_reference.get('auth_address'), username=user, password=default_swift_reference.get('key'), project_name=tenant_name, user_domain_id=default_swift_reference.get('user_domain_id'), user_domain_name=default_swift_reference.get('user_domain_name'), project_domain_id=default_swift_reference.get('project_domain_id'), project_domain_name=default_swift_reference.get( 'project_domain_name') ) mock_session.Session.assert_any_call(auth=mock_identity.V3Password()) mock_client.Client.assert_any_call(session=trustee_session) trustor_client.trusts.create.assert_called_once_with( trustee_user='fake_user', trustor_user=ctxt.user, project=ctxt.tenant, impersonation=True, role_names=['fake_role'] ) mock_identity.V3Password.assert_any_call( auth_url=default_swift_reference.get('auth_address'), username=user, password=default_swift_reference.get('key'), trust_id='fake_trust', user_domain_id=default_swift_reference.get('user_domain_id'), user_domain_name=default_swift_reference.get('user_domain_name'), project_domain_id=default_swift_reference.get('project_domain_id'), project_domain_name=default_swift_reference.get( 'project_domain_name') ) mock_client.Client.assert_any_call(session=main_session) self.assertEqual(main_client, client) class TestStoreAuthV1(base.StoreBaseTest, SwiftTests, test_store_capabilities.TestStoreCapabilitiesChecking): _CONF = cfg.CONF def getConfig(self): conf = SWIFT_CONF.copy() conf['swift_store_auth_version'] = '1' conf['swift_store_user'] = 'tenant:user1' return conf def setUp(self): """Establish a clean test environment.""" super(TestStoreAuthV1, self).setUp() conf = self.getConfig() conf_file = 'glance-swift.conf' self.swift_config_file = self.copy_data_file(conf_file, self.test_dir) conf.update({'swift_store_config_file': self.swift_config_file}) moxfixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = moxfixture.stubs stub_out_swiftclient(self.stubs, conf['swift_store_auth_version']) self.mock_keystone_client() self.store = Store(self.conf) self.config(**conf) self.store.configure() self.register_store_schemes(self.store, 'swift') self.addCleanup(self.conf.reset) class TestStoreAuthV2(TestStoreAuthV1): def getConfig(self): conf = super(TestStoreAuthV2, self).getConfig() conf['swift_store_auth_version'] = '2' conf['swift_store_user'] = 'tenant:user1' return conf def test_v2_with_no_tenant(self): uri = "swift://failme:key@auth_address/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.BadStoreUri, self.store.get, loc) def test_v2_multi_tenant_location(self): conf = self.getConfig() conf['swift_store_multi_tenant'] = True uri = "swift://auth_address/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.assertEqual('swift', loc.store_name) class TestStoreAuthV3(TestStoreAuthV1): def getConfig(self): conf = super(TestStoreAuthV3, self).getConfig() conf['swift_store_auth_version'] = '3' conf['swift_store_user'] = 'tenant:user1' return conf @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") @mock.patch("glance_store._drivers.swift.store.ks_client") def test_init_client_single_tenant(self, mock_client, mock_session, mock_identity): """Test that keystone client was initialized correctly""" # initialize client store = Store(self.conf) store.configure() uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() store.init_client(location=loc.store_location, context=ctxt) # check that keystone was initialized correctly tenant = None if store.auth_version == '1' else "tenant" username = "tenant:user1" if store.auth_version == '1' else "user1" mock_identity.V3Password.assert_called_once_with( auth_url=loc.store_location.swift_url + '/', username=username, password="key", project_name=tenant, project_domain_id='default', project_domain_name=None, user_domain_id='default', user_domain_name=None,) mock_session.Session.assert_called_once_with( auth=mock_identity.V3Password()) mock_client.Client.assert_called_once_with( session=mock_session.Session()) class FakeConnection(object): def __init__(self, authurl=None, user=None, key=None, retries=5, preauthurl=None, preauthtoken=None, starting_backoff=1, tenant_name=None, os_options=None, auth_version="1", insecure=False, ssl_compression=True, cacert=None): if os_options is None: os_options = {} self.authurl = authurl self.user = user self.key = key self.preauthurl = preauthurl self.preauthtoken = preauthtoken self.tenant_name = tenant_name self.os_options = os_options self.auth_version = auth_version self.insecure = insecure self.cacert = cacert class TestSingleTenantStoreConnections(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestSingleTenantStoreConnections, self).setUp() moxfixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = moxfixture.stubs self.stubs.Set(swiftclient, 'Connection', FakeConnection) self.store = swift.SingleTenantStore(self.conf) self.store.configure() specs = {'scheme': 'swift', 'auth_or_store_url': 'example.com/v2/', 'user': 'tenant:user1', 'key': 'key1', 'container': 'cont', 'obj': 'object'} self.location = swift.StoreLocation(specs, self.conf) self.addCleanup(self.conf.reset) def test_basic_connection(self): connection = self.store.get_connection(self.location) self.assertEqual('https://example.com/v2/', connection.authurl) self.assertEqual('2', connection.auth_version) self.assertEqual('user1', connection.user) self.assertEqual('tenant', connection.tenant_name) self.assertEqual('key1', connection.key) self.assertIsNone(connection.preauthurl) self.assertFalse(connection.insecure) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_conf_endpoint(self): ctx = mock.MagicMock(user='tenant:user1', tenant='tenant') self.config(swift_store_endpoint='https://internal.com') self.store.configure() connection = self.store.get_connection(self.location, context=ctx) self.assertEqual('https://example.com/v2/', connection.authurl) self.assertEqual('2', connection.auth_version) self.assertEqual('user1', connection.user) self.assertEqual('tenant', connection.tenant_name) self.assertEqual('key1', connection.key) self.assertEqual('https://internal.com', connection.preauthurl) self.assertFalse(connection.insecure) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_conf_endpoint_no_context(self): self.config(swift_store_endpoint='https://internal.com') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual('https://example.com/v2/', connection.authurl) self.assertEqual('2', connection.auth_version) self.assertEqual('user1', connection.user) self.assertEqual('tenant', connection.tenant_name) self.assertEqual('key1', connection.key) self.assertEqual('https://internal.com', connection.preauthurl) self.assertFalse(connection.insecure) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_no_trailing_slash(self): self.location.auth_or_store_url = 'example.com/v2' connection = self.store.get_connection(self.location) self.assertEqual('https://example.com/v2/', connection.authurl) def test_connection_insecure(self): self.config(swift_store_auth_insecure=True) self.store.configure() connection = self.store.get_connection(self.location) self.assertTrue(connection.insecure) def test_connection_with_auth_v1(self): self.config(swift_store_auth_version='1') self.store.configure() self.location.user = 'auth_v1_user' connection = self.store.get_connection(self.location) self.assertEqual('1', connection.auth_version) self.assertEqual('auth_v1_user', connection.user) self.assertIsNone(connection.tenant_name) def test_connection_invalid_user(self): self.store.configure() self.location.user = 'invalid:format:user' self.assertRaises(exceptions.BadStoreUri, self.store.get_connection, self.location) def test_connection_missing_user(self): self.store.configure() self.location.user = None self.assertRaises(exceptions.BadStoreUri, self.store.get_connection, self.location) def test_connection_with_region(self): self.config(swift_store_region='Sahara') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual({'region_name': 'Sahara', 'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_service_type(self): self.config(swift_store_service_type='shoe-store') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual({'service_type': 'shoe-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_endpoint_type(self): self.config(swift_store_endpoint_type='internalURL') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'internalURL'}, connection.os_options) def test_bad_location_uri(self): self.store.configure() self.location.uri = 'http://bad_uri://' self.assertRaises(exceptions.BadStoreUri, self.location.parse_uri, self.location.uri) def test_bad_location_uri_invalid_credentials(self): self.store.configure() self.location.uri = 'swift://bad_creds@uri/cont/obj' self.assertRaises(exceptions.BadStoreUri, self.location.parse_uri, self.location.uri) def test_bad_location_uri_invalid_object_path(self): self.store.configure() self.location.uri = 'swift://user:key@uri/cont' self.assertRaises(exceptions.BadStoreUri, self.location.parse_uri, self.location.uri) def test_ref_overrides_defaults(self): self.config(swift_store_auth_version='2', swift_store_user='testuser', swift_store_key='testpass', swift_store_auth_address='testaddress', swift_store_endpoint_type='internalURL', swift_store_config_file='somefile') self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'auth_version': '3', 'user': 'user:pass', 'user_domain_id': 'default', 'user_domain_name': 'ignored', 'project_domain_id': 'default', 'project_domain_name': 'ignored'}} self.store.configure() self.assertEqual('user:pass', self.store.user) self.assertEqual('3', self.store.auth_version) self.assertEqual('authurl.com', self.store.auth_address) self.assertEqual('default', self.store.user_domain_id) self.assertEqual('ignored', self.store.user_domain_name) self.assertEqual('default', self.store.project_domain_id) self.assertEqual('ignored', self.store.project_domain_name) def test_with_v3_auth(self): self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'auth_version': '3', 'user': 'user:pass', 'key': 'password', 'user_domain_id': 'default', 'user_domain_name': 'ignored', 'project_domain_id': 'default', 'project_domain_name': 'ignored'}} self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual('3', connection.auth_version) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL', 'user_domain_id': 'default', 'user_domain_name': 'ignored', 'project_domain_id': 'default', 'project_domain_name': 'ignored'}, connection.os_options) class TestMultiTenantStoreConnections(base.StoreBaseTest): def setUp(self): super(TestMultiTenantStoreConnections, self).setUp() moxfixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = moxfixture.stubs self.stubs.Set(swiftclient, 'Connection', FakeConnection) self.context = mock.MagicMock( user='tenant:user1', tenant='tenant', auth_token='0123') self.store = swift.MultiTenantStore(self.conf) specs = {'scheme': 'swift', 'auth_or_store_url': 'example.com', 'container': 'cont', 'obj': 'object'} self.location = swift.StoreLocation(specs, self.conf) self.addCleanup(self.conf.reset) def test_basic_connection(self): self.store.configure() connection = self.store.get_connection(self.location, context=self.context) self.assertIsNone(connection.authurl) self.assertEqual('1', connection.auth_version) self.assertIsNone(connection.user) self.assertIsNone(connection.tenant_name) self.assertIsNone(connection.key) self.assertEqual('https://example.com', connection.preauthurl) self.assertEqual('0123', connection.preauthtoken) self.assertEqual({}, connection.os_options) def test_connection_does_not_use_endpoint_from_catalog(self): self.store.configure() self.context.service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'region': 'RegionOne', 'publicURL': 'https://scexample.com', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] connection = self.store.get_connection(self.location, context=self.context) self.assertIsNone(connection.authurl) self.assertEqual('1', connection.auth_version) self.assertIsNone(connection.user) self.assertIsNone(connection.tenant_name) self.assertIsNone(connection.key) self.assertNotEqual('https://scexample.com', connection.preauthurl) self.assertEqual('https://example.com', connection.preauthurl) self.assertEqual('0123', connection.preauthtoken) self.assertEqual({}, connection.os_options) def test_connection_manager_does_not_use_endpoint_from_catalog(self): self.store.configure() self.context.service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'region': 'RegionOne', 'publicURL': 'https://scexample.com', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] connection_manager = manager.MultiTenantConnectionManager( store=self.store, store_location=self.location, context=self.context ) conn = connection_manager._init_connection() self.assertNotEqual('https://scexample.com', conn.preauthurl) self.assertEqual('https://example.com', conn.preauthurl) class TestMultiTenantStoreContext(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): """Establish a clean test environment.""" super(TestMultiTenantStoreContext, self).setUp() conf = SWIFT_CONF.copy() self.store = Store(self.conf) self.config(**conf) self.store.configure() self.register_store_schemes(self.store, 'swift') service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'region': 'RegionOne', 'publicURL': 'http://127.0.0.1:0', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] self.ctx = mock.MagicMock( service_catalog=service_catalog, user='tenant:user1', tenant='tenant', auth_token='0123') self.addCleanup(self.conf.reset) @requests_mock.mock() def test_download_context(self, m): """Verify context (ie token) is passed to swift on download.""" self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://127.0.0.1/glance_123/123" loc = location.get_location_from_uri(uri, conf=self.conf) m.get("http://127.0.0.1/glance_123/123", headers={'Content-Length': '0'}) store.get(loc, context=self.ctx) self.assertEqual(b'0123', m.last_request.headers['X-Auth-Token']) @requests_mock.mock() def test_upload_context(self, m): """Verify context (ie token) is passed to swift on upload.""" head_req = m.head("http://127.0.0.1/glance_123", text='Some data', status_code=201) put_req = m.put("http://127.0.0.1/glance_123/123") self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() content = b'Some data' pseudo_file = six.BytesIO(content) store.add('123', pseudo_file, len(content), context=self.ctx) self.assertEqual(b'0123', head_req.last_request.headers['X-Auth-Token']) self.assertEqual(b'0123', put_req.last_request.headers['X-Auth-Token']) class TestCreatingLocations(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestCreatingLocations, self).setUp() moxfixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = moxfixture.stubs conf = copy.deepcopy(SWIFT_CONF) self.store = Store(self.conf) self.config(**conf) moves.reload_module(swift) self.addCleanup(self.conf.reset) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] self.ctxt = mock.MagicMock(user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) def test_single_tenant_location(self): conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_container'] = 'container' conf_file = "glance-swift.conf" self.swift_config_file = self.copy_data_file(conf_file, self.test_dir) conf.update({'swift_store_config_file': self.swift_config_file}) conf['default_swift_reference'] = 'ref1' self.config(**conf) moves.reload_module(swift) store = swift.SingleTenantStore(self.conf) store.configure() location = store.create_location('image-id') self.assertEqual('swift+https', location.scheme) self.assertEqual('https://example.com', location.swift_url) self.assertEqual('container', location.container) self.assertEqual('image-id', location.obj) self.assertEqual('tenant:user1', location.user) self.assertEqual('key1', location.key) def test_single_tenant_location_http(self): conf_file = "glance-swift.conf" test_dir = self.useFixture(fixtures.TempDir()).path self.swift_config_file = self.copy_data_file(conf_file, test_dir) self.config(swift_store_container='container', default_swift_reference='ref2', swift_store_config_file=self.swift_config_file) store = swift.SingleTenantStore(self.conf) store.configure() location = store.create_location('image-id') self.assertEqual('swift+http', location.scheme) self.assertEqual('http://example.com', location.swift_url) def test_multi_tenant_location(self): self.config(swift_store_container='container') store = swift.MultiTenantStore(self.conf) store.configure() location = store.create_location('image-id', context=self.ctxt) self.assertEqual('swift+https', location.scheme) self.assertEqual('https://some_endpoint', location.swift_url) self.assertEqual('container_image-id', location.container) self.assertEqual('image-id', location.obj) self.assertIsNone(location.user) self.assertIsNone(location.key) def test_multi_tenant_location_http(self): store = swift.MultiTenantStore(self.conf) store.configure() self.ctxt.service_catalog[0]['endpoints'][0]['publicURL'] = \ 'http://some_endpoint' location = store.create_location('image-id', context=self.ctxt) self.assertEqual('swift+http', location.scheme) self.assertEqual('http://some_endpoint', location.swift_url) def test_multi_tenant_location_with_region(self): self.config(swift_store_region='WestCarolina') store = swift.MultiTenantStore(self.conf) store.configure() self.ctxt.service_catalog[0]['endpoints'][0]['region'] = 'WestCarolina' self.assertEqual('https://some_endpoint', store._get_endpoint(self.ctxt)) def test_multi_tenant_location_custom_service_type(self): self.config(swift_store_service_type='toy-store') self.ctxt.service_catalog[0]['type'] = 'toy-store' store = swift.MultiTenantStore(self.conf) store.configure() store._get_endpoint(self.ctxt) self.assertEqual('https://some_endpoint', store._get_endpoint(self.ctxt)) def test_multi_tenant_location_custom_endpoint_type(self): self.config(swift_store_endpoint_type='internalURL') store = swift.MultiTenantStore(self.conf) store.configure() self.assertEqual('https://some_internal_endpoint', store._get_endpoint(self.ctxt)) class TestChunkReader(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestChunkReader, self).setUp() conf = copy.deepcopy(SWIFT_CONF) Store(self.conf) self.config(**conf) def test_read_all_data(self): """ Replicate what goes on in the Swift driver with the repeated creation of the ChunkReader object """ CHUNKSIZE = 100 checksum = hashlib.md5() data_file = tempfile.NamedTemporaryFile() data_file.write(b'*' * units.Ki) data_file.flush() infile = open(data_file.name, 'rb') bytes_read = 0 while True: cr = swift.ChunkReader(infile, checksum, CHUNKSIZE) chunk = cr.read(CHUNKSIZE) if len(chunk) == 0: self.assertEqual(True, cr.is_zero_size) break bytes_read += len(chunk) self.assertEqual(units.Ki, bytes_read) self.assertEqual('fb10c6486390bec8414be90a93dfff3b', cr.checksum.hexdigest()) data_file.close() infile.close() def test_read_zero_size_data(self): """ Replicate what goes on in the Swift driver with the repeated creation of the ChunkReader object """ CHUNKSIZE = 100 checksum = hashlib.md5() data_file = tempfile.NamedTemporaryFile() infile = open(data_file.name, 'rb') bytes_read = 0 while True: cr = swift.ChunkReader(infile, checksum, CHUNKSIZE) chunk = cr.read(CHUNKSIZE) if len(chunk) == 0: break bytes_read += len(chunk) self.assertEqual(True, cr.is_zero_size) self.assertEqual(0, bytes_read) self.assertEqual('d41d8cd98f00b204e9800998ecf8427e', cr.checksum.hexdigest()) data_file.close() infile.close() class TestMultipleContainers(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestMultipleContainers, self).setUp() self.config(swift_store_multiple_containers_seed=3) self.store = swift.SingleTenantStore(self.conf) self.store.configure() def test_get_container_name_happy_path_with_seed_three(self): test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_fda' self.assertEqual(expected, actual) def test_get_container_name_with_negative_seed(self): self.assertRaises(ValueError, self.config, swift_store_multiple_containers_seed=-1) def test_get_container_name_with_seed_beyond_max(self): self.assertRaises(ValueError, self.config, swift_store_multiple_containers_seed=33) def test_get_container_name_with_max_seed(self): self.config(swift_store_multiple_containers_seed=32) self.store = swift.SingleTenantStore(self.conf) test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_' + test_image_id self.assertEqual(expected, actual) def test_get_container_name_with_dash(self): self.config(swift_store_multiple_containers_seed=10) self.store = swift.SingleTenantStore(self.conf) test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_' + 'fdae39a1-ba' self.assertEqual(expected, actual) def test_get_container_name_with_min_seed(self): self.config(swift_store_multiple_containers_seed=1) self.store = swift.SingleTenantStore(self.conf) test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_' + 'f' self.assertEqual(expected, actual) def test_get_container_name_with_multiple_containers_turned_off(self): self.config(swift_store_multiple_containers_seed=0) self.store.configure() test_image_id = 'random_id' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container' self.assertEqual(expected, actual) class TestBufferedReader(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestBufferedReader, self).setUp() self.config(swift_upload_buffer_dir=self.test_dir) s = b'1234567890' self.infile = six.BytesIO(s) self.infile.seek(0) self.checksum = hashlib.md5() self.verifier = mock.MagicMock(name='mock_verifier') total = 7 # not the full 10 byte string - defines segment boundary self.reader = buffered.BufferedReader(self.infile, self.checksum, total, self.verifier) self.addCleanup(self.conf.reset) def tearDown(self): super(TestBufferedReader, self).tearDown() self.reader.__exit__(None, None, None) def test_buffer(self): self.reader.read(4) self.assertTrue(self.reader._buffered, True) # test buffer position self.assertEqual(4, self.reader.tell()) # also test buffer contents buf = self.reader._tmpfile buf.seek(0) self.assertEqual(b'1234567', buf.read()) def test_read(self): buf = self.reader.read(4) # buffer and return 1234 self.assertEqual(b'1234', buf) buf = self.reader.read(4) # return 567 self.assertEqual(b'567', buf) self.assertEqual(7, self.reader.tell()) def test_read_limited(self): # read should not exceed the segment boundary described # by 'total' self.assertEqual(b'1234567', self.reader.read(100)) def test_reset(self): # test a reset like what swiftclient would do # if a segment upload failed. self.assertEqual(0, self.reader.tell()) self.reader.read(4) self.assertEqual(4, self.reader.tell()) self.reader.seek(0) self.assertEqual(0, self.reader.tell()) # confirm a read after reset self.assertEqual(b'1234', self.reader.read(4)) def test_partial_reset(self): # reset, but not all the way to the beginning self.reader.read(4) self.reader.seek(2) self.assertEqual(b'34567', self.reader.read(10)) def test_checksum(self): # the md5 checksum is updated only once on a full segment read expected_csum = hashlib.md5() expected_csum.update(b'1234567') self.reader.read(7) self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) def test_checksum_updated_only_once_w_full_segment_read(self): # Test that the checksum is updated only once when a full segment read # is followed by a seek and partial reads. expected_csum = hashlib.md5() expected_csum.update(b'1234567') self.reader.read(7) # attempted read of the entire chunk self.reader.seek(4) # seek back due to possible partial failure self.reader.read(1) # read one more byte # checksum was updated just once during the first attempted full read self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) def test_checksum_updates_during_partial_segment_reads(self): # Test to check that checksum is updated with only the bytes it has # not seen when the number of bytes being read is changed expected_csum = hashlib.md5() self.reader.read(4) expected_csum.update(b'1234') self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.reader.seek(0) # possible failure self.reader.read(2) self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.reader.read(4) # checksum missing two bytes expected_csum.update(b'56') # checksum updated with only the bytes it did not see self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) def test_checksum_rolling_calls(self): # Test that the checksum continues on to the next segment expected_csum = hashlib.md5() self.reader.read(7) expected_csum.update(b'1234567') self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) # another reader to complete reading the image file reader1 = buffered.BufferedReader(self.infile, self.checksum, 3, self.reader.verifier) reader1.read(3) expected_csum.update(b'890') self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) def test_verifier(self): # Test that the verifier is updated only once on a full segment read. self.reader.read(7) self.verifier.update.assert_called_once_with(b'1234567') def test_verifier_updated_only_once_w_full_segment_read(self): # Test that the verifier is updated only once when a full segment read # is followed by a seek and partial reads. self.reader.read(7) # attempted read of the entire chunk self.reader.seek(4) # seek back due to possible partial failure self.reader.read(5) # continue reading # verifier was updated just once during the first attempted full read self.verifier.update.assert_called_once_with(b'1234567') def test_verifier_updates_during_partial_segment_reads(self): # Test to check that verifier is updated with only the bytes it has # not seen when the number of bytes being read is changed self.reader.read(4) self.verifier.update.assert_called_once_with(b'1234') self.reader.seek(0) # possible failure self.reader.read(2) # verifier knows ahead self.verifier.update.assert_called_once_with(b'1234') self.reader.read(4) # verify missing 2 bytes # verifier updated with only the bytes it did not see self.verifier.update.assert_called_with(b'56') # verifier updated self.assertEqual(2, self.verifier.update.call_count) def test_verifier_rolling_calls(self): # Test that the verifier continues on to the next segment self.reader.read(7) self.verifier.update.assert_called_once_with(b'1234567') self.assertEqual(1, self.verifier.update.call_count) # another reader to complete reading the image file reader1 = buffered.BufferedReader(self.infile, self.checksum, 3, self.reader.verifier) reader1.read(3) self.verifier.update.assert_called_with(b'890') self.assertEqual(2, self.verifier.update.call_count) def test_light_buffer(self): # eventlet nonblocking fds means sometimes the buffer won't fill. # simulate testing where there is less in the buffer than a # full segment s = b'12' infile = six.BytesIO(s) infile.seek(0) total = 7 checksum = hashlib.md5() self.reader = buffered.BufferedReader(infile, checksum, total) self.reader.read(0) # read into buffer self.assertEqual(b'12', self.reader.read(7)) self.assertEqual(2, self.reader.tell()) def test_context_exit(self): # should close tempfile on context exit with self.reader: pass # file objects are not required to have a 'close' attribute if getattr(self.reader._tmpfile, 'closed'): self.assertTrue(self.reader._tmpfile.closed) glance_store-0.23.0/glance_store/tests/unit/test_rbd_store.py0000666000175100017510000003571513230237440024462 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import units import six from glance_store._drivers import rbd as rbd_store from glance_store import exceptions from glance_store import location as g_location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class TestException(Exception): pass class MockRados(object): class Error(Exception): pass class ioctx(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): return False def close(self, *args, **kwargs): pass class Rados(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): return False def connect(self, *args, **kwargs): pass def open_ioctx(self, *args, **kwargs): return MockRados.ioctx() def shutdown(self, *args, **kwargs): pass def conf_get(self, *args, **kwargs): pass class MockRBD(object): class ImageExists(Exception): pass class ImageHasSnapshots(Exception): pass class ImageBusy(Exception): pass class ImageNotFound(Exception): pass class InvalidArgument(Exception): pass class Image(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): pass def create_snap(self, *args, **kwargs): pass def remove_snap(self, *args, **kwargs): pass def protect_snap(self, *args, **kwargs): pass def unprotect_snap(self, *args, **kwargs): pass def read(self, *args, **kwargs): raise NotImplementedError() def write(self, *args, **kwargs): raise NotImplementedError() def resize(self, *args, **kwargs): raise NotImplementedError() def discard(self, offset, length): raise NotImplementedError() def close(self): pass def list_snaps(self): raise NotImplementedError() def parent_info(self): raise NotImplementedError() def size(self): raise NotImplementedError() class RBD(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): return False def create(self, *args, **kwargs): pass def remove(self, *args, **kwargs): pass def list(self, *args, **kwargs): raise NotImplementedError() def clone(self, *args, **kwargs): raise NotImplementedError() RBD_FEATURE_LAYERING = 1 class TestStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestStore, self).setUp() rbd_store.rados = MockRados rbd_store.rbd = MockRBD self.store = rbd_store.Store(self.conf) self.store.configure() self.store.chunk_size = 2 self.called_commands_actual = [] self.called_commands_expected = [] self.store_specs = {'pool': 'fake_pool', 'image': 'fake_image', 'snapshot': 'fake_snapshot'} self.location = rbd_store.StoreLocation(self.store_specs, self.conf) # Provide enough data to get more than one chunk iteration. self.data_len = 3 * units.Ki self.data_iter = six.BytesIO(b'*' * self.data_len) def test_add_w_image_size_zero(self): """Assert that correct size is returned even though 0 was provided.""" self.store.chunk_size = units.Ki with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize: with mock.patch.object(rbd_store.rbd.Image, 'write') as write: ret = self.store.add('fake_image_id', self.data_iter, 0) self.assertTrue(resize.called) self.assertTrue(write.called) self.assertEqual(ret[1], self.data_len) @mock.patch.object(MockRBD.Image, '__enter__') @mock.patch.object(rbd_store.Store, '_create_image') @mock.patch.object(rbd_store.Store, '_delete_image') def test_add_w_rbd_image_exception(self, delete, create, enter): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') return self.location def _fake_delete_image(target_pool, image_name, snapshot_name=None): self.assertEqual(self.location.pool, target_pool) self.assertEqual(self.location.image, image_name) self.assertEqual(self.location.snapshot, snapshot_name) self.called_commands_actual.append('delete') def _fake_enter(*args, **kwargs): raise exceptions.NotFound(image="fake_image_id") create.side_effect = _fake_create_image delete.side_effect = _fake_delete_image enter.side_effect = _fake_enter self.assertRaises(exceptions.NotFound, self.store.add, 'fake_image_id', self.data_iter, self.data_len) self.called_commands_expected = ['create', 'delete'] def test_add_duplicate_image(self): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') raise MockRBD.ImageExists() with mock.patch.object(self.store, '_create_image') as create_image: create_image.side_effect = _fake_create_image self.assertRaises(exceptions.Duplicate, self.store.add, 'fake_image_id', self.data_iter, self.data_len) self.called_commands_expected = ['create'] def test_add_with_verifier(self): """Assert 'verifier.update' is called when verifier is provided.""" self.store.chunk_size = units.Ki verifier = mock.MagicMock(name='mock_verifier') image_id = 'fake_image_id' file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = six.BytesIO(file_contents) with mock.patch.object(rbd_store.rbd.Image, 'write'): self.store.add(image_id, image_file, file_size, verifier=verifier) verifier.update.assert_called_with(file_contents) def test_delete(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') with mock.patch.object(MockRBD.RBD, 'remove') as remove_image: remove_image.side_effect = _fake_remove self.store.delete(g_location.Location('test_rbd_store', rbd_store.StoreLocation, self.conf, uri=self.location.get_uri())) self.called_commands_expected = ['remove'] def test_delete_image(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') with mock.patch.object(MockRBD.RBD, 'remove') as remove_image: remove_image.side_effect = _fake_remove self.store._delete_image('fake_pool', self.location.image) self.called_commands_expected = ['remove'] def test_delete_image_exc_image_not_found(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageNotFound() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.assertRaises(exceptions.NotFound, self.store._delete_image, 'fake_pool', self.location.image) self.called_commands_expected = ['remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_snap(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap', 'remove_snap', 'remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_unprotected_snap(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise MockRBD.InvalidArgument() def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap', 'remove_snap', 'remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_snap_with_error(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise TestException() def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.assertRaises(TestException, self.store._delete_image, 'fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap'] def test_delete_image_w_snap_exc_image_busy(self): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise MockRBD.ImageBusy() with mock.patch.object(MockRBD.Image, 'unprotect_snap') as mocked: mocked.side_effect = _fake_unprotect_snap self.assertRaises(exceptions.InUseByStore, self.store._delete_image, 'fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap'] def test_delete_image_w_snap_exc_image_has_snap(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageHasSnapshots() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.assertRaises(exceptions.HasSnapshot, self.store._delete_image, 'fake_pool', self.location.image) self.called_commands_expected = ['remove'] def test_get_partial_image(self): loc = g_location.Location('test_rbd_store', rbd_store.StoreLocation, self.conf, store_specs=self.store_specs) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) @mock.patch.object(MockRados.Rados, 'connect') def test_rados_connect_timeout(self, mock_rados_connect): socket_timeout = 1 self.config(rados_connect_timeout=socket_timeout) self.store.configure() with self.store.get_connection('conffile', 'rados_id'): mock_rados_connect.assert_called_with(timeout=socket_timeout) @mock.patch.object(MockRados.Rados, 'connect', side_effect=MockRados.Error) def test_rados_connect_error(self, _): rbd_store.rados.Error = MockRados.Error def test(): with self.store.get_connection('conffile', 'rados_id'): pass self.assertRaises(exceptions.BackendException, test) def test_create_image_conf_features(self): # Tests that we use non-0 features from ceph.conf and cast to int. fsid = 'fake' features = '3' conf_get_mock = mock.Mock(return_value=features) conn = mock.Mock(conf_get=conf_get_mock) ioctxt = mock.sentinel.ioctxt name = '1' size = 1024 order = 3 with mock.patch.object(rbd_store.rbd.RBD, 'create') as create_mock: location = self.store._create_image( fsid, conn, ioctxt, name, size, order) self.assertEqual(fsid, location.specs['fsid']) self.assertEqual(rbd_store.DEFAULT_POOL, location.specs['pool']) self.assertEqual(name, location.specs['image']) self.assertEqual(rbd_store.DEFAULT_SNAPNAME, location.specs['snapshot']) create_mock.assert_called_once_with(ioctxt, name, size, order, old_format=False, features=3) def tearDown(self): self.assertEqual(self.called_commands_expected, self.called_commands_actual) super(TestStore, self).tearDown() glance_store-0.23.0/glance_store/tests/unit/__init__.py0000666000175100017510000000000013230237440023152 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/unit/test_opts.py0000666000175100017510000001234413230237440023455 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pkg_resources from testtools import matchers from glance_store import backend from glance_store.tests import base def load_entry_point(entry_point, verify_requirements=False): """Load an entry-point without requiring dependencies.""" resolve = getattr(entry_point, 'resolve', None) require = getattr(entry_point, 'require', None) if resolve is not None and require is not None: if verify_requirements: entry_point.require() return entry_point.resolve() else: return entry_point.load(require=verify_requirements) class OptsTestCase(base.StoreBaseTest): def _check_opt_groups(self, opt_list, expected_opt_groups): self.assertThat(opt_list, matchers.HasLength(len(expected_opt_groups))) groups = [g for (g, _l) in opt_list] self.assertThat(groups, matchers.HasLength(len(expected_opt_groups))) for idx, group in enumerate(groups): self.assertEqual(expected_opt_groups[idx], group) def _check_opt_names(self, opt_list, expected_opt_names): opt_names = [o.name for (g, l) in opt_list for o in l] self.assertThat(opt_names, matchers.HasLength(len(expected_opt_names))) for opt in opt_names: self.assertIn(opt, expected_opt_names) def _test_entry_point(self, namespace, expected_opt_groups, expected_opt_names): opt_list = None for ep in pkg_resources.iter_entry_points('oslo.config.opts'): if ep.name == namespace: list_fn = load_entry_point(ep) opt_list = list_fn() break self.assertIsNotNone(opt_list) self._check_opt_groups(opt_list, expected_opt_groups) self._check_opt_names(opt_list, expected_opt_names) def test_list_api_opts(self): opt_list = backend._list_opts() expected_opt_groups = ['glance_store', 'glance_store'] expected_opt_names = [ 'default_store', 'stores', 'store_capabilities_update_min_interval', 'cinder_api_insecure', 'cinder_ca_certificates_file', 'cinder_catalog_info', 'cinder_endpoint_template', 'cinder_http_retries', 'cinder_os_region_name', 'cinder_state_transition_timeout', 'cinder_store_auth_address', 'cinder_store_user_name', 'cinder_store_password', 'cinder_store_project_name', 'cinder_volume_type', 'default_swift_reference', 'https_insecure', 'filesystem_store_datadir', 'filesystem_store_datadirs', 'filesystem_store_file_perm', 'filesystem_store_metadata_file', 'http_proxy_information', 'https_ca_certificates_file', 'rbd_store_ceph_conf', 'rbd_store_chunk_size', 'rbd_store_pool', 'rbd_store_user', 'rados_connect_timeout', 'rootwrap_config', 'swift_store_expire_soon_interval', 'sheepdog_store_address', 'sheepdog_store_chunk_size', 'sheepdog_store_port', 'swift_store_admin_tenants', 'swift_store_auth_address', 'swift_store_cacert', 'swift_store_auth_insecure', 'swift_store_auth_version', 'swift_store_config_file', 'swift_store_container', 'swift_store_create_container_on_put', 'swift_store_endpoint', 'swift_store_endpoint_type', 'swift_store_key', 'swift_store_large_object_chunk_size', 'swift_store_large_object_size', 'swift_store_multi_tenant', 'swift_store_multiple_containers_seed', 'swift_store_region', 'swift_store_retry_get_count', 'swift_store_service_type', 'swift_store_ssl_compression', 'swift_store_use_trusts', 'swift_store_user', 'swift_buffer_on_upload', 'swift_upload_buffer_dir', 'vmware_insecure', 'vmware_ca_file', 'vmware_api_retry_count', 'vmware_datastores', 'vmware_server_host', 'vmware_server_password', 'vmware_server_username', 'vmware_store_image_dir', 'vmware_task_poll_interval' ] self._check_opt_groups(opt_list, expected_opt_groups) self._check_opt_names(opt_list, expected_opt_names) self._test_entry_point('glance.store', expected_opt_groups, expected_opt_names) glance_store-0.23.0/glance_store/tests/unit/test_store_capabilities.py0000666000175100017510000001444313230237440026337 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from glance_store import capabilities as caps from glance_store.tests import base class FakeStoreWithStaticCapabilities(caps.StoreCapability): _CAPABILITIES = caps.BitMasks.READ_RANDOM | caps.BitMasks.DRIVER_REUSABLE class FakeStoreWithDynamicCapabilities(caps.StoreCapability): def __init__(self, *cap_list): super(FakeStoreWithDynamicCapabilities, self).__init__() if not cap_list: cap_list = [caps.BitMasks.READ_RANDOM, caps.BitMasks.DRIVER_REUSABLE] self.set_capabilities(*cap_list) class FakeStoreWithMixedCapabilities(caps.StoreCapability): _CAPABILITIES = caps.BitMasks.READ_RANDOM def __init__(self): super(FakeStoreWithMixedCapabilities, self).__init__() self.set_capabilities(caps.BitMasks.DRIVER_REUSABLE) class TestStoreCapabilitiesChecking(object): def test_store_capabilities_checked_on_io_operations(self): self.assertEqual('op_checker', self.store.add.__name__) self.assertEqual('op_checker', self.store.get.__name__) self.assertEqual('op_checker', self.store.delete.__name__) class TestStoreCapabilities(base.StoreBaseTest): def _verify_store_capabilities(self, store): # This function tested is_capable() as well. self.assertTrue(store.is_capable(caps.BitMasks.READ_RANDOM)) self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE)) self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS)) def test_static_capabilities_setup(self): self._verify_store_capabilities(FakeStoreWithStaticCapabilities()) def test_dynamic_capabilities_setup(self): self._verify_store_capabilities(FakeStoreWithDynamicCapabilities()) def test_mixed_capabilities_setup(self): self._verify_store_capabilities(FakeStoreWithMixedCapabilities()) def test_set_unset_capabilities(self): store = FakeStoreWithStaticCapabilities() self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS)) # Set and unset single capability on one time store.set_capabilities(caps.BitMasks.WRITE_ACCESS) self.assertTrue(store.is_capable(caps.BitMasks.WRITE_ACCESS)) store.unset_capabilities(caps.BitMasks.WRITE_ACCESS) self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS)) # Set and unset multiple capabilities on one time cap_list = [caps.BitMasks.WRITE_ACCESS, caps.BitMasks.WRITE_OFFSET] store.set_capabilities(*cap_list) self.assertTrue(store.is_capable(*cap_list)) store.unset_capabilities(*cap_list) self.assertFalse(store.is_capable(*cap_list)) def test_store_capabilities_property(self): store1 = FakeStoreWithDynamicCapabilities() self.assertTrue(hasattr(store1, 'capabilities')) store2 = FakeStoreWithMixedCapabilities() self.assertEqual(store1.capabilities, store2.capabilities) def test_cascaded_unset_capabilities(self): # Test read capability store = FakeStoreWithMixedCapabilities() self._verify_store_capabilities(store) store.unset_capabilities(caps.BitMasks.READ_ACCESS) cap_list = [caps.BitMasks.READ_ACCESS, caps.BitMasks.READ_OFFSET, caps.BitMasks.READ_CHUNK, caps.BitMasks.READ_RANDOM] for cap in cap_list: # To make sure all of them are unsetted. self.assertFalse(store.is_capable(cap)) self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE)) # Test write capability store = FakeStoreWithDynamicCapabilities(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.DRIVER_REUSABLE) self.assertTrue(store.is_capable(caps.BitMasks.WRITE_RANDOM)) self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE)) store.unset_capabilities(caps.BitMasks.WRITE_ACCESS) cap_list = [caps.BitMasks.WRITE_ACCESS, caps.BitMasks.WRITE_OFFSET, caps.BitMasks.WRITE_CHUNK, caps.BitMasks.WRITE_RANDOM] for cap in cap_list: # To make sure all of them are unsetted. self.assertFalse(store.is_capable(cap)) self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE)) class TestStoreCapabilityConstants(base.StoreBaseTest): def test_one_single_capability_own_one_bit(self): cap_list = [ caps.BitMasks.READ_ACCESS, caps.BitMasks.WRITE_ACCESS, caps.BitMasks.DRIVER_REUSABLE, ] for cap in cap_list: self.assertEqual(1, bin(cap).count('1')) def test_combined_capability_bits(self): check = caps.StoreCapability.contains check(caps.BitMasks.READ_OFFSET, caps.BitMasks.READ_ACCESS) check(caps.BitMasks.READ_CHUNK, caps.BitMasks.READ_ACCESS) check(caps.BitMasks.READ_RANDOM, caps.BitMasks.READ_CHUNK) check(caps.BitMasks.READ_RANDOM, caps.BitMasks.READ_OFFSET) check(caps.BitMasks.WRITE_OFFSET, caps.BitMasks.WRITE_ACCESS) check(caps.BitMasks.WRITE_CHUNK, caps.BitMasks.WRITE_ACCESS) check(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.WRITE_CHUNK) check(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.WRITE_OFFSET) check(caps.BitMasks.RW_ACCESS, caps.BitMasks.READ_ACCESS) check(caps.BitMasks.RW_ACCESS, caps.BitMasks.WRITE_ACCESS) check(caps.BitMasks.RW_OFFSET, caps.BitMasks.READ_OFFSET) check(caps.BitMasks.RW_OFFSET, caps.BitMasks.WRITE_OFFSET) check(caps.BitMasks.RW_CHUNK, caps.BitMasks.READ_CHUNK) check(caps.BitMasks.RW_CHUNK, caps.BitMasks.WRITE_CHUNK) check(caps.BitMasks.RW_RANDOM, caps.BitMasks.READ_RANDOM) check(caps.BitMasks.RW_RANDOM, caps.BitMasks.WRITE_RANDOM) glance_store-0.23.0/glance_store/tests/unit/test_cinder_store.py0000666000175100017510000003700513230237440025151 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import contextlib import errno import hashlib import mock import os import six import socket import tempfile import time import uuid from os_brick.initiator import connector from oslo_concurrency import processutils from oslo_utils import units from glance_store._drivers import cinder from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class FakeObject(object): def __init__(self, **kwargs): for name, value in kwargs.items(): setattr(self, name, value) class TestCinderStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): super(TestCinderStore, self).setUp() self.store = cinder.Store(self.conf) self.store.configure() self.register_store_schemes(self.store, 'cinder') self.store.READ_CHUNKSIZE = 4096 self.store.WRITE_CHUNKSIZE = 4096 fake_sc = [{u'endpoints': [{u'publicURL': u'http://foo/public_url'}], u'endpoints_links': [], u'name': u'cinder', u'type': u'volumev2'}] self.context = FakeObject(service_catalog=fake_sc, user='fake_user', auth_token='fake_token', tenant='fake_tenant') def test_get_cinderclient(self): cc = cinder.get_cinderclient(self.conf, self.context) self.assertEqual('fake_token', cc.client.auth_token) self.assertEqual('http://foo/public_url', cc.client.management_url) def test_get_cinderclient_with_user_overriden(self): self.config(cinder_store_user_name='test_user') self.config(cinder_store_password='test_password') self.config(cinder_store_project_name='test_project') self.config(cinder_store_auth_address='test_address') cc = cinder.get_cinderclient(self.conf, self.context) self.assertIsNone(cc.client.auth_token) self.assertEqual('test_address', cc.client.management_url) def test_temporary_chown(self): class fake_stat(object): st_uid = 1 with mock.patch.object(os, 'stat', return_value=fake_stat()), \ mock.patch.object(os, 'getuid', return_value=2), \ mock.patch.object(processutils, 'execute') as mock_execute, \ mock.patch.object(cinder, 'get_root_helper', return_value='sudo'): with cinder.temporary_chown('test'): pass expected_calls = [mock.call('chown', 2, 'test', run_as_root=True, root_helper='sudo'), mock.call('chown', 1, 'test', run_as_root=True, root_helper='sudo')] self.assertEqual(expected_calls, mock_execute.call_args_list) @mock.patch.object(time, 'sleep') def test_wait_volume_status(self, mock_sleep): fake_manager = FakeObject(get=mock.Mock()) volume_available = FakeObject(manager=fake_manager, id='fake-id', status='available') volume_in_use = FakeObject(manager=fake_manager, id='fake-id', status='in-use') fake_manager.get.side_effect = [volume_available, volume_in_use] self.assertEqual(volume_in_use, self.store._wait_volume_status( volume_available, 'available', 'in-use')) fake_manager.get.assert_called_with('fake-id') mock_sleep.assert_called_once_with(0.5) @mock.patch.object(time, 'sleep') def test_wait_volume_status_unexpected(self, mock_sleep): fake_manager = FakeObject(get=mock.Mock()) volume_available = FakeObject(manager=fake_manager, id='fake-id', status='error') fake_manager.get.return_value = volume_available self.assertRaises(exceptions.BackendException, self.store._wait_volume_status, volume_available, 'available', 'in-use') fake_manager.get.assert_called_with('fake-id') @mock.patch.object(time, 'sleep') def test_wait_volume_status_timeout(self, mock_sleep): fake_manager = FakeObject(get=mock.Mock()) volume_available = FakeObject(manager=fake_manager, id='fake-id', status='available') fake_manager.get.return_value = volume_available self.assertRaises(exceptions.BackendException, self.store._wait_volume_status, volume_available, 'available', 'in-use') fake_manager.get.assert_called_with('fake-id') def _test_open_cinder_volume(self, open_mode, attach_mode, error): fake_volume = mock.MagicMock(id=str(uuid.uuid4()), status='available') fake_volumes = FakeObject(get=lambda id: fake_volume, detach=mock.Mock()) fake_client = FakeObject(volumes=fake_volumes) _, fake_dev_path = tempfile.mkstemp(dir=self.test_dir) fake_devinfo = {'path': fake_dev_path} fake_connector = FakeObject( connect_volume=mock.Mock(return_value=fake_devinfo), disconnect_volume=mock.Mock()) @contextlib.contextmanager def fake_chown(path): yield def do_open(): with self.store._open_cinder_volume( fake_client, fake_volume, open_mode): if error: raise error def fake_factory(protocol, root_helper, **kwargs): self.assertEqual(fake_volume.initialize_connection.return_value, kwargs['conn']) return fake_connector root_helper = "sudo glance-rootwrap /etc/glance/rootwrap.conf" with mock.patch.object(cinder.Store, '_wait_volume_status', return_value=fake_volume), \ mock.patch.object(cinder, 'temporary_chown', side_effect=fake_chown), \ mock.patch.object(cinder, 'get_root_helper', return_value=root_helper), \ mock.patch.object(connector, 'get_connector_properties'), \ mock.patch.object(connector.InitiatorConnector, 'factory', side_effect=fake_factory): if error: self.assertRaises(error, do_open) else: do_open() fake_connector.connect_volume.assert_called_once_with(mock.ANY) fake_connector.disconnect_volume.assert_called_once_with( mock.ANY, fake_devinfo) fake_volume.attach.assert_called_once_with( None, None, attach_mode, host_name=socket.gethostname()) fake_volumes.detach.assert_called_once_with(fake_volume) def test_open_cinder_volume_rw(self): self._test_open_cinder_volume('wb', 'rw', None) def test_open_cinder_volume_ro(self): self._test_open_cinder_volume('rb', 'ro', None) def test_open_cinder_volume_error(self): self._test_open_cinder_volume('wb', 'rw', IOError) def test_cinder_configure_add(self): self.assertRaises(exceptions.BadStoreConfiguration, self.store._check_context, None) self.assertRaises(exceptions.BadStoreConfiguration, self.store._check_context, FakeObject(service_catalog=None)) self.store._check_context(FakeObject(service_catalog='fake')) def test_cinder_get(self): expected_size = 5 * units.Ki expected_file_contents = b"*" * expected_size volume_file = six.BytesIO(expected_file_contents) fake_client = FakeObject(auth_token=None, management_url=None) fake_volume_uuid = str(uuid.uuid4()) fake_volume = mock.MagicMock(id=fake_volume_uuid, metadata={'image_size': expected_size}, status='available') fake_volume.manager.get.return_value = fake_volume fake_volumes = FakeObject(get=lambda id: fake_volume) @contextlib.contextmanager def fake_open(client, volume, mode): self.assertEqual('rb', mode) yield volume_file with mock.patch.object(cinder, 'get_cinderclient') as mock_cc, \ mock.patch.object(self.store, '_open_cinder_volume', side_effect=fake_open): mock_cc.return_value = FakeObject(client=fake_client, volumes=fake_volumes) uri = "cinder://%s" % fake_volume_uuid loc = location.get_location_from_uri(uri, conf=self.conf) (image_file, image_size) = self.store.get(loc, context=self.context) expected_num_chunks = 2 data = b"" num_chunks = 0 for chunk in image_file: num_chunks += 1 data += chunk self.assertEqual(expected_num_chunks, num_chunks) self.assertEqual(expected_file_contents, data) def test_cinder_get_size(self): fake_client = FakeObject(auth_token=None, management_url=None) fake_volume_uuid = str(uuid.uuid4()) fake_volume = FakeObject(size=5, metadata={}) fake_volumes = {fake_volume_uuid: fake_volume} with mock.patch.object(cinder, 'get_cinderclient') as mocked_cc: mocked_cc.return_value = FakeObject(client=fake_client, volumes=fake_volumes) uri = 'cinder://%s' % fake_volume_uuid loc = location.get_location_from_uri(uri, conf=self.conf) image_size = self.store.get_size(loc, context=self.context) self.assertEqual(fake_volume.size * units.Gi, image_size) def test_cinder_get_size_with_metadata(self): fake_client = FakeObject(auth_token=None, management_url=None) fake_volume_uuid = str(uuid.uuid4()) expected_image_size = 4500 * units.Mi fake_volume = FakeObject(size=5, metadata={'image_size': expected_image_size}) fake_volumes = {fake_volume_uuid: fake_volume} with mock.patch.object(cinder, 'get_cinderclient') as mocked_cc: mocked_cc.return_value = FakeObject(client=fake_client, volumes=fake_volumes) uri = 'cinder://%s' % fake_volume_uuid loc = location.get_location_from_uri(uri, conf=self.conf) image_size = self.store.get_size(loc, context=self.context) self.assertEqual(expected_image_size, image_size) def _test_cinder_add(self, fake_volume, volume_file, size_kb=5, verifier=None): expected_image_id = str(uuid.uuid4()) expected_size = size_kb * units.Ki expected_file_contents = b"*" * expected_size image_file = six.BytesIO(expected_file_contents) expected_checksum = hashlib.md5(expected_file_contents).hexdigest() expected_location = 'cinder://%s' % fake_volume.id fake_client = FakeObject(auth_token=None, management_url=None) fake_volume.manager.get.return_value = fake_volume fake_volumes = FakeObject(create=mock.Mock(return_value=fake_volume)) self.config(cinder_volume_type='some_type') @contextlib.contextmanager def fake_open(client, volume, mode): self.assertEqual('wb', mode) yield volume_file with mock.patch.object(cinder, 'get_cinderclient') as mock_cc, \ mock.patch.object(self.store, '_open_cinder_volume', side_effect=fake_open): mock_cc.return_value = FakeObject(client=fake_client, volumes=fake_volumes) loc, size, checksum, _ = self.store.add(expected_image_id, image_file, expected_size, self.context, verifier) self.assertEqual(expected_location, loc) self.assertEqual(expected_size, size) self.assertEqual(expected_checksum, checksum) fake_volumes.create.assert_called_once_with( 1, name='image-%s' % expected_image_id, metadata={'image_owner': self.context.tenant, 'glance_image_id': expected_image_id, 'image_size': str(expected_size)}, volume_type='some_type') def test_cinder_add(self): fake_volume = mock.MagicMock(id=str(uuid.uuid4()), status='available', size=1) volume_file = six.BytesIO() self._test_cinder_add(fake_volume, volume_file) def test_cinder_add_with_verifier(self): fake_volume = mock.MagicMock(id=str(uuid.uuid4()), status='available', size=1) volume_file = six.BytesIO() verifier = mock.MagicMock() self._test_cinder_add(fake_volume, volume_file, 1, verifier) verifier.update.assert_called_with(b"*" * units.Ki) def test_cinder_add_volume_full(self): e = IOError() volume_file = six.BytesIO() e.errno = errno.ENOSPC fake_volume = mock.MagicMock(id=str(uuid.uuid4()), status='available', size=1) with mock.patch.object(volume_file, 'write', side_effect=e): self.assertRaises(exceptions.StorageFull, self._test_cinder_add, fake_volume, volume_file) fake_volume.delete.assert_called_once_with() def test_cinder_delete(self): fake_client = FakeObject(auth_token=None, management_url=None) fake_volume_uuid = str(uuid.uuid4()) fake_volume = FakeObject(delete=mock.Mock()) fake_volumes = {fake_volume_uuid: fake_volume} with mock.patch.object(cinder, 'get_cinderclient') as mocked_cc: mocked_cc.return_value = FakeObject(client=fake_client, volumes=fake_volumes) uri = 'cinder://%s' % fake_volume_uuid loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc, context=self.context) fake_volume.delete.assert_called_once_with() glance_store-0.23.0/glance_store/tests/unit/test_swift_store_utils.py0000666000175100017510000000716413230237440026264 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from glance_store._drivers.swift import utils as sutils from glance_store import exceptions from glance_store.tests import base class TestSwiftParams(base.StoreBaseTest): def setUp(self): super(TestSwiftParams, self).setUp() conf_file = "glance-swift.conf" test_dir = self.useFixture(fixtures.TempDir()).path self.swift_config_file = self.copy_data_file(conf_file, test_dir) self.config(swift_store_config_file=self.swift_config_file) def test_multiple_swift_account_enabled(self): self.config(swift_store_config_file="glance-swift.conf") self.assertTrue( sutils.is_multiple_swift_store_accounts_enabled(self.conf)) def test_multiple_swift_account_disabled(self): self.config(swift_store_config_file=None) self.assertFalse( sutils.is_multiple_swift_store_accounts_enabled(self.conf)) def test_swift_config_file_doesnt_exist(self): self.config(swift_store_config_file='fake-file.conf') self.assertRaises(exceptions.BadStoreConfiguration, sutils.SwiftParams, self.conf) def test_swift_config_uses_default_values_multiple_account_disabled(self): default_user = 'user_default' default_key = 'key_default' default_auth_address = 'auth@default.com' default_account_reference = 'ref_default' conf = {'swift_store_config_file': None, 'swift_store_user': default_user, 'swift_store_key': default_key, 'swift_store_auth_address': default_auth_address, 'default_swift_reference': default_account_reference} self.config(**conf) swift_params = sutils.SwiftParams(self.conf).params self.assertEqual(1, len(swift_params.keys())) self.assertEqual(default_user, swift_params[default_account_reference]['user'] ) self.assertEqual(default_key, swift_params[default_account_reference]['key'] ) self.assertEqual(default_auth_address, swift_params[default_account_reference] ['auth_address'] ) def test_swift_store_config_validates_for_creds_auth_address(self): swift_params = sutils.SwiftParams(self.conf).params self.assertEqual('tenant:user1', swift_params['ref1']['user'] ) self.assertEqual('key1', swift_params['ref1']['key'] ) self.assertEqual('example.com', swift_params['ref1']['auth_address']) self.assertEqual('user2', swift_params['ref2']['user']) self.assertEqual('key2', swift_params['ref2']['key']) self.assertEqual('http://example.com', swift_params['ref2']['auth_address'] ) glance_store-0.23.0/glance_store/tests/unit/test_http_store.py0000666000175100017510000001775213230237440024673 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import requests import glance_store from glance_store._drivers import http from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities from glance_store.tests import utils class TestHttpStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): super(TestHttpStore, self).setUp() self.config(default_store='http', group='glance_store') http.Store.READ_CHUNKSIZE = 2 self.store = http.Store(self.conf) self.register_store_schemes(self.store, 'http') def _mock_requests(self): """Mock requests session object. Should be called when we need to mock request/response objects. """ request = mock.patch('requests.Session.request') self.request = request.start() self.addCleanup(request.stop) def test_http_get(self): self._mock_requests() self.request.return_value = utils.fake_response() uri = "http://netloc/path/to/file.tar.gz" expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s', 'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n'] loc = location.get_location_from_uri(uri, conf=self.conf) (image_file, image_size) = self.store.get(loc) self.assertEqual(31, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) def test_http_partial_get(self): uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) def test_http_get_redirect(self): # Add two layers of redirects to the response stack, which will # return the default 200 OK with the expected data after resolving # both redirects. self._mock_requests() redirect1 = {"location": "http://example.com/teapot.img"} redirect2 = {"location": "http://example.com/teapot_real.img"} responses = [utils.fake_response(), utils.fake_response(status_code=301, headers=redirect2), utils.fake_response(status_code=302, headers=redirect1)] def getresponse(*args, **kwargs): return responses.pop() self.request.side_effect = getresponse uri = "http://netloc/path/to/file.tar.gz" expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s', 'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n'] loc = location.get_location_from_uri(uri, conf=self.conf) (image_file, image_size) = self.store.get(loc) self.assertEqual(0, len(responses)) self.assertEqual(31, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) def test_http_get_max_redirects(self): self._mock_requests() redirect = {"location": "http://example.com/teapot.img"} responses = ([utils.fake_response(status_code=302, headers=redirect)] * (http.MAX_REDIRECTS + 2)) def getresponse(*args, **kwargs): return responses.pop() self.request.side_effect = getresponse uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get, loc) def test_http_get_redirect_invalid(self): self._mock_requests() redirect = {"location": "http://example.com/teapot.img"} redirect_resp = utils.fake_response(status_code=307, headers=redirect) self.request.return_value = redirect_resp uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.BadStoreUri, self.store.get, loc) def test_http_get_not_found(self): self._mock_requests() fake = utils.fake_response(status_code=404, content="404 Not Found") self.request.return_value = fake uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_http_delete_raise_error(self): self._mock_requests() self.request.return_value = utils.fake_response() uri = "https://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.StoreDeleteNotSupported, self.store.delete, loc) self.assertRaises(exceptions.StoreDeleteNotSupported, glance_store.delete_from_backend, uri, {}) def test_http_add_raise_error(self): self.assertRaises(exceptions.StoreAddDisabled, self.store.add, None, None, None, None) self.assertRaises(exceptions.StoreAddDisabled, glance_store.add_to_backend, None, None, None, None, 'http') def test_http_get_size_with_non_existent_image_raises_Not_Found(self): self._mock_requests() self.request.return_value = utils.fake_response( status_code=404, content='404 Not Found') uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get_size, loc) self.request.assert_called_once_with('HEAD', uri, stream=True, allow_redirects=False) def test_http_get_size_bad_status_line(self): self._mock_requests() # Note(sabari): Low-level httplib.BadStatusLine will be raised as # ConnectionErorr after migrating to requests. self.request.side_effect = requests.exceptions.ConnectionError uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.BadStoreUri, self.store.get_size, loc) def test_http_store_location_initialization(self): """Test store location initialization from valid uris""" uris = [ "http://127.0.0.1:8000/ubuntu.iso", "http://openstack.com:80/ubuntu.iso", "http://[1080::8:800:200C:417A]:80/ubuntu.iso" ] for uri in uris: location.get_location_from_uri(uri) def test_http_store_location_initialization_with_invalid_url(self): """Test store location initialization from incorrect uris.""" incorrect_uris = [ "http://127.0.0.1:~/ubuntu.iso", "http://openstack.com:some_text/ubuntu.iso", "http://[1080::8:800:200C:417A]:some_text/ubuntu.iso" ] for uri in incorrect_uris: self.assertRaises(exceptions.BadStoreUri, location.get_location_from_uri, uri) def test_http_get_raises_remote_service_unavailable(self): """Test http store raises RemoteServiceUnavailable.""" uri = "http://netloc/path/to/file.tar.gz" loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.RemoteServiceUnavailable, self.store.get, loc) glance_store-0.23.0/glance_store/tests/unit/test_sheepdog_store.py0000666000175100017510000002034413230237440025501 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_concurrency import processutils from oslo_utils import units import oslotest import six from glance_store._drivers import sheepdog from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class TestStoreLocation(oslotest.base.BaseTestCase): def test_process_spec(self): mock_conf = mock.Mock() fake_spec = { 'image': '6bd59e6e-c410-11e5-ab67-0a73f1fda51b', 'addr': '127.0.0.1', 'port': 7000, } loc = sheepdog.StoreLocation(fake_spec, mock_conf) self.assertEqual(fake_spec['image'], loc.image) self.assertEqual(fake_spec['addr'], loc.addr) self.assertEqual(fake_spec['port'], loc.port) def test_parse_uri(self): mock_conf = mock.Mock() fake_uri = ('sheepdog://127.0.0.1:7000' ':6bd59e6e-c410-11e5-ab67-0a73f1fda51b') loc = sheepdog.StoreLocation({}, mock_conf) loc.parse_uri(fake_uri) self.assertEqual('6bd59e6e-c410-11e5-ab67-0a73f1fda51b', loc.image) self.assertEqual('127.0.0.1', loc.addr) self.assertEqual(7000, loc.port) class TestSheepdogImage(oslotest.base.BaseTestCase): @mock.patch.object(processutils, 'execute') def test_run_command(self, mock_execute): image = sheepdog.SheepdogImage( '127.0.0.1', 7000, '6bd59e6e-c410-11e5-ab67-0a73f1fda51b', sheepdog.DEFAULT_CHUNKSIZE, ) image._run_command('create', None) expected_cmd = ( 'collie', 'vdi', 'create', '-a', '127.0.0.1', '-p', 7000, '6bd59e6e-c410-11e5-ab67-0a73f1fda51b', ) actual_cmd = mock_execute.call_args[0] self.assertEqual(expected_cmd, actual_cmd) class TestSheepdogStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestSheepdogStore, self).setUp() def _fake_execute(*cmd, **kwargs): pass self.config(default_store='sheepdog', group='glance_store') execute = mock.patch.object(processutils, 'execute').start() execute.side_effect = _fake_execute self.addCleanup(execute.stop) self.store = sheepdog.Store(self.conf) self.store.configure() self.store_specs = {'image': '6bd59e6e-c410-11e5-ab67-0a73f1fda51b', 'addr': '127.0.0.1', 'port': 7000} @mock.patch.object(sheepdog.SheepdogImage, 'write') @mock.patch.object(sheepdog.SheepdogImage, 'create') @mock.patch.object(sheepdog.SheepdogImage, 'exist') def test_add_image(self, mock_exist, mock_create, mock_write): data = six.BytesIO(b'xx') mock_exist.return_value = False (uri, size, checksum, loc) = self.store.add('fake_image_id', data, 2) mock_exist.assert_called_once_with() mock_create.assert_called_once_with(2) mock_write.assert_called_once_with(b'xx', 0, 2) @mock.patch.object(sheepdog.SheepdogImage, 'write') @mock.patch.object(sheepdog.SheepdogImage, 'exist') def test_add_bad_size_with_image(self, mock_exist, mock_write): data = six.BytesIO(b'xx') mock_exist.return_value = False self.assertRaises(exceptions.Forbidden, self.store.add, 'fake_image_id', data, 'test') mock_exist.assert_called_once_with() self.assertEqual(mock_write.call_count, 0) @mock.patch.object(sheepdog.SheepdogImage, 'delete') @mock.patch.object(sheepdog.SheepdogImage, 'write') @mock.patch.object(sheepdog.SheepdogImage, 'create') @mock.patch.object(sheepdog.SheepdogImage, 'exist') def test_cleanup_when_add_image_exception(self, mock_exist, mock_create, mock_write, mock_delete): data = six.BytesIO(b'xx') mock_exist.return_value = False mock_write.side_effect = exceptions.BackendException self.assertRaises(exceptions.BackendException, self.store.add, 'fake_image_id', data, 2) mock_exist.assert_called_once_with() mock_create.assert_called_once_with(2) mock_write.assert_called_once_with(b'xx', 0, 2) mock_delete.assert_called_once_with() def test_add_duplicate_image(self): def _fake_run_command(command, data, *params): if command == "list -r": return "= fake_volume 0 1000" with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd: cmd.side_effect = _fake_run_command data = six.BytesIO(b'xx') self.assertRaises(exceptions.Duplicate, self.store.add, 'fake_image_id', data, 2) def test_get(self): def _fake_run_command(command, data, *params): if command == "list -r": return "= fake_volume 0 1000" with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd: cmd.side_effect = _fake_run_command loc = location.Location('test_sheepdog_store', sheepdog.StoreLocation, self.conf, store_specs=self.store_specs) ret = self.store.get(loc) self.assertEqual(1000, ret[1]) def test_partial_get(self): loc = location.Location('test_sheepdog_store', sheepdog.StoreLocation, self.conf, store_specs=self.store_specs) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) def test_get_size(self): def _fake_run_command(command, data, *params): if command == "list -r": return "= fake_volume 0 1000" with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd: cmd.side_effect = _fake_run_command loc = location.Location('test_sheepdog_store', sheepdog.StoreLocation, self.conf, store_specs=self.store_specs) ret = self.store.get_size(loc) self.assertEqual(1000, ret) def test_delete(self): called_commands = [] def _fake_run_command(command, data, *params): called_commands.append(command) if command == "list -r": return "= fake_volume 0 1000" with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd: cmd.side_effect = _fake_run_command loc = location.Location('test_sheepdog_store', sheepdog.StoreLocation, self.conf, store_specs=self.store_specs) self.store.delete(loc) self.assertEqual(['list -r', 'delete'], called_commands) def test_add_with_verifier(self): """Test that 'verifier.update' is called when verifier is provided.""" verifier = mock.MagicMock(name='mock_verifier') self.store.chunk_size = units.Ki image_id = 'fake_image_id' file_size = units.Ki # 1K file_contents = b"*" * file_size image_file = six.BytesIO(file_contents) def _fake_run_command(command, data, *params): pass with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd: cmd.side_effect = _fake_run_command self.store.add(image_id, image_file, file_size, verifier=verifier) verifier.update.assert_called_with(file_contents) glance_store-0.23.0/glance_store/tests/unit/test_exceptions.py0000666000175100017510000000430613230237440024650 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import encodeutils from oslotest import base import six import glance_store class TestExceptions(base.BaseTestCase): """Test routines in glance_store.common.utils.""" def test_backend_exception(self): msg = glance_store.BackendException() self.assertIn(u'', encodeutils.exception_to_unicode(msg)) def test_unsupported_backend_exception(self): msg = glance_store.UnsupportedBackend() self.assertIn(u'', encodeutils.exception_to_unicode(msg)) def test_redirect_exception(self): # Just checks imports work ok glance_store.RedirectException(url='http://localhost') def test_exception_no_message(self): msg = glance_store.NotFound() self.assertIn('Image %(image)s not found', encodeutils.exception_to_unicode(msg)) def test_exception_not_found_with_image(self): msg = glance_store.NotFound(image='123') self.assertIn('Image 123 not found', encodeutils.exception_to_unicode(msg)) def test_exception_with_message(self): msg = glance_store.NotFound('Some message') self.assertIn('Some message', encodeutils.exception_to_unicode(msg)) def test_exception_with_kwargs(self): msg = glance_store.NotFound('Message: %(foo)s', foo='bar') self.assertIn('Message: bar', encodeutils.exception_to_unicode(msg)) def test_non_unicode_error_msg(self): exc = glance_store.NotFound(str('test')) self.assertIsInstance(encodeutils.exception_to_unicode(exc), six.text_type) glance_store-0.23.0/glance_store/tests/unit/test_backend.py0000666000175100017510000000770713230237440024066 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack, LLC # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the backend store API's""" import mock from glance_store import backend from glance_store import exceptions from glance_store.tests import base class TestStoreAddToBackend(base.StoreBaseTest): def setUp(self): super(TestStoreAddToBackend, self).setUp() self.image_id = "animage" self.data = "dataandstuff" self.size = len(self.data) self.location = "file:///ab/cde/fgh" self.checksum = "md5" def _bad_metadata(self, in_metadata): mstore = mock.Mock() mstore.add.return_value = (self.location, self.size, self.checksum, in_metadata) mstore.__str__ = lambda self: "hello" mstore.__unicode__ = lambda self: "hello" self.assertRaises(exceptions.BackendException, backend.store_add_to_backend, self.image_id, self.data, self.size, mstore) mstore.add.assert_called_once_with(self.image_id, mock.ANY, self.size, context=None, verifier=None) def _good_metadata(self, in_metadata): mstore = mock.Mock() mstore.add.return_value = (self.location, self.size, self.checksum, in_metadata) (location, size, checksum, metadata) = backend.store_add_to_backend(self.image_id, self.data, self.size, mstore) mstore.add.assert_called_once_with(self.image_id, mock.ANY, self.size, context=None, verifier=None) self.assertEqual(self.location, location) self.assertEqual(self.size, size) self.assertEqual(self.checksum, checksum) self.assertEqual(in_metadata, metadata) def test_empty(self): metadata = {} self._good_metadata(metadata) def test_string(self): metadata = {'key': u'somevalue'} self._good_metadata(metadata) def test_list(self): m = {'key': [u'somevalue', u'2']} self._good_metadata(m) def test_unicode_dict(self): inner = {'key1': u'somevalue', 'key2': u'somevalue'} m = {'topkey': inner} self._good_metadata(m) def test_unicode_dict_list(self): inner = {'key1': u'somevalue', 'key2': u'somevalue'} m = {'topkey': inner, 'list': [u'somevalue', u'2'], 'u': u'2'} self._good_metadata(m) def test_nested_dict(self): inner = {'key1': u'somevalue', 'key2': u'somevalue'} inner = {'newkey': inner} inner = {'anotherkey': inner} m = {'topkey': inner} self._good_metadata(m) def test_bad_top_level_nonunicode(self): metadata = {'key': b'a string'} self._bad_metadata(metadata) def test_bad_nonunicode_dict_list(self): inner = {'key1': u'somevalue', 'key2': u'somevalue', 'k3': [1, object()]} m = {'topkey': inner, 'list': [u'somevalue', u'2'], 'u': u'2'} self._bad_metadata(m) def test_bad_metadata_not_dict(self): self._bad_metadata([]) glance_store-0.23.0/glance_store/tests/unit/test_connection_manager.py0000666000175100017510000001737213230237440026327 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from glance_store._drivers.swift import connection_manager from glance_store._drivers.swift import store as swift_store from glance_store import exceptions from glance_store.tests import base class TestConnectionManager(base.StoreBaseTest): def setUp(self): super(TestConnectionManager, self).setUp() self.client = mock.MagicMock() self.client.session.get_auth_headers.return_value = { connection_manager.SwiftConnectionManager.AUTH_HEADER_NAME: "fake_token"} self.location = mock.create_autospec(swift_store.StoreLocation) self.context = mock.MagicMock() self.conf = mock.MagicMock() def prepare_store(self, multi_tenant=False): if multi_tenant: store = mock.create_autospec(swift_store.MultiTenantStore, conf=self.conf) else: store = mock.create_autospec(swift_store.SingleTenantStore, service_type="swift", endpoint_type="internal", region=None, conf=self.conf, auth_version='3') store.init_client.return_value = self.client return store def test_basic_single_tenant_cm_init(self): store = self.prepare_store() manager = connection_manager.SingleTenantConnectionManager( store=store, store_location=self.location ) store.init_client.assert_called_once_with(self.location, None) self.client.session.get_endpoint.assert_called_once_with( service_type=store.service_type, interface=store.endpoint_type, region_name=store.region ) store.get_store_connection.assert_called_once_with( "fake_token", manager.storage_url ) def test_basic_multi_tenant_cm_init(self): store = self.prepare_store(multi_tenant=True) manager = connection_manager.MultiTenantConnectionManager( store=store, store_location=self.location, context=self.context ) store.get_store_connection.assert_called_once_with( self.context.auth_token, manager.storage_url) def test_basis_multi_tenant_no_context(self): store = self.prepare_store(multi_tenant=True) self.assertRaises(exceptions.BadStoreConfiguration, connection_manager.MultiTenantConnectionManager, store=store, store_location=self.location) def test_multi_tenant_client_cm_with_client_creation_fails(self): store = self.prepare_store(multi_tenant=True) store.init_client.side_effect = [Exception] manager = connection_manager.MultiTenantConnectionManager( store=store, store_location=self.location, context=self.context, allow_reauth=True ) store.init_client.assert_called_once_with(self.location, self.context) store.get_store_connection.assert_called_once_with( self.context.auth_token, manager.storage_url) self.assertFalse(manager.allow_reauth) def test_multi_tenant_client_cm_with_no_expiration(self): store = self.prepare_store(multi_tenant=True) manager = connection_manager.MultiTenantConnectionManager( store=store, store_location=self.location, context=self.context, allow_reauth=True ) store.init_client.assert_called_once_with(self.location, self.context) # return the same connection because it should not be expired auth_ref = mock.MagicMock() self.client.session.auth.auth_ref = auth_ref auth_ref.will_expire_soon.return_value = False manager.get_connection() # check that we don't update connection store.get_store_connection.assert_called_once_with("fake_token", manager.storage_url) self.client.session.get_auth_headers.assert_called_once_with() def test_multi_tenant_client_cm_with_expiration(self): store = self.prepare_store(multi_tenant=True) manager = connection_manager.MultiTenantConnectionManager( store=store, store_location=self.location, context=self.context, allow_reauth=True ) store.init_client.assert_called_once_with(self.location, self.context) # return the same connection because it should not be expired auth_ref = mock.MagicMock() self.client.session.auth.get_auth_ref.return_value = auth_ref auth_ref.will_expire_soon.return_value = True manager.get_connection() # check that we don't update connection self.assertEqual(2, store.get_store_connection.call_count) self.assertEqual(2, self.client.session.get_auth_headers.call_count) def test_single_tenant_client_cm_with_no_expiration(self): store = self.prepare_store() manager = connection_manager.SingleTenantConnectionManager( store=store, store_location=self.location, allow_reauth=True ) store.init_client.assert_called_once_with(self.location, None) self.client.session.get_endpoint.assert_called_once_with( service_type=store.service_type, interface=store.endpoint_type, region_name=store.region ) # return the same connection because it should not be expired auth_ref = mock.MagicMock() self.client.session.auth.auth_ref = auth_ref auth_ref.will_expire_soon.return_value = False manager.get_connection() # check that we don't update connection store.get_store_connection.assert_called_once_with("fake_token", manager.storage_url) self.client.session.get_auth_headers.assert_called_once_with() def test_single_tenant_client_cm_with_expiration(self): store = self.prepare_store() manager = connection_manager.SingleTenantConnectionManager( store=store, store_location=self.location, allow_reauth=True ) store.init_client.assert_called_once_with(self.location, None) self.client.session.get_endpoint.assert_called_once_with( service_type=store.service_type, interface=store.endpoint_type, region_name=store.region ) # return the same connection because it should not be expired auth_ref = mock.MagicMock() self.client.session.auth.get_auth_ref.return_value = auth_ref auth_ref.will_expire_soon.return_value = True manager.get_connection() # check that we don't update connection self.assertEqual(2, store.get_store_connection.call_count) self.assertEqual(2, self.client.session.get_auth_headers.call_count) glance_store-0.23.0/glance_store/tests/unit/test_store_base.py0000666000175100017510000000323413230237440024614 0ustar zuulzuul00000000000000# Copyright 2011-2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import glance_store as store from glance_store import backend from glance_store.tests import base class TestStoreBase(base.StoreBaseTest): def setUp(self): super(TestStoreBase, self).setUp() self.config(default_store='file', group='glance_store') @mock.patch.object(store.driver, 'LOG') def test_configure_does_not_raise_on_missing_driver_conf(self, mock_log): self.config(stores=['file'], group='glance_store') self.config(filesystem_store_datadir=None, group='glance_store') self.config(filesystem_store_datadirs=None, group='glance_store') for (__, store_instance) in backend._load_stores(self.conf): store_instance.configure() mock_log.warning.assert_called_once_with( "Failed to configure store correctly: Store filesystem " "could not be configured correctly. Reason: Specify " "at least 'filesystem_store_datadir' or " "'filesystem_store_datadirs' option Disabling add method.") glance_store-0.23.0/glance_store/tests/unit/test_filesystem_store.py0000666000175100017510000007433313230237440026076 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the filesystem backend store""" import errno import hashlib import json import mock import os import stat import uuid import fixtures from oslo_utils import units import six from six.moves import builtins # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance_store._drivers import filesystem from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class TestStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestStore, self).setUp() self.orig_chunksize = filesystem.Store.READ_CHUNKSIZE filesystem.Store.READ_CHUNKSIZE = 10 self.store = filesystem.Store(self.conf) self.config(filesystem_store_datadir=self.test_dir, stores=['glance.store.filesystem.Store'], group="glance_store") self.store.configure() self.register_store_schemes(self.store, 'file') def tearDown(self): """Clear the test environment.""" super(TestStore, self).tearDown() filesystem.ChunkedFile.CHUNKSIZE = self.orig_chunksize def _create_metadata_json_file(self, metadata): expected_image_id = str(uuid.uuid4()) jsonfilename = os.path.join(self.test_dir, "storage_metadata.%s" % expected_image_id) self.config(filesystem_store_metadata_file=jsonfilename, group="glance_store") with open(jsonfilename, 'w') as fptr: json.dump(metadata, fptr) def _store_image(self, in_metadata): expected_image_id = str(uuid.uuid4()) expected_file_size = 10 expected_file_contents = b"*" * expected_file_size image_file = six.BytesIO(expected_file_contents) self.store.FILESYSTEM_STORE_METADATA = in_metadata return self.store.add(expected_image_id, image_file, expected_file_size) def test_get(self): """Test a "normal" retrieval of an image in chunks.""" # First add an image... image_id = str(uuid.uuid4()) file_contents = b"chunk00000remainder" image_file = six.BytesIO(file_contents) loc, size, checksum, _ = self.store.add(image_id, image_file, len(file_contents)) # Now read it back... uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri(uri, conf=self.conf) (image_file, image_size) = self.store.get(loc) expected_data = b"chunk00000remainder" expected_num_chunks = 2 data = b"" num_chunks = 0 for chunk in image_file: num_chunks += 1 data += chunk self.assertEqual(expected_data, data) self.assertEqual(expected_num_chunks, num_chunks) def test_get_random_access(self): """Test a "normal" retrieval of an image in chunks.""" # First add an image... image_id = str(uuid.uuid4()) file_contents = b"chunk00000remainder" image_file = six.BytesIO(file_contents) loc, size, checksum, _ = self.store.add(image_id, image_file, len(file_contents)) # Now read it back... uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri(uri, conf=self.conf) data = b"" for offset in range(len(file_contents)): (image_file, image_size) = self.store.get(loc, offset=offset, chunk_size=1) for chunk in image_file: data += chunk self.assertEqual(file_contents, data) data = b"" chunk_size = 5 (image_file, image_size) = self.store.get(loc, offset=chunk_size, chunk_size=chunk_size) for chunk in image_file: data += chunk self.assertEqual(b'00000', data) self.assertEqual(chunk_size, image_size) def test_get_non_existing(self): """ Test that trying to retrieve a file that doesn't exist raises an error """ loc = location.get_location_from_uri( "file:///%s/non-existing" % self.test_dir, conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_add(self): """Test that we can add an image via the filesystem backend.""" filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = hashlib.md5(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (self.test_dir, expected_image_id) image_file = six.BytesIO(expected_file_contents) loc, size, checksum, _ = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) uri = "file:///%s/%s" % (self.test_dir, expected_image_id) loc = location.get_location_from_uri(uri, conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_with_verifier(self): """Test that 'verifier.update' is called when verifier is provided.""" verifier = mock.MagicMock(name='mock_verifier') self.store.chunk_size = units.Ki image_id = str(uuid.uuid4()) file_size = units.Ki # 1K file_contents = b"*" * file_size image_file = six.BytesIO(file_contents) self.store.add(image_id, image_file, file_size, verifier=verifier) verifier.update.assert_called_with(file_contents) def test_add_check_metadata_with_invalid_mountpoint_location(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}] location, size, checksum, metadata = self._store_image(in_metadata) self.assertEqual({}, metadata) def test_add_check_metadata_list_with_invalid_mountpoint_locations(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] location, size, checksum, metadata = self._store_image(in_metadata) self.assertEqual({}, metadata) def test_add_check_metadata_list_with_valid_mountpoint_locations(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/tmp'}, {'id': 'xyz1234', 'mountpoint': '/xyz'}] location, size, checksum, metadata = self._store_image(in_metadata) self.assertEqual(in_metadata[0], metadata) def test_add_check_metadata_bad_nosuch_file(self): expected_image_id = str(uuid.uuid4()) jsonfilename = os.path.join(self.test_dir, "storage_metadata.%s" % expected_image_id) self.config(filesystem_store_metadata_file=jsonfilename, group="glance_store") expected_file_size = 10 expected_file_contents = b"*" * expected_file_size image_file = six.BytesIO(expected_file_contents) location, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(metadata, {}) def test_add_already_existing(self): """ Tests that adding an image with an existing identifier raises an appropriate exception """ filesystem.ChunkedFile.CHUNKSIZE = units.Ki image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = six.BytesIO(file_contents) location, size, checksum, _ = self.store.add(image_id, image_file, file_size) image_file = six.BytesIO(b"nevergonnamakeit") self.assertRaises(exceptions.Duplicate, self.store.add, image_id, image_file, 0) def _do_test_add_write_failure(self, errno, exception): filesystem.ChunkedFile.CHUNKSIZE = units.Ki image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size path = os.path.join(self.test_dir, image_id) image_file = six.BytesIO(file_contents) with mock.patch.object(builtins, 'open') as popen: e = IOError() e.errno = errno popen.side_effect = e self.assertRaises(exception, self.store.add, image_id, image_file, 0) self.assertFalse(os.path.exists(path)) def test_add_storage_full(self): """ Tests that adding an image without enough space on disk raises an appropriate exception """ self._do_test_add_write_failure(errno.ENOSPC, exceptions.StorageFull) def test_add_file_too_big(self): """ Tests that adding an excessively large image file raises an appropriate exception """ self._do_test_add_write_failure(errno.EFBIG, exceptions.StorageFull) def test_add_storage_write_denied(self): """ Tests that adding an image with insufficient filestore permissions raises an appropriate exception """ self._do_test_add_write_failure(errno.EACCES, exceptions.StorageWriteDenied) def test_add_other_failure(self): """ Tests that a non-space-related IOError does not raise a StorageFull exceptions. """ self._do_test_add_write_failure(errno.ENOTDIR, IOError) def test_add_cleanup_on_read_failure(self): """ Tests the partial image file is cleaned up after a read failure. """ filesystem.ChunkedFile.CHUNKSIZE = units.Ki image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size path = os.path.join(self.test_dir, image_id) image_file = six.BytesIO(file_contents) def fake_Error(size): raise AttributeError() with mock.patch.object(image_file, 'read') as mock_read: mock_read.side_effect = fake_Error self.assertRaises(AttributeError, self.store.add, image_id, image_file, 0) self.assertFalse(os.path.exists(path)) def test_delete(self): """ Test we can delete an existing image in the filesystem store """ # First add an image image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = six.BytesIO(file_contents) loc, size, checksum, _ = self.store.add(image_id, image_file, file_size) # Now check that we can delete it uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_delete_non_existing(self): """ Test that trying to delete a file that doesn't exist raises an error """ loc = location.get_location_from_uri( "file:///tmp/glance-tests/non-existing", conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.delete, loc) def test_delete_forbidden(self): """ Tests that trying to delete a file without permissions raises the correct error """ # First add an image image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = six.BytesIO(file_contents) loc, size, checksum, _ = self.store.add(image_id, image_file, file_size) uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri(uri, conf=self.conf) # Mock unlink to raise an OSError for lack of permissions # and make sure we can't delete the image with mock.patch.object(os, 'unlink') as unlink: e = OSError() e.errno = errno unlink.side_effect = e self.assertRaises(exceptions.Forbidden, self.store.delete, loc) # Make sure the image didn't get deleted self.store.get(loc) def test_configure_add_with_multi_datadirs(self): """ Tests multiple filesystem specified by filesystem_store_datadirs are parsed correctly. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='glance_store') self.store.configure_add() expected_priority_map = {100: [store_map[0]], 200: [store_map[1]]} expected_priority_list = [200, 100] self.assertEqual(expected_priority_map, self.store.priority_data_map) self.assertEqual(expected_priority_list, self.store.priority_list) def test_configure_add_with_metadata_file_success(self): metadata = {'id': 'asdf1234', 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_list_of_dicts_success(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 'xyz1234', 'mountpoint': '/tmp/'}] self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual(metadata, self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_success_list_val_for_some_key(self): metadata = {'akey': ['value1', 'value2'], 'id': 'asdf1234', 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_bad_data(self): metadata = {'akey': 10, 'id': 'asdf1234', 'mountpoint': '/tmp'} # only unicode is allowed self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_with_no_id_or_mountpoint(self): metadata = {'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = {'id': 'asdfg1234'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_id_or_mountpoint_is_not_string(self): metadata = {'id': 10, 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = {'id': 'asdf1234', 'mountpoint': 12345} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_list_with_no_id_or_mountpoint(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = [{'id': 'abcdefg'}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_add_check_metadata_list_id_or_mountpoint_is_not_string(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 1234, 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = [{'id': 'abcdefg', 'mountpoint': 1234}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_same_dir_multiple_times(self): """ Tests BadStoreConfiguration exception is raised if same directory is specified multiple times in filesystem_store_datadirs. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.clear_override('filesystem_store_datadir', group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200", store_map[0] + ":300"], group='glance_store') self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_same_dir_multiple_times_same_priority(self): """ Tests BadStoreConfiguration exception is raised if same directory is specified multiple times in filesystem_store_datadirs. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200", store_map[0] + ":100"], group='glance_store') try: self.store.configure() except exceptions.BadStoreConfiguration: self.fail("configure() raised BadStoreConfiguration unexpectedly!") # Test that we can add an image via the filesystem backend filesystem.ChunkedFile.CHUNKSIZE = 1024 expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = hashlib.md5(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (store_map[1], expected_image_id) image_file = six.BytesIO(expected_file_contents) loc, size, checksum, _ = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_with_multiple_dirs(self): """Test adding multiple filesystem directories.""" store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='glance_store') self.store.configure() # Test that we can add an image via the filesystem backend filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = hashlib.md5(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (store_map[1], expected_image_id) image_file = six.BytesIO(expected_file_contents) loc, size, checksum, _ = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_with_multiple_dirs_storage_full(self): """ Test StorageFull exception is raised if no filesystem directory is found that can store an image. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='glance_store') self.store.configure_add() def fake_get_capacity_info(mount_point): return 0 with mock.patch.object(self.store, '_get_capacity_info') as capacity: capacity.return_value = 0 filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size image_file = six.BytesIO(expected_file_contents) self.assertRaises(exceptions.StorageFull, self.store.add, expected_image_id, image_file, expected_file_size) def test_configure_add_with_file_perm(self): """ Tests filesystem specified by filesystem_store_file_perm are parsed correctly. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='glance_store') self.conf.set_override('filesystem_store_file_perm', 700, # -rwx------ group='glance_store') self.store.configure_add() self.assertEqual(self.store.datadir, store) def test_configure_add_with_unaccessible_file_perm(self): """ Tests BadStoreConfiguration exception is raised if an invalid file permission specified in filesystem_store_file_perm. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='glance_store') self.conf.set_override('filesystem_store_file_perm', 7, # -------rwx group='glance_store') self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_add_with_file_perm_for_group_other_users_access(self): """ Test that we can add an image via the filesystem backend with a required image file permission. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='glance_store') self.conf.set_override('filesystem_store_file_perm', 744, # -rwxr--r-- group='glance_store') # -rwx------ os.chmod(store, 0o700) self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) self.store.configure_add() filesystem.Store.WRITE_CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = hashlib.md5(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (store, expected_image_id) image_file = six.BytesIO(expected_file_contents) location, size, checksum, _ = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, location) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) # -rwx--x--x for store directory self.assertEqual(0o711, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) # -rwxr--r-- for image file mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE] perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8) self.assertEqual(perm, stat.S_IMODE(mode)) def test_add_with_file_perm_for_owner_users_access(self): """ Test that we can add an image via the filesystem backend with a required image file permission. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='glance_store') self.conf.set_override('filesystem_store_file_perm', 600, # -rw------- group='glance_store') # -rwx------ os.chmod(store, 0o700) self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) self.store.configure_add() filesystem.Store.WRITE_CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = hashlib.md5(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (store, expected_image_id) image_file = six.BytesIO(expected_file_contents) location, size, checksum, _ = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, location) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) # -rwx------ for store directory self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) # -rw------- for image file mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE] perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8) self.assertEqual(perm, stat.S_IMODE(mode)) glance_store-0.23.0/glance_store/tests/unit/test_vmware_store.py0000666000175100017510000007045313230237440025212 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack, LLC # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the VMware Datastore backend store""" import hashlib import uuid import mock from oslo_utils import units from oslo_vmware import api from oslo_vmware import exceptions as vmware_exceptions from oslo_vmware.objects import datacenter as oslo_datacenter from oslo_vmware.objects import datastore as oslo_datastore import six import glance_store._drivers.vmware_datastore as vm_store from glance_store import backend from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities from glance_store.tests import utils FAKE_UUID = str(uuid.uuid4()) FIVE_KB = 5 * units.Ki VMWARE_DS = { 'debug': True, 'known_stores': ['vmware_datastore'], 'default_store': 'vsphere', 'vmware_server_host': '127.0.0.1', 'vmware_server_username': 'username', 'vmware_server_password': 'password', 'vmware_store_image_dir': '/openstack_glance', 'vmware_insecure': 'True', 'vmware_datastores': ['a:b:0'], } def format_location(host_ip, folder_name, image_id, datastores): """ Helper method that returns a VMware Datastore store URI given the component pieces. """ scheme = 'vsphere' (datacenter_path, datastore_name, weight) = datastores[0].split(':') return ("%s://%s/folder%s/%s?dcPath=%s&dsName=%s" % (scheme, host_ip, folder_name, image_id, datacenter_path, datastore_name)) def fake_datastore_obj(*args, **kwargs): dc_obj = oslo_datacenter.Datacenter(ref='fake-ref', name='fake-name') dc_obj.path = args[0] return oslo_datastore.Datastore(ref='fake-ref', datacenter=dc_obj, name=args[1]) class TestStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch('oslo_vmware.api.VMwareAPISession') def setUp(self, mock_api_session, mock_get_datastore): """Establish a clean test environment.""" super(TestStore, self).setUp() vm_store.Store.CHUNKSIZE = 2 default_store = VMWARE_DS['default_store'] self.config(default_store=default_store, stores=['vmware']) backend.register_opts(self.conf) self.config(group='glance_store', vmware_server_username='admin', vmware_server_password='admin', vmware_server_host=VMWARE_DS['vmware_server_host'], vmware_insecure=VMWARE_DS['vmware_insecure'], vmware_datastores=VMWARE_DS['vmware_datastores']) mock_get_datastore.side_effect = fake_datastore_obj backend.create_stores(self.conf) self.store = backend.get_store_from_scheme('vsphere') self.store.store_image_dir = ( VMWARE_DS['vmware_store_image_dir']) def _mock_http_connection(self): return mock.patch('six.moves.http_client.HTTPConnection') @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get(self, mock_api_session): """Test a "normal" retrieval of an image in chunks.""" expected_image_size = 31 expected_returns = ['I am a teapot, short and stout\n'] loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() (image_file, image_size) = self.store.get(loc) self.assertEqual(expected_image_size, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_non_existing(self, mock_api_session): """ Test that trying to retrieve an image that doesn't exist raises an error """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glan" "ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch.object(vm_store.Store, '_build_vim_cookie_header') @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(vm_store._Reader, 'size') @mock.patch.object(api, 'VMwareAPISession') def test_add(self, fake_api_session, fake_size, fake_select_datastore, fake_cookie): """Test that we can add an image via the VMware backend.""" fake_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size hash_code = hashlib.md5(expected_contents) expected_checksum = hash_code.hexdigest() fake_size.__get__ = mock.Mock(return_value=expected_size) expected_cookie = 'vmware_soap_session=fake-uuid' fake_cookie.return_value = expected_cookie expected_headers = {'Content-Length': six.text_type(expected_size), 'Cookie': expected_cookie} with mock.patch('hashlib.md5') as md5: md5.return_value = hash_code expected_location = format_location( VMWARE_DS['vmware_server_host'], VMWARE_DS['vmware_store_image_dir'], expected_image_id, VMWARE_DS['vmware_datastores']) image = six.BytesIO(expected_contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, _ = self.store.add(expected_image_id, image, expected_size) _, kwargs = HttpConn.call_args self.assertEqual(expected_headers, kwargs['headers']) self.assertEqual(utils.sort_url_by_qs_keys(expected_location), utils.sort_url_by_qs_keys(location)) self.assertEqual(expected_size, size) self.assertEqual(expected_checksum, checksum) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(vm_store._Reader, 'size') @mock.patch('oslo_vmware.api.VMwareAPISession') def test_add_size_zero(self, mock_api_session, fake_size, fake_select_datastore): """ Test that when specifying size zero for the image to add, the actual size of the image is returned. """ fake_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size hash_code = hashlib.md5(expected_contents) expected_checksum = hash_code.hexdigest() fake_size.__get__ = mock.Mock(return_value=expected_size) with mock.patch('hashlib.md5') as md5: md5.return_value = hash_code expected_location = format_location( VMWARE_DS['vmware_server_host'], VMWARE_DS['vmware_store_image_dir'], expected_image_id, VMWARE_DS['vmware_datastores']) image = six.BytesIO(expected_contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, _ = self.store.add(expected_image_id, image, 0) self.assertEqual(utils.sort_url_by_qs_keys(expected_location), utils.sort_url_by_qs_keys(location)) self.assertEqual(expected_size, size) self.assertEqual(expected_checksum, checksum) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch('glance_store._drivers.vmware_datastore._Reader') def test_add_with_verifier(self, fake_reader, fake_select_datastore): """Test that the verifier is passed to the _Reader during add.""" verifier = mock.MagicMock(name='mock_verifier') image_id = str(uuid.uuid4()) size = FIVE_KB contents = b"*" * size image = six.BytesIO(contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() self.store.add(image_id, image, size, verifier=verifier) fake_reader.assert_called_with(image, verifier) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch('glance_store._drivers.vmware_datastore._Reader') def test_add_with_verifier_size_zero(self, fake_reader, fake_select_ds): """Test that the verifier is passed to the _ChunkReader during add.""" verifier = mock.MagicMock(name='mock_verifier') image_id = str(uuid.uuid4()) size = FIVE_KB contents = b"*" * size image = six.BytesIO(contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() self.store.add(image_id, image, 0, verifier=verifier) fake_reader.assert_called_with(image, verifier) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_delete(self, mock_api_session): """Test we can delete an existing image in the VMware store.""" loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s?" "dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() vm_store.Store._service_content = mock.Mock() self.store.delete(loc) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_delete_non_existing(self, mock_api_session): """ Test that trying to delete an image that doesn't exist raises an error """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s?" "dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch.object(self.store.session, 'wait_for_task') as mock_task: mock_task.side_effect = vmware_exceptions.FileNotFoundException self.assertRaises(exceptions.NotFound, self.store.delete, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_size(self, mock_api_session): """ Test we can get the size of an existing image in the VMware store """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() image_size = self.store.get_size(loc) self.assertEqual(image_size, 31) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_size_non_existing(self, mock_api_session): """ Test that trying to retrieve an image size that doesn't exist raises an error """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glan" "ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get_size, loc) def test_reader_full(self): content = b'XXX' image = six.BytesIO(content) expected_checksum = hashlib.md5(content).hexdigest() reader = vm_store._Reader(image) ret = reader.read() self.assertEqual(content, ret) self.assertEqual(expected_checksum, reader.checksum.hexdigest()) self.assertEqual(len(content), reader.size) def test_reader_partial(self): content = b'XXX' image = six.BytesIO(content) expected_checksum = hashlib.md5(b'X').hexdigest() reader = vm_store._Reader(image) ret = reader.read(1) self.assertEqual(b'X', ret) self.assertEqual(expected_checksum, reader.checksum.hexdigest()) self.assertEqual(1, reader.size) def test_reader_with_verifier(self): content = b'XXX' image = six.BytesIO(content) verifier = mock.MagicMock(name='mock_verifier') reader = vm_store._Reader(image, verifier) reader.read() verifier.update.assert_called_with(content) def test_sanity_check_api_retry_count(self): """Test that sanity check raises if api_retry_count is <= 0.""" self.store.conf.glance_store.vmware_api_retry_count = -1 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_api_retry_count = 0 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_api_retry_count = 1 try: self.store._sanity_check() except exceptions.BadStoreConfiguration: self.fail() def test_sanity_check_task_poll_interval(self): """Test that sanity check raises if task_poll_interval is <= 0.""" self.store.conf.glance_store.vmware_task_poll_interval = -1 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_task_poll_interval = 0 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_task_poll_interval = 1 try: self.store._sanity_check() except exceptions.BadStoreConfiguration: self.fail() def test_sanity_check_multiple_datastores(self): self.store.conf.glance_store.vmware_api_retry_count = 1 self.store.conf.glance_store.vmware_task_poll_interval = 1 self.store.conf.glance_store.vmware_datastores = ['a:b:0', 'a:d:0'] try: self.store._sanity_check() except exceptions.BadStoreConfiguration: self.fail() def test_parse_datastore_info_and_weight_less_opts(self): datastore = 'a' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) def test_parse_datastore_info_and_weight_invalid_weight(self): datastore = 'a:b:c' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) def test_parse_datastore_info_and_weight_empty_opts(self): datastore = 'a: :0' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) datastore = ':b:0' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) def test_parse_datastore_info_and_weight(self): datastore = 'a:b:100' parts = self.store._parse_datastore_info_and_weight(datastore) self.assertEqual('a', parts[0]) self.assertEqual('b', parts[1]) self.assertEqual('100', parts[2]) def test_parse_datastore_info_and_weight_default_weight(self): datastore = 'a:b' parts = self.store._parse_datastore_info_and_weight(datastore) self.assertEqual('a', parts[0]) self.assertEqual('b', parts[1]) self.assertEqual(0, parts[2]) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_unexpected_status(self, mock_api_session, mock_select_datastore): expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size image = six.BytesIO(expected_contents) self.session = mock.Mock() with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=401) self.assertRaises(exceptions.BackendException, self.store.add, expected_image_id, image, expected_size) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_unexpected_status_no_response_body(self, mock_api_session, mock_select_datastore): expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size image = six.BytesIO(expected_contents) self.session = mock.Mock() with self._mock_http_connection() as HttpConn: HttpConn.return_value = utils.fake_response(status_code=500, no_response_body=True) self.assertRaises(exceptions.BackendException, self.store.add, expected_image_id, image, expected_size) @mock.patch.object(api, 'VMwareAPISession') def test_reset_session(self, mock_api_session): self.store.reset_session() self.assertTrue(mock_api_session.called) @mock.patch.object(api, 'VMwareAPISession') def test_build_vim_cookie_header_active(self, mock_api_session): self.store.session.is_current_session_active = mock.Mock() self.store.session.is_current_session_active.return_value = True self.store._build_vim_cookie_header(True) self.assertFalse(mock_api_session.called) @mock.patch.object(api, 'VMwareAPISession') def test_build_vim_cookie_header_expired(self, mock_api_session): self.store.session.is_current_session_active = mock.Mock() self.store.session.is_current_session_active.return_value = False self.store._build_vim_cookie_header(True) self.assertTrue(mock_api_session.called) @mock.patch.object(api, 'VMwareAPISession') def test_build_vim_cookie_header_expired_noverify(self, mock_api_session): self.store.session.is_current_session_active = mock.Mock() self.store.session.is_current_session_active.return_value = False self.store._build_vim_cookie_header() self.assertFalse(mock_api_session.called) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_add_ioerror(self, mock_api_session, mock_select_datastore): mock_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size image = six.BytesIO(expected_contents) self.session = mock.Mock() with mock.patch('requests.Session.request') as HttpConn: HttpConn.request.side_effect = IOError self.assertRaises(exceptions.BackendException, self.store.add, expected_image_id, image, expected_size) def test_qs_sort_with_literal_question_mark(self): url = 'scheme://example.com/path?key2=val2&key1=val1?sort=true' exp_url = 'scheme://example.com/path?key1=val1%3Fsort%3Dtrue&key2=val2' self.assertEqual(exp_url, utils.sort_url_by_qs_keys(url)) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_build_datastore_weighted_map(self, mock_api_session, mock_ds_obj): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] mock_ds_obj.side_effect = fake_datastore_obj ret = self.store._build_datastore_weighted_map(datastores) ds = ret[200] self.assertEqual('e', ds[0].datacenter.path) self.assertEqual('f', ds[0].name) ds = ret[100] self.assertEqual(2, len(ds)) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_build_datastore_weighted_map_equal_weight(self, mock_api_session, mock_ds_obj): datastores = ['a:b:200', 'a:b:200'] mock_ds_obj.side_effect = fake_datastore_obj ret = self.store._build_datastore_weighted_map(datastores) ds = ret[200] self.assertEqual(2, len(ds)) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_build_datastore_weighted_map_empty_list(self, mock_api_session, mock_ds_ref): datastores = [] ret = self.store._build_datastore_weighted_map(datastores) self.assertEqual({}, ret) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_insufficient_freespace(self, mock_get_freespace, mock_ds_ref): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] image_size = 10 self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [5, 5, 5] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp self.assertRaises(exceptions.StorageFull, self.store.select_datastore, image_size) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_insufficient_fs_one_ds(self, mock_get_freespace, mock_ds_ref): # Tests if fs is updated with just one datastore. datastores = ['a:b:100'] image_size = 10 self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [5] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp self.assertRaises(exceptions.StorageFull, self.store.select_datastore, image_size) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_equal_freespace(self, mock_get_freespace, mock_ds_obj): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] image_size = 10 mock_ds_obj.side_effect = fake_datastore_obj self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [11, 11, 11] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp ds = self.store.select_datastore(image_size) self.assertEqual('e', ds.datacenter.path) self.assertEqual('f', ds.name) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_contention(self, mock_get_freespace, mock_ds_obj): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] image_size = 10 mock_ds_obj.side_effect = fake_datastore_obj self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [5, 11, 12] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp ds = self.store.select_datastore(image_size) self.assertEqual('c', ds.datacenter.path) self.assertEqual('d', ds.name) def test_select_datastore_empty_list(self): datastores = [] self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) self.assertRaises(exceptions.StorageFull, self.store.select_datastore, 10) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_datacenter_ref(self, mock_api_session): datacenter_path = 'Datacenter1' self.store._get_datacenter(datacenter_path) self.store.session.invoke_api.assert_called_with( self.store.session.vim, 'FindByInventoryPath', self.store.session.vim.service_content.searchIndex, inventoryPath=datacenter_path) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_redirect(self, mock_api_session): # Add two layers of redirects to the response stack, which will # return the default 200 OK with the expected data after resolving # both redirects. redirect1 = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} redirect2 = {"location": "https://example.com?dsName=ds2&dcPath=dc2"} responses = [utils.fake_response(), utils.fake_response(status_code=302, headers=redirect1), utils.fake_response(status_code=301, headers=redirect2)] def getresponse(*args, **kwargs): return responses.pop() expected_image_size = 31 expected_returns = ['I am a teapot, short and stout\n'] loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.side_effect = getresponse (image_file, image_size) = self.store.get(loc) self.assertEqual(expected_image_size, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_max_redirects(self, mock_api_session): redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} responses = ([utils.fake_response(status_code=302, headers=redirect)] * (vm_store.MAX_REDIRECTS + 1)) def getresponse(*args, **kwargs): return responses.pop() loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.side_effect = getresponse self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_redirect_invalid(self, mock_api_session): redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=307, headers=redirect) self.assertRaises(exceptions.BadStoreUri, self.store.get, loc) glance_store-0.23.0/glance_store/tests/functional/0000775000175100017510000000000013230237776022250 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/filesystem/0000775000175100017510000000000013230237776024434 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/filesystem/__init__.py0000666000175100017510000000000013230237440026521 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/filesystem/test_functional_filesystem.py0000666000175100017510000000251713230237440032446 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import shutil import tempfile from oslo_config import cfg from glance_store.tests.functional import base CONF = cfg.CONF logging.basicConfig() class TestFilesystem(base.BaseFunctionalTests): def __init__(self, *args, **kwargs): super(TestFilesystem, self).__init__('file', *args, **kwargs) def setUp(self): self.tmp_image_dir = tempfile.mkdtemp(prefix='glance_store_') CONF.set_override('filesystem_store_datadir', self.tmp_image_dir, group='glance_store') super(TestFilesystem, self).setUp() def tearDown(self): shutil.rmtree(self.tmp_image_dir) super(TestFilesystem, self).tearDown() glance_store-0.23.0/glance_store/tests/functional/__init__.py0000666000175100017510000000000013230237440024335 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/base.py0000666000175100017510000000632013230237440023523 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. try: import configparser as ConfigParser except ImportError: from six.moves import configparser as ConfigParser from io import BytesIO import glance_store from oslo_config import cfg import testtools CONF = cfg.CONF UUID1 = '961973d8-3360-4364-919e-2c197825dbb4' UUID2 = 'e03cf3b1-3070-4497-a37d-9703edfb615b' UUID3 = '0d7f89b2-e236-45e9-b081-561cd3102e92' UUID4 = '165e9681-ea56-46b0-a84c-f148c752ef8b' IMAGE_BITS = b'I am a bootable image, I promise' class Base(testtools.TestCase): def __init__(self, driver_name, *args, **kwargs): super(Base, self).__init__(*args, **kwargs) self.driver_name = driver_name self.config = ConfigParser.RawConfigParser() self.config.read('functional_testing.conf') glance_store.register_opts(CONF) def setUp(self): super(Base, self).setUp() stores = self.config.get('tests', 'stores').split(',') if self.driver_name not in stores: self.skipTest('Not running %s store tests' % self.driver_name) CONF.set_override('stores', [self.driver_name], group='glance_store') CONF.set_override('default_store', self.driver_name, group='glance_store' ) glance_store.create_stores() self.store = glance_store.backend._load_store(CONF, self.driver_name) self.store.configure() class BaseFunctionalTests(Base): def test_add(self): image_file = BytesIO(IMAGE_BITS) loc, written, _, _ = self.store.add(UUID1, image_file, len(IMAGE_BITS)) self.assertEqual(len(IMAGE_BITS), written) def test_delete(self): image_file = BytesIO(IMAGE_BITS) loc, written, _, _ = self.store.add(UUID2, image_file, len(IMAGE_BITS)) location = glance_store.location.get_location_from_uri(loc) self.store.delete(location) def test_get_size(self): image_file = BytesIO(IMAGE_BITS) loc, written, _, _ = self.store.add(UUID3, image_file, len(IMAGE_BITS)) location = glance_store.location.get_location_from_uri(loc) size = self.store.get_size(location) self.assertEqual(len(IMAGE_BITS), size) def test_get(self): image_file = BytesIO(IMAGE_BITS) loc, written, _, _ = self.store.add(UUID3, image_file, len(IMAGE_BITS)) location = glance_store.location.get_location_from_uri(loc) image, size = self.store.get(location) self.assertEqual(len(IMAGE_BITS), size) data = b'' for chunk in image: data += chunk self.assertEqual(IMAGE_BITS, data) glance_store-0.23.0/glance_store/tests/functional/swift/0000775000175100017510000000000013230237776023404 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/swift/test_functional_swift.py0000666000175100017510000000632113230237440030363 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import random import time from oslo_config import cfg import swiftclient from glance_store.tests.functional import base CONF = cfg.CONF logging.basicConfig() class TestSwift(base.BaseFunctionalTests): def __init__(self, *args, **kwargs): super(TestSwift, self).__init__('swift', *args, **kwargs) self.auth = self.config.get('admin', 'auth_address') user = self.config.get('admin', 'user') self.key = self.config.get('admin', 'key') self.region = self.config.get('admin', 'region') self.tenant, self.username = user.split(':') CONF.set_override('swift_store_user', user, group='glance_store') CONF.set_override('swift_store_auth_address', self.auth, group='glance_store') CONF.set_override('swift_store_key', self.key, group='glance_store') CONF.set_override('swift_store_create_container_on_put', True, group='glance_store') CONF.set_override('swift_store_region', self.region, group='glance_store') CONF.set_override('swift_store_create_container_on_put', True, group='glance_store') def setUp(self): self.container = ("glance_store_container_" + str(int(random.random() * 1000))) CONF.set_override('swift_store_container', self.container, group='glance_store') super(TestSwift, self).setUp() def tearDown(self): for x in range(1, 4): time.sleep(x) try: swift = swiftclient.client.Connection(auth_version='2', user=self.username, key=self.key, tenant_name=self.tenant, authurl=self.auth) _, objects = swift.get_container(self.container) for obj in objects: swift.delete_object(self.container, obj.get('name')) swift.delete_container(self.container) except Exception: if x < 3: pass else: raise else: break super(TestSwift, self).tearDown() glance_store-0.23.0/glance_store/tests/functional/swift/__init__.py0000666000175100017510000000000013230237440025471 0ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/hooks/0000775000175100017510000000000013230237776023373 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/tests/functional/hooks/post_test_hook.sh0000777000175100017510000000462013230237440026766 0ustar zuulzuul00000000000000#!/bin/bash -xe # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside post_test_hook function in devstack gate. set -xe export GLANCE_STORE_DIR="$BASE/new/glance_store" SCRIPTS_DIR="/usr/os-testr-env/bin/" GLANCE_STORE_DRIVER=${1:-swift} function generate_test_logs { local path="$1" # Compress all $path/*.txt files and move the directories holding those # files to /opt/stack/logs. Files with .log suffix have their # suffix changed to .txt (so browsers will know to open the compressed # files and not download them). if [ -d "$path" ] then sudo find $path -iname "*.log" -type f -exec mv {} {}.txt \; -exec gzip -9 {}.txt \; sudo mv $path/* /opt/stack/logs/ fi } function generate_testr_results { if [ -f .testrepository/0 ]; then # Give job user rights to access tox logs sudo -H -u "$owner" chmod o+rw . sudo -H -u "$owner" chmod o+rw -R .testrepository if [[ -f ".testrepository/0" ]] ; then "subunit-1to2" < .testrepository/0 > ./testrepository.subunit $SCRIPTS_DIR/subunit2html ./testrepository.subunit testr_results.html gzip -9 ./testrepository.subunit gzip -9 ./testr_results.html sudo mv ./*.gz /opt/stack/logs/ fi fi } owner=jenkins # Get admin credentials cd $BASE/new/devstack source openrc admin admin # Go to the glance_store dir cd $GLANCE_STORE_DIR sudo chown -R $owner:stack $GLANCE_STORE_DIR sudo cp $GLANCE_STORE_DIR/functional_testing.conf.sample $GLANCE_STORE_DIR/functional_testing.conf # Set admin creds iniset $GLANCE_STORE_DIR/functional_testing.conf admin key $ADMIN_PASSWORD # Run tests echo "Running glance_store functional test suite" set +e # Preserve env for OS_ credentials sudo -E -H -u jenkins tox -e functional-$GLANCE_STORE_DRIVER EXIT_CODE=$? set -e # Collect and parse result generate_testr_results exit $EXIT_CODE glance_store-0.23.0/glance_store/tests/functional/hooks/gate_hook.sh0000777000175100017510000000175413230237440025667 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside gate_hook function in devstack gate. # NOTE(NiallBunting) The store to test is passed in here from the # project config. GLANCE_STORE_DRIVER=${1:-swift} ENABLED_SERVICES+=",key,glance" case $GLANCE_STORE_DRIVER in swift) ENABLED_SERVICES+=",s-proxy,s-account,s-container,s-object," ;; esac export GLANCE_STORE_DRIVER export ENABLED_SERVICES $BASE/new/devstack-gate/devstack-vm-gate.sh glance_store-0.23.0/glance_store/tests/utils.py0000666000175100017510000000514713230237440021615 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from six.moves import urllib import requests def sort_url_by_qs_keys(url): # NOTE(kragniz): this only sorts the keys of the query string of a url. # For example, an input of '/v2/tasks?sort_key=id&sort_dir=asc&limit=10' # returns '/v2/tasks?limit=10&sort_dir=asc&sort_key=id'. This is to prevent # non-deterministic ordering of the query string causing problems with unit # tests. parsed = urllib.parse.urlparse(url) # In python2.6, for arbitrary url schemes, query string # is not parsed from url. http://bugs.python.org/issue9374 path = parsed.path query = parsed.query if not query: path, query = parsed.path.split('?', 1) queries = urllib.parse.parse_qsl(query, True) sorted_query = sorted(queries, key=lambda x: x[0]) encoded_sorted_query = urllib.parse.urlencode(sorted_query, True) url_parts = (parsed.scheme, parsed.netloc, path, parsed.params, encoded_sorted_query, parsed.fragment) return urllib.parse.urlunparse(url_parts) class FakeHTTPResponse(object): def __init__(self, status=200, headers=None, data=None, *args, **kwargs): data = data or 'I am a teapot, short and stout\n' self.data = six.StringIO(data) self.read = self.data.read self.status = status self.headers = headers or {'content-length': len(data)} if not kwargs.get('no_response_body', False): self.body = None def getheader(self, name, default=None): return self.headers.get(name.lower(), default) def getheaders(self): return self.headers or {} def read(self, amt): self.data.read(amt) def release_conn(self): pass def close(self): self.data.close() def fake_response(status_code=200, headers=None, content=None, **kwargs): r = requests.models.Response() r.status_code = status_code r.headers = headers or {} r.raw = FakeHTTPResponse(status_code, headers, content, kwargs) return r glance_store-0.23.0/glance_store/tests/fakes.py0000666000175100017510000000150213230237440021535 0ustar zuulzuul00000000000000# Copyright 2014 Red hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from glance_store import driver from glance_store import exceptions class UnconfigurableStore(driver.Store): def configure(self, re_raise_bsc=False): raise exceptions.BadStoreConfiguration() glance_store-0.23.0/glance_store/locale/0000775000175100017510000000000013230237776020203 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/locale/en_GB/0000775000175100017510000000000013230237776021155 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/locale/en_GB/LC_MESSAGES/0000775000175100017510000000000013230237776022742 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/locale/en_GB/LC_MESSAGES/glance_store.po0000666000175100017510000002467213230237440025750 0ustar zuulzuul00000000000000# Andi Chandler , 2016. #zanata # Andreas Jaeger , 2016. #zanata # Andi Chandler , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: glance_store 0.21.1.dev17\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-09-22 13:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-10-05 01:01+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en-GB\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "" "\n" "List of enabled Glance stores.\n" "\n" "Register the storage backends to use for storing disk images\n" "as a comma separated list. The default stores enabled for\n" "storing disk images with Glance are ``file`` and ``http``.\n" "\n" "Possible values:\n" " * A comma separated list that could include:\n" " * file\n" " * http\n" " * swift\n" " * rbd\n" " * sheepdog\n" " * cinder\n" " * vmware\n" "\n" "Related Options:\n" " * default_store\n" "\n" msgstr "" "\n" "List of enabled Glance stores.\n" "\n" "Register the storage backends to use for storing disk images\n" "as a comma separated list. The default stores enabled for\n" "storing disk images with Glance are ``file`` and ``http``.\n" "\n" "Possible values:\n" " * A comma separated list that could include:\n" " * file\n" " * http\n" " * swift\n" " * rbd\n" " * sheepdog\n" " * cinder\n" " * vmware\n" "\n" "Related Options:\n" " * default_store\n" "\n" msgid "" "\n" "Minimum interval in seconds to execute updating dynamic storage\n" "capabilities based on current backend status.\n" "\n" "Provide an integer value representing time in seconds to set the\n" "minimum interval before an update of dynamic storage capabilities\n" "for a storage backend can be attempted. Setting\n" "``store_capabilities_update_min_interval`` does not mean updates\n" "occur periodically based on the set interval. Rather, the update\n" "is performed at the elapse of this interval set, if an operation\n" "of the store is triggered.\n" "\n" "By default, this option is set to zero and is disabled. Provide an\n" "integer value greater than zero to enable this option.\n" "\n" "NOTE: For more information on store capabilities and their updates,\n" "please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo/" "store-capabilities.html\n" "\n" "For more information on setting up a particular store in your\n" "deployment and help with the usage of this feature, please contact\n" "the storage driver maintainers listed here:\n" "http://docs.openstack.org/developer/glance_store/drivers/index.html\n" "\n" "Possible values:\n" " * Zero\n" " * Positive integer\n" "\n" "Related Options:\n" " * None\n" "\n" msgstr "" "\n" "Minimum interval in seconds to execute updating dynamic storage\n" "capabilities based on current backend status.\n" "\n" "Provide an integer value representing time in seconds to set the\n" "minimum interval before an update of dynamic storage capabilities\n" "for a storage backend can be attempted. Setting\n" "``store_capabilities_update_min_interval`` does not mean updates\n" "occur periodically based on the set interval. Rather, the update\n" "is performed at the elapse of this interval set, if an operation\n" "of the store is triggered.\n" "\n" "By default, this option is set to zero and is disabled. Provide an\n" "integer value greater than zero to enable this option.\n" "\n" "NOTE: For more information on store capabilities and their updates,\n" "please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo/" "store-capabilities.html\n" "\n" "For more information on setting up a particular store in your\n" "deployment and help with the usage of this feature, please contact\n" "the storage driver maintainers listed here:\n" "http://docs.openstack.org/developer/glance_store/drivers/index.html\n" "\n" "Possible values:\n" " * Zero\n" " * Positive integer\n" "\n" "Related Options:\n" " * None\n" "\n" msgid "" "\n" "The default scheme to use for storing images.\n" "\n" "Provide a string value representing the default scheme to use for\n" "storing images. If not set, Glance uses ``file`` as the default\n" "scheme to store images with the ``file`` store.\n" "\n" "NOTE: The value given for this configuration option must be a valid\n" "scheme for a store registered with the ``stores`` configuration\n" "option.\n" "\n" "Possible values:\n" " * file\n" " * filesystem\n" " * http\n" " * https\n" " * swift\n" " * swift+http\n" " * swift+https\n" " * swift+config\n" " * rbd\n" " * sheepdog\n" " * cinder\n" " * vsphere\n" "\n" "Related Options:\n" " * stores\n" "\n" msgstr "" "\n" "The default scheme to use for storing images.\n" "\n" "Provide a string value representing the default scheme to use for\n" "storing images. If not set, Glance uses ``file`` as the default\n" "scheme to store images with the ``file`` store.\n" "\n" "NOTE: The value given for this configuration option must be a valid\n" "scheme for a store registered with the ``stores`` configuration\n" "option.\n" "\n" "Possible values:\n" " * file\n" " * filesystem\n" " * http\n" " * https\n" " * swift\n" " * swift+http\n" " * swift+https\n" " * swift+config\n" " * rbd\n" " * sheepdog\n" " * cinder\n" " * vsphere\n" "\n" "Related Options:\n" " * stores\n" "\n" #, python-format msgid "" "A bad metadata structure was returned from the %(driver)s storage driver: " "%(metadata)s. %(e)s." msgstr "" "A bad metadata structure was returned from the %(driver)s storage driver: " "%(metadata)s. %(e)s." msgid "An unknown exception occurred" msgstr "An unknown exception occurred" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "Auth service at URL %(url)s not found." msgid "Authorization failed." msgstr "Authorisation failed." msgid "" "Configuration for store failed. Adding images to this store is disabled." msgstr "" "Configuration for store failed. Adding images to this store is disabled." #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "Connect error/bad request to Auth service at URL %(url)s." msgid "Data supplied was not valid." msgstr "Data supplied was not valid." msgid "Deleting images from this store is not supported." msgstr "Deleting images from this store is not supported." #, python-format msgid "Driver %(driver_name)s could not be loaded." msgstr "Driver %(driver_name)s could not be loaded." #, python-format msgid "Error: cooperative_iter exception %s" msgstr "Error: cooperative_iter exception %s" #, python-format msgid "Failed to configure store correctly: %s Disabling add method." msgstr "Failed to configure store correctly: %s Disabling add method." msgid "Getting images from this store is not supported." msgstr "Getting images from this store is not supported." #, python-format msgid "" "Getting images randomly from this store is not supported. Offset: " "%(offset)s, length: %(chunk_size)s" msgstr "" "Getting images randomly from this store is not supported. Offset: " "%(offset)s, length: %(chunk_size)s" #, python-format msgid "Image %(image)s already exists" msgstr "Image %(image)s already exists" #, python-format msgid "Image %(image)s not found" msgstr "Image %(image)s not found" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "Maximum redirects (%(redirects)s) was exceeded." #, python-format msgid "Missing required credential: %(required)s" msgstr "Missing required credential: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgid "Permission to write image storage media denied." msgstr "Permission to write image storage media denied." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "Redirecting to %(uri)s for authorisation." msgid "Remote server where the image is present is unavailable." msgstr "Remote server where the image is present is unavailable." msgid "Response from Keystone does not contain a Glance endpoint." msgstr "Response from Keystone does not contain a Glance endpoint." msgid "Skipping store.set_acls... not implemented." msgstr "Skipping store.set_acls... not implemented." #, python-format msgid "" "Store %(store_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "Store %(store_name)s could not be configured correctly. Reason: %(reason)s" #, python-format msgid "Store for scheme %s not found" msgstr "Store for scheme %s not found" #, python-format msgid "The Store URI was malformed: %(uri)s" msgstr "The Store URI was malformed: %(uri)s" msgid "The image cannot be deleted because it has snapshot(s)." msgstr "The image cannot be deleted because it has snapshot(s)." msgid "" "The image cannot be deleted because it is in use through the backend store " "outside of Glance." msgstr "" "The image cannot be deleted because it is in use through the backend store " "outside of Glance." #, python-format msgid "" "The image metadata key %(key)s has an invalid type of %(type)s. Only dict, " "list, and unicode are supported." msgstr "" "The image metadata key %(key)s has an invalid type of %(type)s. Only dict, " "list, and unicode are supported." #, python-format msgid "" "The storage driver %(driver)s returned invalid metadata %(metadata)s. This " "must be a dictionary type" msgstr "" "The storage driver %(driver)s returned invalid metadata %(metadata)s. This " "must be a dictionary type" msgid "There is not enough disk space on the image storage media." msgstr "There is not enough disk space on the image storage media." #, python-format msgid "Unknown scheme '%(scheme)s' found in URI" msgstr "Unknown scheme '%(scheme)s' found in URI" msgid "You are not authenticated." msgstr "You are not authenticated." msgid "You are not authorized to complete this action." msgstr "You are not authorised to complete this action." glance_store-0.23.0/glance_store/locale/ko_KR/0000775000175100017510000000000013230237776021210 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/locale/ko_KR/LC_MESSAGES/0000775000175100017510000000000013230237776022775 5ustar zuulzuul00000000000000glance_store-0.23.0/glance_store/locale/ko_KR/LC_MESSAGES/glance_store.po0000666000175100017510000001432413230237440025774 0ustar zuulzuul00000000000000# Andreas Jaeger , 2016. #zanata # Jongwoo Han , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: glance_store 0.21.1.dev8\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-08-01 16:22+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-08-12 04:24+0000\n" "Last-Translator: Jongwoo Han \n" "Language-Team: Korean (South Korea)\n" "Language: ko-KR\n" "X-Generator: Zanata 3.9.6\n" "Plural-Forms: nplurals=1; plural=0\n" #, python-format msgid "" "A bad metadata structure was returned from the %(driver)s storage driver: " "%(metadata)s. %(e)s." msgstr "" "스토리지 드라이버 %(driver)s 가 잘못된 메타데이터 structure 를 반환했습니" "다 : %(metadata)s. %(e)s." msgid "An unknown exception occurred" msgstr "알 수 없는 예외가 발생했음" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "URL %(url)s의 Auth 서비스를 찾을 수 없습니다." msgid "Authorization failed." msgstr "권한 부여에 실패했습니다. " msgid "" "Configuration for store failed. Adding images to this store is disabled." msgstr "" "저장소 설정이 실패했습니다. 이 저장소에 이미지를 추가하는 기능은 꺼집니다." #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "연결 오류/URL %(url)s에서 Auth 서비스에 대한 잘못된 요청입니다." msgid "Data supplied was not valid." msgstr "제공된 데이터가 올바르지 않습니다." msgid "Deleting images from this store is not supported." msgstr "이 저장소에서 이미지를 지우는 것은 지원되지 않습니다." #, python-format msgid "Driver %(driver_name)s could not be loaded." msgstr " %(driver_name)s 드라이버를 로드하지 못했습니다." #, python-format msgid "Error: cooperative_iter exception %s" msgstr "오류: cooperative_iter 예외 %s" #, python-format msgid "Failed to configure store correctly: %s Disabling add method." msgstr "" "저장소를 제대로 설정하지 못했습니다. : %s 에서 add method를 제외합니다." msgid "Getting images from this store is not supported." msgstr "이 저장소에서 이미지를 가져오는 것은 지원되지 않습니다." #, python-format msgid "" "Getting images randomly from this store is not supported. Offset: " "%(offset)s, length: %(chunk_size)s" msgstr "" "이 저장소에서 이미지를 랜덤하게 가져오는 것은 지원되지 않습니다. 위치: " "%(offset)s, 길이: %(chunk_size)s" #, python-format msgid "Image %(image)s already exists" msgstr "이미지 %(image)s 가 이미 있습니다." #, python-format msgid "Image %(image)s not found" msgstr "이미지 %(image)s 가 없습니다." #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "인증 전략이 올바르지 않음. 예상: \"%(expected)s\", 수신: \"%(received)s\"" #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "최대 경로 재지정(%(redirects)s)에 도달했습니다." #, python-format msgid "Missing required credential: %(required)s" msgstr "필수 신임 정보 누락: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "다중 '이미지' 서비스가 %(region)s 리젼에 일치합니다. 이는 일반적으로 리젼이 " "필요하지만 아직 리젼을 제공하지 않은 경우 발생합니다." msgid "Permission to write image storage media denied." msgstr "이미지 스토리지 미디어에 쓰기 권한이 거부되었습니다." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "권한 부여를 위해 %(uri)s(으)로 경로 재지정 중입니다." msgid "Remote server where the image is present is unavailable." msgstr "이 이미지가 있는 원격 서버에 접속할 수 없습니다." msgid "Response from Keystone does not contain a Glance endpoint." msgstr "Keystone의 응답에 Glance 엔드포인트가 들어있지 않습니다." msgid "Skipping store.set_acls... not implemented." msgstr " store.set_acls... 는 구현되지 않았으므로 skip합니다." #, python-format msgid "" "Store %(store_name)s could not be configured correctly. Reason: %(reason)s" msgstr "저장소 %(store_name)s 가 제대로 설정되지 않습니다. 원인: %(reason)s" #, python-format msgid "Store for scheme %s not found" msgstr "%s 스키마에 대한 저장소를 찾을 수 없음" #, python-format msgid "The Store URI was malformed: %(uri)s" msgstr "저장소 URI가 잘못된 형식입니다: %(uri)s" msgid "The image cannot be deleted because it has snapshot(s)." msgstr "스냅샷이 있으므로 이 이미지를 지울 수 없습니다." msgid "" "The image cannot be deleted because it is in use through the backend store " "outside of Glance." msgstr "" "이미지는 Glance가 사용하지 않는 백엔드 저장소에서 사용중이므로 삭제할 수 없습" "니다." #, python-format msgid "" "The image metadata key %(key)s has an invalid type of %(type)s. Only dict, " "list, and unicode are supported." msgstr "" "이미지 메타데이터 키 %(key)s 가 잘못된 타입인 %(type)s 타입입니다. dict, " "list, unicode만이 지원됩니다." #, python-format msgid "" "The storage driver %(driver)s returned invalid metadata %(metadata)s. This " "must be a dictionary type" msgstr "" "스토리지 드라이버 %(driver)s 가 잘못된 메타데이터 %(metadata)s 를 반환했습니" "다. 이 값은 반드시 dict 타입이어야 합니다." msgid "There is not enough disk space on the image storage media." msgstr "이미지 스토리지 매체에 충분한 저장 공간이 부족합니다." #, python-format msgid "Unknown scheme '%(scheme)s' found in URI" msgstr "URI에 알 수 없는 스킴인 '%(scheme)s' 가 있습니다." msgid "You are not authenticated." msgstr "인증되지 않은 사용자입니다." msgid "You are not authorized to complete this action." msgstr "이 조치를 완료할 권한이 없습니다. " glance_store-0.23.0/glance_store/driver.py0000666000175100017510000001415713230237440020607 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base class for all storage backends""" import logging from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils import importutils from oslo_utils import units from glance_store import capabilities from glance_store import exceptions from glance_store.i18n import _ LOG = logging.getLogger(__name__) class Store(capabilities.StoreCapability): OPTIONS = None READ_CHUNKSIZE = 4 * units.Mi # 4M WRITE_CHUNKSIZE = READ_CHUNKSIZE def __init__(self, conf): """ Initialize the Store """ super(Store, self).__init__() self.conf = conf self.store_location_class = None try: if self.OPTIONS is not None: self.conf.register_opts(self.OPTIONS, group='glance_store') except cfg.DuplicateOptError: pass def configure(self, re_raise_bsc=False): """ Configure the store to use the stored configuration options and initialize capabilities based on current configuration. Any store that needs special configuration should implement this method. """ try: self.configure_add() except exceptions.BadStoreConfiguration as e: self.unset_capabilities(capabilities.BitMasks.WRITE_ACCESS) msg = (_(u"Failed to configure store correctly: %s " "Disabling add method.") % encodeutils.exception_to_unicode(e)) LOG.warning(msg) if re_raise_bsc: raise finally: self.update_capabilities() def get_schemes(self): """ Returns a tuple of schemes which this store can handle. """ raise NotImplementedError def get_store_location_class(self): """ Returns the store location class that is used by this store. """ if not self.store_location_class: class_name = "%s.StoreLocation" % (self.__module__) LOG.debug("Late loading location class %s", class_name) self.store_location_class = importutils.import_class(class_name) return self.store_location_class def configure_add(self): """ This is like `configure` except that it's specifically for configuring the store to accept objects. If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration`. """ # NOTE(flaper87): This should probably go away @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance.exceptions.NotFound` if image does not exist """ raise NotImplementedError def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ raise NotImplementedError @capabilities.check def add(self, image_id, image_file, image_size, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :retval: tuple of URL in backing store, bytes written, checksum and a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already existed """ raise NotImplementedError @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ raise NotImplementedError def set_acls(self, location, public=False, read_tenants=None, write_tenants=None, context=None): """ Sets the read and write access control list for an image in the backend store. :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :param public: A boolean indicating whether the image should be public. :param read_tenants: A list of tenant strings which should be granted read access for an image. :param write_tenants: A list of tenant strings which should be granted write access for an image. """ raise NotImplementedError glance_store-0.23.0/glance_store/backend.py0000666000175100017510000003727613230237440020712 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging from oslo_config import cfg from oslo_utils import encodeutils import six from stevedore import driver from stevedore import extension from glance_store import capabilities from glance_store import exceptions from glance_store.i18n import _ from glance_store import location CONF = cfg.CONF LOG = logging.getLogger(__name__) _STORE_OPTS = [ cfg.ListOpt('stores', default=['file', 'http'], help=_(""" List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are ``file`` and ``http``. Possible values: * A comma separated list that could include: * file * http * swift * rbd * sheepdog * cinder * vmware Related Options: * default_store """)), cfg.StrOpt('default_store', default='file', choices=('file', 'filesystem', 'http', 'https', 'swift', 'swift+http', 'swift+https', 'swift+config', 'rbd', 'sheepdog', 'cinder', 'vsphere'), help=_(""" The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses ``file`` as the default scheme to store images with the ``file`` store. NOTE: The value given for this configuration option must be a valid scheme for a store registered with the ``stores`` configuration option. Possible values: * file * filesystem * http * https * swift * swift+http * swift+https * swift+config * rbd * sheepdog * cinder * vsphere Related Options: * stores """)), cfg.IntOpt('store_capabilities_update_min_interval', default=0, min=0, help=_(""" Minimum interval in seconds to execute updating dynamic storage capabilities based on current backend status. Provide an integer value representing time in seconds to set the minimum interval before an update of dynamic storage capabilities for a storage backend can be attempted. Setting ``store_capabilities_update_min_interval`` does not mean updates occur periodically based on the set interval. Rather, the update is performed at the elapse of this interval set, if an operation of the store is triggered. By default, this option is set to zero and is disabled. Provide an integer value greater than zero to enable this option. NOTE: For more information on store capabilities and their updates, please visit: https://specs.openstack.org/openstack/glance-specs/\ specs/kilo/store-capabilities.html For more information on setting up a particular store in your deployment and help with the usage of this feature, please contact the storage driver maintainers listed here: http://docs.openstack.org/developer/glance_store/drivers/index.html Possible values: * Zero * Positive integer Related Options: * None """)), ] _STORE_CFG_GROUP = 'glance_store' def _list_opts(): driver_opts = [] mgr = extension.ExtensionManager('glance_store.drivers') # NOTE(zhiyan): Handle available drivers entry_points provided # NOTE(nikhil): Return a sorted list of drivers to ensure that the sample # configuration files generated by oslo config generator retain the order # in which the config opts appear across different runs. If this order of # config opts is not preserved, some downstream packagers may see a long # diff of the changes though not relevant as only order has changed. See # some more details at bug 1619487. drivers = sorted([ext.name for ext in mgr]) handled_drivers = [] # Used to handle backwards-compatible entries for store_entry in drivers: driver_cls = _load_store(None, store_entry, False) if driver_cls and driver_cls not in handled_drivers: if getattr(driver_cls, 'OPTIONS', None) is not None: driver_opts += driver_cls.OPTIONS handled_drivers.append(driver_cls) # NOTE(zhiyan): This separated approach could list # store options before all driver ones, which easier # to read and configure by operator. return ([(_STORE_CFG_GROUP, _STORE_OPTS)] + [(_STORE_CFG_GROUP, driver_opts)]) def register_opts(conf): opts = _list_opts() for group, opt_list in opts: LOG.debug("Registering options for group %s" % group) for opt in opt_list: conf.register_opt(opt, group=group) class Indexable(object): """Indexable for file-like objs iterators Wrapper that allows an iterator or filelike be treated as an indexable data structure. This is required in the case where the return value from Store.get() is passed to Store.add() when adding a Copy-From image to a Store where the client library relies on eventlet GreenSockets, in which case the data to be written is indexed over. """ def __init__(self, wrapped, size): """ Initialize the object :param wrappped: the wrapped iterator or filelike. :param size: the size of data available """ self.wrapped = wrapped self.size = int(size) if size else (wrapped.len if hasattr(wrapped, 'len') else 0) self.cursor = 0 self.chunk = None def __iter__(self): """ Delegate iteration to the wrapped instance. """ for self.chunk in self.wrapped: yield self.chunk def __getitem__(self, i): """ Index into the next chunk (or previous chunk in the case where the last data returned was not fully consumed). :param i: a slice-to-the-end """ start = i.start if isinstance(i, slice) else i if start < self.cursor: return self.chunk[(start - self.cursor):] self.chunk = self.another() if self.chunk: self.cursor += len(self.chunk) return self.chunk def another(self): """Implemented by subclasses to return the next element.""" raise NotImplementedError def getvalue(self): """ Return entire string value... used in testing """ return self.wrapped.getvalue() def __len__(self): """ Length accessor. """ return self.size def _load_store(conf, store_entry, invoke_load=True): try: LOG.debug("Attempting to import store %s", store_entry) mgr = driver.DriverManager('glance_store.drivers', store_entry, invoke_args=[conf], invoke_on_load=invoke_load) return mgr.driver except RuntimeError as e: LOG.warning("Failed to load driver %(driver)s. The " "driver will be disabled" % dict(driver=str([driver, e]))) def _load_stores(conf): for store_entry in set(conf.glance_store.stores): try: # FIXME(flaper87): Don't hide BadStoreConfiguration # exceptions. These exceptions should be propagated # to the user of the library. store_instance = _load_store(conf, store_entry) if not store_instance: continue yield (store_entry, store_instance) except exceptions.BadStoreConfiguration: continue def create_stores(conf=CONF): """ Registers all store modules and all schemes from the given config. Duplicates are not re-registered. """ store_count = 0 for (store_entry, store_instance) in _load_stores(conf): try: schemes = store_instance.get_schemes() store_instance.configure(re_raise_bsc=False) except NotImplementedError: continue if not schemes: raise exceptions.BackendException('Unable to register store %s. ' 'No schemes associated with it.' % store_entry) else: LOG.debug("Registering store %s with schemes %s", store_entry, schemes) scheme_map = {} loc_cls = store_instance.get_store_location_class() for scheme in schemes: scheme_map[scheme] = { 'store': store_instance, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_map(scheme_map) store_count += 1 return store_count def verify_default_store(): scheme = CONF.glance_store.default_store try: get_store_from_scheme(scheme) except exceptions.UnknownScheme: msg = _("Store for scheme %s not found") % scheme raise RuntimeError(msg) def get_known_schemes(): """Returns list of known schemes.""" return location.SCHEME_TO_CLS_MAP.keys() def get_store_from_scheme(scheme): """ Given a scheme, return the appropriate store object for handling that scheme. """ if scheme not in location.SCHEME_TO_CLS_MAP: raise exceptions.UnknownScheme(scheme=scheme) scheme_info = location.SCHEME_TO_CLS_MAP[scheme] store = scheme_info['store'] if not store.is_capable(capabilities.BitMasks.DRIVER_REUSABLE): # Driver instance isn't stateless so it can't # be reused safely and need recreation. store_entry = scheme_info['store_entry'] store = _load_store(store.conf, store_entry, invoke_load=True) store.configure() try: scheme_map = {} loc_cls = store.get_store_location_class() for scheme in store.get_schemes(): scheme_map[scheme] = { 'store': store, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_map(scheme_map) except NotImplementedError: scheme_info['store'] = store return store def get_store_from_uri(uri): """ Given a URI, return the store object that would handle operations on the URI. :param uri: URI to analyze """ scheme = uri[0:uri.find('/') - 1] return get_store_from_scheme(scheme) def get_from_backend(uri, offset=0, chunk_size=None, context=None): """Yields chunks of data from backend specified by uri.""" loc = location.get_location_from_uri(uri, conf=CONF) store = get_store_from_uri(uri) return store.get(loc, offset=offset, chunk_size=chunk_size, context=context) def get_size_from_backend(uri, context=None): """Retrieves image size from backend specified by uri.""" loc = location.get_location_from_uri(uri, conf=CONF) store = get_store_from_uri(uri) return store.get_size(loc, context=context) def delete_from_backend(uri, context=None): """Removes chunks of data from backend specified by uri.""" loc = location.get_location_from_uri(uri, conf=CONF) store = get_store_from_uri(uri) return store.delete(loc, context=context) def get_store_from_location(uri): """ Given a location (assumed to be a URL), attempt to determine the store from the location. We use here a simple guess that the scheme of the parsed URL is the store... :param uri: Location to check for the store """ loc = location.get_location_from_uri(uri, conf=CONF) return loc.store_name def check_location_metadata(val, key=''): if isinstance(val, dict): for key in val: check_location_metadata(val[key], key=key) elif isinstance(val, list): ndx = 0 for v in val: check_location_metadata(v, key='%s[%d]' % (key, ndx)) ndx = ndx + 1 elif not isinstance(val, six.text_type): raise exceptions.BackendException(_("The image metadata key %(key)s " "has an invalid type of %(type)s. " "Only dict, list, and unicode are " "supported.") % dict(key=key, type=type(val))) def store_add_to_backend(image_id, data, size, store, context=None, verifier=None): """ A wrapper around a call to each stores add() method. This gives glance a common place to check the output :param image_id: The image add to which data is added :param data: The data to be stored :param size: The length of the data in bytes :param store: The store to which the data is being added :param context: The request context :param verifier: An object used to verify signatures for images :return: The url location of the file, the size amount of data, the checksum of the data the storage systems metadata dictionary for the location """ (location, size, checksum, metadata) = store.add(image_id, data, size, context=context, verifier=verifier) if metadata is not None: if not isinstance(metadata, dict): msg = (_("The storage driver %(driver)s returned invalid " " metadata %(metadata)s. This must be a dictionary type") % dict(driver=str(store), metadata=str(metadata))) LOG.error(msg) raise exceptions.BackendException(msg) try: check_location_metadata(metadata) except exceptions.BackendException as e: e_msg = (_("A bad metadata structure was returned from the " "%(driver)s storage driver: %(metadata)s. %(e)s.") % dict(driver=encodeutils.exception_to_unicode(store), metadata=encodeutils.exception_to_unicode(metadata), e=encodeutils.exception_to_unicode(e))) LOG.error(e_msg) raise exceptions.BackendException(e_msg) return (location, size, checksum, metadata) def add_to_backend(conf, image_id, data, size, scheme=None, context=None, verifier=None): if scheme is None: scheme = conf['glance_store']['default_store'] store = get_store_from_scheme(scheme) return store_add_to_backend(image_id, data, size, store, context, verifier) def set_acls(location_uri, public=False, read_tenants=[], write_tenants=None, context=None): if write_tenants is None: write_tenants = [] loc = location.get_location_from_uri(location_uri, conf=CONF) scheme = get_store_from_location(location_uri) store = get_store_from_scheme(scheme) try: store.set_acls(loc, public=public, read_tenants=read_tenants, write_tenants=write_tenants, context=context) except NotImplementedError: LOG.debug(_("Skipping store.set_acls... not implemented.")) glance_store-0.23.0/AUTHORS0000664000175100017510000000777113230237775015362 0ustar zuulzuul00000000000000Abhijeet Malawade Adam Kijak Andrea Rosa Andreas Jaeger Andreas Jaeger Andrey Pavlov Andrey Pavlov Ankit Agrawal Arnaud Legendre Ben Roble Brian D. Elliott Brian Rosmaita Brian Rosmaita Brianna Poulos Cao Xuan Hoang ChangBo Guo(gcb) Christian Schwede Cindy Pallares Cyril Roelandt Dan Prince Danny Al-Gaaf Darja Malyavkina Dharini Chandrasekar Doug Hellmann Drew Varner Edgar Magana Eric Brown Eric Harney Erno Kuvaja Erno Kuvaja Flavio Percoco Giridhar Jayavelu Gorka Eguileor Haikel Guemar Hemanth Makkapati Ian Cordasco Ian Cordasco Ian Cordasco Itisha Dewan Jake Yip Jamie Lennox Jamie Lennox Jens Rosenboom Jeremy Stanley Jesse J. Cook Jian Wen JordanP Josh Durgin Jun Hong Li Kairat Kushaev Li Wei LiuNanke Louis Taylor Louis Taylor Masashi Ozawa Matt Riedemann Matt Smith Mike Fedosin Mingda Sun Nguyen Hung Phuong Niall Bunting NiallBunting Nikhil Komawar Nikhil Komawar Nina Goradia Oleksii Chuprykov Ondřej Nový OpenStack Release Bot Radoslaw Smigielski Rajesh Tailor Ronald Bradford RustShen Sabari Kumar Murugesan Sean McGinnis Shuquan Huang Stuart McLaren Stuart McLaren Szymon Datko Sławek Kapłoński THOMAS J. COCOZZELLO Taylor Peoples Thomas Bechtold Tim Burke Tom Cocozzello Tomoki Sekiyama Tomoki Sekiyama Victor Sergeyev Victor Stinner Vikhyat Umrao Weijin Wang YAMADA Hideki Zhi Yan Liu Zoltan Arnold Nagy Zuul ankitagrawal gengchc2 hgangwx@cn.ibm.com kairat_kushaev liuyamin luqitao ricolin skseeker song jian yuyafei zhangdaolong zhangsong zhengyao1 glance_store-0.23.0/.testr.conf0000666000175100017510000000055213230237440016355 0ustar zuulzuul00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \ ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./glance_store/tests/unit} $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list