==> glance_store-4.8.1/.stestr.conf <==

[DEFAULT]
test_path=./glance_store/tests/unit
top_dir=./
==> glance_store-4.8.1/.zuul.yaml <==

- job:
    name: glance_store-dsvm-functional-base
    parent: devstack-tox-functional
    description: |
      Base job for devstack-based functional tests for glance_store

      Can only be used directly if a 'functional' testenv is defined
      in tox.ini (which currently is not the case).
    required-projects:
      - openstack/glance_store
    timeout: 4200
    vars:
      devstack_localrc:
        LIBS_FROM_GIT: glance_store
      # Hardcode glance_store path so the job can be run on glance patches
      zuul_work_dir: src/opendev.org/openstack/glance_store

- job:
    name: glance_store-dsvm-functional-filesystem
    parent: glance_store-dsvm-functional-base
    vars:
      tox_envlist: functional-filesystem

- job:
    name: glance_store-dsvm-functional-swift
    parent: glance_store-dsvm-functional-base
    required-projects:
      - openstack/swift
    vars:
      tox_envlist: functional-swift
      devstack_services:
        s-account: true
        s-container: true
        s-object: true
        s-proxy: true

- job:
    name: glance_store-tox-cinder-tips-base
    parent: tox
    abstract: true
    description: Abstract job for glance_store vs. cinder
    nodeset: ubuntu-focal
    required-projects:
      - name: openstack/os-brick
      - name: openstack/python-cinderclient

- job:
    name: glance_store-tox-py3-cinder-tips
    parent: glance_store-tox-cinder-tips-base
    description: |
      glance_store py3 unit tests vs. cinder masters
    vars:
      tox_envlist: py3

- job:
    name: glance_store-tox-keystone-tips-base
    parent: tox
    abstract: true
    description: Abstract job for glance_store vs. keystone
    nodeset: ubuntu-focal
    required-projects:
      - name: openstack/keystoneauth
      - name: openstack/python-keystoneclient

- job:
    name: glance_store-tox-py3-keystone-tips
    parent: glance_store-tox-keystone-tips-base
    description: |
      glance_store py3 unit tests vs. keystone masters
    vars:
      tox_envlist: py3

- job:
    name: glance_store-tox-oslo-tips-base
    parent: tox
    abstract: true
    description: Abstract job for glance_store vs. oslo
    nodeset: ubuntu-focal
    required-projects:
      - name: openstack/oslo.concurrency
      - name: openstack/oslo.config
      - name: openstack/oslo.i18n
      - name: openstack/oslo.privsep
      - name: openstack/oslo.rootwrap
      - name: openstack/oslo.serialization
      - name: openstack/oslo.utils
      - name: openstack/oslo.vmware
      - name: openstack/stevedore

- job:
    name: glance_store-tox-py3-oslo-tips
    parent: glance_store-tox-oslo-tips-base
    description: |
      glance_store py3 unit tests vs. oslo masters
    vars:
      tox_envlist: py3

- job:
    name: glance_store-tox-swift-tips-base
    parent: tox
    abstract: true
    description: Abstract job for glance_store vs. swift
    nodeset: ubuntu-focal
    required-projects:
      - name: openstack/python-swiftclient

- job:
    name: glance_store-tox-py3-swift-tips
    parent: glance_store-tox-swift-tips-base
    description: |
      glance_store py3 unit tests vs. swift masters
    vars:
      tox_envlist: py3

- job:
    name: glance_store-src-ceph-tempest
    parent: devstack-plugin-ceph-tempest-py3
    description: |
      Runs tempest tests with the latest glance_store and the Ceph backend

      Former names for this job were:
        * legacy-tempest-dsvm-full-ceph-plugin-src-glance_store
    required-projects:
      - opendev.org/openstack/glance_store
    timeout: 10800
    vars:
      tempest_test_regex: (^tempest\.(api|scenario)|(^cinder_tempest_plugin))

- job:
    name: cross-glance-tox-functional
    parent: openstack-tox
    description: |
      Run cross-project glance functional tests on glance_store.
    vars:
      zuul_work_dir: src/opendev.org/openstack/glance
      tox_envlist: functional
    required-projects:
      - openstack/glance
      - openstack/glance_store

- project:
    templates:
      - check-requirements
      - lib-forward-testing-python3
      - openstack-python3-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - cross-glance-tox-functional
        - glance_store-src-ceph-tempest:
            irrelevant-files: &tempest-irrelevant-files
              - ^doc/.*$
              - ^releasenotes/.*$
              - ^.*\.rst$
              - ^(test-|)requirements.txt$
              - ^setup.cfg$
              - ^tox.ini$
    experimental:
      jobs:
        - glance_store-dsvm-functional-filesystem
        - glance_store-dsvm-functional-swift
    periodic:
      jobs:
        # NOTE(rosmaita): we only want the "tips" jobs to be run against
        # master, hence the 'branches' qualifiers below. Without them, when
        # a stable branch is cut, the tests would be run against the stable
        # branch as well, which is pointless because these libraries are
        # frozen (more or less) in the stable branches.
        #
        # The "tips" jobs can be removed from the stable branch .zuul.yaml
        # files if someone is so inclined, but that would require manual
        # maintenance, so we do not do it by default. Another option is
        # to define these jobs in the openstack/project-config repo.
        # That would make us less agile in adjusting these tests, so we
        # aren't doing that either.
        - glance_store-tox-py3-cinder-tips:
            branches: master
        - glance_store-tox-py3-keystone-tips:
            branches: master
        - glance_store-tox-py3-oslo-tips:
            branches: master
        - glance_store-tox-py3-swift-tips:
            branches: master
==> glance_store-4.8.1/AUTHORS <==

Abhijeet Malawade, Abhishek Kekane, Adam Kijak, Adam Zhang, Ade Lee,
Akihiro Motoki, Alexandre Arents, Alfredo Moralejo, Andrea Rosa,
Andreas Jaeger, Andreas Jaeger, Andrew Bogott, Andrey Pavlov, Andrey Pavlov,
Andrii Ostapenko, Ankit Agrawal, Arnaud Legendre, Arnaud Morin, Ben Roble,
Brian D. Elliott, Brian Rosmaita, Brian Rosmaita, Brianna Poulos,
Cao Xuan Hoang, ChangBo Guo(gcb), Christian Rohmann, Christian Schwede,
Chuck Short, Cindy Pallares, Corey Bryant, Cyril Roelandt, Dan Prince,
Dan Smith, Dan Smith, Danny Al-Gaaf, Darja Malyavkina, Dharini Chandrasekar,
Doug Hellmann, Dr. Jens Harbott, Drew Varner, Edgar Magana, Elancheran T.S,
Eric Brown, Eric Harney, Erno Kuvaja, Erno Kuvaja, Flavio Percoco,
Ghanshyam Mann, Giridhar Jayavelu, Gorka Eguileor, Haikel Guemar,
Hemanth Makkapati, Hervé Beraud, Hoang Trung Hieu, Ian Cordasco,
Ian Cordasco, Ian Cordasco, Itisha Dewan, Jake Yip, James Page, Jamie Lennox,
Jamie Lennox, Jens Rosenboom, Jeremy Stanley, Jesse J. Cook, Jian Wen,
JordanP, Josh Durgin, Jun Hong Li, Kairat Kushaev, LeopardMa, Li Wei,
Liang Fang, LiuNanke, Louis Taylor, Louis Taylor, Lucian Petrut,
Luigi Toscano, Masashi Ozawa, Matt Riedemann, Matt Smith, Michal Arbet,
Mike Durnosvystov, Mike Fedosin, Mingda Sun, Mohammed Naser,
Naohiro Sameshima, Nguyen Hai, Nguyen Hung Phuong, Niall Bunting,
NiallBunting, Nikhil Komawar, Nikhil Komawar, Nina Goradia, Nobuto Murata,
Oleksii Chuprykov, Ondřej Nový, OpenStack Release Bot, Paul Belanger,
Pavlo Shchelokovskyy, Radoslaw Smigielski, Rajat Dhasmana, Rajesh Tailor,
Ronald Bradford, RustShen, Sabari Kumar Murugesan, Scott McClymont,
Sean McGinnis, Sean McGinnis, Shuquan Huang, Stefan Dinescu,
Stephen Finucane, Stuart McLaren, Stuart McLaren, Szymon Datko,
Sławek Kapłoński, THOMAS J. COCOZZELLO, Takashi Kajinami, Takashi Kajinami,
Takashi Natsume, Taylor Peoples, Thomas Bechtold, Tim Burke, Tom Cocozzello,
Tomoki Sekiyama, Tomoki Sekiyama, Tony Breeds, Victor Coutellier,
Victor Sergeyev, Victor Stinner, Vikhyat Umrao, Vincent Untz,
Vladislav Kuzmin, Weijin Wang, XiaojueGuan, Xinxin Shen, XinxinShen,
YAMADA Hideki, Yu Shengzuo, Zhi Yan Liu, Zoltan Arnold Nagy, anguoming,
ankitagrawal, asmita singh, caoyuan, chenjiao, gengchc2, hgangwx@cn.ibm.com,
kairat_kushaev, khashf <6059347+khashf@users.noreply.github.com>, liuyamin,
liyou01, lujie, luqitao, ricolin, skseeker, song jian, wangxiyuan,
whoami-rajat, wu.chunyang, xuanyandong, yanghuichan, yfzhao, yuyafei,
zhangbailin, zhangboye, zhangdaolong, zhangsong, zhengyao1, zhufl

==> glance_store-4.8.1/ChangeLog <==

CHANGES
=======

4.8.1
-----

* Imported Translations from Zanata
* reno: Update master for unmaintained/zed

4.8.0
-----

* Update status of VMWare store driver
* Update maintainer of rbd driver and cinder driver
* Use normal credentials for legacy image update
* reno: Update master for xena Unmaintained status
* reno: Update master for wallaby Unmaintained status
* reno: Update master for victoria Unmaintained status
* Update master for stable/2024.1

4.7.0
-----

* reno: Update master for yoga Unmaintained status
* Remove \_snapshot\_has\_external\_reference from rbd driver
* Bump hacking
* s3: Do not log access keys
* Do not show access\_key in s3 driver
* RBD: Use rados\_connect\_timeout to override timeout
* rbd: compute appropriate resize amount before resizing image
* Update python classifier in setup.cfg
* Remove unnecessary ceilometer service overrides
* Increase timeout of glance\_store-src-ceph-tempest
* Remove unused httplib2
* Remove unused test tools
* cinder: Catch missing dependencies
* Update master for stable/2023.2
* Deprecate VMWare Datastore

4.6.1
-----

* Imported Translations from Zanata
* Imported Translations from Zanata

4.6.0
-----

* RBD: Trash image when snapshots prevent deletion
* Add per-store weight config element
* RBD: Wrap RBD calls in native threads
* Make ceph job voting

4.5.0
-----

* Revert "RBD: Wrap RBD calls in native threads"
* Update 'extras' for cinder driver
* Imported Translations from Zanata

4.4.0
-----

* Add force to os-brick disconnect
* Run cinder driver unit tests
* RBD: Wrap RBD calls in native threads
* Update master for stable/2023.1
* Do not always import boto3
* rbd: Disable rbd stores if libraries are not available
* cinder: Disable cinder stores if cinderclient is not installed
* move attachment\_update to try block
* Fix misuse of assertTrue
* Imported Translations from Zanata

4.3.0
-----

* Replace deprecated UPPER\_CONSTRAINTS\_FILE variable
* Imported Translations from Zanata
* Cinder: Add support to extend attached volumes
* Refactor/restructure glance cinder store

4.2.0
-----

* Fix tox4 error
* Add region\_name option to s3 store
* [test-only] OverflowError running on 32-bit systems
* Imported Translations from Zanata
* Switch to 2023.1 Python3 unit tests and generic template name
* Update master for stable/zed
* Imported Translations from Zanata
* Imported Translations from Zanata

4.1.0
-----

* Tests: Mock sleep in cinder test\_attachment\_create\_retries
* Swift: Honor \*\_domain\_name parameters
* Do not loose url queries on redirects
* Rbd: Deprecate unused rados\_connect\_timeout
* Add debug logs to cinder store
* Remove logic for Python <= 2.6

4.0.1
-----

* Support os-brick specific lock\_path for Cinder
* Imported Translations from Zanata
* Remove Python 2 support

4.0.0
-----

* Update python testing as per zed cycle teting runtime
* Cinder: Correct exception logging during attach
* Correct retry interval during attach volume
* Add coverage for add method
* Add exception coverage for get, get\_size, delete
* Add coverage for helper methods
* Add coverage for get\_cinderclient and \_check\_context
* Remove redundant try except around volume create
* Add coverage for StoreLocation
* Add coverage for get\_cinder\_session
* Remove six usage
* Refactor cinder store tests[2/2]
* Refactor cinder store tests[1/2]
* Replace FakeObject with MagicMock[2/2]
* Replace FakeObject with MagicMock[1/2]
* Update master for stable/yoga

3.0.0
-----

* Cinder store: Wait for device resize
* Correct attachment\_complete call
* Pass valid IP address to os-brick
* [RBD] Clone v2: Image is unusable if deletion fails
* Updating python testing classifier as per Yoga testing runtime
* Imported Translations from Zanata
* nit: Correct debug log
* Cleanup devstack jobs
* Fix documentation build with Sphinx>=4.2.0
* Fix typos
* Add Python3 yoga unit tests
* Update master for stable/xena

2.7.0
-----

* Xena cycle Release Notes
* Raise correct exception from "Quota full"
* Add volume multiattach handling
* Drop lower-constraints job
* Glance cinder nfs: Block creating qcow2 volumes
* Doc: Use Block Storage API v3
* Add cinder's new attachment support

2.6.0
-----

* s3: Optimize WRITE\_CHUNKSIZE to minimize an overhead
* setup.cfg: Replace dashes with underscores
* Allow any Keystone domain for cinder store
* vmware: Use cookiejar from oslo.vmware client directly
* Pass multipath config while creating connector object
* Add Python3 xena unit tests
* Update master for stable/wallaby
* swift: Take into account swift\_store\_endpoint

2.5.0
-----

* Wallaby cycle Release Notes
* Run glance functional job on glance\_store
* Validate volume type during volume create
* Cinder store: Use v3 API by default
* Fix lower\_constraints and requirements
* Replace md5 with oslo version

2.4.0
-----

* Imported Translations from Zanata
* Add Python3 wallaby unit tests
* Update master for stable/victoria
* Update user/project referencing from context

2.3.0
-----

* Drop snapshot in use log from ERROR to WARN
* Add a little more test coverage for rbd resize logic
* Bring FakeData utility over from glance
* Correct default type name reference
* Support Cinder multiple stores
* Handle sparse images in glance\_store
* Ramp up rbd resize to avoid excessive calls

2.2.0
-----

* Copy data files to glance upon installation
* [Trivial]Add missing white space between words
* [goal] Migrate glance\_store jobs to focal
* zuul: glance\_store-src-ceph-tempest replaces a legacy job
* use stevedore to load extensions
* requirements: Drop os-testr
* Remove translation sections from setup.cfg
* Fix mock import in unit tests
* Don't allow image creation with encrypted nfs volumes

2.1.0
-----

* Release notes for Victoria Milestone 1
* Stop to use the \_\_future\_\_ module
* Switch to newer openstackdocstheme and reno versions
* Cap jsonschema 3.2.0 as the minimal version
* Fix hacking min version to 3.0.1
* Imported Translations from Zanata
* Add lock per share for cinder nfs mount/umount
* Clarify the filesystem\_store\_metadata\_file config option
* Fix: API returns 503 if one of the store is mis-configured
* Bump default tox env from py37 to py38
* Add py38 package metadata
* Bump cinder/os-brick requirements
* Use unittest.mock instead of third party mock
* Imported Translations from Zanata
* Add Python3 victoria unit tests
* Enforce constraints for docs dependencies
* Cleanup py27 support
* Update hacking for Python3
* Update master for stable/ussuri

2.0.0
-----

* Release note for 1.2.0
* Add config for cinder mounting needs
* Refactor methods in cinder store
* Add S3 store support to glance\_store
* Fix for BufferedReader sets is\_zero\_size true for a chunk
* Image upload fails if cinder multipath is enabled
* Drop support for tempest-full
* Restore quotes removal for swift config in Python3
* Drop python 2.7 support and testing
* Re-use swift\_store\_cacert for Keystone session
* doc: Clean up unnecessary left vertical lines
* Imported Translations from Zanata

1.1.0
-----

* Remove sheepdog store driver
* Add release notes link in readme
* Imported Translations from Zanata
* Release note for 1.0.1
* Update master for stable/train
* Register reserved store configs
* Remove warning filter
* Set zero size only when nothing is written
* Fix option load for swift/vmware

1.0.0
-----

* Release note and documentation for 1.0.0
* Deprecate Sheepdog driver
* Change location metadata key 'backend' to 'store'
* Remove sheepdog tests from zuul config
* Add Python 3 Train unit tests

0.29.1
------

* Add 0.29.1 releasenotes
* Revert "Change location metadata key 'backend' to 'store'"

0.29.0
------

* Rethinking file system access
* Remove outdated line in tox.ini
* Change location metadata key 'backend' to 'store'
* Add location prefix url to store instance
* Add releasenote for option removal
* Removed 'store\_capabilities\_update\_min\_interval' config option
* Dropping the py35 testing
* Modify deprecation warning for stores options
* Cap sphinx for py2 to match global requirements
* Replace git.openstack.org URLs with opendev.org URLs
* Fix failing tips-py35 jobs
* OpenDev Migration Patch
* Do not include ETag when puting manifest in chunked uploads
* Update irrelevant-files for tempest tests
* Python3: Fix return type on CooperativeReader.read
* Uncap jsonschema
* Update master for stable/stein
* Prevent unicode object error from zero-byte read
* Return bytes even when get()ing a zero-byte image from swift

0.28.0
------

* Stein cycle Release Notes
* Update help text for rbd\_ceph\_conf
* Filesystem driver: add chunk size config option
* Fix python3 compatibility of rbd get\_fsid
* Do not raise StopIteration
* add python 3.7 unit test job
* Fix some types in the FS and VMware drivers
* Imported Translations from Zanata
* Update mailinglist from dev to discuss
* Use template for lower-constraints
* Update deprecation notices
* Catch rbd NoSpace exception
* Remove moxstubout usage

0.27.0
------

* Add statement explaining "tips" job configuration
* Provision to add new config options in sample config file
* Imported Translations from Zanata
* add lib-forward-testing-python3 test job
* add python 3.6 unit test job
* switch documentation job to new PTI
* Fix defaults for ConfigParser
* Change rbd default conf path
* import zuul job settings from project-config
* remove bandit from testing
* Refactor periodic "tips" jobs
* Imported Translations from Zanata
* Move doc8 to test requirements
* Remove team diversity tags note in README
* Wrap interface function for multihash correctly
* cinder: Support os-brick privsep filters
* Update reno for stable/rocky

0.26.0
------

* Consider Cinder back-end as production ready
* Remove config option help translation
* Deprecate store\_add\_to\_backend()
* Multihash Implementation for Glance

0.25.0
------

* Address multi store nits
* Add release notes for 0.25.0
* Multi store support for cinder driver
* Update tox.ini to conform to the PTI
* Follow the new PTI for document build
* Multi store support for http, swift, sheepdog and vmware driver
* Enable multi store support for glance
* Deprecate stores, default\_store config options
* specify region on creating cinderclient
* cinder: Specify mountpoint host param to attach API
* Deprecate store\_capabilities\_update\_min\_interval
* Update links in README

0.24.0
------

* use only exceptions for uri validations
* fix tox python3 overrides
* Update conf.py to align with openstackdocstheme
* Add periodic tips jobs
* Add glance\_store disclaimer to docs
* Remove tox\_install.sh
* uncap eventlet
* add lower-constraints job
* Fix wrong links in glance\_store
* Updated from global requirements
* Updated from global requirements
* Clean imports in code
* Imported Translations from Zanata
* Migrate legacy jobs to project repository
* Updated from global requirements
* Imported Translations from Zanata
* Add doc8 to pep8 check for glance\_store project
* Add .stestr to gitignore
* Update reno for stable/queens
* Updated from global requirements
* process spelling error
* Imported Translations from Zanata

0.23.0
------

* Add Queens release note
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Fix some wrong url and add license
* Updated from global requirements
* Updated from global requirements
* Fix BufferedReader writing zero size chunks
* Updated from global requirements
* Updated from global requirements
* Use cached auth\_ref instead of gettin a new one each time
* Remove setting of version/release from releasenotes
* Updated from global requirements
* Updated from global requirements
* Imported Translations from Zanata
* TrivialFix: Correct reST field lists in docstrings
* Revert "Remove team:diverse-affiliation from tags"
* Expand sz to size
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Imported Translations from Zanata
* Updated from global requirements
* Update reno for stable/pike
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Remove team:diverse-affiliation from tags

0.21.0
------

* Updated from global requirements
* Add release note for Pike
* Cinder driver: TypeError in \_open\_cinder\_volume
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* set warning-is-error for documentation build
* switch from oslosphinx to openstackdocstheme
* rearrange existing documentation according to the new standard layout
* Updated from global requirements
* Updated from global requirements
* Fix html\_last\_updated\_fmt for Python3
* Fixed tests due to updated oslo.config
* Initialize privsep root\_helper command
* Don't fail when trying to unprotect unprotected snapshot on RBD
* Updated from global requirements
* Add python 3.5 in classifier and envlist
* Imported Translations from Zanata
* Update maintainer's email address
* Updated from global requirements
* Buffered reader: Upload recovery for swift store
* Updated from global requirements
* Replace six.iteritems() with .items()
* Removes unnecessary utf-8 coding for glance\_store
* Use HostAddressOpt for store opts that accept IP and hostnames
* Updated from global requirements
* An unit test passes because is launched as non-root user
* Update test requirement
* Updated from global requirements
* Updated from global requirements
* Fix SafeConfigParser DeprecationWarning in Python 3.2+
* Update reno for stable/ocata
* Correct error msg variable that could be unassigned
* Fixing string formatting bug in log message

0.20.0
------

* Updated from global requirements
* Remove debtcollector in requirements.txt
* Log at error when we intend to reraise the exception
* Suppress oslo-config DeprecationWarning during functional test
* Disable verification for Keystone session in Swift

0.19.0
------

* Raise exc when using multi-tenant and swift+config
* Updated from global requirements
* Use storage\_url in DB for multi-tenant swift store
* Add alt text for badges
* Fix a typo in help text
* Show team and repo badges on README
* take into consideration created volume size in cinder backend
* Updated from global requirements
* Move rootwrap config files from etc/\* into etc/glance/\*
* Update README
* Convert to keystoneauth
* Updated from global requirements
* Fix a typo in rootwrap.conf and glance\_cinder\_store.filters
* Fix dbg msg when swift can't determine image size
* Refactor get\_manager\_for\_store in an OO manner
* Add cinder\_volume\_type to cinder store configuration
* Enable release notes translation
* Updated from global requirements
* Do not require entry-point dependencies in tests
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Sheepdog: fix command execution failure
* Update home-page url in setup.cfg
* Do not call image.stat() when we only need the size
* TrivialFix: Merge imports in code
* standardize release note page ordering
* Clean imports in code
* Reason to return sorted list of drivers for opts
* Updated from global requirements
* Always return a sorted list of drivers for configs
* Fix doc build if git is absent
* Improve tools/tox\_install.sh
* Update reno for stable/newton

0.18.0
------

* Fix header passed to requests
* Updated from global requirements

0.17.0
------

* Add release notes for 0.17.0
* Release note for glance\_store configuration opts
* Improving help text for Swift store opts
* Improving help text for Swift store util opts
* Improve help text of cinder driver opts
* Fix help text of swift\_store\_config\_file
* Improving help text for backend store opts
* Remove "Services which consume this" section
* Improve the help text for Swift driver opts
* Updated from global requirements
* Improving help text for Sheepdog opts
* Use constraints for all tox environments
* Improve help text of http driver opts
* Improve help text of filesystem store opts
* Improve help text of rbd driver opts
* Improving help text for Glance store Swift opts
* Remove deprecated exceptions
* Improve the help text for vmware datastore driver opts
* Updated from global requirements

0.16.0
------

* Updated from global requirements
* Updated from global requirements
* Remove S3 driver

0.15.0
------

* Fix cinder config string as per current i18n state
* Sheepdog:modify default addr
* Cleanup i18n marker functions to match Oslo usage
* Updated from global requirements
* Don't include openstack/common in flake8 exclude list

0.14.0
------

* Add bandit to pep8 and bandit testenv
* Remove unused variable in vmware store
* Imported Translations from Zanata
* Split functional tests apart
* Updated from global requirements
* Check that size is a number
* Replace dict.iterkeys with six.iterkeys to make PY3 compatible
* cinder: Fix get\_size return value
* The function add calculation size\_gb need improve
* Updated from global requirements
* Updated from global requirements
* Fix argument order for assertEqual to (expected, observed)
* Updated from global requirements
* Updated from global requirements
* Remove -c from tox.ini
* tox respects upper-constraints.txt
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Fix minor misspellings affecting Config Reference Guide
* Remove verbose option from glance\_store tests
* Updated from global requirements
* Updated from global requirements
* Improve help text of swift driver opts
* Updated from global requirements
* Add functional tests for swift
* Imported Translations from Zanata
* Updated from global requirements
* Updated from global requirements
* Fix releasenotes to pass reno gates
* Updated from global requirements
* tox: use os-testr instead of testr
* Fix swiftclient mocks
* Deprecate swift driver options properly
* Fix typos in config files
* Setup defaults for swift driver authentication
* Fix doc generation warnings and errors
* trivial:fixing one W503 pep8 error
* Module docs are not generated
* Fix cinder store to support Cinder RemoteFS backends
* Missing params in store\_add\_to\_backend docstring
* Mock swiftclient's functions in tests
* Update reno for stable/mitaka

0.13.0
------

* Add https ca\_file and insecure options to VMware Store
* swift: Do not search storage\_url for ks v2

0.12.0
------

* Fix misspelling in the releasenote support-cinder-upload
* Add new config options for HTTPS store
* Implement get, add and delete for cinder store
* Implement re-authentication for swift driver
* Implement swift store connection manager
* Updated from global requirements
* test\_http\_get\_redirect is not testing redirects correctly
* Switch VMWare Datastore to use Requests
* Updated from global requirements
* Add base for functional tests
* Add small image verifier for swift backend
* Switch HTTP store to using requests

0.11.0
------

* Change approach to request storage url for multi-tenant store
* Remove unused parameters from swift connection init
* Sheepdog: fix image-download failure
* LOG.warn is deprecated in python3
* Updated from global requirements
* Updated from global requirements
* Use url\_for from keystoneclient in swift store
* Remove deprecated datastore\_name, datacenter\_path
* Add backend tests from glance
* Fix some inconsistency in docstrings
* Updated from global requirements
* Change Swift zero-size chunk behaviour
* Sheepdog: fix upload failure in API v2
* Remove unnecessary re-raise of NotFound exception
* Updated from global requirements
* Add signature verifier to backend drivers
* Use oslo\_utils.encodeutils.exception\_to\_unicode()
* Fix default mutables for set\_acls
* Deprecate unused Exceptions
* Remove unnecessary auth module
* Updated from global requirements
* Deprecate the S3 driver
* Document supported drivers and maintainers
* Remove the gridfs driver
* Set documented default directory for filesystem
* Imported Translations from Zanata
* Updated from global requirements
* Swift store: do not send a 0 byte chunk
* Store.get\_size: handle HTTPException
* Replace deprecated library function os.popen() with subprocess
* Updated from global requirements
* Deprecated tox -downloadcache option removed
* Add docs section to tox.ini
* Replace assertEqual(None, \*) with assertIsNone in tests
* Updated from global requirements
* Remove duplicate keys from dictionary
* Remove unreachable code
* Sheepdog: Change storelocation format
* Updated from global requirements
* Add reno for release notes management in glance\_store
* Put py34 first in the env order of tox
* Updated from global requirements
* Add list of supported stores to help
* Add functional testing devstack gate hooks

0.10.0
------

* Rel notes for 0.10.0
* Updated from global requirements
* Remove useless config.py file
* vmware: check for response body in error conditions
* remove default=None for config options
* Updated from global requirements
* Imported Translations from Zanata
* Updated from global requirements
* Updated from global requirements
* Remove deprecated glance\_store opts from default section
* Updated from global requirements
* Improving GlanceStoreException
* Activate pep8 check that \_ is imported
* '\_' is used by i18n
* VMware: Fix missing space in error message
* Handle swift store's optional dependency
* Fix swift store tests for latest swiftclient

0.9.1
-----

* rbd: re-add the absolute\_import and with\_statement imports

0.9.0
-----

* Release notes 0.9.0 and corrected library version
* Updated from global requirements
* Catch InvalidURL when requesting store size
* Imported Translations from Transifex
* Add proxy support to S3 Store
* Prevent glance-api hangups during connection to rbd
* rbd driver cannot delete residual image from ceph in some cases

0.8.0
-----

* Imported Translations from Transifex
* Add explicit dependencies for store dependencies
* Support V3 authentication with swift

0.7.1
-----

* rbd: make sure features is an int when passed to librbd.create

0.7.0
-----

* setup.cfg: add Python 3 classifiers
* Remove usage of assert\_called\_once in mocks
* Add .eggs/\* to .gitignore
* Imported Translations from Transifex
* Updated from global requirements
* Make cinderclient a more optional dependency
* Port S3 driver to Python 3
* Do not used named args when using swiftclient
* logging failed exception info for add image operation
* Fix random test error in swift store delete
* Port swift driver to Python 3
* Port vmware driver to Python 3
* RBD: Reading rbd\_default\_features from ceph.conf
* Move glance\_store tests into the main package
* Use six.moves to fix imports on Python 3
* Move python-cinderclient to test-requirements.txt
* Updated from global requirements

0.6.0
-----

* Add release notes for 0.6.0
* Drop py26 support
* Port remaining tests to Python 3
* Fix Python 3 issues
* Close a file to fix a resource warning on Python 3
* Port exception\_to\_str() to Python 3
* Disable propagating BadStoreConfiguration
* Sync up with global-requirements

0.5.0
-----

* Add release notes for 0.5.0
* Drop use of 'oslo' namespace package
* Fix RBD delete image on creation failure
* Use is\_valid\_ipv6() from oslo.utils
* Properly instantiate Forbidden exception
* Update README to work with release tools
* Remove ordereddict from requirements
* gridfs: add pymongo to test-requirements and update tests
* Add release notes for 0.1.10-0.3.0
* Only warn on duplicate path on fs backend
* Propagate BadStoreConfiguration to library user
* Handle optional dependency in vmware store
* Update oslo libraries
* Initialize vmware session during store creation

0.4.0
-----

* Add release notes for 0.4.0
* Fix intermittent failure in test\_vmware\_store
* Deprecate the gridfs store
* Remove incubative openstack.common.context module
* Update help text with sample conf
* Use oslo\_config.cfg.ConfigOpts in glance\_store
* Make dependency on boto entirely conditional
* Move from oslo.utils to oslo\_utils (supplement)
* Fix timeout during upload from slow resource

0.3.0
-----

* Throw NotFound exception when template is gone
* Deprecate VMware store single datastore options
* Use oslo\_utils.units where appropriate
* VMware: Support Multiple Datastores

0.2.0
-----

* Correct such logic in store.get() when chunk\_size param provided
* Support for deleting images stored as SLO in Swift
* Enable DRIVER\_REUSABLE for vmware store

0.1.12
------

* Show fully qualified store name in update\_capabilities() logging
* Move to hacking 0.10
* Fix sorting query string keys for arbitrary url schemes
* Unify using six.moves.range rename everywhere

0.1.11
------

* Remove duplicate key
* Add coverage report to run\_test.sh
* Use a named enum for capability values
* Check VMware session before uploading image
* Add capabilities to storage driver
* Fixing PEP8 E712 and E265
* Convert httpretty tests to requests-mock
* Replace snet config with endpoint config
* Rename oslo.concurrency to oslo\_concurrency
* Remove retry on failed uploads to VMware datastore
* Remove old dependencies
* Validate metadata JSON file
* Use default datacenter\_path from oslo.vmware
* Remove unused exception StorageQuotaFull
* Move from oslo.config to oslo\_config
* Move from oslo.utils to oslo\_utils
* Add needed extra space to error message
* Define a new parameter to pass CA cert file
* Use testr directly from tox
* Raise appropriate exception if socket error occurs
* Swift Store to use Multiple Containers
* Use testr directly from tox
* Remove deprecated options
* Correct GlanceStoreException to provide valid message - glance\_store
* Catch NotFound exception in http.Store get\_size
* VMware store: Re-use api session token

0.1.10
------

0.1.9
-----

* Test swift multi-tenant store get context
* Test swift multi-tenant store add context
* Use oslo.concurrency
* Move cinder store to use auth\_token
* Swift Multi-tenant store: Fix image upload
* Use statvfs instead of df to get available space
* Fix public image ACL in multi-tenant Swift mode
* Updated run\_tests.sh to run tests in debug mode
* Remove validate\_location
* Imported Translations from Transifex
* Add coverage to test-requirements.txt
* Imported Translations from Transifex
* Switch to using oslo.utils
* Remove network\_utils
* Recover from errors while deleting image segments
* VMware store: Use the Content-Length if available
* Backporting S3 multi-part upload functionality to glace\_store
* Make rbd store's pool handling more universal
* s3\_store\_host parameter with port number
* Enhance configuration handling
* Enable F841 check
* Portback part change of adding status field to image location
* Mark glance\_store as being a universal wheel
* Imported Translations from Transifex
* Use oslo.serialization
* Fix H402
* Portback part change of enabling F821 check
* Adding common.utils.exception\_to\_str() to avoid encoding issue
* Replace stubout with oslotest.moxstubout
* Fix RBD store to use READ\_CHUNKSIZE and correct return of get()
* Add a run\_tests.sh
* Run tests parallel by default
* Add ordereddict to reqs for py2.6 compatibility
* rbd: fix chunk size units
* Imported Translations from Transifex
* Stop using intersphinx
* Cleanup shebang in non-executable module
* Correct Sheepdog store configuration
* Correct base class of no\_conf driver
* Handle session timeout in the VMware store
* Add entry-point for oslo.config options and update registering logic
* Configure the stores explicitly
* Imported Translations from Transifex
* Return the right filesize when chunk\_size != None
* Allowing operator to configure a permission for image file in fs store
* Align swift's store API

0.1.7
-----

* Add \`OPTIONS\` attribute to swift.Store function

0.1.5
-----

* Add missing stores to setup.cfg
* Set group for DeprecatedOpts
* Complete random\_access for the filesystem store
* Work toward Python 3.4 support and testing

0.1.3
-----

* Register store's configs w/o creating instances

0.1.2
-----

* Add deprecated options support for storage drivers
* Rename locale files for glance\_store rename
* Update .gitreview for project rename

0.1.1
-----

* Rename glance.store to glance\_store
* Port of 97882f796c0e8969c606ae723d14b6b443e2e2f9
* Port of 502be24afa122eef08186001e54c1e1180114ccf
* Fix collection order issues and unit test failures

0.1.0
-----

0.0.1a2
-------

* Fix development classifier
* Imported Translations from Transifex
* Package glance's package entirely

0.0.1a1
-------

* Split CHUNKSIZE into WRITE/READ\_CHUNKSIZE
* Port swift store
* Add validate\_location
* Fix some Exceptions incompatibilities
* Imported Translations from Transifex
* Setup for glance.store for translation
* Set the right classifiers in setup.cfg
* Remove version string from setup.cfg
* Add .gitreview to the repo
* Fix flake8 errors
* Adopt oslo.i18n
* Pull multipath support from glance/master
* Update from oslo-incubator
* Pass offset and chunk\_size to the \`get\` method
* Migrate vmware store
* Move FakeHTTPResponse to a common utils module
* Removed commented code
* Remove deprecated \_schedule\_delayed\_delete\_from\_backend function
* BugFix: Point to the exceptions module
* BugFix: define scheme outside the \`try\` block
* Add a way to register store options
* Update functions signatures w/ optional context
* Remove old scrubber options
* Move exceptions out of common and add backends.py
* Use exception
* Remove dependency on oslo-log
* Add offset and chunk\_size to the get method
* Migrate the rbd store
* Use register\_store\_schemes everywhere
* Add missing context keyword to the s3 store
* Migrate cinder store
* Remove location\_strategy, it belongs to Glance
* S3 store ported
* Move options registration to \_\_init\_\_
* GridFS Store
* Port sheepdog and its test suite
* Update from oslo-inc and added processutils
* Fix http store tests
* Added fake driver, restored base tests, fixed load driver issue
* Use context when needed
* Add context=None to http store methods
* Remove old exceptions
* HTTP migrated
* Accept a message keyword in exceptions
* Filesystem driver restored
* Move drivers under \_driver
* Added testr
* Config & Import fixes
* Move base test to glance/store
* Deprecate old options, make the list shorter
* Add glance.store common
* Add tests w/ some fixes, although they don't run yet
* Update gitignore
* Add requirements and testr
* Add oslo-inc modules
* Copying from glance
==> glance_store-4.8.1/LICENSE <==

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the
      purposes of this License, Derivative Works shall not include works
      that remain separable from, or merely link (or bind by name) to the
      interfaces of, the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright
      owner or by an individual or Legal Entity authorized to submit on
      behalf of the copyright owner. For the purposes of this definition,
      "submitted" means any form of electronic, verbal, or written
      communication sent to the Licensor or its representatives, including
      but not limited to communication on electronic mailing lists, source
      code control systems, and issue tracking systems that are managed
      by, or on behalf of, the Licensor for the purpose of discussing and
      improving the Work, but excluding communication that is conspicuously
      marked or otherwise designated in writing by the copyright owner as
      "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have
      made, use, offer to sell, sell, import, and otherwise transfer the
      Work, where such license applies only to those patent claims
      licensable by such Contributor that are necessarily infringed by
      their Contribution(s) alone or by combination of their
      Contribution(s) with the Work to which such Contribution(s) was
      submitted. If You institute patent litigation against any entity
      (including a cross-claim or counterclaim in a lawsuit) alleging
      that the Work or a Contribution incorporated within the Work
      constitutes direct or contributory patent infringement, then any
      patent licenses granted to You under this License for that Work
      shall terminate as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing
      the origin of the Work and reproducing the content of the NOTICE
      file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or
      conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS
      FOR A PARTICULAR PURPOSE. You are solely responsible for
      determining the appropriateness of using or redistributing the
      Work and assume any risks associated with Your exercise of
      permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
==> glance_store-4.8.1/PKG-INFO <==

Metadata-Version: 2.1
Name: glance_store
Version: 4.8.1
Summary: OpenStack Image Service Store Library
Home-page: https://docs.openstack.org/glance_store/latest/
Author: OpenStack
Author-email: openstack-discuss@lists.openstack.org
License: UNKNOWN
Description: ========================
        Team and repository tags
        ========================

        .. image:: https://governance.openstack.org/tc/badges/glance_store.svg
            :target: https://governance.openstack.org/tc/reference/tags/index.html
            :alt: The following tags have been asserted for the Glance Store
                Library: "project:official", "stable:follows-policy",
                "vulnerability:managed". Follow the link for an explanation
                of these tags.

        .. NOTE(rosmaita): the alt text above will have to be updated when
           additional tags are asserted for glance_store. (The SVG in the
           governance repo is updated automatically.)

        .. Change things from this point on

        Glance Store Library
        ====================

        Glance's stores library

        This library has been extracted from the Glance source code for the
        specific use of the Glance and Glare projects.

        The API it exposes is not stable, has some shortcomings, and is not
        a general purpose interface. We would eventually like to change
        this, but for now using this library outside of Glance or Glare
        will not be supported by the core team.

        * License: Apache License, Version 2.0
        * Documentation: https://docs.openstack.org/glance_store/latest/
        * Source: https://opendev.org/openstack/glance_store/
        * Bugs: https://bugs.launchpad.net/glance-store
        * Release notes: https://docs.openstack.org/releasenotes/glance_store/index.html

Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Python: >=3.8
Provides-Extra: cinder
Provides-Extra: s3
Provides-Extra: swift
Provides-Extra: test
Provides-Extra: vmware
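The ``Provides-Extra`` entries above mean that each store back-end's
dependencies are optional: a deployment installs only the extras it needs
(for example, ``pip install 'glance_store[cinder]'`` pulls in driver
dependencies such as python-cinderclient and os-brick). A hedged sketch of
inspecting this metadata at runtime; the extra names come from the PKG-INFO
above, and the inspection code is merely one way to read them::

    # Python 3.8+: read the installed distribution's metadata via stdlib.
    from importlib import metadata

    dist = metadata.distribution("glance_store")
    print(dist.version)                             # "4.8.1" for this sdist
    print(dist.metadata.get_all("Provides-Extra"))  # e.g. ['cinder', 's3',
                                                    #  'swift', 'test', 'vmware']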
==> glance_store-4.8.1/README.rst <==

========================
Team and repository tags
========================

.. image:: https://governance.openstack.org/tc/badges/glance_store.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html
    :alt: The following tags have been asserted for the Glance Store
        Library: "project:official", "stable:follows-policy",
        "vulnerability:managed". Follow the link for an explanation
        of these tags.

.. NOTE(rosmaita): the alt text above will have to be updated when
   additional tags are asserted for glance_store. (The SVG in the
   governance repo is updated automatically.)

.. Change things from this point on

Glance Store Library
====================

Glance's stores library

This library has been extracted from the Glance source code for the
specific use of the Glance and Glare projects.

The API it exposes is not stable, has some shortcomings, and is not a
general purpose interface. We would eventually like to change this, but
for now using this library outside of Glance or Glare will not be
supported by the core team.

* License: Apache License, Version 2.0
* Documentation: https://docs.openstack.org/glance_store/latest/
* Source: https://opendev.org/openstack/glance_store/
* Bugs: https://bugs.launchpad.net/glance-store
* Release notes: https://docs.openstack.org/releasenotes/glance_store/index.html
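Although the README deliberately avoids promising a stable public API, the
entry points re-exported from the package root (the
``glance_store/__init__.py`` shown later in this archive) follow a
register-then-create pattern. A minimal sketch under stated assumptions — a
filesystem store, the single-store ``backend`` module, and function and
option names that may change between releases; treat the exact calls as
illustrative rather than a contract::

    from oslo_config import cfg

    import glance_store

    CONF = cfg.CONF

    # Register the library's config options, point the filesystem driver
    # at a data directory, then instantiate the configured stores.
    glance_store.register_opts(CONF)
    CONF.set_override('default_store', 'file', group='glance_store')
    CONF.set_override('filesystem_store_datadir', '/var/lib/glance/images',
                      group='glance_store')
    glance_store.create_stores(CONF)
    glance_store.verify_default_store()

    # Store an image (size 0 lets the driver read to EOF), then read it
    # back from the location URI the add call returns.
    with open('disk.img', 'rb') as image_file:
        location, size, checksum, _meta = glance_store.add_to_backend(
            CONF, 'some-image-id', image_file, 0)
    data_iter, data_size = glance_store.get_from_backend(location)

Multi-store deployments use the parallel functions from the
``multi_backend`` module instead, which is why ``__init__.py`` re-exports
both.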
==> glance_store-4.8.1/doc/requirements.txt <==

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
sphinx>=2.0.0,!=2.1.0 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinxcontrib-apidoc>=0.2.0 # BSD

==> glance_store-4.8.1/doc/source/conf.py <==

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['openstackdocstheme',
              'sphinxcontrib.apidoc']

# openstackdocstheme options
openstackdocs_repo_name = 'openstack/glance_store'
openstackdocs_auto_name = False
openstackdocs_bug_project = 'glance-store'
openstackdocs_bug_tag = ''

# sphinxcontrib.apidoc options
apidoc_module_dir = '../../glance_store'
apidoc_output_dir = 'reference/api'
apidoc_excluded_paths = [
    'test',
    'tests/*']
apidoc_separate_modules = True

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# Add any paths that contain templates here, relative to this directory.
# templates_path = []

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'glance_store'
copyright = '2014, OpenStack Foundation'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
html_theme = 'openstackdocs'

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

modindex_common_prefix = ['glance_store.']

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     '%s Documentation' % project,
     'OpenStack Foundation', 'manual'),
]

# The autodoc module imports every module to check for import
# errors. Since the fs_mount module is self initializing, it
# requires configurations that aren't loaded till that time.
# It would never happen in a real scenario as it is only imported
# from cinder store after the config are loaded but to handle doc
# failures, we mock it here.
# The cinder_utils module imports external dependencies like
# cinderclient, retrying etc which are not recognized by
# autodoc, hence, are mocked here. These dependencies are installed
# during an actual deployment and won't cause any issue during usage.
autodoc_mock_imports = ['glance_store.common.fs_mount',
                        'glance_store.common.cinder_utils']

# Since version 4.2.0, Sphinx emits a warning when encountering a mocked
# object, leading to the following error:
# "A mocked object is detected: 'glance_store.common.cinder_utils'"
# To prevent this, we disable all warnings from the autodoc extension, since
# there is no finer grain yet.
suppress_warnings = ['autodoc.*']

==> glance_store-4.8.1/doc/source/index.rst <==

==============
glance_store
==============

The glance_store library supports the creation, deletion and retrieval of
data assets from/to a set of several different storage technologies.

.. warning::
   This library has been extracted from the Glance source code for the
   specific use of the Glance and Glare projects.

   The API it exposes is not stable, has some shortcomings, and is not a
   general purpose interface. We would eventually like to change this, but
   for now using this library outside of Glance or Glare will not be
   supported by the core team.

.. toctree::
   :maxdepth: 1

   user/index
   reference/index

.. rubric:: Indices and tables

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
glance_store-4.8.1/doc/source/reference/index.rst

==============================
 glance-store Reference Guide
==============================

.. toctree::
   :maxdepth: 1

   api/modules

glance_store-4.8.1/doc/source/user/drivers.rst

Glance Store Drivers
====================

Glance store supports several different drivers. These drivers live within
the library's code base and are maintained either by members of the Glance
community or by the wider OpenStack community. The table below lists the
supported drivers and their maintainers:

.. list-table::
   :header-rows: 1

   * - Driver
     - Status
     - Maintainer
     - Email
     - IRC Nick
   * - File System
     - Supported
     - Glance Team
     - openstack-discuss@lists.openstack.org
     - openstack-glance
   * - HTTP
     - Supported
     - Glance Team
     - openstack-discuss@lists.openstack.org
     - openstack-glance
   * - RBD
     - Supported
     - Glance Team
     - openstack-discuss@lists.openstack.org
     - openstack-glance
   * - Cinder
     - Supported
     - Rajat Dhasmana
     - rajatdhasmana@gmail.com
     - whoami-rajat
   * - Swift
     - Supported
     - Matthew Oliver
     - matt@oliver.net.au
     - mattoliverau
   * - VMware
     - Deprecated
     - N/A
     - N/A
     -
   * - S3
     - Supported
     - Naohiro Sameshima
     - naohiro.sameshima@global.ntt
     - nao-shark

.. note::
   The VMware driver was deprecated in the 2024.1 release because of the
   lack of CI and active maintainers.

glance_store-4.8.1/doc/source/user/index.rst

=================================
 glance-store User Documentation
=================================

.. toctree::

   drivers
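For reference, a filesystem-backed deployment typically enables a driver from
the table above through glance-api.conf along these lines. This fragment is
illustrative only (the store list and path are examples); consult the Glance
configuration guide linked from the option help texts for the authoritative
syntax.

    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images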
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR

glance_store-4.8.1/etc/glance/rootwrap.d/glance_cinder_store.filters

# glance-rootwrap command filters for the glance cinder store
# This file should be owned by (and only writable by) the root user

[Filters]
# cinder store driver
disk_chown: RegExpFilter, chown, root, chown, \d+, /dev/(?!.*/\.\.).*

# os-brick library commands
# os_brick.privileged.run_as_root oslo.privsep context
# This line ties the superuser privs with the config files, context name,
# and (implicitly) the actual python code invoked.
privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
chown: CommandFilter, chown, root
mount: CommandFilter, mount, root
umount: CommandFilter, umount, root
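To illustrate how these filters are exercised: the cinder store builds a root
helper string from the rootwrap configuration above and hands it to
oslo.concurrency, which prefixes privileged commands with it. A minimal sketch
(the uid and device path are made-up example values, not anything from this
tree):

    from oslo_concurrency import processutils

    root_helper = 'sudo glance-rootwrap /etc/glance/rootwrap.conf'
    # rootwrap matches 'chown' against the CommandFilter above before
    # executing it as root.
    processutils.execute('chown', '1000', '/dev/sdb',
                         run_as_root=True, root_helper=root_helper)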
glance_store-4.8.1/glance_store/__init__.py

# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from .multi_backend import *  # noqa
from .backend import *  # noqa
from .driver import *  # noqa
from .exceptions import *  # noqa

glance_store-4.8.1/glance_store/_drivers/__init__.py (empty)

glance_store-4.8.1/glance_store/_drivers/cinder/__init__.py

# Copyright 2023 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance_store._drivers.cinder.store import *  # noqa

glance_store-4.8.1/glance_store/_drivers/cinder/base.py

# Copyright 2023 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_utils import importutils

from os_brick.initiator import connector

NFS = 'nfs'
SCALEIO = "scaleio"

BASE = 'glance_store._drivers.cinder.base.BaseBrickConnectorInterface'

_connector_mapping = {
    NFS: 'glance_store._drivers.cinder.nfs.NfsBrickConnector',
    SCALEIO: 'glance_store._drivers.cinder.scaleio.ScaleIOBrickConnector',
}


def factory(*args, **kwargs):
    connection_info = kwargs.get('connection_info')
    protocol = connection_info['driver_volume_type']
    connector = _connector_mapping.get(protocol, BASE)
    conn_cls = importutils.import_class(connector)
    return conn_cls(*args, **kwargs)


class BaseBrickConnectorInterface(object):

    def __init__(self, *args, **kwargs):
        self.connection_info = kwargs.get('connection_info')
        self.root_helper = kwargs.get('root_helper')
        self.use_multipath = kwargs.get('use_multipath')
        self.conn = connector.InitiatorConnector.factory(
            self.connection_info['driver_volume_type'], self.root_helper,
            conn=self.connection_info,
            use_multipath=self.use_multipath)

    def connect_volume(self, volume):
        device = self.conn.connect_volume(self.connection_info)
        return device

    def disconnect_volume(self, device):
        # Bug #2004555: use force so there aren't any leftovers
        self.conn.disconnect_volume(self.connection_info, device, force=True)

    def extend_volume(self):
        self.conn.extend_volume(self.connection_info)

    def yield_path(self, volume, volume_path):
        """Return the volume file path.

        This method exists to fix Bug #2000584. More information is in the
        ScaleIO connector, which makes actual use of its implementation.
        """
        return volume_path
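A short sketch of how the cinder store selects a connector through factory().
The connection_info values below are made-up examples; in the driver they come
from the cinder attachment, not from anything in this tree.

    from glance_store._drivers.cinder import base

    # Hypothetical connection info, as cinder would return it for an NFS
    # volume.
    connection_info = {'driver_volume_type': 'nfs',
                       'export': '192.0.2.10:/exports/images',
                       'name': 'volume-0001',
                       'options': None}
    conn = base.factory(
        connection_info=connection_info,
        root_helper='sudo glance-rootwrap /etc/glance/rootwrap.conf',
        use_multipath=False)
    # 'nfs' maps to NfsBrickConnector; any protocol missing from
    # _connector_mapping falls back to BaseBrickConnectorInterface.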
glance_store-4.8.1/glance_store/_drivers/cinder/nfs.py

# Copyright 2023 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import hashlib
import logging
import os
import socket

from oslo_config import cfg

from glance_store._drivers.cinder import base
from glance_store.common import cinder_utils
from glance_store.common import fs_mount as mount
from glance_store.common import utils
from glance_store import exceptions
from glance_store.i18n import _

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class NfsBrickConnector(base.BaseBrickConnectorInterface):

    def __init__(self, *args, **kwargs):
        self.volume = kwargs.get('volume')
        self.connection_info = kwargs.get('connection_info')
        self.root_helper = kwargs.get('root_helper')
        self.mount_point_base = kwargs.get('mountpoint_base')
        self.attachment_obj = kwargs.get('attachment_obj')
        self.client = kwargs.get('client')
        self.host = socket.gethostname()
        self.volume_api = cinder_utils.API()

    def _get_mount_path(self, share, mount_point_base):
        """Returns the mount path prefix using the mount point base and share.

        :returns: The mount path prefix.
        """
        return os.path.join(self.mount_point_base,
                            NfsBrickConnector.get_hash_str(share))

    @staticmethod
    def get_hash_str(base_str):
        """Returns a string representing the SHA256 hash of base_str in hex.

        If base_str is a Unicode string, encode it to UTF-8.
        """
        if isinstance(base_str, str):
            base_str = base_str.encode('utf-8')
        return hashlib.sha256(base_str).hexdigest()

    def connect_volume(self, volume):
        # The format info of nfs volumes is exposed via the attachment_get
        # API, so it is not available in the connection info of the
        # attachment object received from attachment_update, and we need
        # to make this extra call.
        vol_attachment = self.volume_api.attachment_get(
            self.client, self.attachment_obj.id)
        if (volume.encrypted or
                vol_attachment.connection_info['format'] == 'qcow2'):
            issue_type = 'Encrypted' if volume.encrypted else 'qcow2'
            msg = (_('%(issue_type)s volume creation for cinder nfs '
                     'is not supported from glance_store. Failed to '
                     'create volume %(volume_id)s')
                   % {'issue_type': issue_type,
                      'volume_id': volume.id})
            LOG.error(msg)
            raise exceptions.BackendException(msg)

        @utils.synchronized(self.connection_info['export'])
        def connect_volume_nfs():
            export = self.connection_info['export']
            vol_name = self.connection_info['name']
            mountpoint = self._get_mount_path(
                export,
                os.path.join(self.mount_point_base, 'nfs'))
            options = self.connection_info['options']
            mount.mount(
                'nfs', export, vol_name, mountpoint, self.host,
                self.root_helper, options)
            return {'path': os.path.join(mountpoint, vol_name)}

        device = connect_volume_nfs()
        return device

    def disconnect_volume(self, device):
        @utils.synchronized(self.connection_info['export'])
        def disconnect_volume_nfs():
            path, vol_name = device['path'].rsplit('/', 1)
            mount.umount(vol_name, path, self.host, self.root_helper)

        disconnect_volume_nfs()

    def extend_volume(self):
        raise NotImplementedError
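The mount point derivation above can be illustrated standalone: as written,
the connector joins the configured base directory with the SHA-256 hex digest
of the export string (the export value here is a made-up example).

    import hashlib
    import os

    export = '192.0.2.10:/exports/images'
    mount_point_base = '/var/lib/glance/mnt'
    digest = hashlib.sha256(export.encode('utf-8')).hexdigest()
    mountpoint = os.path.join(mount_point_base, digest)
    # The attached volume file then lives at
    # os.path.join(mountpoint, connection_info['name']).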
glance_store-4.8.1/glance_store/_drivers/cinder/scaleio.py

# Copyright 2023 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import math
import os
import time

from oslo_config import cfg
from oslo_utils import units

from glance_store._drivers.cinder import base
from glance_store import exceptions
from glance_store.i18n import _

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class ScaleIOBrickConnector(base.BaseBrickConnectorInterface):

    @staticmethod
    def _get_device_size(device_file):
        # The seek position is corrected after every extend operation
        # with the bytes written (which is after this wait call) so we
        # don't need to worry about setting it back to the original
        # position.
        device_file.seek(0, os.SEEK_END)
        # There are other ways to determine the file size like os.stat
        # or os.path.getsize but they require a file name attribute,
        # which we don't have for the RBD file wrapper
        # RBDVolumeIOWrapper.
        device_size = device_file.tell()
        device_size = int(math.ceil(float(device_size) / units.Gi))
        return device_size

    @staticmethod
    def _wait_resize_device(volume, device_file):
        timeout = 20
        max_recheck_wait = 10
        tries = 0
        elapsed = 0
        while ScaleIOBrickConnector._get_device_size(
                device_file) < volume.size:
            wait = min(0.5 * 2 ** tries, max_recheck_wait)
            time.sleep(wait)
            tries += 1
            elapsed += wait
            if elapsed >= timeout:
                msg = (_('Timeout while waiting for volume %(volume_id)s '
                         'to resize the device in %(tries)s tries.')
                       % {'volume_id': volume.id, 'tries': tries})
                LOG.error(msg)
                raise exceptions.BackendException(msg)

    def yield_path(self, volume, volume_path):
        """Wait for the LUN size to match the volume size; return the path.

        This method exists to fix Bug #2000584, where NFS sparse volumes
        time out waiting for the file size to match the volume.size field.
        The reason is that the volume is sparse and only takes up space for
        the data written to it (similar to thin provisioned volumes).
        """
        # Sometimes the extended LUN on the storage side takes time to be
        # reflected in the device, so we wait until the device size is
        # equal to the extended volume size.
        ScaleIOBrickConnector._wait_resize_device(volume, volume_path)
        return volume_path

glance_store-4.8.1/glance_store/_drivers/cinder/store.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage backend for Cinder""" import contextlib import errno import importlib import logging import math import os import shlex import socket import time from keystoneauth1.access import service_catalog as keystone_sc from keystoneauth1 import exceptions as keystone_exc from keystoneauth1 import identity as ksa_identity from keystoneauth1 import session as ksa_session from keystoneauth1 import token_endpoint as ksa_token_endpoint from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import strutils from oslo_utils import units from glance_store._drivers.cinder import base from glance_store import capabilities from glance_store.common import attachment_state_manager from glance_store.common import cinder_utils from glance_store.common import utils import glance_store.driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LI, _LW import glance_store.location try: from cinderclient import api_versions from cinderclient import exceptions as cinder_exception from cinderclient.v3 import client as cinderclient import os_brick from os_brick.initiator import connector from oslo_privsep import priv_context except ImportError: api_versions = None cinder_exception = None cinderclient = None os_brick = None connector = None priv_context = None CONF = cfg.CONF LOG = logging.getLogger(__name__) _CINDER_OPTS = [ cfg.StrOpt('cinder_catalog_info', default='volumev3::publicURL', help=""" Information to match when looking for cinder in the service catalog. When the ``cinder_endpoint_template`` is not set and any of ``cinder_store_auth_address``, ``cinder_store_user_name``, ``cinder_store_project_name``, ``cinder_store_password`` is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. ``cinder_os_region_name``, if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the ``openstack catalog list`` command. Possible values: * A string of of the following form: ``::`` At least ``service_type`` and ``interface`` should be specified. ``service_name`` can be omitted. Related options: * cinder_os_region_name * cinder_endpoint_template * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password * cinder_store_project_domain_name * cinder_store_user_domain_name """), cfg.StrOpt('cinder_endpoint_template', default=None, help=""" Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if ``cinder_store_auth_address``, ``cinder_store_user_name``, ``cinder_store_project_name``, and ``cinder_store_password`` are specified. If this configuration option is set, ``cinder_catalog_info`` will be ignored. Possible values: * URL template string for cinder endpoint, where ``%%(tenant)s`` is replaced with the current tenant (project) name. For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s`` Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password * cinder_store_project_domain_name * cinder_store_user_domain_name * cinder_catalog_info """), cfg.StrOpt('cinder_os_region_name', deprecated_name='os_region_name', default=None, help=""" Region name to lookup cinder service from the service catalog. This is used only when ``cinder_catalog_info`` is used for determining the endpoint. 
If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: * A string that is a valid region name. Related options: * cinder_catalog_info """), cfg.StrOpt('cinder_ca_certificates_file', help=""" Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. ``cinder_api_insecure`` must be set to ``True`` to enable the verification. Possible values: * Path to a ca certificates file Related options: * cinder_api_insecure """), cfg.IntOpt('cinder_http_retries', min=0, default=3, help=""" Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: * A positive integer Related options: * None """), cfg.IntOpt('cinder_state_transition_timeout', min=0, default=300, help=""" Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from ``creating`` to ``available`` after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. `error``), the image creation fails. Possible values: * A positive integer Related options: * None """), cfg.BoolOpt('cinder_api_insecure', default=False, help=""" Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by ``cinder_ca_certificates_file`` option. Possible values: * True * False Related options: * cinder_ca_certificates_file """), cfg.StrOpt('cinder_store_auth_address', default=None, help=""" The address where the cinder authentication service is listening. When all of ``cinder_store_auth_address``, ``cinder_store_user_name``, ``cinder_store_project_name``, and ``cinder_store_password`` options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: * A valid authentication service address, for example: ``http://openstack.example.org/identity/v2.0`` Related options: * cinder_store_user_name * cinder_store_password * cinder_store_project_name * cinder_store_project_domain_name * cinder_store_user_domain_name """), cfg.StrOpt('cinder_store_user_name', default=None, help=""" User name to authenticate against cinder. This must be used with all the following non-domain-related options. If any of these are not specified (except domain-related options), the user of the current context is used. 
Possible values: * A valid user name Related options: * cinder_store_auth_address * cinder_store_password * cinder_store_project_name * cinder_store_project_domain_name * cinder_store_user_domain_name """), cfg.StrOpt('cinder_store_user_domain_name', default='Default', help=""" Domain of the user to authenticate against cinder. Possible values: * A valid domain name for the user specified by ``cinder_store_user_name`` Related options: * cinder_store_auth_address * cinder_store_password * cinder_store_project_name * cinder_store_project_domain_name * cinder_store_user_name """), cfg.StrOpt('cinder_store_password', secret=True, help=""" Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: * A valid password for the user specified by ``cinder_store_user_name`` Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_project_domain_name * cinder_store_user_domain_name """), cfg.StrOpt('cinder_store_project_name', default=None, help=""" Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified (except domain-related options), the user of the current context is used. Possible values: * A valid project name Related options: * ``cinder_store_auth_address`` * ``cinder_store_user_name`` * ``cinder_store_password`` * ``cinder_store_project_domain_name`` * ``cinder_store_user_domain_name`` """), cfg.StrOpt('cinder_store_project_domain_name', default='Default', help=""" Domain of the project where the image volume is stored in cinder. Possible values: * A valid domain name of the project specified by ``cinder_store_project_name`` Related options: * ``cinder_store_auth_address`` * ``cinder_store_user_name`` * ``cinder_store_password`` * ``cinder_store_project_domain_name`` * ``cinder_store_user_domain_name`` """), cfg.StrOpt('rootwrap_config', default='/etc/glance/rootwrap.conf', help=""" Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: * Path to the rootwrap config file Related options: * None """), cfg.StrOpt('cinder_volume_type', default=None, help=""" Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: * A valid volume type from cinder Related options: * None NOTE: You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. 
"""), cfg.BoolOpt('cinder_enforce_multipath', default=False, help=""" If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: * True or False Related options: * cinder_use_multipath """), cfg.BoolOpt('cinder_use_multipath', default=False, help=""" Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: * True or False Related options: * cinder_enforce_multipath """), cfg.StrOpt('cinder_mount_point_base', default='/var/lib/glance/mnt', help=""" Directory where the NFS volume is mounted on the glance node. Possible values: * A string representing absolute path of mount point. """), cfg.BoolOpt('cinder_do_extend_attached', default=False, help=""" If this is set to True, glance will perform an extend operation on the attached volume. Only enable this option if the cinder backend driver supports the functionality of extending online (in-use) volumes. Supported from cinder microversion 3.42 and onwards. By default, it is set to False. Possible values: * True or False """), ] CINDER_SESSION = None def _reset_cinder_session(): global CINDER_SESSION CINDER_SESSION = None def get_cinder_session(conf): global CINDER_SESSION if not CINDER_SESSION: auth = ksa_identity.V3Password( password=conf.cinder_store_password, username=conf.cinder_store_user_name, user_domain_name=conf.cinder_store_user_domain_name, project_name=conf.cinder_store_project_name, project_domain_name=conf.cinder_store_project_domain_name, auth_url=conf.cinder_store_auth_address ) if conf.cinder_api_insecure: verify = False elif conf.cinder_ca_certificates_file: verify = conf.cinder_ca_certificates_file else: verify = True CINDER_SESSION = ksa_session.Session(auth=auth, verify=verify) return CINDER_SESSION class StoreLocation(glance_store.location.StoreLocation): """Class describing a Cinder URI.""" def process_specs(self): self.scheme = self.specs.get('scheme', 'cinder') self.volume_id = self.specs.get('volume_id') def get_uri(self): if self.backend_group: return "cinder://%s/%s" % (self.backend_group, self.volume_id) return "cinder://%s" % self.volume_id def parse_uri(self, uri): self.validate_schemas(uri, valid_schemas=('cinder://',)) self.scheme = 'cinder' self.volume_id = uri.split('/')[-1] if not utils.is_uuid_like(self.volume_id): reason = _("URI contains invalid volume ID") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) class Store(glance_store.driver.Store): """Cinder backend store adapter.""" _CAPABILITIES = (capabilities.BitMasks.READ_RANDOM | capabilities.BitMasks.WRITE_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _CINDER_OPTS EXAMPLE_URL = "cinder://" def __init__(self, *args, **kargs): super(Store, self).__init__(*args, **kargs) # We are importing it here to let the config options load # before we use them in the fs_mount file self.mount = importlib.import_module('glance_store.common.fs_mount') self._set_url_prefix() if self.backend_group: self.store_conf = getattr(self.conf, self.backend_group) else: self.store_conf = self.conf.glance_store self.volume_api = cinder_utils.API() if os_brick: os_brick.setup(CONF) # The purpose of this map is to store the connector object for a # particular volume as we will need to call os-brick extend_volume # method for the kernel to realize the new size change after cinder # extends the volume # We only use it when creating the image so a volume will only have # one 
# mapping to a particular connector.
        self.volume_connector_map = {}

    def _set_url_prefix(self):
        self._url_prefix = "cinder://"
        if self.backend_group:
            self._url_prefix = "cinder://%s" % self.backend_group

    def configure_add(self):
        """
        Check that the volume types configured for the cinder store exist
        in the deployment and, if not, log a warning.
        """
        for module_name, module in [('cinderclient', cinderclient),
                                    ('os-brick', os_brick),
                                    ('oslo-privsep', priv_context)]:
            if module is None:
                reason = _("%s is not available." % module_name)
                LOG.error(reason)
                raise exceptions.BadStoreConfiguration(store_name="cinder",
                                                       reason=reason)

        cinder_volume_type = self.store_conf.cinder_volume_type
        if cinder_volume_type:
            # NOTE: `cinder_volume_type` is configured; check whether the
            # configured volume_type is available in cinder or not
            cinder_client = self.get_cinderclient()
            try:
                # We don't even need the volume type object; as long
                # as this returns clean, we know the name is good.
                cinder_client.volume_types.find(name=cinder_volume_type)
                # No need to worry about a NoUniqueMatch as volume type
                # names are unique
            except cinder_exception.NotFound:
                reason = (_LW("Invalid `cinder_volume_type %s`"
                              % cinder_volume_type))
                LOG.warning(reason)
            except cinder_exception.ClientException:
                pass

    def is_image_associated_with_store(self, context, volume_id):
        """
        Check whether the image-volume is associated with this store; used
        when updating legacy image URLs to their respective stores.

        This method checks the volume type of the volume associated with
        the image against the configured stores. It returns True if the
        cinder_volume_type configured in the store matches the volume type
        of the image-volume. When cinder_volume_type is not configured, it
        checks against the default_volume_type set in cinder. If neither
        condition is met, it returns False.
        """
        try:
            # We will use either the service credentials defined in the
            # config file or the user context credentials
            cinder_client = self.get_cinderclient(context=context)
            cinder_volume_type = self.store_conf.cinder_volume_type
            # Here we are assuming that the volume is stored in the
            # service project or the context user's project; otherwise
            # this will raise a NotFound exception.
            # Ideally we should be using the service user's credentials
            # defined in the config, and the volume should be stored in
            # the service (internal) project; otherwise we are opening the
            # image-volume to modification by users, which might lead to
            # corruption of the image.
            try:
                volume = cinder_client.volumes.get(volume_id)
            except cinder_exception.NotFound:
                reason = (_LW("Image-Volume %s not found. If you have "
                              "upgraded your environment from single store "
                              "to multi store, transfer all your "
                              "Image-Volumes from user projects to service "
                              "project." % volume_id))
                LOG.warning(reason)
                return False

            if (cinder_volume_type and
                    volume.volume_type == cinder_volume_type):
                return True
            elif not cinder_volume_type:
                default_type = cinder_client.volume_types.default()
                if volume.volume_type == default_type.name:
                    return True
        except Exception:
            # Glance calls this method to update legacy image URLs.
            # If an exception occurs because the image/volume is
            # non-existent, or for any other reason, we return False (i.e.
the image location URL # won't be updated) and it is glance's responsibility to handle # the case when the image failed to update pass return False def get_root_helper(self): rootwrap = self.store_conf.rootwrap_config return 'sudo glance-rootwrap %s' % rootwrap def is_user_overriden(self): return all([self.store_conf.get('cinder_store_' + key) for key in ['user_name', 'password', 'project_name', 'auth_address']]) def get_cinderclient(self, context=None, version='3.0'): user_overriden = self.is_user_overriden() session = get_cinder_session(self.store_conf) if user_overriden: username = self.store_conf.cinder_store_user_name url = self.store_conf.cinder_store_auth_address # use auth that is already in the session auth = None else: username = context.user_id project = context.project_id # noauth extracts user_id:project_id from auth_token token = context.auth_token or '%s:%s' % (username, project) if self.store_conf.cinder_endpoint_template: template = self.store_conf.cinder_endpoint_template url = template % context.to_dict() else: info = self.store_conf.cinder_catalog_info service_type, service_name, interface = info.split(':') try: catalog = keystone_sc.ServiceCatalogV2( context.service_catalog) url = catalog.url_for( region_name=self.store_conf.cinder_os_region_name, service_type=service_type, service_name=service_name, interface=interface) except keystone_exc.EndpointNotFound: reason = _("Failed to find Cinder from a service catalog.") raise exceptions.BadStoreConfiguration(store_name="cinder", reason=reason) auth = ksa_token_endpoint.Token(endpoint=url, token=token) api_version = api_versions.APIVersion(version) c = cinderclient.Client( session=session, auth=auth, region_name=self.store_conf.cinder_os_region_name, retries=self.store_conf.cinder_http_retries, api_version=api_version) LOG.debug( 'Cinderclient connection created for user %(user)s using URL: ' '%(url)s.', {'user': username, 'url': url}) return c @contextlib.contextmanager def temporary_chown(self, path): owner_uid = os.getuid() orig_uid = os.stat(path).st_uid if orig_uid != owner_uid: processutils.execute( 'chown', owner_uid, path, run_as_root=True, root_helper=self.get_root_helper()) try: yield finally: if orig_uid != owner_uid: processutils.execute( 'chown', orig_uid, path, run_as_root=True, root_helper=self.get_root_helper()) def get_schemes(self): return ('cinder',) def _check_context(self, context, require_tenant=False): user_overriden = self.is_user_overriden() if user_overriden and not require_tenant: return if context is None: reason = _("Cinder storage requires a context.") raise exceptions.BadStoreConfiguration(store_name="cinder", reason=reason) if not user_overriden and context.service_catalog is None: reason = _("Cinder storage requires a service catalog.") raise exceptions.BadStoreConfiguration(store_name="cinder", reason=reason) def _wait_volume_status(self, volume, status_transition, status_expected): max_recheck_wait = 15 timeout = self.store_conf.cinder_state_transition_timeout volume = volume.manager.get(volume.id) tries = 0 elapsed = 0 while volume.status == status_transition: if elapsed >= timeout: msg = (_('Timeout while waiting while volume %(volume_id)s ' 'status is %(status)s.') % {'volume_id': volume.id, 'status': status_transition}) LOG.error(msg) raise exceptions.BackendException(msg) wait = min(0.5 * 2 ** tries, max_recheck_wait) time.sleep(wait) tries += 1 elapsed += wait volume = volume.manager.get(volume.id) if volume.status != status_expected: msg = (_('The status of volume %(volume_id)s is 
unexpected: ' 'status = %(status)s, expected = %(expected)s.') % {'volume_id': volume.id, 'status': volume.status, 'expected': status_expected}) LOG.error(msg) raise exceptions.BackendException(msg) return volume def _get_host_ip(self, host): try: return socket.getaddrinfo(host, None, socket.AF_INET6)[0][4][0] except socket.gaierror: return socket.getaddrinfo(host, None, socket.AF_INET)[0][4][0] @contextlib.contextmanager def _open_cinder_volume(self, client, volume, mode): attach_mode = 'rw' if mode == 'wb' else 'ro' device = None root_helper = self.get_root_helper() priv_context.init(root_helper=shlex.split(root_helper)) host = socket.gethostname() my_ip = self._get_host_ip(host) use_multipath = self.store_conf.cinder_use_multipath enforce_multipath = self.store_conf.cinder_enforce_multipath volume_id = volume.id connector_prop = connector.get_connector_properties( root_helper, my_ip, use_multipath, enforce_multipath, host=host) if volume.multiattach: attachment = attachment_state_manager.attach(client, volume_id, host, mode=attach_mode) else: attachment = self.volume_api.attachment_create(client, volume_id, mode=attach_mode) LOG.debug('Attachment %(attachment_id)s created successfully.', {'attachment_id': attachment['id']}) volume = volume.manager.get(volume_id) attachment_id = attachment['id'] connection_info = None try: attachment = self.volume_api.attachment_update( client, attachment_id, connector_prop, mountpoint='glance_store') LOG.debug('Attachment %(attachment_id)s updated successfully with ' 'connection info %(conn_info)s', {'attachment_id': attachment_id, 'conn_info': strutils.mask_dict_password( attachment.connection_info)}) connection_info = attachment.connection_info conn = base.factory( connection_info['driver_volume_type'], volume=volume, connection_info=connection_info, root_helper=root_helper, use_multipath=use_multipath, mountpoint_base=self.store_conf.cinder_mount_point_base, attachment_obj=attachment, client=client) device = conn.connect_volume(volume) # Complete the attachment (marking the volume "in-use") after # the connection with os-brick is complete self.volume_api.attachment_complete(client, attachment_id) LOG.debug('Attachment %(attachment_id)s completed successfully.', {'attachment_id': attachment_id}) self.volume_connector_map[volume.id] = conn if (connection_info['driver_volume_type'] == 'rbd' and not conn.conn.do_local_attach): yield device['path'] else: with self.temporary_chown( device['path']), open(device['path'], mode) as f: yield conn.yield_path(volume, f) except Exception: LOG.exception(_LE('Exception while accessing to cinder volume ' '%(volume_id)s.'), {'volume_id': volume.id}) raise finally: if device: try: if volume.multiattach: attachment_state_manager.detach( client, attachment_id, volume_id, host, conn, connection_info, device) else: conn.disconnect_volume(device) if self.volume_connector_map.get(volume.id): del self.volume_connector_map[volume.id] except Exception: LOG.exception(_LE('Failed to disconnect volume ' '%(volume_id)s.'), {'volume_id': volume.id}) if not volume.multiattach: self.volume_api.attachment_delete(client, attachment_id) def _cinder_volume_data_iterator(self, client, volume, max_size, offset=0, chunk_size=None, partial_length=None): chunk_size = chunk_size if chunk_size else self.READ_CHUNKSIZE partial = partial_length is not None with self._open_cinder_volume(client, volume, 'rb') as fp: if offset: fp.seek(offset) max_size -= offset while True: if partial: size = min(chunk_size, partial_length, max_size) else: size = 
min(chunk_size, max_size) chunk = fp.read(size) if chunk: yield chunk max_size -= len(chunk) if max_size <= 0: break if partial: partial_length -= len(chunk) if partial_length <= 0: break else: break @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :param offset: offset to start reading :param chunk_size: size to read, or None to get all the image :param context: Request context :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location self._check_context(context) try: client = self.get_cinderclient(context, version='3.54') volume = client.volumes.get(loc.volume_id) size = int(volume.metadata.get('image_size', volume.size * units.Gi)) iterator = self._cinder_volume_data_iterator( client, volume, size, offset=offset, chunk_size=self.READ_CHUNKSIZE, partial_length=chunk_size) return (iterator, chunk_size or size) except cinder_exception.NotFound: reason = _("Failed to get image size due to " "volume can not be found: %s") % loc.volume_id LOG.error(reason) raise exceptions.NotFound(reason) except cinder_exception.ClientException as e: msg = (_('Failed to get image volume %(volume_id)s: %(error)s') % {'volume_id': loc.volume_id, 'error': e}) LOG.error(msg) raise exceptions.BackendException(msg) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file and returns the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist :rtype: int """ loc = location.store_location try: self._check_context(context) volume = self.get_cinderclient(context).volumes.get(loc.volume_id) return int(volume.metadata.get('image_size', volume.size * units.Gi)) except cinder_exception.NotFound: raise exceptions.NotFound(image=loc.volume_id) except Exception: LOG.exception(_LE("Failed to get image size due to " "internal error.")) return 0 def _call_offline_extend(self, volume, size_gb): size_gb += 1 LOG.debug("Extending (offline) volume %(volume_id)s to %(size)s GB.", {'volume_id': volume.id, 'size': size_gb}) volume.extend(volume, size_gb) try: volume = self._wait_volume_status(volume, 'extending', 'available') size_gb = volume.size return size_gb except exceptions.BackendException: raise exceptions.StorageFull() def _call_online_extend(self, client, volume, size_gb): size_gb += 1 LOG.debug("Extending (online) volume %(volume_id)s to %(size)s GB.", {'volume_id': volume.id, 'size': size_gb}) self.volume_api.extend_volume(client, volume, size_gb) try: volume = self._wait_volume_status(volume, 'extending', 'in-use') size_gb = volume.size return size_gb except exceptions.BackendException: raise exceptions.StorageFull() def _write_data(self, f, write_props): LOG.debug('Writing data to volume with write properties: ' 'bytes_written: %s, size_gb: %s, need_extend: %s, ' 'image_size: %s' % (write_props.bytes_written, write_props.size_gb, write_props.need_extend, write_props.image_size)) f.seek(write_props.bytes_written) if write_props.buf: f.write(write_props.buf) write_props.bytes_written += len(write_props.buf) while True: write_props.buf = 
write_props.image_file.read( self.WRITE_CHUNKSIZE) if not write_props.buf: write_props.need_extend = False return write_props.os_hash_value.update(write_props.buf) write_props.checksum.update(write_props.buf) if write_props.verifier: write_props.verifier.update(write_props.buf) if ((write_props.bytes_written + len(write_props.buf)) > ( write_props.size_gb * units.Gi) and (write_props.image_size == 0)): return f.write(write_props.buf) write_props.bytes_written += len(write_props.buf) def _offline_extend(self, client, volume, write_props): while write_props.need_extend: with self._open_cinder_volume(client, volume, 'wb') as f: self._write_data(f, write_props) if write_props.need_extend: write_props.size_gb = self._call_offline_extend( volume, write_props.size_gb) def _online_extend(self, client, volume, write_props): with self._open_cinder_volume(client, volume, 'wb') as f: # Th connector is initialized in _open_cinder_volume method # and by mapping it with the volume ID, we are able to fetch # it here conn = self.volume_connector_map[volume.id] while write_props.need_extend: self._write_data(f, write_props) if write_props.need_extend: # we already initialize a client with MV 3.54 and # we require 3.42 for online extend so we should # be good here. write_props.size_gb = self._call_online_extend( client, volume, write_props.size_gb) # Call os-brick to resize the LUN on the host conn.extend_volume() # WriteProperties class is useful to allow us to modify immutable # objects in the called methods class WriteProperties: def __init__(self, *args, **kwargs): self.bytes_written = kwargs.get('bytes_written') self.size_gb = kwargs.get('size_gb') self.buf = kwargs.get('buf') self.image_file = kwargs.get('image_file') self.need_extend = kwargs.get('need_extend') self.image_size = kwargs.get('image_size') self.verifier = kwargs.get('verifier') self.checksum = kwargs.get('checksum') self.os_hash_value = kwargs.get('os_hash_value') @glance_store.driver.back_compat_add @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. 
:param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param hashing_algo: A hashlib algorithm identifier (string) :param context: The request context :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already exists """ self._check_context(context, require_tenant=True) client = self.get_cinderclient(context, version='3.54') os_hash_value = utils.get_hasher(hashing_algo, False) checksum = utils.get_hasher('md5', False) bytes_written = 0 size_gb = int(math.ceil(float(image_size) / units.Gi)) if size_gb == 0: size_gb = 1 name = "image-%s" % image_id owner = context.project_id metadata = {'glance_image_id': image_id, 'image_size': str(image_size), 'image_owner': owner} volume_type = self.store_conf.cinder_volume_type LOG.debug('Creating a new volume: image_size=%d size_gb=%d type=%s', image_size, size_gb, volume_type or 'None') if image_size == 0: LOG.info(_LI("Since image size is zero, we will be doing " "resize-before-write for each GB which " "will be considerably slower than normal.")) volume = self.volume_api.create(client, size_gb, name=name, metadata=metadata, volume_type=volume_type) volume = self._wait_volume_status(volume, 'creating', 'available') size_gb = volume.size failed = True need_extend = True buf = None online_extend = self.store_conf.cinder_do_extend_attached write_props = self.WriteProperties( bytes_written=bytes_written, size_gb=size_gb, buf=buf, image_file=image_file, need_extend=need_extend, image_size=image_size, verifier=verifier, checksum=checksum, os_hash_value=os_hash_value) try: if online_extend: # we already initialize a client with MV 3.54 and # we require 3.42 for online extend so we should # be good here. 
                self._online_extend(client, volume, write_props)
            else:
                self._offline_extend(client, volume, write_props)
            failed = False
        except IOError as e:
            # Convert IOError reasons to Glance Store exceptions
            errors = {errno.EFBIG: exceptions.StorageFull(),
                      errno.ENOSPC: exceptions.StorageFull(),
                      errno.EACCES: exceptions.StorageWriteDenied()}
            raise errors.get(e.errno, e)
        finally:
            if failed:
                LOG.error(_LE("Failed to write to volume %(volume_id)s."),
                          {'volume_id': volume.id})
                try:
                    volume.delete()
                except Exception:
                    LOG.exception(_LE('Failed to delete volume '
                                      '%(volume_id)s.'),
                                  {'volume_id': volume.id})

        if write_props.image_size == 0:
            metadata.update({'image_size': str(write_props.bytes_written)})
            volume.update_all_metadata(metadata)
        volume.update_readonly_flag(volume, True)

        hash_hex = write_props.os_hash_value.hexdigest()
        checksum_hex = write_props.checksum.hexdigest()

        LOG.debug("Wrote %(bytes_written)d bytes to volume %(volume_id)s "
                  "with checksum %(checksum_hex)s.",
                  {'bytes_written': write_props.bytes_written,
                   'volume_id': volume.id,
                   'checksum_hex': checksum_hex})

        image_metadata = {}
        location_url = 'cinder://%s' % volume.id
        if self.backend_group:
            image_metadata['store'] = self.backend_group
            location_url = 'cinder://%s/%s' % (self.backend_group,
                                               volume.id)

        return (location_url,
                write_props.bytes_written,
                checksum_hex,
                hash_hex,
                image_metadata)

    @capabilities.check
    def delete(self, location, context=None):
        """
        Takes a `glance_store.location.Location` object that indicates
        where to find the image file to delete

        :param location: `glance_store.location.Location` object, supplied
                         from glance_store.location.get_location_from_uri()
        :raises: NotFound if image does not exist
        :raises: Forbidden if cannot delete because of permissions
        """
        loc = location.store_location
        self._check_context(context)
        client = self.get_cinderclient(context)
        try:
            self.volume_api.delete(client, loc.volume_id)
        except cinder_exception.NotFound:
            raise exceptions.NotFound(image=loc.volume_id)
        except cinder_exception.ClientException as e:
            msg = (_('Failed to delete volume %(volume_id)s: %(error)s') %
                   {'volume_id': loc.volume_id, 'error': e})
            raise exceptions.BackendException(msg)
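Both this store's _wait_volume_status() and the ScaleIO connector's
_wait_resize_device() poll with a capped exponential backoff. A standalone
distillation of that pattern, for illustration only (the helper below is not
part of the library):

    import time

    def wait_for(predicate, timeout=300, max_recheck_wait=15):
        """Poll predicate() with capped exponential backoff until true."""
        tries = 0
        elapsed = 0
        while not predicate():
            if elapsed >= timeout:
                raise TimeoutError('condition not met in %d tries' % tries)
            # Sleep 0.5s, 1s, 2s, 4s, ... capped at max_recheck_wait.
            wait = min(0.5 * 2 ** tries, max_recheck_wait)
            time.sleep(wait)
            tries += 1
            elapsed += wait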
""" A simple filesystem-backed store """ import errno import logging import os import stat import urllib import jsonschema from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import units import glance_store from glance_store import capabilities from glance_store.common import utils import glance_store.driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LW import glance_store.location LOG = logging.getLogger(__name__) _FILESYSTEM_CONFIGS = [ cfg.StrOpt('filesystem_store_datadir', default='/var/lib/glance/images', help=""" Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which ``glance-api`` runs. If the write access isn't available, a ``BadStoreConfiguration`` exception is raised and the filesystem store may not be available for adding new images. NOTE: This directory is used only when filesystem store is used as a storage backend. Either ``filesystem_store_datadir`` or ``filesystem_store_datadirs`` option must be specified in ``glance-api.conf``. If both options are specified, a ``BadStoreConfiguration`` will be raised and the filesystem store may not be available for adding new images. Possible values: * A valid path to a directory Related options: * ``filesystem_store_datadirs`` * ``filesystem_store_file_perm`` """), cfg.MultiStrOpt('filesystem_store_datadirs', help=""" List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the ``filesystem_store_datadir`` configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html NOTE: This directory is used only when filesystem store is used as a storage backend. Either ``filesystem_store_datadir`` or ``filesystem_store_datadirs`` option must be specified in ``glance-api.conf``. If both options are specified, a ``BadStoreConfiguration`` will be raised and the filesystem store may not be available for adding new images. Possible values: * List of strings of the following form: * ``:`` Related options: * ``filesystem_store_datadir`` * ``filesystem_store_file_perm`` """), cfg.StrOpt('filesystem_store_metadata_file', help=""" Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys ``id`` and ``mountpoint``. The value for both keys should be a string. 
Possible values:
    * A valid path to the store metadata file

Related options:
    * None

"""),
    cfg.IntOpt('filesystem_store_file_perm',
               default=0,
               help="""
File access permissions for the image files.

Set the intended file access permissions for image data. This provides a
way to enable other services, e.g. Nova, to consume images directly from
the filesystem store. The users running the services that are intended to
be given access could be made members of the group that owns the files
created. Assigning a value less than or equal to zero for this
configuration option signifies that no changes be made to the default
permissions. This value will be decoded as an octal number.

For more information, please refer to the documentation at
https://docs.openstack.org/glance/latest/configuration/configuring.html

Possible values:
    * A valid file access permission
    * Zero
    * Any negative integer

Related options:
    * None

"""),
    cfg.IntOpt('filesystem_store_chunk_size',
               default=64 * units.Ki,
               min=1,
               help="""
Chunk size, in bytes.

The chunk size used when reading or writing image files. Raising this value
may improve the throughput, but it may also slightly increase the memory
usage when handling a large number of requests.

Possible Values:
    * Any positive integer value

Related options:
    * None

"""),
    cfg.BoolOpt('filesystem_thin_provisioning',
                default=False,
                help="""
Enable or disable thin provisioning in this backend.

When enabled, null byte sequences are not actually written to the
filesystem; the holes that result are interpreted by the filesystem as null
bytes and do not consume storage space. Enabling this feature also speeds
up image upload and saves network traffic, in addition to saving space in
the backend, because null byte sequences are not sent over the network.

Possible Values:
    * True
    * False

Related options:
    * None

"""),
]

MULTI_FILESYSTEM_METADATA_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "id": {"type": "string"},
            "mountpoint": {"type": "string"}
        },
        "required": ["id", "mountpoint"],
    }
}


class StoreLocation(glance_store.location.StoreLocation):
    """Class describing a Filesystem URI."""

    def process_specs(self):
        self.scheme = self.specs.get('scheme', 'file')
        self.path = self.specs.get('path')

    def get_uri(self):
        return "file://%s" % self.path

    def parse_uri(self, uri):
        """
        Parse URLs. This method fixes an issue where credentials specified
        in the URL are interpreted differently in Python 2.6.1+ than in
        prior versions of Python.
        """
        pieces = urllib.parse.urlparse(uri)
        self.validate_schemas(uri, valid_schemas=('file://', 'filesystem://'))
        self.scheme = pieces.scheme
        path = (pieces.netloc + pieces.path).strip()
        if path == '':
            reason = _("No path specified in URI")
            LOG.info(reason)
            raise exceptions.BadStoreUri(message=reason)
        self.path = path
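# A quick illustration of the location round trip implemented above. This
# is an editor's sketch, not part of the driver: the image path is a
# made-up example, and the constructor call assumes the base class
# signature StoreLocation(store_specs, conf).
#
#     loc = StoreLocation({}, cfg.CONF)
#     loc.parse_uri('file:///var/lib/glance/images/example-image-id')
#     # parse_uri() keeps the leading slash as part of the path.
#     assert loc.path == '/var/lib/glance/images/example-image-id'
#     assert loc.get_uri() == 'file:///var/lib/glance/images/example-image-id'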
""" pieces = urllib.parse.urlparse(uri) self.validate_schemas(uri, valid_schemas=('file://', 'filesystem://')) self.scheme = pieces.scheme path = (pieces.netloc + pieces.path).strip() if path == '': reason = _("No path specified in URI") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) self.path = path class ChunkedFile(object): """ We send this back to the Glance API server as something that can iterate over a large file """ def __init__(self, filepath, offset=0, chunk_size=4096, partial_length=None): self.filepath = filepath self.chunk_size = chunk_size self.partial_length = partial_length self.partial = self.partial_length is not None self.fp = open(self.filepath, 'rb') if offset: self.fp.seek(offset) def __iter__(self): """Return an iterator over the image file.""" try: if self.fp: while True: if self.partial: size = min(self.chunk_size, self.partial_length) else: size = self.chunk_size chunk = self.fp.read(size) if chunk: yield chunk if self.partial: self.partial_length -= len(chunk) if self.partial_length <= 0: break else: break finally: self.close() def close(self): """Close the internal file pointer""" if self.fp: self.fp.close() self.fp = None class Store(glance_store.driver.Store): _CAPABILITIES = (capabilities.BitMasks.READ_RANDOM | capabilities.BitMasks.WRITE_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _FILESYSTEM_CONFIGS FILESYSTEM_STORE_METADATA = None def get_schemes(self): return ('file', 'filesystem') def _check_write_permission(self, datadir): """ Checks if directory created to write image files has write permission. :datadir is a directory path in which glance wites image files. :raises: BadStoreConfiguration exception if datadir is read-only. """ if not os.access(datadir, os.W_OK): msg = (_("Permission to write in %s denied") % datadir) LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) def _set_exec_permission(self, datadir): """ Set the execution permission of owner-group and/or other-users to image directory if the image file which contained needs relevant access permissions. :datadir is a directory path in which glance writes image files. """ if self.backend_group: fstore_perm = getattr( self.conf, self.backend_group).filesystem_store_file_perm else: fstore_perm = self.conf.glance_store.filesystem_store_file_perm if fstore_perm <= 0: return try: mode = os.stat(datadir)[stat.ST_MODE] perm = int(str(fstore_perm), 8) if perm & stat.S_IRWXO > 0: if not mode & stat.S_IXOTH: # chmod o+x mode |= stat.S_IXOTH os.chmod(datadir, mode) if perm & stat.S_IRWXG > 0: if not mode & stat.S_IXGRP: # chmod g+x os.chmod(datadir, mode | stat.S_IXGRP) except (IOError, OSError): LOG.warning(_LW("Unable to set execution permission of " "owner-group and/or other-users to datadir: %s") % datadir) def _create_image_directories(self, directory_paths): """ Create directories to write image files if it does not exist. :directory_paths is a list of directories belonging to glance store. :raises: BadStoreConfiguration exception if creating a directory fails. """ for datadir in directory_paths: if os.path.exists(datadir): self._check_write_permission(datadir) self._set_exec_permission(datadir) else: msg = _("Directory to write image files does not exist " "(%s). 
Creating.") % datadir LOG.info(msg) try: os.makedirs(datadir) self._check_write_permission(datadir) self._set_exec_permission(datadir) except (IOError, OSError): if os.path.exists(datadir): # NOTE(markwash): If the path now exists, some other # process must have beat us in the race condition. # But it doesn't hurt, so we can safely ignore # the error. self._check_write_permission(datadir) self._set_exec_permission(datadir) continue reason = _("Unable to create datadir: %s") % datadir LOG.error(reason) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=reason) def _validate_metadata(self, metadata_file): """Validate metadata against json schema. If metadata is valid then cache metadata and use it when creating new image. :param metadata_file: JSON metadata file path :raises: BadStoreConfiguration exception if metadata is not valid. """ try: with open(metadata_file, 'rb') as fptr: metadata = jsonutils.load(fptr) if isinstance(metadata, dict): # If metadata is of type dictionary # i.e. - it contains only one mountpoint # then convert it to list of dictionary. metadata = [metadata] # Validate metadata against json schema jsonschema.validate(metadata, MULTI_FILESYSTEM_METADATA_SCHEMA) glance_store.check_location_metadata(metadata) self.FILESYSTEM_STORE_METADATA = metadata except (jsonschema.exceptions.ValidationError, exceptions.BackendException, ValueError) as vee: err_msg = encodeutils.exception_to_unicode(vee) reason = _('The JSON in the metadata file %(file)s is ' 'not valid and it can not be used: ' '%(vee)s.') % dict(file=metadata_file, vee=err_msg) LOG.error(reason) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=reason) except IOError as ioe: err_msg = encodeutils.exception_to_unicode(ioe) reason = _('The path for the metadata file %(file)s could ' 'not be accessed: ' '%(ioe)s.') % dict(file=metadata_file, ioe=err_msg) LOG.error(reason) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=reason) def configure_add(self): """ Configure the Store to use the stored configuration options Any store that needs special configuration should implement this method. 
        If the store was not able to successfully configure
        itself, it should raise `exceptions.BadStoreConfiguration`
        """
        if self.backend_group:
            store_conf = getattr(self.conf, self.backend_group)
        else:
            store_conf = self.conf.glance_store

        fdir = store_conf.filesystem_store_datadir
        fdirs = store_conf.filesystem_store_datadirs
        fstore_perm = store_conf.filesystem_store_file_perm
        meta_file = store_conf.filesystem_store_metadata_file
        self.thin_provisioning = store_conf.\
            filesystem_thin_provisioning
        self.chunk_size = store_conf.filesystem_store_chunk_size
        self.READ_CHUNKSIZE = self.chunk_size
        self.WRITE_CHUNKSIZE = self.READ_CHUNKSIZE

        if not (fdir or fdirs):
            reason = (_("Specify at least 'filesystem_store_datadir' or "
                        "'filesystem_store_datadirs' option"))
            LOG.error(reason)
            raise exceptions.BadStoreConfiguration(store_name="filesystem",
                                                   reason=reason)

        if fdir and fdirs:
            reason = (_("Specify either 'filesystem_store_datadir' or "
                        "'filesystem_store_datadirs' option"))
            LOG.error(reason)
            raise exceptions.BadStoreConfiguration(store_name="filesystem",
                                                   reason=reason)

        if fstore_perm > 0:
            perm = int(str(fstore_perm), 8)
            if not perm & stat.S_IRUSR:
                reason = _LE("Specified an invalid "
                             "'filesystem_store_file_perm' option which "
                             "could make image files inaccessible to the "
                             "glance service.")
                LOG.error(reason)
                reason = _("Invalid 'filesystem_store_file_perm' option.")
                raise exceptions.BadStoreConfiguration(store_name="filesystem",
                                                       reason=reason)

        self.multiple_datadirs = False
        directory_paths = set()
        if fdir:
            self.datadir = fdir
            directory_paths.add(self.datadir)
        else:
            self.multiple_datadirs = True
            self.priority_data_map = {}
            for datadir in fdirs:
                (datadir_path,
                 priority) = self._get_datadir_path_and_priority(datadir)
                priority_paths = self.priority_data_map.setdefault(
                    priority, [])
                self._check_directory_paths(datadir_path, directory_paths,
                                            priority_paths)
                directory_paths.add(datadir_path)
                priority_paths.append(datadir_path)

            self.priority_list = sorted(self.priority_data_map,
                                        reverse=True)

        self._create_image_directories(directory_paths)
        if self.backend_group:
            self._set_url_prefix()

        if meta_file:
            self._validate_metadata(meta_file)

    def _set_url_prefix(self):
        path = self._find_best_datadir(0)
        self._url_prefix = "%s://%s" % ('file', path)

    def _check_directory_paths(self, datadir_path, directory_paths,
                               priority_paths):
        """
        Checks if datadir_path is already present in directory_paths.

        :datadir_path is a directory path.
        :directory_paths is a set of all directory paths.
        :raises: BadStoreConfiguration exception if the same directory path
                 is already present in directory_paths.
        """
        if datadir_path in directory_paths:
            msg = (_("Directory %(datadir_path)s specified "
                     "multiple times in filesystem_store_datadirs "
                     "option of filesystem configuration") %
                   {'datadir_path': datadir_path})

            # If present with a different priority it's a bad configuration
            if datadir_path not in priority_paths:
                LOG.exception(msg)
                raise exceptions.BadStoreConfiguration(
                    store_name="filesystem", reason=msg)

            # Present with the same priority (an exact duplicate) only
            # deserves a warning
            LOG.warning(msg)

    def _get_datadir_path_and_priority(self, datadir):
        """
        Gets a directory path and its priority from the
        filesystem_store_datadirs option in glance-api.conf.

        :param datadir: is a directory path with its priority.
        :returns: datadir_path as directory path
                  priority as priority associated with datadir_path
        :raises: BadStoreConfiguration exception if priority is invalid or
                 an empty directory path is specified.
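        For example (illustrative paths), ``/mnt/nfs1/images:200`` is
        parsed into (``/mnt/nfs1/images``, 200), while
        ``/mnt/nfs2/images`` with no explicit priority is parsed into
        (``/mnt/nfs2/images``, 0).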
""" priority = 0 parts = [part.strip() for part in datadir.rsplit(":", 1)] datadir_path = parts[0] if len(parts) == 2 and parts[1]: try: priority = int(parts[1]) except ValueError: msg = (_("Invalid priority value %(priority)s in " "filesystem configuration") % {'priority': priority}) LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) if not datadir_path: msg = _("Invalid directory specified in filesystem configuration") LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="filesystem", reason=msg) return datadir_path, priority @staticmethod def _resolve_location(location): filepath = location.store_location.path if not os.path.exists(filepath): raise exceptions.NotFound(image=filepath) filesize = os.path.getsize(filepath) return filepath, filesize def _get_metadata(self, filepath): """Return metadata dictionary. If metadata is provided as list of dictionaries then return metadata as dictionary containing 'id' and 'mountpoint'. If there are multiple nfs directories (mountpoints) configured for glance, then we need to create metadata JSON file as list of dictionaries containing all mountpoints with unique id. But Nova will not be able to find in which directory (mountpoint) image is present if we store list of dictionary(containing mountpoints) in glance image metadata. So if there are multiple mountpoints then we will return dict containing exact mountpoint where image is stored. If image path does not start with any of the 'mountpoint' provided in metadata JSON file then error is logged and empty dictionary is returned. :param filepath: Path of image on store :returns: metadata dictionary """ if self.FILESYSTEM_STORE_METADATA: for image_meta in self.FILESYSTEM_STORE_METADATA: if filepath.startswith(image_meta['mountpoint']): return image_meta reason = (_LE("The image path %(path)s does not match with " "any of the mountpoint defined in " "metadata: %(metadata)s. An empty dictionary " "will be returned to the client.") % dict(path=filepath, metadata=self.FILESYSTEM_STORE_METADATA)) LOG.error(reason) return {} @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ filepath, filesize = self._resolve_location(location) msg = _("Found image at %s. 
Returning in ChunkedFile.") % filepath LOG.debug(msg) return (ChunkedFile(filepath, offset=offset, chunk_size=self.READ_CHUNKSIZE, partial_length=chunk_size), chunk_size or filesize) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file and returns the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist :rtype: int """ filepath, filesize = self._resolve_location(location) msg = _("Found image at %s.") % filepath LOG.debug(msg) return filesize @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist :raises: Forbidden if cannot delete because of permissions """ loc = location.store_location fn = loc.path if os.path.exists(fn): try: LOG.debug(_("Deleting image at %(fn)s"), {'fn': fn}) os.unlink(fn) except OSError: raise exceptions.Forbidden( message=(_("You cannot delete file %s") % fn)) else: raise exceptions.NotFound(image=fn) def _get_capacity_info(self, mount_point): """Calculates total available space for given mount point. :mount_point is path of glance data directory """ # Calculate total available space stvfs_result = os.statvfs(mount_point) total_available_space = stvfs_result.f_bavail * stvfs_result.f_bsize return max(0, total_available_space) def _find_best_datadir(self, image_size): """Finds the best datadir by priority and free space. Traverse directories returning the first one that has sufficient free space, in priority order. If two suitable directories have the same priority, choose the one with the most free space available. :param image_size: size of image being uploaded. :returns: best_datadir as directory path of the best priority datadir. :raises: exceptions.StorageFull if there is no datadir in self.priority_data_map that can accommodate the image. """ if not self.multiple_datadirs: return self.datadir best_datadir = None max_free_space = 0 for priority in self.priority_list: for datadir in self.priority_data_map.get(priority): free_space = self._get_capacity_info(datadir) if free_space >= image_size and free_space > max_free_space: max_free_space = free_space best_datadir = datadir # If datadir is found which can accommodate image and has maximum # free space for the given priority then break the loop, # else continue to lookup further. if best_datadir: break else: msg = (_("There is no enough disk space left on the image " "storage media. requested=%s") % image_size) LOG.exception(msg) raise exceptions.StorageFull(message=msg) return best_datadir @glance_store.driver.back_compat_add @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. 
        :param image_id: The opaque image identifier
        :param image_file: The image data to write, as a file-like object
        :param image_size: The size of the image data to write, in bytes
        :param hashing_algo: A hashlib algorithm identifier (string)
        :param context: The request context
        :param verifier: An object used to verify signatures for images

        :returns: tuple of: (1) URL in backing store, (2) bytes written,
                  (3) checksum, (4) multihash value, and (5) a dictionary
                  with storage system specific information
        :raises: `glance_store.exceptions.Duplicate` if the image already
                 exists

        :note:: By default, the backend writes the image data to a file
                `<datadir>/<image_id>`, where `<datadir>` is the value of
                the filesystem_store_datadir configuration option and
                `<image_id>` is the supplied image ID.
        """

        datadir = self._find_best_datadir(image_size)
        filepath = os.path.join(datadir, str(image_id))

        if os.path.exists(filepath):
            raise exceptions.Duplicate(image=filepath)

        os_hash_value = utils.get_hasher(hashing_algo, False)
        checksum = utils.get_hasher('md5', False)
        bytes_written = 0
        try:
            with open(filepath, 'wb') as f:
                for buf in utils.chunkreadable(image_file,
                                               self.WRITE_CHUNKSIZE):
                    bytes_written += len(buf)
                    os_hash_value.update(buf)
                    checksum.update(buf)
                    if verifier:
                        verifier.update(buf)
                    if self.thin_provisioning and not any(buf):
                        f.truncate(bytes_written)
                        f.seek(0, os.SEEK_END)
                    else:
                        f.write(buf)
        except IOError as e:
            if e.errno != errno.EACCES:
                self._delete_partial(filepath, image_id)
            errors = {errno.EFBIG: exceptions.StorageFull(),
                      errno.ENOSPC: exceptions.StorageFull(),
                      errno.EACCES: exceptions.StorageWriteDenied()}
            raise errors.get(e.errno, e)
        except Exception:
            with excutils.save_and_reraise_exception():
                self._delete_partial(filepath, image_id)

        hash_hex = os_hash_value.hexdigest()
        checksum_hex = checksum.hexdigest()
        metadata = self._get_metadata(filepath)

        LOG.debug(("Wrote %(bytes_written)d bytes to %(filepath)s with "
                   "checksum %(checksum_hex)s and multihash %(hash_hex)s"),
                  {'bytes_written': bytes_written,
                   'filepath': filepath,
                   'checksum_hex': checksum_hex,
                   'hash_hex': hash_hex})

        if self.backend_group:
            fstore_perm = getattr(
                self.conf, self.backend_group).filesystem_store_file_perm
        else:
            fstore_perm = self.conf.glance_store.filesystem_store_file_perm

        if fstore_perm > 0:
            perm = int(str(fstore_perm), 8)
            try:
                os.chmod(filepath, perm)
            except (IOError, OSError):
                LOG.warning(_LW("Unable to set permission to image: %s") %
                            filepath)

        # Add store backend information to location metadata
        if self.backend_group:
            metadata['store'] = self.backend_group

        return ('file://%s' % filepath, bytes_written, checksum_hex,
                hash_hex, metadata)

    @staticmethod
    def _delete_partial(filepath, iid):
        try:
            os.unlink(filepath)
        except Exception as e:
            msg = _('Unable to remove partial image '
                    'data for image %(iid)s: %(e)s')
            LOG.error(msg % dict(iid=iid,
                                 e=encodeutils.exception_to_unicode(e)))

glance_store-4.8.1/glance_store/_drivers/http.py

# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import urllib

from oslo_config import cfg
from oslo_utils import encodeutils
import requests

from glance_store import capabilities
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _, _LI
import glance_store.location

LOG = logging.getLogger(__name__)

MAX_REDIRECTS = 5

_HTTP_OPTS = [
    cfg.StrOpt('https_ca_certificates_file',
               help="""
Path to the CA bundle file.

This configuration option enables the operator to use a custom
Certificate Authority file to verify the remote server certificate. If
this option is set, the ``https_insecure`` option will be ignored and
the CA file specified will be used to authenticate the server
certificate and establish a secure connection to the server.

Possible values:
    * A valid path to a CA file

Related options:
    * https_insecure

"""),
    cfg.BoolOpt('https_insecure',
                default=True,
                help="""
Set verification of the remote server certificate.

This configuration option takes in a boolean value to determine
whether or not to verify the remote server certificate. If set to True,
the remote server certificate is not verified. If the option is set to
False, then the default CA truststore is used for verification.

This option is ignored if ``https_ca_certificates_file`` is set.
The remote server certificate will then be verified using the file
specified using the ``https_ca_certificates_file`` option.

Possible values:
    * True
    * False

Related options:
    * https_ca_certificates_file

"""),
    cfg.DictOpt('http_proxy_information',
                default={},
                help="""
The http/https proxy information to be used to connect to the remote
server.

This configuration option specifies the http/https proxy information
that should be used to connect to the remote server. The proxy
information should be a key value pair of the scheme and proxy, for
example, http:10.0.0.1:3128. You can also specify proxies for multiple
schemes by separating the key value pairs with a comma, for example,
http:10.0.0.1:3128, https:10.0.0.1:1080.

Possible values:
    * A comma separated list of scheme:proxy pairs as described above

Related options:
    * None

""")]


class StoreLocation(glance_store.location.StoreLocation):
    """Class describing an HTTP(S) URI."""

    def process_specs(self):
        self.scheme = self.specs.get('scheme', 'http')
        self.netloc = self.specs['netloc']
        self.user = self.specs.get('user')
        self.password = self.specs.get('password')
        self.path = self.specs.get('path')
        self.query = self.specs.get('query')

    def _get_credstring(self):
        if self.user:
            return '%s:%s@' % (self.user, self.password)
        return ''

    def _get_query_string(self):
        if self.query:
            return "?%s" % self.query
        return ""

    def get_uri(self):
        return "%s://%s%s%s%s" % (
            self.scheme,
            self._get_credstring(),
            self.netloc,
            self.path,
            self._get_query_string()
        )

    def parse_uri(self, uri):
        """
        Parse URLs. This method fixes an issue where credentials
        specified in the URL are interpreted differently in Python
        2.6.1+ than prior versions of Python.
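        For example (an illustrative URI),
        ``https://user:pass@example.com:8080/images/cirros.img?v=2``
        is parsed into user ``user``, password ``pass``, netloc
        ``example.com:8080``, path ``/images/cirros.img`` and query
        ``v=2``.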
""" pieces = urllib.parse.urlparse(uri) self.validate_schemas(uri, valid_schemas=('https://', 'http://')) self.scheme = pieces.scheme netloc = pieces.netloc path = pieces.path try: if '@' in netloc: creds, netloc = netloc.split('@') else: creds = None except ValueError: # Python 2.6.1 compat # see lp659445 and Python issue7904 if '@' in path: creds, path = path.split('@') else: creds = None if creds: try: self.user, self.password = creds.split(':') except ValueError: reason = _("Credentials are not well-formatted.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) else: self.user = None if netloc == '': LOG.info(_LI("No address specified in HTTP URL")) raise exceptions.BadStoreUri(uri=uri) else: # IPv6 address has the following format [1223:0:0:..]: # we need to be sure that we are validating port in both IPv4,IPv6 delimiter = "]:" if netloc.count(":") > 1 else ":" host, dlm, port = netloc.partition(delimiter) # if port is present in location then validate port format if port and not port.isdigit(): raise exceptions.BadStoreUri(uri=uri) self.netloc = netloc self.path = path self.query = pieces.query def http_response_iterator(conn, response, size): """ Return an iterator for a file-like object. :param conn: HTTP(S) Connection :param response: urllib3.HTTPResponse object :param size: Chunk size to iterate with """ try: chunk = response.read(size) while chunk: yield chunk chunk = response.read(size) finally: conn.close() class Store(glance_store.driver.Store): """An implementation of the HTTP(S) Backend Adapter""" _CAPABILITIES = (capabilities.BitMasks.READ_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _HTTP_OPTS def __init__(self, *args, **kargs): super(Store, self).__init__(*args, **kargs) if self.backend_group: self._set_url_prefix() def _set_url_prefix(self): # NOTE(abhishekk): HTTP store url either starts with http # or https, so default _url_prefix is set to http. 
self._url_prefix = "http" @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ try: conn, resp, content_length = self._query(location, 'GET') except requests.exceptions.ConnectionError: reason = _("Remote server where the image is present " "is unavailable.") LOG.exception(reason) raise exceptions.RemoteServiceUnavailable(message=reason) iterator = http_response_iterator(conn, resp, self.READ_CHUNKSIZE) class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return '' return (ResponseIndexable(iterator, content_length), content_length) def get_schemes(self): return ('http', 'https') def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ conn = None try: conn, resp, size = self._query(location, 'HEAD') except requests.exceptions.ConnectionError as exc: err_msg = encodeutils.exception_to_unicode(exc) reason = _("The HTTP URL is invalid: %s") % err_msg LOG.info(reason) raise exceptions.BadStoreUri(message=reason) finally: # NOTE(sabari): Close the connection as the request was made with # stream=True if conn is not None: conn.close() return size def _query(self, location, verb): redirects_followed = 0 while redirects_followed < MAX_REDIRECTS: loc = location.store_location conn = self._get_response(loc, verb) # NOTE(sigmavirus24): If it was generally successful, break early if conn.status_code < 300: break self._check_store_uri(conn, loc) redirects_followed += 1 # NOTE(sigmavirus24): Close the response so we don't leak sockets conn.close() location = self._new_location(location, conn.headers['location']) else: reason = (_("The HTTP URL exceeded %s maximum " "redirects.") % MAX_REDIRECTS) LOG.debug(reason) raise exceptions.MaxRedirectsExceeded(message=reason) resp = conn.raw content_length = int(resp.getheader('content-length', 0)) return (conn, resp, content_length) def _new_location(self, old_location, url): store_name = old_location.store_name store_class = old_location.store_location.__class__ image_id = old_location.image_id store_specs = old_location.store_specs return glance_store.location.Location(store_name, store_class, self.conf, uri=url, image_id=image_id, store_specs=store_specs, backend=self.backend_group) @staticmethod def _check_store_uri(conn, loc): # TODO(sigmavirus24): Make this a staticmethod # Check for bad status codes if conn.status_code >= 400: if conn.status_code == requests.codes.not_found: reason = _("HTTP datastore could not find image at URI.") LOG.debug(reason) raise exceptions.NotFound(message=reason) reason = (_("HTTP URL %(url)s returned a " "%(status)s status code. 
\nThe response body:\n"
                        "%(body)s")
                      % {'url': loc.path,
                         'status': conn.status_code,
                         'body': conn.text})
            LOG.debug(reason)
            raise exceptions.BadStoreUri(message=reason)

        if conn.is_redirect and conn.status_code not in (301, 302):
            reason = (_("The HTTP URL %(url)s attempted to redirect "
                        "with an invalid %(status)s status code.")
                      % {'url': loc.path, 'status': conn.status_code})
            LOG.info(reason)
            raise exceptions.BadStoreUri(message=reason)

    def _get_response(self, location, verb):
        if not hasattr(self, 'session'):
            self.session = requests.Session()

        if self.backend_group:
            store_conf = getattr(self.conf, self.backend_group)
        else:
            store_conf = self.conf.glance_store

        ca_bundle = store_conf.https_ca_certificates_file
        disable_https = store_conf.https_insecure
        self.session.verify = ca_bundle if ca_bundle else not disable_https
        self.session.proxies = store_conf.http_proxy_information
        return self.session.request(verb, location.get_uri(), stream=True,
                                    allow_redirects=False)

glance_store-4.8.1/glance_store/_drivers/rbd.py

# Copyright 2010-2011 Josh Durgin
# Copyright 2020 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Storage backend for RBD
   (RADOS (Reliable Autonomic Distributed Object Store) Block Device)"""

import contextlib
import logging
import math
import urllib

from eventlet import tpool
from oslo_config import cfg
from oslo_utils import encodeutils
from oslo_utils import eventletutils
from oslo_utils import units

from glance_store import capabilities
from glance_store.common import utils
from glance_store import driver
from glance_store import exceptions
from glance_store.i18n import _, _LE, _LI, _LW
from glance_store import location

try:
    import rados
    import rbd
except ImportError:
    rados = None
    rbd = None

DEFAULT_POOL = 'images'
DEFAULT_USER = None    # let librados decide based on the Ceph conf file
DEFAULT_CHUNKSIZE = 8  # in MiB
DEFAULT_SNAPNAME = 'snap'

LOG = logging.getLogger(__name__)

_RBD_OPTS = [
    cfg.IntOpt('rbd_store_chunk_size',
               default=DEFAULT_CHUNKSIZE,
               min=1,
               help="""
Size, in megabytes, to chunk RADOS images into.

Provide an integer value representing the size in megabytes to chunk
Glance images into. The default chunk size is 8 megabytes. For optimal
performance, the value should be a power of two.

When Ceph's RBD object storage system is used as the storage backend
for storing Glance images, the images are chunked into objects of the
size set using this option. These chunked objects are then stored
across the distributed block data store to use for Glance.

Possible Values:
    * Any positive integer value

Related options:
    * None

"""),
    cfg.StrOpt('rbd_store_pool',
               default=DEFAULT_POOL,
               help="""
RADOS pool in which images are stored.
When RBD is used as the storage backend for storing Glance images,
the images are stored by means of logical grouping of the objects
(chunks of images) into a ``pool``. Each pool is defined with the
number of placement groups it can contain. The default pool that is
used is 'images'.

More information on the RBD storage backend can be found here:
http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/

Possible Values:
    * A valid pool name

Related options:
    * None

"""),
    cfg.StrOpt('rbd_store_user',
               default=DEFAULT_USER,
               help="""
RADOS user to authenticate as.

This configuration option takes in the RADOS user to authenticate
as. This is only needed when RADOS authentication is enabled and is
applicable only if the user is using Cephx authentication. If the
value for this option is not set by the user or is set to None, a
default value will be chosen, which will be based on the client.
section in rbd_store_ceph_conf.

Possible Values:
    * A valid RADOS user

Related options:
    * rbd_store_ceph_conf

"""),
    cfg.StrOpt('rbd_store_ceph_conf',
               default='',
               help="""
Ceph configuration file path.

This configuration option specifies the path to the Ceph configuration
file to be used. If the value for this option is not set by the user or
is set to the empty string, librados will read the standard ceph.conf
file by searching the default Ceph configuration file locations in
sequential order. See the Ceph documentation for details.

NOTE: If using Cephx authentication, this file should include a
reference to the right keyring in a client. section

NOTE 2: If you leave this option empty (the default), the actual Ceph
configuration file used may change depending on what version of
librados is being used. If it is important for you to know exactly
which configuration file is in effect, you may specify that file here
using this option.

Possible Values:
    * A valid path to a configuration file

Related options:
    * rbd_store_user

"""),
    cfg.IntOpt('rados_connect_timeout',
               default=-1,
               help="""
Timeout value for connecting to Ceph cluster.

This configuration option takes in the timeout value in seconds used
when connecting to the Ceph cluster i.e. it sets the time to wait for
glance-api before closing the connection. This prevents glance-api
hangups during the connection to RBD. If the value for this option
is set to less than 0, no timeout is set and the default librados value
is used.

Possible Values:
    * Any integer value

Related options:
    * None

"""),
    cfg.BoolOpt('rbd_thin_provisioning',
                default=False,
                help="""
Enable or disable thin provisioning in this backend.

When this configuration option is enabled, runs of null bytes are not
actually written to the RBD backend; the resulting holes are
interpreted by Ceph as null bytes and do not consume storage space.

Enabling this feature also speeds up image upload and saves network
traffic in addition to saving space in the backend, as null byte
sequences are not sent over the network.

Possible Values:
    * True
    * False

Related options:
    * None

"""),
]


class StoreLocation(location.StoreLocation):
    """
    Class describing a RBD URI.
This is of the form: rbd://image or rbd://fsid/pool/image/snapshot """ def process_specs(self): # convert to ascii since librbd doesn't handle unicode for key, value in self.specs.items(): self.specs[key] = str(value) self.fsid = self.specs.get('fsid') self.pool = self.specs.get('pool') self.image = self.specs.get('image') self.snapshot = self.specs.get('snapshot') def get_uri(self): if self.fsid and self.pool and self.snapshot: # ensure nothing contains / or any other url-unsafe character safe_fsid = urllib.parse.quote(self.fsid, '') safe_pool = urllib.parse.quote(self.pool, '') safe_image = urllib.parse.quote(self.image, '') safe_snapshot = urllib.parse.quote(self.snapshot, '') return "rbd://%s/%s/%s/%s" % (safe_fsid, safe_pool, safe_image, safe_snapshot) else: return "rbd://%s" % self.image def parse_uri(self, uri): prefix = 'rbd://' self.validate_schemas(uri, valid_schemas=(prefix,)) # convert to ascii since librbd doesn't handle unicode try: ascii_uri = str(uri) except UnicodeError: reason = _('URI contains non-ascii characters') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) pieces = ascii_uri[len(prefix):].split('/') if len(pieces) == 1: self.fsid, self.pool, self.image, self.snapshot = \ (None, None, pieces[0], None) elif len(pieces) == 4: self.fsid, self.pool, self.image, self.snapshot = \ map(urllib.parse.unquote, pieces) else: reason = _('URI must have exactly 1 or 4 components') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) if any(map(lambda p: p == '', pieces)): reason = _('URI cannot contain empty components') msg = _LI("Invalid URI: %s") % reason LOG.info(msg) raise exceptions.BadStoreUri(message=reason) class ImageIterator(object): """ Reads data from an RBD image, one chunk at a time. 
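    Iteration opens a connection to the cluster, reads the image in
    ``chunk_size`` pieces (the store's READ_CHUNKSIZE unless a size is
    passed in), and yields each piece until ``image.size()`` bytes have
    been produced.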
""" def __init__(self, pool, name, snapshot, store, chunk_size=None): self.pool = pool or store.pool self.name = name self.snapshot = snapshot self.user = store.user self.conf_file = store.conf_file self.chunk_size = chunk_size or store.READ_CHUNKSIZE self.store = store def __iter__(self): try: with self.store.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: with conn.open_ioctx(self.pool) as ioctx: with rbd.Image(ioctx, self.name, snapshot=self.snapshot) as image: size = image.size() bytes_left = size while bytes_left > 0: length = min(self.chunk_size, bytes_left) data = image.read(size - bytes_left, length) bytes_left -= len(data) yield data return except rbd.ImageNotFound: raise exceptions.NotFound( _('RBD image %s does not exist') % self.name) class Store(driver.Store): """An implementation of the RBD backend adapter.""" _CAPABILITIES = capabilities.BitMasks.RW_ACCESS OPTIONS = _RBD_OPTS EXAMPLE_URL = "rbd://///" def get_schemes(self): return ('rbd',) def RBDProxy(self): if eventletutils.is_monkey_patched('thread'): return tpool.Proxy(rbd.RBD()) else: return rbd.RBD() @contextlib.contextmanager def get_connection(self, conffile, rados_id): client = rados.Rados(conffile=conffile, rados_id=rados_id) if self.backend_group: timeout = getattr(self.conf, self.backend_group).rados_connect_timeout else: timeout = self.conf.glance_store.rados_connect_timeout if timeout >= 0: t = str(timeout) client.conf_set('rados_osd_op_timeout', t) client.conf_set('rados_mon_op_timeout', t) client.conf_set('client_mount_timeout', t) try: client.connect() except (rados.Error, rados.ObjectNotFound) as e: if self.backend_group and len(self.conf.enabled_backends) > 1: reason = _("Error in store configuration: %s") % e LOG.debug(reason) raise exceptions.BadStoreConfiguration( store_name=self.backend_group, reason=reason) else: msg = _LE("Error connecting to ceph cluster.") LOG.exception(msg) raise exceptions.BackendException() try: yield client finally: client.shutdown() def configure_add(self): """ Configure the Store to use the stored configuration options Any store that needs special configuration should implement this method. 
If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration` """ if rbd is None or rados is None: reason = _("The required libraries(rbd and rados) are not " "available") LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name='rbd', reason=reason) try: if self.backend_group: chunk = getattr(self.conf, self.backend_group).rbd_store_chunk_size pool = getattr(self.conf, self.backend_group).rbd_store_pool user = getattr(self.conf, self.backend_group).rbd_store_user conf_file = getattr(self.conf, self.backend_group).rbd_store_ceph_conf thin_provisioning = getattr(self.conf, self.backend_group).\ rbd_thin_provisioning else: chunk = self.conf.glance_store.rbd_store_chunk_size pool = self.conf.glance_store.rbd_store_pool user = self.conf.glance_store.rbd_store_user conf_file = self.conf.glance_store.rbd_store_ceph_conf thin_provisioning = \ self.conf.glance_store.rbd_thin_provisioning self.thin_provisioning = thin_provisioning self.chunk_size = chunk * units.Mi self.READ_CHUNKSIZE = self.chunk_size self.WRITE_CHUNKSIZE = self.READ_CHUNKSIZE # these must not be unicode since they will be passed to a # non-unicode-aware C library self.pool = str(pool) self.user = str(user) self.conf_file = str(conf_file) except cfg.ConfigFileValueError as e: reason = _("Error in store configuration: %s") % e LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name='rbd', reason=reason) if self.backend_group: self._set_url_prefix() self.size = 0 self.resize_amount = self.WRITE_CHUNKSIZE def _set_url_prefix(self): fsid = None with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: if hasattr(conn, 'get_fsid'): fsid = encodeutils.safe_decode(conn.get_fsid()) if fsid and self.pool: # ensure nothing contains / or any other url-unsafe character safe_fsid = urllib.parse.quote(fsid, '') safe_pool = urllib.parse.quote(self.pool, '') self._url_prefix = "rbd://%s/%s/" % (safe_fsid, safe_pool) else: self._url_prefix = "rbd://" @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location return (ImageIterator(loc.pool, loc.image, loc.snapshot, self), self.get_size(location)) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location # if there is a pool specific in the location, use it; otherwise # we fall back to the default pool specified in the config target_pool = loc.pool or self.pool with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: with conn.open_ioctx(target_pool) as ioctx: try: with rbd.Image(ioctx, loc.image, snapshot=loc.snapshot) as image: img_info = image.stat() return img_info['size'] except rbd.ImageNotFound: msg = _('RBD image %s does not exist') % loc.get_uri() LOG.debug(msg) raise exceptions.NotFound(msg) def _create_image(self, fsid, conn, ioctx, image_name, 
size, order, context=None): """ Create an rbd image. If librbd supports it, make it a cloneable snapshot, so that copy-on-write volumes can be created from it. :param image_name: Image's name :returns: `glance_store.rbd.StoreLocation` object """ features = conn.conf_get('rbd_default_features') if ((features is None) or (int(features) == 0)): features = rbd.RBD_FEATURE_LAYERING self.RBDProxy().create(ioctx, image_name, size, order, old_format=False, features=int(features)) return StoreLocation({ 'fsid': fsid, 'pool': self.pool, 'image': image_name, 'snapshot': DEFAULT_SNAPNAME, }, self.conf) def _delete_image(self, target_pool, image_name, snapshot_name=None, context=None): """ Delete RBD image and snapshot. :param image_name: Image's name :param snapshot_name: Image snapshot's name :raises: NotFound if image does not exist; InUseByStore if image is in use or snapshot unprotect failed """ with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: with conn.open_ioctx(target_pool) as ioctx: try: # First remove snapshot. if snapshot_name is not None: with rbd.Image(ioctx, image_name) as image: try: self._unprotect_snapshot(image, snapshot_name) image.remove_snap(snapshot_name) except rbd.ImageNotFound as exc: msg = (_("Snap Operating Exception " "%(snap_exc)s " "Snapshot does not exist.") % {'snap_exc': exc}) LOG.debug(msg) except rbd.ImageBusy as exc: log_msg = (_LW("Snap Operating Exception " "%(snap_exc)s " "Snapshot is in use.") % {'snap_exc': exc}) LOG.warning(log_msg) raise exceptions.InUseByStore() # Then delete image. self.RBDProxy().remove(ioctx, image_name) except rbd.ImageHasSnapshots: log_msg = (_LW("Unable to remove image %(img_name)s: it " "has snapshot(s) left; trashing instead") % {'img_name': image_name}) LOG.warning(log_msg) with rbd.Image(ioctx, image_name) as image: try: rbd.RBD().trash_move(ioctx, image_name) LOG.debug('Moved %s to trash', image_name) except rbd.ImageBusy: LOG.warning(_('Unable to move in-use image to ' 'trash')) raise exceptions.InUseByStore() return raise exceptions.HasSnapshot() except rbd.ImageBusy: log_msg = (_LW("Remove image %(img_name)s failed. " "It is in use.") % {'img_name': image_name}) LOG.warning(log_msg) raise exceptions.InUseByStore() except rbd.ImageNotFound: msg = _("RBD image %s does not exist") % image_name raise exceptions.NotFound(message=msg) def _unprotect_snapshot(self, image, snap_name): try: image.unprotect_snap(snap_name) except rbd.InvalidArgument: # NOTE(slaweq): if snapshot was unprotected already, rbd library # raises InvalidArgument exception without any "clear" message. # Such exception is not dangerous for us so it will be just logged LOG.debug("Snapshot %s is unprotected already" % snap_name) def _resize_on_write(self, image, image_size, bytes_written, chunk_length): """Handle the rbd resize when needed.""" if image_size != 0 or self.size >= bytes_written + chunk_length: return self.size # Note(jokke): We double how much we grow the image each time # up to 8gigs to avoid resizing for each write on bigger images self.resize_amount = min(self.resize_amount * 2, 8 * units.Gi) new_size = self.size + self.resize_amount LOG.debug("resizing image to %s KiB" % (new_size / units.Ki)) image.resize(new_size) return new_size @driver.back_compat_add @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. 
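        With the default pool and snapshot names, the returned URL takes
        a form like ``rbd://<fsid>/images/<image id>/snap``, where the
        fsid is whatever the connected cluster reports.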
:param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param hashing_algo: A hashlib algorithm identifier (string) :param context: A context object :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already exists """ os_hash_value = utils.get_hasher(hashing_algo, False) checksum = utils.get_hasher('md5', False) image_name = str(image_id) with self.get_connection(conffile=self.conf_file, rados_id=self.user) as conn: fsid = None if hasattr(conn, 'get_fsid'): # Librados's get_fsid is represented as binary # in py3 instead of str as it is in py2. # This is causing problems with ceph. # Decode binary to str fixes these issues. # Fix with encodeutils.safe_decode CAN BE REMOVED # after librados's fix will be stable. # # More information: # https://bugs.launchpad.net/glance-store/+bug/1816721 # https://bugs.launchpad.net/cinder/+bug/1816468 # https://tracker.ceph.com/issues/38381 fsid = encodeutils.safe_decode(conn.get_fsid()) with conn.open_ioctx(self.pool) as ioctx: order = int(math.log(self.WRITE_CHUNKSIZE, 2)) LOG.debug('creating image %s with order %d and size %d', image_name, order, image_size) if image_size == 0: LOG.warning(_LW("Since image size is zero we will be " "doing resize-before-write which will be " "slower than normal")) try: loc = self._create_image(fsid, conn, ioctx, image_name, image_size, order) except rbd.ImageExists: msg = _('RBD image %s already exists') % image_id raise exceptions.Duplicate(message=msg) try: with rbd.Image(ioctx, image_name) as image: bytes_written = 0 offset = 0 chunks = utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE) for chunk in chunks: # NOTE(jokke): If we don't know image size we need # to resize it on write. The resize amount will # ramp up to 8 gigs. chunk_length = len(chunk) self.size = self._resize_on_write(image, image_size, bytes_written, chunk_length) bytes_written += chunk_length if not (self.thin_provisioning and not any(chunk)): image.write(chunk, offset) offset += chunk_length os_hash_value.update(chunk) checksum.update(chunk) if verifier: verifier.update(chunk) # Lets trim the image in case we overshoot with resize if image_size == 0: image.resize(bytes_written) if loc.snapshot: image.create_snap(loc.snapshot) image.protect_snap(loc.snapshot) except rbd.NoSpace: log_msg = (_LE("Failed to store image %(img_name)s " "insufficient space available") % {'img_name': image_name}) LOG.error(log_msg) # Delete image if one was created try: target_pool = loc.pool or self.pool self._delete_image(target_pool, loc.image, loc.snapshot) except exceptions.NotFound: pass raise exceptions.StorageFull(message=log_msg) except Exception as exc: log_msg = (_LE("Failed to store image %(img_name)s " "Store Exception %(store_exc)s") % {'img_name': image_name, 'store_exc': exc}) LOG.error(log_msg) # Delete image if one was created try: target_pool = loc.pool or self.pool self._delete_image(target_pool, loc.image, loc.snapshot) except exceptions.NotFound: pass raise exc # Make sure we send back the image size whether provided or inferred. 
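        # As an illustration of the resize-on-write path above: when
        # image_size is 0, the RBD image starts at size 0 and is grown by
        # a resize_amount that doubles on each resize (capped at 8 GiB),
        # after which image.resize(bytes_written) trims any overshoot.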
        if image_size == 0:
            image_size = bytes_written

        # Add store backend information to location metadata
        metadata = {}
        if self.backend_group:
            metadata['store'] = self.backend_group

        return (loc.get_uri(),
                image_size,
                checksum.hexdigest(),
                os_hash_value.hexdigest(),
                metadata)

    @capabilities.check
    def delete(self, location, context=None):
        """
        Takes a `glance_store.location.Location` object that indicates
        where to find the image file to delete.

        :param location: `glance_store.location.Location` object, supplied
                         from glance_store.location.get_location_from_uri()
        :raises: NotFound if image does not exist;
                 InUseByStore if image is in use or snapshot unprotect failed
        """
        loc = location.store_location
        target_pool = loc.pool or self.pool
        self._delete_image(target_pool, loc.image, loc.snapshot)

glance_store-4.8.1/glance_store/_drivers/s3.py

# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Storage backend for S3 or Storage Servers that follow the S3 Protocol"""

import io
import logging
import math
import re
import urllib

try:
    from boto3 import session as boto_session
    from botocore import client as boto_client
    from botocore import exceptions as boto_exceptions
    from botocore import utils as boto_utils
except ImportError:
    boto_session = None
    boto_client = None
    boto_exceptions = None
    boto_utils = None

import eventlet
from oslo_config import cfg
from oslo_utils import encodeutils
from oslo_utils import units

import glance_store
from glance_store import capabilities
from glance_store.common import utils
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _
import glance_store.location

LOG = logging.getLogger(__name__)

DEFAULT_LARGE_OBJECT_SIZE = 100          # 100M
DEFAULT_LARGE_OBJECT_CHUNK_SIZE = 10     # 10M
DEFAULT_LARGE_OBJECT_MIN_CHUNK_SIZE = 5  # 5M
DEFAULT_THREAD_POOLS = 10                # 10 pools
MAX_PART_NUM = 10000                     # 10000 upload parts

_S3_OPTS = [
    cfg.StrOpt('s3_store_host',
               help="""
The host where the S3 server is listening.

This configuration option sets the host of the S3 or S3 compatible
storage Server. This option is required when using the S3 storage
backend. The host can contain a DNS name (e.g. s3.amazonaws.com,
my-object-storage.com) or an IP address (127.0.0.1).

Possible values:
    * A valid DNS name
    * A valid IPv4 address

Related Options:
    * s3_store_access_key
    * s3_store_secret_key

"""),
    cfg.StrOpt('s3_store_region_name',
               default='',
               help="""
The S3 region name.

This parameter will set the region_name used by boto.
If this parameter is not set, we will try to compute it from the
s3_store_host.

Possible values:
    * A valid region name

Related Options:
    * s3_store_host

"""),
    cfg.StrOpt('s3_store_access_key',
               secret=True,
               help="""
The S3 query token access key.
This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: * Any string value that is the access key for a user with appropriate privileges Related Options: * s3_store_host * s3_store_secret_key """), cfg.StrOpt('s3_store_secret_key', secret=True, help=""" The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: * Any string value that is a secret key corresponding to the access key specified using the ``s3_store_host`` option Related Options: * s3_store_host * s3_store_access_key """), cfg.StrOpt('s3_store_bucket', help=""" The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If ``s3_store_create_bucket_on_put`` is set to true, it will be created automatically even if the bucket does not exist. Possible values: * Any string value Related Options: * s3_store_create_bucket_on_put * s3_store_bucket_url_format """), cfg.BoolOpt('s3_store_create_bucket_on_put', default=False, help=""" Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: * Any Boolean value Related Options: * None """), cfg.StrOpt('s3_store_bucket_url_format', default='auto', help=""" The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In ``path``-style, the endpoint for the object looks like 'https://s3.amazonaws.com/bucket/example.img'. And in ``virtual``-style, the endpoint for the object looks like 'https://bucket.s3.amazonaws.com/example.img'. If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: * Any string value of ``auto``, ``virtual``, or ``path`` Related Options: * s3_store_bucket """), cfg.IntOpt('s3_store_large_object_size', default=DEFAULT_LARGE_OBJECT_SIZE, help=""" What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: You can only split up to 10,000 images. Possible values: * Any positive integer value Related Options: * s3_store_large_object_chunk_size * s3_store_thread_pools """), cfg.IntOpt('s3_store_large_object_chunk_size', default=DEFAULT_LARGE_OBJECT_CHUNK_SIZE, help=""" What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: * Any positive integer value (must be greater than or equal to 5M) Related Options: * s3_store_large_object_size * s3_store_thread_pools """), cfg.IntOpt('s3_store_thread_pools', default=DEFAULT_THREAD_POOLS, help=""" The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. 
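As a rough illustration using the defaults
(``s3_store_large_object_size = 100``,
``s3_store_large_object_chunk_size = 10``,
``s3_store_thread_pools = 10``), a 1000 MB image exceeds the 100 MB
threshold and is uploaded as one hundred 10 MB parts by a pool of 10
worker threads.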
Possible values:
    * Any positive integer value

Related Options:
    * s3_store_large_object_size
    * s3_store_large_object_chunk_size

""")
]


class UploadPart(object):
    """The class for the upload part."""

    def __init__(self, mpu, fp, partnum, chunks):
        self.mpu = mpu
        self.partnum = partnum
        self.fp = fp
        self.size = 0
        self.chunks = chunks
        self.etag = {}
        self.success = True


def run_upload(s3_client, bucket, key, part):
    """Upload the upload part into S3 and set the returned etag and size
    to its part info.

    :param s3_client: An object with credentials to connect to S3
    :param bucket: The S3 bucket name
    :param key: The object name to be stored (image identifier)
    :param part: UploadPart object which is used during a multipart upload
    """
    pnum = part.partnum
    bsize = part.chunks
    upload_id = part.mpu['UploadId']
    LOG.debug("Uploading upload part in S3 partnum=%(pnum)d, "
              "size=%(bsize)d, key=%(key)s, UploadId=%(UploadId)s",
              {'pnum': pnum, 'bsize': bsize, 'key': key,
               'UploadId': upload_id})

    try:
        key = s3_client.upload_part(Body=part.fp,
                                    Bucket=bucket,
                                    ContentLength=bsize,
                                    Key=key,
                                    PartNumber=pnum,
                                    UploadId=upload_id)
        part.etag[part.partnum] = key['ETag']
        part.size = bsize
    except boto_exceptions.ClientError as e:
        # The S3 error code is a string, so interpolate it with %s,
        # not %d.
        error_code = e.response['Error']['Code']
        error_message = e.response['Error']['Message']
        LOG.warning("Failed to upload part in S3 partnum=%(pnum)d, "
                    "size=%(bsize)d, error code=%(error_code)s, "
                    "error message=%(error_message)s",
                    {'pnum': pnum, 'bsize': bsize,
                     'error_code': error_code,
                     'error_message': error_message})
        part.success = False
    finally:
        part.fp.close()


class StoreLocation(glance_store.location.StoreLocation):
    """Class describing an S3 URI.

    An S3 URI can look like any of the following:

        s3://accesskey:secretkey@s3.amazonaws.com/bucket/key-id
        s3+https://accesskey:secretkey@s3.amazonaws.com/bucket/key-id

    The s3+https:// URIs indicate there is an HTTPS s3service URL
    """

    def process_specs(self):
        self.scheme = self.specs.get('scheme', 's3')
        self.accesskey = self.specs.get('accesskey')
        self.secretkey = self.specs.get('secretkey')
        s3_host = self.specs.get('s3serviceurl')
        self.bucket = self.specs.get('bucket')
        self.key = self.specs.get('key')

        if s3_host.startswith('https://'):
            self.scheme = 's3+https'
            s3_host = s3_host[len('https://'):].strip('/')
        elif s3_host.startswith('http://'):
            s3_host = s3_host[len('http://'):].strip('/')
        self.s3serviceurl = s3_host.strip('/')

    def _get_credstring(self):
        if self.accesskey:
            return '%s:%s@' % (self.accesskey, self.secretkey)
        return ''

    def get_uri(self):
        return "%s://%s%s/%s/%s" % (self.scheme, self._get_credstring(),
                                    self.s3serviceurl, self.bucket, self.key)

    def parse_uri(self, uri):
        """Parse URLs.

        Note that an Amazon AWS secret key can contain the forward slash,
        which is awkward and breaks urlparse miserably. This function
        works around that issue.
        """
        # Make sure that URIs that contain multiple schemes, such as:
        # s3://accesskey:secretkey@https://s3.amazonaws.com/bucket/key-id
        # are immediately rejected.
        if uri.count('://') != 1:
If you have specified a URI like " "s3://accesskey:secretkey@" "https://s3.amazonaws.com/bucket/key-id" ", you need to change it to use the " "s3+https:// scheme, like so: " "s3+https://accesskey:secretkey@" "s3.amazonaws.com/bucket/key-id") LOG.info("Invalid store uri: %s", reason) raise exceptions.BadStoreUri(uri=uri) pieces = urllib.parse.urlparse(uri) self.validate_schemas(uri, valid_schemas=( 's3://', 's3+http://', 's3+https://')) self.scheme = pieces.scheme path = pieces.path.strip('/') netloc = pieces.netloc.strip('/') entire_path = (netloc + '/' + path).strip('/') if '@' in uri: creds, path = entire_path.split('@') cred_parts = creds.split(':') try: self.accesskey = cred_parts[0] self.secretkey = cred_parts[1] except IndexError: LOG.error("Badly formed S3 credentials") raise exceptions.BadStoreUri(uri=uri) else: self.accesskey = None path = entire_path try: path_parts = path.split('/') self.key = path_parts.pop() self.bucket = path_parts.pop() if path_parts: self.s3serviceurl = '/'.join(path_parts).strip('/') else: LOG.error("Badly formed S3 URI. Missing s3 service URL.") raise exceptions.BadStoreUri(uri=uri) except IndexError: LOG.error("Badly formed S3 URI") raise exceptions.BadStoreUri(uri=uri) class Store(glance_store.driver.Store): """An implementation of the s3 adapter.""" _CAPABILITIES = capabilities.BitMasks.RW_ACCESS OPTIONS = _S3_OPTS EXAMPLE_URL = "s3://<ACCESS_KEY>:<SECRET_KEY>@<S3_HOST>/<BUCKET>/<OBJ>" READ_CHUNKSIZE = 64 * units.Ki WRITE_CHUNKSIZE = 5 * units.Mi @staticmethod def get_schemes(): return 's3', 's3+http', 's3+https' def configure_add(self): """ Configure the Store to use the stored configuration options. Any store that needs special configuration should implement this method. If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration` """ if boto_session is None: reason = _("boto3 or botocore is not available.") LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="s3", reason=reason) self.s3_host = self._option_get('s3_store_host') self.region_name = self._option_get('s3_store_region_name') self.access_key = self._option_get('s3_store_access_key') self.secret_key = self._option_get('s3_store_secret_key') self.bucket = self._option_get('s3_store_bucket') self.scheme = 's3' if self.s3_host.startswith('https://'): self.scheme = 's3+https' self.full_s3_host = self.s3_host elif self.s3_host.startswith('http://'): self.full_s3_host = self.s3_host else: # Default to http self.full_s3_host = 'http://' + self.s3_host _s3_obj_size = self._option_get('s3_store_large_object_size') self.s3_store_large_object_size = _s3_obj_size * units.Mi _s3_ck_size = self._option_get('s3_store_large_object_chunk_size') _s3_ck_min = DEFAULT_LARGE_OBJECT_MIN_CHUNK_SIZE if _s3_ck_size < _s3_ck_min: reason = _("s3_store_large_object_chunk_size must be at " "least %d MB.") % _s3_ck_min LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="s3", reason=reason) self.s3_store_large_object_chunk_size = _s3_ck_size * units.Mi self.s3_store_thread_pools = self._option_get('s3_store_thread_pools') if self.s3_store_thread_pools <= 0: reason = _("s3_store_thread_pools must be a positive " "integer.
%s") % self.s3_store_thread_pools LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="s3", reason=reason) if self.backend_group: self._set_url_prefix() def _set_url_prefix(self): s3_host = self.s3_host if s3_host.startswith('http://'): s3_host = s3_host[len('http://'):] elif s3_host.startswith('https://'): s3_host = s3_host[len('https://'):] self._url_prefix = "%s://%s:%s@%s/%s" % (self.scheme, self.access_key, self.secret_key, s3_host, self.bucket) def _option_get(self, param): if self.backend_group: store_conf = getattr(self.conf, self.backend_group) else: store_conf = self.conf.glance_store result = getattr(store_conf, param) if not result: if param == 's3_store_create_bucket_on_put': return result if param == 's3_store_region_name': return result reason = _("Could not find %s in configuration options.") % param LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="s3", reason=reason) return result def _create_s3_client(self, loc): """Create a client object to use when connecting to S3. :param loc: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :returns: An object with credentials to connect to S3 """ s3_host = self._option_get('s3_store_host') url_format = self._option_get('s3_store_bucket_url_format') calling_format = {'addressing_style': url_format} session = boto_session.Session(aws_access_key_id=loc.accesskey, aws_secret_access_key=loc.secretkey) config = boto_client.Config(s3=calling_format) location = get_s3_location(s3_host) bucket_name = loc.bucket if (url_format == 'virtual' and not boto_utils.check_dns_name(bucket_name)): raise boto_exceptions.InvalidDNSNameError(bucket_name=bucket_name) region_name, endpoint_url = None, None if self.region_name: region_name = self.region_name endpoint_url = s3_host elif location: region_name = location else: endpoint_url = s3_host return session.client(service_name='s3', endpoint_url=endpoint_url, region_name=region_name, use_ssl=(loc.scheme == 's3+https'), config=config) def _operation_set(self, loc): """Objects and variables frequently used when operating S3 are returned together. 
:param loc: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :returns: tuple of: (1) S3 client object, (2) Bucket name, (3) Image Object name """ return self._create_s3_client(loc), loc.bucket, loc.key @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of a generator (for reading the image file) and the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ loc = location.store_location s3_client, bucket, key = self._operation_set(loc) if not self._object_exists(s3_client, bucket, key): LOG.warning("Could not find key %(key)s in " "bucket %(bucket)s", {'key': key, 'bucket': bucket}) raise exceptions.NotFound(image=key) resp = s3_client.get_object(Bucket=bucket, Key=key) LOG.debug("Retrieved image object from S3 using (s3_host=%(s3_host)s, " "bucket=%(bucket)s, key=%(key)s)", {'s3_host': loc.s3serviceurl, 'bucket': bucket, 'key': key}) cs = self.READ_CHUNKSIZE class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return b'' return (ResponseIndexable(utils.chunkiter(resp['Body'], cs), resp['ContentLength']), resp['ContentLength']) def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file and returns the image size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist :rtype: int """ loc = location.store_location s3_client, bucket, key = self._operation_set(loc) if not self._object_exists(s3_client, bucket, key): LOG.warning("Could not find key %(key)s in " "bucket %(bucket)s", {'key': key, 'bucket': bucket}) raise exceptions.NotFound(image=key) resp = s3_client.head_object(Bucket=bucket, Key=key) return resp['ContentLength'] @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image.
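# get() above streams the object via utils.chunkiter; the underlying pattern
# is that botocore's StreamingBody supports read(size). A minimal sketch of
# a chunking generator (a hypothetical helper, not the glance_store
# utils.chunkiter implementation itself):
def iter_chunks(body, chunk_size=64 * 1024):
    # Yield successive chunks until the stream is exhausted, so the whole
    # object never has to sit in memory at once.
    while True:
        chunk = body.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Usage, assuming an existing client:
#   resp = s3_client.get_object(Bucket=bucket, Key=key)
#   for piece in iter_chunks(resp['Body']):
#       sink.write(piece)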
:param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param hashing_algo: A hashlib algorithm identifier (string) :param context: A context object :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already exists """ loc = StoreLocation(store_specs={'scheme': self.scheme, 'bucket': self.bucket, 'key': image_id, 's3serviceurl': self.full_s3_host, 'accesskey': self.access_key, 'secretkey': self.secret_key}, conf=self.conf, backend_group=self.backend_group) s3_client, bucket, key = self._operation_set(loc) if not self._bucket_exists(s3_client, bucket): if self._option_get('s3_store_create_bucket_on_put'): self._create_bucket(s3_client, self._option_get('s3_store_host'), bucket, self._option_get('s3_store_region_name')) else: msg = (_("The bucket %s does not exist in " "S3. Please set the " "s3_store_create_bucket_on_put option " "to add bucket to S3 automatically.") % bucket) raise glance_store.BackendException(msg) LOG.debug("Adding image object to S3 using (s3_host=%(s3_host)s, " "bucket=%(bucket)s, key=%(key)s)", {'s3_host': self.s3_host, 'bucket': bucket, 'key': key}) if not self._object_exists(s3_client, bucket, key): if image_size < self.s3_store_large_object_size: return self._add_singlepart(s3_client=s3_client, image_file=image_file, bucket=bucket, key=key, loc=loc, hashing_algo=hashing_algo, verifier=verifier) return self._add_multipart(s3_client=s3_client, image_file=image_file, image_size=image_size, bucket=bucket, key=key, loc=loc, hashing_algo=hashing_algo, verifier=verifier) LOG.warning("S3 already has an image with bucket ID %(bucket)s, " "key %(key)s", {'bucket': bucket, 'key': key}) raise exceptions.Duplicate(image=key) def _add_singlepart(self, s3_client, image_file, bucket, key, loc, hashing_algo, verifier): """Stores an image file with a single part upload to S3 backend. 
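# The single-part path below reads the image in WRITE_CHUNKSIZE pieces and
# feeds every chunk to the hashers as it goes. A distilled, self-contained
# version of that pattern (hashlib only; the signature verifier is omitted
# and the function name is illustrative):
import hashlib

def read_and_hash(image_file, hashing_algo='sha512', chunk=5 * 1024 * 1024):
    os_hash = hashlib.new(hashing_algo)  # the configurable multihash
    checksum = hashlib.md5()             # legacy md5 checksum kept alongside
    data = b''
    for piece in iter(lambda: image_file.read(chunk), b''):
        data += piece
        os_hash.update(piece)
        checksum.update(piece)
    return data, checksum.hexdigest(), os_hash.hexdigest()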
:param s3_client: An object with credentials to connect to S3 :param image_file: The image data to write, as a file-like object :param bucket: S3 bucket name :param key: The object name to be stored (image identifier) :param loc: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :param hashing_algo: A hashlib algorithm identifier (string) :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information """ os_hash_value = utils.get_hasher(hashing_algo, False) checksum = utils.get_hasher('md5', False) image_data = b'' image_size = 0 for chunk in utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE): image_data += chunk image_size += len(chunk) os_hash_value.update(chunk) checksum.update(chunk) if verifier: verifier.update(chunk) s3_client.put_object(Body=image_data, Bucket=bucket, Key=key) hash_hex = os_hash_value.hexdigest() checksum_hex = checksum.hexdigest() # Add store backend information to location metadata metadata = {} if self.backend_group: metadata['store'] = self.backend_group LOG.debug("Wrote %(size)d bytes to S3 key named %(key)s " "with checksum %(checksum)s", {'size': image_size, 'key': key, 'checksum': checksum_hex}) return loc.get_uri(), image_size, checksum_hex, hash_hex, metadata def _add_multipart(self, s3_client, image_file, image_size, bucket, key, loc, hashing_algo, verifier): """Stores an image file with a multi part upload to S3 backend. :param s3_client: An object with credentials to connect to S3 :param image_file: The image data to write, as a file-like object :param bucket: S3 bucket name :param key: The object name to be stored (image identifier) :param loc: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :param hashing_algo: A hashlib algorithm identifier (string) :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information """ os_hash_value = utils.get_hasher(hashing_algo, False) checksum = utils.get_hasher('md5', False) pool_size = self.s3_store_thread_pools pool = eventlet.greenpool.GreenPool(size=pool_size) mpu = s3_client.create_multipart_upload(Bucket=bucket, Key=key) upload_id = mpu['UploadId'] LOG.debug("Multipart initiate key=%(key)s, UploadId=%(UploadId)s", {'key': key, 'UploadId': upload_id}) cstart = 0 plist = [] chunk_size = int(math.ceil(float(image_size) / MAX_PART_NUM)) write_chunk_size = max(self.s3_store_large_object_chunk_size, chunk_size) it = utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE) buffered_chunk = b'' while True: try: buffered_clen = len(buffered_chunk) if buffered_clen < write_chunk_size: # keep reading data read_chunk = next(it) buffered_chunk += read_chunk continue else: write_chunk = buffered_chunk[:write_chunk_size] remained_data = buffered_chunk[write_chunk_size:] os_hash_value.update(write_chunk) checksum.update(write_chunk) if verifier: verifier.update(write_chunk) fp = io.BytesIO(write_chunk) fp.seek(0) part = UploadPart(mpu, fp, cstart + 1, len(write_chunk)) pool.spawn_n(run_upload, s3_client, bucket, key, part) plist.append(part) cstart += 1 buffered_chunk = remained_data except StopIteration: if len(buffered_chunk) > 0: # Write the last chunk data write_chunk = buffered_chunk 
os_hash_value.update(write_chunk) checksum.update(write_chunk) if verifier: verifier.update(write_chunk) fp = io.BytesIO(write_chunk) fp.seek(0) part = UploadPart(mpu, fp, cstart + 1, len(write_chunk)) pool.spawn_n(run_upload, s3_client, bucket, key, part) plist.append(part) break pedict = {} total_size = 0 pool.waitall() for part in plist: pedict.update(part.etag) total_size += part.size success = True for part in plist: if not part.success: success = False if success: # Complete mpu_list = self._get_mpu_list(pedict) s3_client.complete_multipart_upload(Bucket=bucket, Key=key, MultipartUpload=mpu_list, UploadId=upload_id) hash_hex = os_hash_value.hexdigest() checksum_hex = checksum.hexdigest() # Add store backend information to location metadata metadata = {} if self.backend_group: metadata['store'] = self.backend_group LOG.info("Multipart complete key=%(key)s " "UploadId=%(UploadId)s " "Wrote %(total_size)d bytes to S3 key " "named %(key)s " "with checksum %(checksum)s", {'key': key, 'UploadId': upload_id, 'total_size': total_size, 'checksum': checksum_hex}) return loc.get_uri(), total_size, checksum_hex, hash_hex, metadata # Abort s3_client.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id) LOG.error("Some parts failed to upload to S3. " "Aborted the key=%s", key) msg = _("Failed to add image object to S3. key=%s") % key raise glance_store.BackendException(msg) @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete. :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist """ loc = location.store_location s3_client, bucket, key = self._operation_set(loc) if not self._object_exists(s3_client, bucket, key): LOG.warning("Could not find key %(key)s in bucket %(bucket)s", {'key': key, 'bucket': bucket}) raise exceptions.NotFound(image=key) LOG.debug("Deleting image object from S3 using (s3_host=%(s3_host)s, " "bucket=%(bucket)s, key=%(key)s)", {'s3_host': loc.s3serviceurl, 'bucket': bucket, 'key': key}) return s3_client.delete_object(Bucket=bucket, Key=key) @staticmethod def _bucket_exists(s3_client, bucket): """Check whether the bucket exists in S3. :param s3_client: An object with credentials to connect to S3 :param bucket: S3 bucket name :returns: boolean; True if the bucket exists, False otherwise :raises: BadStoreConfiguration if the connection to S3 fails """ try: s3_client.head_bucket(Bucket=bucket) except boto_exceptions.ClientError as e: error_code = e.response['Error']['Code'] if error_code == '404': return False msg = ("Failed to get bucket info: %s" % encodeutils.exception_to_unicode(e)) LOG.error(msg) raise glance_store.BadStoreConfiguration(store_name='s3', reason=msg) else: return True @staticmethod def _object_exists(s3_client, bucket, key): """Check whether the object exists in the given S3 bucket. :param s3_client: An object with credentials to connect to S3 :param bucket: S3 bucket name :param key: The image object name :returns: boolean; True if the object exists, False otherwise
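# _add_multipart() above fans part uploads out onto an eventlet GreenPool
# and joins them with waitall() before deciding to complete or abort. A
# self-contained toy version of that pattern (the worker stands in for
# run_upload()):
import eventlet.greenpool

def _work(results, idx):
    results[idx] = idx * idx  # stand-in for uploading one part

pool = eventlet.greenpool.GreenPool(size=10)
results = {}
for i in range(4):
    pool.spawn_n(_work, results, i)
pool.waitall()  # all greenthreads have finished; results is fully populated
assert sorted(results) == [0, 1, 2, 3]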
:raises: BadStoreConfiguration if the connection to S3 fails """ try: s3_client.head_object(Bucket=bucket, Key=key) except boto_exceptions.ClientError as e: error_code = e.response['Error']['Code'] if error_code == '404': return False msg = ("Failed to get object info: %s" % encodeutils.exception_to_unicode(e)) LOG.error(msg) raise glance_store.BadStoreConfiguration(store_name='s3', reason=msg) else: return True @staticmethod def _create_bucket(s3_client, s3_host, bucket, region_name=None): """Create the bucket in S3. :param s3_client: An object with credentials to connect to S3 :param s3_host: S3 endpoint url :param bucket: S3 bucket name :param region_name: An optional region_name. If not provided, will try to compute it from s3_host :raises: BadStoreConfiguration if the connection to S3 fails """ if region_name: region = region_name else: region = get_s3_location(s3_host) try: if region == '': s3_client.create_bucket(Bucket=bucket) else: s3_client.create_bucket( Bucket=bucket, CreateBucketConfiguration={ 'LocationConstraint': region } ) except boto_exceptions.ClientError as e: msg = ("Failed to add bucket to S3: %s" % encodeutils.exception_to_unicode(e)) LOG.error(msg) raise glance_store.BadStoreConfiguration(store_name='s3', reason=msg) @staticmethod def _get_mpu_list(pedict): """Build the ``MultipartUpload`` structure that boto3's complete_multipart_upload expects from a dict of part ETags. :param pedict: dict mapping part numbers to UploadPart.etag values :returns: dict with a 'Parts' list built from pedict """ return { 'Parts': [ { 'PartNumber': pnum, 'ETag': etag } for pnum, etag in pedict.items() ] } def get_s3_location(s3_host): """Get S3 region information from ``s3_store_host``. :param s3_host: S3 endpoint url :returns: string; the Amazon S3 region matching the host, or '' when an S3 compatible storage host is used """ # NOTE(arnaud): maybe get rid of hardcoded amazon stuff here?
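# The structure _get_mpu_list() returns, shown with toy values; boto3's
# complete_multipart_upload expects exactly this shape, with 'Parts'
# ordered by ascending PartNumber:
pedict = {1: '"etag-1"', 2: '"etag-2"'}
mpu_list = {
    'Parts': [
        {'PartNumber': pnum, 'ETag': etag}
        for pnum, etag in sorted(pedict.items())
    ]
}
# s3_client.complete_multipart_upload(Bucket=..., Key=...,
#                                     MultipartUpload=mpu_list,
#                                     UploadId=...)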
locations = { 's3.amazonaws.com': '', 's3-us-east-1.amazonaws.com': 'us-east-1', 's3-us-east-2.amazonaws.com': 'us-east-2', 's3-us-west-1.amazonaws.com': 'us-west-1', 's3-us-west-2.amazonaws.com': 'us-west-2', 's3-ap-east-1.amazonaws.com': 'ap-east-1', 's3-ap-south-1.amazonaws.com': 'ap-south-1', 's3-ap-northeast-1.amazonaws.com': 'ap-northeast-1', 's3-ap-northeast-2.amazonaws.com': 'ap-northeast-2', 's3-ap-northeast-3.amazonaws.com': 'ap-northeast-3', 's3-ap-southeast-1.amazonaws.com': 'ap-southeast-1', 's3-ap-southeast-2.amazonaws.com': 'ap-southeast-2', 's3-ca-central-1.amazonaws.com': 'ca-central-1', 's3-cn-north-1.amazonaws.com.cn': 'cn-north-1', 's3-cn-northwest-1.amazonaws.com.cn': 'cn-northwest-1', 's3-eu-central-1.amazonaws.com': 'eu-central-1', 's3-eu-west-1.amazonaws.com': 'eu-west-1', 's3-eu-west-2.amazonaws.com': 'eu-west-2', 's3-eu-west-3.amazonaws.com': 'eu-west-3', 's3-eu-north-1.amazonaws.com': 'eu-north-1', 's3-sa-east-1.amazonaws.com': 'sa-east-1' } # strip off scheme and port if present key = re.sub(r'^(https?://)?(?P<host>[^:]+[^/])(:[0-9]+)?/?$', r'\g<host>', s3_host) return locations.get(key, '') ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1732116 glance_store-4.8.1/glance_store/_drivers/swift/0000775000175000017500000000000000000000000021632 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/_drivers/swift/__init__.py0000664000175000017500000000134300000000000023744 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from glance_store._drivers.swift import utils # noqa from glance_store._drivers.swift.store import * # noqa ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/_drivers/swift/buffered.py0000664000175000017500000001505200000000000023771 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import socket import tempfile from oslo_config import cfg from oslo_utils import encodeutils from glance_store import exceptions from glance_store.i18n import _ LOG = logging.getLogger(__name__) READ_SIZE = 65536 BUFFERING_OPTS = [ cfg.StrOpt('swift_upload_buffer_dir', help=""" Directory to buffer image segments before upload to Swift.
Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option ``swift_buffer_on_upload`` is set to True. * This directory should be provisioned keeping in mind the ``swift_store_large_object_chunk_size`` and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: * String value representing an absolute directory path Related options: * swift_buffer_on_upload * swift_store_large_object_chunk_size """), ] CONF = cfg.CONF def validate_buffering(buffer_dir): if buffer_dir is None: msg = _('Configuration option "swift_upload_buffer_dir" is ' 'not set. Please set it to a valid path to buffer ' 'during Swift uploads.') raise exceptions.BadStoreConfiguration(store_name='swift', reason=msg) # NOTE(dharinic): Ensure that the provided directory path for # buffering is valid try: _tmpfile = tempfile.TemporaryFile(dir=buffer_dir) except OSError as err: msg = (_('Unable to use buffer directory set with ' '"swift_upload_buffer_dir". Error: %s') % encodeutils.exception_to_unicode(err)) raise exceptions.BadStoreConfiguration(store_name='swift', reason=msg) else: _tmpfile.close() return True class BufferedReader(object): """Buffer a chunk (segment) worth of data to disk before sending it to swift. This creates the ability to back the input stream up and re-try put object requests. (Swiftclient will try to reset the file pointer on any upload failure if seek and tell methods are provided on the input file.) Chunks are temporarily buffered to disk. Disk space consumed will be roughly (segment size * number of in-flight upload requests). The disk space consumed for buffering may eat into the disk space available for the glance image cache, which may affect image download performance. So, extra care should be taken while deploying this to ensure there is enough disk space available. """ def __init__(self, fd, checksum, os_hash_value, total, verifier=None, backend_group=None): self.fd = fd self.total = total self.checksum = checksum self.os_hash_value = os_hash_value self.verifier = verifier self.backend_group = backend_group # maintain a pointer to use to update checksum and verifier self.update_position = 0 if self.backend_group: buffer_dir = getattr(CONF, self.backend_group).swift_upload_buffer_dir else: buffer_dir = CONF.glance_store.swift_upload_buffer_dir self._tmpfile = tempfile.TemporaryFile(dir=buffer_dir) self._buffered = False self.is_zero_size = False self._buffer() # Setting the file pointer back to the beginning of file self._tmpfile.seek(0) def read(self, size): """Read up to a chunk's worth of data from the input stream into a file buffer. Then return data out of that buffer.
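# The essence of BufferedReader: spool one segment into a TemporaryFile so
# the stream becomes seekable and swiftclient can retry a failed PUT. A
# minimal sketch under that assumption (checksums and the verifier are
# omitted; the function name is illustrative):
import tempfile

def buffer_segment(fd, total, read_size=65536):
    tmp = tempfile.TemporaryFile()
    remaining = total
    while remaining > 0:
        buf = fd.read(min(remaining, read_size))
        if not buf:
            break  # input ended early
        tmp.write(buf)
        remaining -= len(buf)
    tmp.seek(0)  # rewind so the caller can read (and re-read) the segment
    return tmp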
""" remaining = self.total - self._tmpfile.tell() read_size = min(remaining, size) # read out of the buffered chunk result = self._tmpfile.read(read_size) # update the checksum and verifier with only the bytes # they have not seen update = self.update_position - self._tmpfile.tell() if update < 0: self.checksum.update(result[update:]) self.os_hash_value.update(result[update:]) if self.verifier: self.verifier.update(result[update:]) self.update_position += abs(update) return result def _buffer(self): to_buffer = self.total LOG.debug("Buffering %s bytes of image segment" % to_buffer) buffer_read_count = 0 while not self._buffered: read_size = min(to_buffer, READ_SIZE) try: buf = self.fd.read(read_size) except IOError as e: # We actually don't know what exactly self.fd is. And as a # result we don't know which exception it may raise. To pass # the retry mechanism inside swift client we must limit the # possible set of errors. raise socket.error(*e.args) if len(buf) == 0: if self._tmpfile.tell() == 0: self.is_zero_size = True self._tmpfile.seek(0) self._buffered = True break self._tmpfile.write(buf) to_buffer -= len(buf) buffer_read_count = buffer_read_count + 1 if buffer_read_count == 0: self.is_zero_size = True # NOTE(belliott) seek and tell get used by python-swiftclient to "reset" # if there is a put_object error def seek(self, offset): LOG.debug("Seek from %s to %s" % (self._tmpfile.tell(), offset)) self._tmpfile.seek(offset) def tell(self): return self._tmpfile.tell() @property def bytes_read(self): return self.tell() def __enter__(self): self._tmpfile.__enter__() return self def __exit__(self, type, value, traceback): # close and delete the temporary file used to buffer data self._tmpfile.__exit__(type, value, traceback) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/_drivers/swift/connection_manager.py0000664000175000017500000002163500000000000026044 0ustar00zuulzuul00000000000000# Copyright 2010-2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Connection Manager for Swift connections that responsible for providing connection with valid credentials and updated token""" import logging from oslo_utils import encodeutils from glance_store import exceptions from glance_store.i18n import _, _LI LOG = logging.getLogger(__name__) class SwiftConnectionManager(object): """Connection Manager class responsible for initializing and managing swiftclient connections in store. The instance of that class can provide swift connections with a valid(and refreshed) user token if the token is going to expire soon. """ AUTH_HEADER_NAME = 'X-Auth-Token' def __init__(self, store, store_location, context=None, allow_reauth=False): """Initialize manager with parameters required to establish connection. Initialize store and prepare it for interacting with swift. Also initialize keystone client that need to be used for authentication if allow_reauth is True. 
The method invariant is the following: if the method was executed successfully and self.allow_reauth is True, users can safely request valid (non-expired) swift connections at any time. Otherwise, the connection manager initializes a connection once and always returns that connection to users. :param store: store that provides connections :param store_location: image location in store :param context: user context to access data in Swift :param allow_reauth: defines whether re-authentication needs to be executed when a user requests the connection """ self._client = None self.store = store self.location = store_location self.context = context self.allow_reauth = allow_reauth self.storage_url = self._get_storage_url() self.connection = self._init_connection() def get_connection(self): """Get swift client connection. Returns a swift client connection. If allow_reauth is True and the connection token is going to expire soon, the method returns an updated connection. The method invariant is the following: if self.allow_reauth is False then the method returns the same connection for every call, so the connection may expire. If self.allow_reauth is True the returned swift connection is always valid and cannot expire for at least swift_store_expire_soon_interval. """ if self.allow_reauth: # we refresh the token only if connection manager # re-authentication is allowed. Token refreshing is set up by # connection manager users. Also we disable re-authentication # if there is no way to execute it (cannot initialize trusts for # multi-tenant or auth_version is not 3) auth_ref = self.client.session.auth.auth_ref # if connection token is going to expire soon (keystone checks # whether the token is about to expire or has expired already) if self.store.backend_group: interval = getattr( self.store.conf, self.store.backend_group ).swift_store_expire_soon_interval else: store_conf = self.store.conf.glance_store interval = store_conf.swift_store_expire_soon_interval if auth_ref.will_expire_soon(interval): LOG.info(_LI("Requesting new token for swift connection.")) # request new token with session and client provided by store auth_token = self.client.session.get_auth_headers().get( self.AUTH_HEADER_NAME) LOG.info(_LI("Token has been successfully requested. " "Refreshing swift connection.")) # initialize a new swiftclient connection with the fresh token self.connection = self.store.get_store_connection( auth_token, self.storage_url) return self.connection @property def client(self): """Return a keystone client to request a new token. Initialize a client lazily from the method provided by glance_store. The method invariant is the following: if the client cannot be initialized, raise an exception; otherwise return an initialized client that can be used for re-authentication any time.
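# A distilled view of the refresh logic above: keystoneauth exposes
# will_expire_soon() on the access info, and a fresh token can be pulled
# from the session's auth headers. Sketch only; `client` is assumed to be
# an initialized keystone client carrying a keystoneauth session.
AUTH_HEADER_NAME = 'X-Auth-Token'

def maybe_refresh_token(client, interval=60):
    auth_ref = client.session.auth.auth_ref
    if auth_ref.will_expire_soon(interval):
        # request a new token; the session re-authenticates under the hood
        return client.session.get_auth_headers().get(AUTH_HEADER_NAME)
    return None  # the current token is good for at least `interval` seconds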
""" if self._client is None: self._client = self._init_client() return self._client def _init_connection(self): """Initialize and return valid Swift connection.""" auth_token = self.client.session.get_auth_headers().get( self.AUTH_HEADER_NAME) return self.store.get_store_connection( auth_token, self.storage_url) def _init_client(self): """Initialize Keystone client.""" return self.store.init_client(location=self.location, context=self.context) def _get_storage_url(self): """Request swift storage url.""" raise NotImplementedError() def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): pass class SingleTenantConnectionManager(SwiftConnectionManager): def _get_storage_url(self): """Get swift endpoint from keystone Return endpoint for swift from service catalog if not overridden in store configuration. The method works only Keystone v3. If you are using different version (1 or 2) it returns None. :return: swift endpoint """ if self.store.conf_endpoint: return self.store.conf_endpoint if self.store.auth_version == '3': try: return self.client.session.get_endpoint( service_type=self.store.service_type, interface=self.store.endpoint_type, region_name=self.store.region ) except Exception as e: # do the same that swift driver does # when catching ClientException msg = _("Cannot find swift service endpoint : " "%s") % encodeutils.exception_to_unicode(e) raise exceptions.BackendException(msg) def _init_connection(self): if self.store.auth_version == '3': return super(SingleTenantConnectionManager, self)._init_connection() else: # no re-authentication for v1 and v2 self.allow_reauth = False # use good old connection initialization return self.store.get_connection(self.location, self.context) class MultiTenantConnectionManager(SwiftConnectionManager): def __init__(self, store, store_location, context=None, allow_reauth=False): # no context - no party if context is None: reason = _("Multi-tenant Swift storage requires a user context.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) super(MultiTenantConnectionManager, self).__init__( store, store_location, context, allow_reauth) def __exit__(self, exc_type, exc_val, exc_tb): if self._client and self.client.trust_id: # client has been initialized - need to cleanup resources LOG.info(_LI("Revoking trust %s"), self.client.trust_id) self.client.trusts.delete(self.client.trust_id) def _get_storage_url(self): return self.location.swift_url def _init_connection(self): if self.allow_reauth: try: return super(MultiTenantConnectionManager, self)._init_connection() except Exception as e: LOG.debug("Cannot initialize swift connection for multi-tenant" " store with trustee token: %s. Using user token for" " connection initialization.", e) # for multi-tenant store we have a token, so we can use it # for connection initialization but we cannot fetch new token # with client self.allow_reauth = False return self.store.get_store_connection( self.context.auth_token, self.storage_url) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/_drivers/swift/store.py0000664000175000017500000021024700000000000023346 0ustar00zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for SWIFT""" import http.client import io import logging import math import urllib.parse from keystoneauth1.access import service_catalog as keystone_sc from keystoneauth1 import identity as ks_identity from keystoneauth1 import session as ks_session from keystoneclient.v3 import client as ks_client from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import units try: import swiftclient except ImportError: swiftclient = None import glance_store from glance_store._drivers.swift import buffered from glance_store._drivers.swift import connection_manager from glance_store._drivers.swift import utils as sutils from glance_store import capabilities from glance_store.common import utils as gutils from glance_store import driver from glance_store import exceptions from glance_store.i18n import _, _LE, _LI from glance_store import location LOG = logging.getLogger(__name__) DEFAULT_CONTAINER = 'glance' DEFAULT_LARGE_OBJECT_SIZE = 5 * units.Ki # 5GB DEFAULT_LARGE_OBJECT_CHUNK_SIZE = 200 # 200M ONE_MB = units.k * units.Ki # Here we used the mixed meaning of MB _SWIFT_OPTS = [ cfg.BoolOpt('swift_store_auth_insecure', default=False, help=""" Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: * True * False Related options: * swift_store_cacert """), cfg.StrOpt('swift_store_cacert', sample_default='/etc/ssl/certs/ca-certificates.crt', help=""" Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: * A valid path to a CA file Related options: * swift_store_auth_insecure """), cfg.StrOpt('swift_store_region', sample_default='RegionTwo', help=""" The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with ``swift_store_region`` allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. NOTE: Setting the region with ``swift_store_region`` is tenant-specific and is necessary ``only if`` the tenant has multiple endpoints across different regions. Possible values: * A string value representing a valid Swift region. Related Options: * None """), cfg.StrOpt('swift_store_endpoint', sample_default="""\ https://swift.openstack.example.org/v1/path_not_including_container\ _name\ """, help=""" The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. 
By default, an endpoint is not set and the storage URL returned by ``auth`` is used. Setting an endpoint with ``swift_store_endpoint`` overrides the storage URL and is used for Glance image storage. NOTE: The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: * String value representing a valid URL path up to a Swift container Related Options: * None """), cfg.StrOpt('swift_store_endpoint_type', default='publicURL', choices=('publicURL', 'adminURL', 'internalURL'), help=""" Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: * publicURL * adminURL * internalURL Related options: * swift_store_endpoint """), cfg.StrOpt('swift_store_service_type', default='object-store', help=""" Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to ``object-store``. NOTE: If ``swift_store_auth_version`` is set to 2, the value for this configuration option needs to be ``object-store``. If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: * A string representing a valid service type for Swift storage. Related Options: * None """), cfg.StrOpt('swift_store_container', default=DEFAULT_CONTAINER, help=""" Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option ``swift_store_multiple_containers_seed``. When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by ``swift_store_multiple_containers_seed``). Example: if the seed is set to 3 and swift_store_container = ``glance``, then an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in the container ``glance_fda``. All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be ``glance_fdae39a1-ba.`` Possible values: * If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account * If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of ``swift_store_multiple_containers_seed`` should be taken into account as well. Related options: * ``swift_store_multiple_containers_seed`` * ``swift_store_multi_tenant`` * ``swift_store_create_container_on_put`` """), cfg.IntOpt('swift_store_large_object_size', default=DEFAULT_LARGE_OBJECT_SIZE, min=1, help=""" The size threshold, in MB, after which Glance will start segmenting image data. 
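# The multiple-containers naming rule described above, as a hypothetical
# helper (the real implementation lives elsewhere in this driver): take the
# first N non-dash characters of the image UUID, keeping any dashes passed
# along the way.
def container_for(prefix, image_uuid, seed):
    if seed == 0:
        return prefix  # single-container mode
    chars = []
    counted = 0
    for ch in image_uuid:
        if counted == seed:
            break
        chars.append(ch)
        if ch != '-':
            counted += 1
    return '%s_%s' % (prefix, ''.join(chars))

uuid = 'fdae39a1-bac5-4238-aba4-69bcc726e848'
assert container_for('glance', uuid, 3) == 'glance_fda'
assert container_for('glance', uuid, 10) == 'glance_fdae39a1-ba'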
Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. NOTE: This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: * A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: * ``swift_store_large_object_chunk_size`` """), cfg.IntOpt('swift_store_large_object_chunk_size', default=DEFAULT_LARGE_OBJECT_CHUNK_SIZE, min=1, help=""" The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to ``swift_store_large_object_size`` for more detail. For example: if ``swift_store_large_object_size`` is 5GB and ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: * A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: * ``swift_store_large_object_size`` """), cfg.BoolOpt('swift_store_create_container_on_put', default=False, help=""" Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: * True * False Related options: * None """), cfg.BoolOpt('swift_store_multi_tenant', default=False, help=""" Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage NOTE: If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the 'swift_store_config_file' option. Possible values: * True * False Related options: * swift_store_config_file """), cfg.IntOpt('swift_store_multiple_containers_seed', default=0, min=0, max=32, help=""" Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. 
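# The arithmetic behind the 6.2 GB example above: the segment count is the
# ceiling of the image size over the segment size.
import math

chunk_mb = 1 * 1024        # 1 GB segments, expressed in MB
image_mb = 6.2 * 1024      # a 6.2 GB image

segments = int(math.ceil(image_mb / chunk_mb))
assert segments == 7       # six 1 GB segments plus one 0.2 GB tail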
Please refer to ``swift_store_container`` for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html NOTE: This is used only when swift_store_multi_tenant is disabled. Possible values: * A non-negative integer less than or equal to 32 Related options: * ``swift_store_container`` * ``swift_store_multi_tenant`` * ``swift_store_create_container_on_put`` """), cfg.ListOpt('swift_store_admin_tenants', default=[], help=""" List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: * A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: * None """), cfg.BoolOpt('swift_store_ssl_compression', default=True, help=""" SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: * True * False Related Options: * None """), cfg.IntOpt('swift_store_retry_get_count', default=0, min=0, help=""" The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, ``swift_store_retry_get_count`` ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: * Zero * Positive integer value Related Options: * None """), cfg.IntOpt('swift_store_expire_soon_interval', min=0, default=60, help=""" Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: * Zero * Positive integer value Related Options: * None """), cfg.BoolOpt('swift_store_use_trusts', default=True, help=""" Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, ``swift_store_use_trusts`` is set to ``True``(use of trusts is enabled). 
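# Related mechanics used further below by _get_object()/swift_retry_iter():
# an interrupted download governed by swift_store_retry_get_count is resumed
# by re-requesting the object from the last byte received via a standard
# HTTP Range header. A sketch of just the header construction:
def resume_headers(bytes_read):
    # e.g. 1024 bytes already read -> {'Range': 'bytes=1024-'}
    return {'Range': 'bytes=%d-' % bytes_read} if bytes_read else {}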
If set to ``False``, a user token is used for the Swift connection instead, eliminating the overhead of trust creation. NOTE: This option is considered only when ``swift_store_multi_tenant`` is set to ``True`` Possible values: * True * False Related options: * swift_store_multi_tenant """), cfg.BoolOpt('swift_buffer_on_upload', default=False, help=""" Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: (``swift_store_large_object_chunk_size`` * ``workers`` * 1000) Possible values: * True * False Related options: * swift_upload_buffer_dir """), ] def swift_retry_iter(resp_iter, length, store, location, manager): if not length and isinstance(resp_iter, io.BytesIO): # io.BytesIO does not have a len attribute, instead go the end using # seek to get the size of the file pos = resp_iter.tell() resp_iter.seek(0, 2) length = resp_iter.tell() resp_iter.seek(pos) length = length if length else (resp_iter.len if hasattr(resp_iter, 'len') else 0) retries = 0 bytes_read = 0 if store.backend_group: rcount = getattr(store.conf, store.backend_group).swift_store_retry_get_count else: rcount = store.conf.glance_store.swift_store_retry_get_count while retries <= rcount: try: for chunk in resp_iter: yield chunk bytes_read += len(chunk) except swiftclient.ClientException as e: LOG.warning("Swift exception raised %s" % encodeutils.exception_to_unicode(e)) if bytes_read != length: if retries == rcount: # terminate silently and let higher level decide LOG.error(_LE("Stopping Swift retries after %d " "attempts") % retries) break else: retries += 1 LOG.info(_LI("Retrying Swift connection " "(%(retries)d/%(max_retries)d) with " "range=%(start)d-%(end)d"), {'retries': retries, 'max_retries': rcount, 'start': bytes_read, 'end': length}) (_resp_headers, resp_iter) = store._get_object(location, manager, bytes_read) else: break class StoreLocation(location.StoreLocation): """ Class describing a Swift URI. A Swift URI can look like any of the following: swift://user:pass@authurl.com/container/obj-id swift://account:user:pass@authurl.com/container/obj-id swift+http://user:pass@authurl.com/container/obj-id swift+https://user:pass@authurl.com/container/obj-id When using multi-tenant a URI might look like this (a storage URL): swift+https://example.com/container/obj-id The swift+http:// URIs indicate there is an HTTP authentication URL. The default for Swift is an HTTPS authentication URL, so swift:// and swift+https:// are the same... 
""" def process_specs(self): self.scheme = self.specs.get('scheme', 'swift+https') self.user = self.specs.get('user') self.key = self.specs.get('key') self.auth_or_store_url = self.specs.get('auth_or_store_url') self.container = self.specs.get('container') self.obj = self.specs.get('obj') def _get_credstring(self): if self.user and self.key: return '%s:%s' % (urllib.parse.quote(self.user), urllib.parse.quote(self.key)) return '' def get_uri(self, credentials_included=True): auth_or_store_url = self.auth_or_store_url if auth_or_store_url.startswith('http://'): auth_or_store_url = auth_or_store_url[len('http://'):] elif auth_or_store_url.startswith('https://'): auth_or_store_url = auth_or_store_url[len('https://'):] credstring = self._get_credstring() auth_or_store_url = auth_or_store_url.strip('/') container = self.container.strip('/') obj = self.obj.strip('/') if not credentials_included: # Used only in case of an add # Get the current store from config if self.backend_group: store = getattr(self.conf, self.backend_group).default_swift_reference else: store = self.conf.glance_store.default_swift_reference return '%s://%s/%s/%s' % ('swift+config', store, container, obj) if self.scheme == 'swift+config': if self.ssl_enabled: self.scheme = 'swift+https' else: self.scheme = 'swift+http' if credstring != '': credstring = "%s@" % credstring return '%s://%s%s/%s/%s' % (self.scheme, credstring, auth_or_store_url, container, obj) def _get_conf_value_from_account_ref(self, netloc): try: ref_params = sutils.SwiftParams( self.conf, backend=self.backend_group).params self.user = ref_params[netloc]['user'] self.key = ref_params[netloc]['key'] netloc = ref_params[netloc]['auth_address'] self.ssl_enabled = True if netloc != '': if netloc.startswith('http://'): self.ssl_enabled = False netloc = netloc[len('http://'):] elif netloc.startswith('https://'): netloc = netloc[len('https://'):] except KeyError: reason = _("Badly formed Swift URI. Credentials not found for " "account reference") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) return netloc def _form_uri_parts(self, netloc, path): if netloc != '': # > Python 2.6.1 if '@' in netloc: creds, netloc = netloc.split('@') else: creds = None else: # Python 2.6.1 compat # see lp659445 and Python issue7904 if '@' in path: creds, path = path.split('@') else: creds = None netloc = path[0:path.find('/')].strip('/') path = path[path.find('/'):].strip('/') if creds: cred_parts = creds.split(':') if len(cred_parts) < 2: reason = _("Badly formed credentials in Swift URI.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) key = cred_parts.pop() user = ':'.join(cred_parts) creds = urllib.parse.unquote(creds) try: self.user, self.key = creds.rsplit(':', 1) except exceptions.BadStoreConfiguration: self.user = urllib.parse.unquote(user) self.key = urllib.parse.unquote(key) else: self.user = None self.key = None return netloc, path def _form_auth_or_store_url(self, netloc, path): path_parts = path.split('/') try: self.obj = path_parts.pop() self.container = path_parts.pop() if not netloc.startswith('http'): # push hostname back into the remaining to build full authurl path_parts.insert(0, netloc) self.auth_or_store_url = '/'.join(path_parts) except IndexError: reason = _("Badly formed Swift URI.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) def parse_uri(self, uri): """ Parse URLs. This method fixes an issue where credentials specified in the URL are interpreted differently in Python 2.6.1+ than prior versions of Python. 
It also deals with the peculiarity that new-style Swift URIs have where a username can contain a ':', like so: swift://account:user:pass@authurl.com/container/obj and for system created locations with account reference swift+config://account_reference/container/obj """ # Make sure that URIs that contain multiple schemes, such as: # swift://user:pass@http://authurl.com/v1/container/obj # are immediately rejected. if uri.count('://') != 1: reason = _("URI cannot contain more than one occurrence " "of a scheme. If you have specified a URI like " "swift://user:pass@http://authurl.com/v1/container/obj" ", you need to change it to use the " "swift+http:// scheme, like so: " "swift+http://user:pass@authurl.com/v1/container/obj") LOG.info(_LI("Invalid store URI: %(reason)s"), {'reason': reason}) raise exceptions.BadStoreUri(message=reason) pieces = urllib.parse.urlparse(uri) self.validate_schemas(uri, valid_schemas=( 'swift://', 'swift+http://', 'swift+https://', 'swift+config://')) self.scheme = pieces.scheme netloc = pieces.netloc path = pieces.path.lstrip('/') # NOTE(Sridevi): Fix to map the account reference to the # corresponding configuration value if self.scheme == 'swift+config': netloc = self._get_conf_value_from_account_ref(netloc) else: netloc, path = self._form_uri_parts(netloc, path) self._form_auth_or_store_url(netloc, path) @property def swift_url(self): """ Creates a fully-qualified auth address that the Swift client library can use. The scheme for the auth_address is determined using the scheme included in the `location` field. HTTPS is assumed, unless 'swift+http' is specified. """ if self.auth_or_store_url.startswith('http'): return self.auth_or_store_url else: if self.scheme == 'swift+config': if self.ssl_enabled: self.scheme = 'swift+https' else: self.scheme = 'swift+http' if self.scheme in ('swift+https', 'swift'): auth_scheme = 'https://' else: auth_scheme = 'http://' return ''.join([auth_scheme, self.auth_or_store_url]) def Store(conf, backend=None): group = 'glance_store' if backend: group = backend multi_tenant = getattr(conf, backend).swift_store_multi_tenant default_store = conf.glance_store.default_backend else: default_store = conf.glance_store.default_store multi_tenant = conf.glance_store.swift_store_multi_tenant # NOTE(dharinic): Multi-tenant store cannot work with swift config if multi_tenant: if (default_store == 'swift+config' or sutils.is_multiple_swift_store_accounts_enabled( conf, backend=backend)): msg = _("Swift multi-tenant store cannot be configured to " "work with swift+config. The options " "'swift_store_multi_tenant' and " "'swift_store_config_file' are mutually exclusive. 
" "If you intend to use multi-tenant swift store, please " "make sure that you have not set a swift configuration " "file with the 'swift_store_config_file' option.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=msg) try: conf.register_opts(_SWIFT_OPTS + sutils.swift_opts + buffered.BUFFERING_OPTS, group=group) except cfg.DuplicateOptError: pass if multi_tenant: return MultiTenantStore(conf, backend=backend) return SingleTenantStore(conf, backend=backend) Store.OPTIONS = _SWIFT_OPTS + sutils.swift_opts + buffered.BUFFERING_OPTS def _is_slo(slo_header): if (slo_header is not None and isinstance(slo_header, str) and slo_header.lower() == 'true'): return True return False class BaseStore(driver.Store): _CAPABILITIES = capabilities.BitMasks.RW_ACCESS CHUNKSIZE = 65536 OPTIONS = _SWIFT_OPTS + sutils.swift_opts def get_schemes(self): return ('swift+https', 'swift', 'swift+http', 'swift+config') def configure(self, re_raise_bsc=False): if self.backend_group: glance_conf = getattr(self.conf, self.backend_group) else: glance_conf = self.conf.glance_store _obj_size = self._option_get('swift_store_large_object_size') self.large_object_size = _obj_size * ONE_MB _chunk_size = self._option_get('swift_store_large_object_chunk_size') self.large_object_chunk_size = _chunk_size * ONE_MB self.admin_tenants = glance_conf.swift_store_admin_tenants self.region = glance_conf.swift_store_region self.service_type = glance_conf.swift_store_service_type self.conf_endpoint = glance_conf.swift_store_endpoint self.endpoint_type = glance_conf.swift_store_endpoint_type self.insecure = glance_conf.swift_store_auth_insecure self.ssl_compression = glance_conf.swift_store_ssl_compression self.cacert = glance_conf.swift_store_cacert if self.insecure: self.ks_verify = False else: self.ks_verify = self.cacert or True if swiftclient is None: msg = _("Missing dependency python_swiftclient.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=msg) if glance_conf.swift_buffer_on_upload: buffer_dir = glance_conf.swift_upload_buffer_dir if buffered.validate_buffering(buffer_dir): self.reader_class = buffered.BufferedReader else: self.reader_class = ChunkReader super(BaseStore, self).configure(re_raise_bsc=re_raise_bsc) def _get_object(self, location, manager, start=None): headers = {} if start is not None: bytes_range = 'bytes=%d-' % start headers = {'Range': bytes_range} try: resp_headers, resp_body = manager.get_connection().get_object( location.container, location.obj, resp_chunk_size=self.CHUNKSIZE, headers=headers) except swiftclient.ClientException as e: if e.http_status == http.client.NOT_FOUND: msg = _("Swift could not find object %s.") % location.obj LOG.warning(msg) raise exceptions.NotFound(message=msg) else: raise return (resp_headers, resp_body) @capabilities.check def get(self, location, connection=None, offset=0, chunk_size=None, context=None): if self.backend_group: glance_conf = getattr(self.conf, self.backend_group) else: glance_conf = self.conf.glance_store location = location.store_location # initialize manager to receive valid connections allow_retry = glance_conf.swift_store_retry_get_count > 0 with self.get_manager(location, context, allow_reauth=allow_retry) as manager: (resp_headers, resp_body) = self._get_object(location, manager=manager) class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return b'' length = int(resp_headers.get('content-length', 0)) if allow_retry: resp_body = 
swift_retry_iter(resp_body, length, self, location, manager=manager) return ResponseIndexable(resp_body, length), length def get_size(self, location, connection=None, context=None): location = location.store_location if not connection: connection = self.get_connection(location, context=context) try: resp_headers = connection.head_object( location.container, location.obj) return int(resp_headers.get('content-length', 0)) except Exception: return 0 def _option_get(self, param): if self.backend_group: result = getattr(getattr(self.conf, self.backend_group), param) else: result = getattr(self.conf.glance_store, param) if result is None: reason = (_("Could not find %(param)s in configuration options.") % {'param': param}) LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) return result def _delete_stale_chunks(self, connection, container, chunk_list): for chunk in chunk_list: LOG.debug("Deleting chunk %s" % chunk) try: connection.delete_object(container, chunk) except Exception: msg = _("Failed to delete orphaned chunk " "%(container)s/%(chunk)s") LOG.exception(msg % {'container': container, 'chunk': chunk}) @driver.back_compat_add @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param hashing_algo: A hashlib algorithm identifier (string) :param context: A context object :param verifier: An object used to verify signatures for images :returns: tuple of URL in backing store, bytes written, checksum, multihash value, and a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if something already exists at this location """ os_hash_value = gutils.get_hasher(hashing_algo, False) location = self.create_location(image_id, context=context) # initialize a manager with re-auth if the image needs to be split need_chunks = (image_size == 0) or ( image_size >= self.large_object_size) with self.get_manager(location, context, allow_reauth=need_chunks) as manager: self._create_container_if_missing(location.container, manager.get_connection()) LOG.debug("Adding image object '%(obj_name)s' " "to Swift" % dict(obj_name=location.obj)) try: if not need_chunks: # Image size is known, and is less than large_object_size. # Send to Swift with regular PUT. checksum = gutils.get_hasher('md5', False) reader = ChunkReader(image_file, checksum, os_hash_value, image_size, verifier=verifier) obj_etag = manager.get_connection().put_object( location.container, location.obj, reader, content_length=image_size) else: # Write the image into Swift in chunks. chunk_id = 1 if image_size > 0: total_chunks = str(int( math.ceil(float(image_size) / float(self.large_object_chunk_size)))) else: # image_size == 0 is when we don't know the size # of the image. This can occur with older clients # that don't inspect the payload size. LOG.debug("Cannot determine image size because it is " "either not provided in the request or " "chunked-transfer encoding is used. " "Adding image as a segmented object to " "Swift.") total_chunks = '?'
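# For illustration (a sketch with hypothetical sizes): with a # 200 MB large_object_chunk_size, a known 1 GiB (1024 MB) image gives # total_chunks = str(int(math.ceil(1024.0 / 200.0))) == '6', and the # loop below names its segments '<obj>-00001' ... '<obj>-00006' via # the "%s-%05d" pattern.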
checksum = gutils.get_hasher('md5', False) written_chunks = [] combined_chunks_size = 0 while True: chunk_size = self.large_object_chunk_size if image_size == 0: content_length = None else: left = image_size - combined_chunks_size if left == 0: break if chunk_size > left: chunk_size = left content_length = chunk_size chunk_name = "%s-%05d" % (location.obj, chunk_id) with self.reader_class( image_file, checksum, os_hash_value, chunk_size, verifier, backend_group=self.backend_group) as reader: if reader.is_zero_size is True: LOG.debug('Not writing zero-length chunk.') break try: chunk_etag = \ manager.get_connection().put_object( location.container, chunk_name, reader, content_length=content_length) written_chunks.append(chunk_name) except Exception: # Delete orphaned segments from swift backend with excutils.save_and_reraise_exception(): LOG.error(_("Error during chunked upload " "to backend, deleting stale " "chunks.")) self._delete_stale_chunks( manager.get_connection(), location.container, written_chunks) bytes_read = reader.bytes_read msg = ("Wrote chunk %(chunk_name)s (%(chunk_id)d/" "%(total_chunks)s) of length %(bytes_read)" "d to Swift returning MD5 of content: " "%(chunk_etag)s" % {'chunk_name': chunk_name, 'chunk_id': chunk_id, 'total_chunks': total_chunks, 'bytes_read': bytes_read, 'chunk_etag': chunk_etag}) LOG.debug(msg) chunk_id += 1 combined_chunks_size += bytes_read # In the case we have been given an unknown image size, # set the size to the total size of the combined chunks. if image_size == 0: image_size = combined_chunks_size # Now we write the object manifest in X-Object-Manifest # header as defined for Dynamic Large Objects (DLO) Mode. # This request does not include ETag as PUT request has not # actual content that we need to verify. manifest = "%s/%s-" % (location.container, location.obj) headers = {'X-Object-Manifest': manifest} # The ETag returned for the manifest is actually the # MD5 hash of the concatenated checksums of the strings # of each chunk...so we ignore this result in favour of # the MD5 of the entire image file contents, so that # users can verify the image file contents accordingly manager.get_connection().put_object(location.container, location.obj, None, headers=headers) obj_etag = checksum.hexdigest() # NOTE: We return the user and key here! Have to because # location is used by the API server to return the actual # image data. We *really* should consider NOT returning # the location attribute from GET /images/ and # GET /images/details if sutils.is_multiple_swift_store_accounts_enabled( self.conf, backend=self.backend_group): include_creds = False else: include_creds = True metadata = {} if self.backend_group: metadata['store'] = self.backend_group return (location.get_uri(credentials_included=include_creds), image_size, obj_etag, os_hash_value.hexdigest(), metadata) except swiftclient.ClientException as e: if e.http_status == http.client.CONFLICT: msg = _("Swift already has an image at this location") raise exceptions.Duplicate(message=msg) elif e.http_status == http.client.REQUEST_ENTITY_TOO_LARGE: raise exceptions.StorageFull(message=e.msg) msg = (_("Failed to add object to Swift.\n" "Got error from Swift: %s.") % encodeutils.exception_to_unicode(e)) LOG.error(msg) raise glance_store.BackendException(msg) @capabilities.check def delete(self, location, connection=None, context=None): location = location.store_location if not connection: connection = self.get_connection(location, context=context) try: # We request the manifest for the object. 
If one exists, # that means the object was uploaded in chunks/segments, # and we need to delete all the chunks as well as the # manifest. dlo_manifest = None slo_manifest = None try: headers = connection.head_object( location.container, location.obj) dlo_manifest = headers.get('x-object-manifest') slo_manifest = headers.get('x-static-large-object') except swiftclient.ClientException as e: if e.http_status != http.client.NOT_FOUND: raise if _is_slo(slo_manifest): # Delete the manifest as well as the segments query_string = 'multipart-manifest=delete' connection.delete_object(location.container, location.obj, query_string=query_string) return if dlo_manifest: # Delete all the chunks before the object manifest itself obj_container, obj_prefix = dlo_manifest.split('/', 1) segments = connection.get_container( obj_container, prefix=obj_prefix)[1] for segment in segments: # TODO(jaypipes): This would be an easy area to parallelize # since we're simply sending off parallelizable requests # to Swift to delete stuff. It's not like we're going to # be hogging up network or file I/O here... try: connection.delete_object(obj_container, segment['name']) except swiftclient.ClientException: msg = _('Unable to delete segment %(segment_name)s') msg = msg % {'segment_name': segment['name']} LOG.exception(msg) # Delete object (or, in segmented case, the manifest) connection.delete_object(location.container, location.obj) except swiftclient.ClientException as e: if e.http_status == http.client.NOT_FOUND: msg = _("Swift could not find image at URI.") raise exceptions.NotFound(message=msg) else: raise def _create_container_if_missing(self, container, connection): """ Creates a missing container in Swift if the ``swift_store_create_container_on_put`` option is set. :param container: Name of container to create :param connection: Connection to swift service """ if self.backend_group: store_conf = getattr(self.conf, self.backend_group) else: store_conf = self.conf.glance_store try: connection.head_container(container) except swiftclient.ClientException as e: if e.http_status == http.client.NOT_FOUND: if store_conf.swift_store_create_container_on_put: try: msg = (_LI("Creating swift container %(container)s") % {'container': container}) LOG.info(msg) connection.put_container(container) except swiftclient.ClientException as e: msg = (_("Failed to add container to Swift.\n" "Got error from Swift: %s.") % encodeutils.exception_to_unicode(e)) raise glance_store.BackendException(msg) else: msg = (_("The container %(container)s does not exist in " "Swift. Please set the " "swift_store_create_container_on_put option " "to add container to Swift automatically.") % {'container': container}) raise glance_store.BackendException(msg) else: raise def get_connection(self, location, context=None): raise NotImplementedError() def create_location(self, image_id, context=None): raise NotImplementedError() def init_client(self, location, context=None): """Initialize and return client to authorize against keystone The method invariant is the following: it always returns Keystone client that can be used to receive fresh token in any time. Otherwise it raises appropriate exception. 
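(Illustrative note: the multi-tenant subclass, for example, uses the client returned here to create a Keystone trust via client.trusts.create(...) so that a fresh token can be obtained when the user token expires.)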
:param location: swift location data :param context: user context (it is not required if user grants are specified for single tenant store) :return: correctly initialized keystone client """ raise NotImplementedError() def get_store_connection(self, auth_token, storage_url): """Get initialized swift connection :param auth_token: auth token :param storage_url: swift storage url :return: swiftclient connection that can be used to request containers and objects """ # initialize a connection return swiftclient.Connection( preauthurl=storage_url, preauthtoken=auth_token, insecure=self.insecure, ssl_compression=self.ssl_compression, cacert=self.cacert) def get_manager(self, store_location, context=None, allow_reauth=False): """Return appropriate connection manager for store The method detects store type (singletenant or multitenant) and returns appropriate connection manager (singletenant or multitenant) that allows requesting swiftclient connections. :param store_location: StoreLocation object that defines image location :param context: user context :param allow_reauth: defines whether re-authentication is allowed when the user token expires and the swift connection must be refreshed :return: connection manager for store """ msg = _("There is no Connection Manager implemented for %s class.") raise NotImplementedError(msg % self.__class__.__name__) def _set_url_prefix(self, context=None): raise NotImplementedError() class SingleTenantStore(BaseStore): EXAMPLE_URL = "swift://<USER>:<KEY>@<AUTH_URL>/<CONTAINER>/<FILE>" def __init__(self, conf, backend=None): super(SingleTenantStore, self).__init__(conf, backend=backend) self.backend_group = backend self.ref_params = sutils.SwiftParams(self.conf, backend=backend).params def configure(self, re_raise_bsc=False): # set configuration before super so configure_add can override self.auth_version = self._option_get('swift_store_auth_version') self.user_domain_id = None self.user_domain_name = None self.project_domain_id = None self.project_domain_name = None super(SingleTenantStore, self).configure(re_raise_bsc=re_raise_bsc) def configure_add(self): if self.backend_group: default_ref = getattr(self.conf, self.backend_group).default_swift_reference self.container = getattr(self.conf, self.backend_group).swift_store_container else: default_ref = self.conf.glance_store.default_swift_reference self.container = self.conf.glance_store.swift_store_container default_swift_reference = self.ref_params.get(default_ref) if default_swift_reference: self.auth_address = default_swift_reference.get('auth_address') if (not default_swift_reference) or (not self.auth_address): reason = _("A value for swift_store_auth_address is required.") LOG.error(reason) raise exceptions.BadStoreConfiguration(message=reason) if self.auth_address.startswith('http://'): self.scheme = 'swift+http' else: self.scheme = 'swift+https' self.auth_version = default_swift_reference.get('auth_version') self.user = default_swift_reference.get('user') self.key = default_swift_reference.get('key') self.user_domain_id = default_swift_reference.get('user_domain_id') self.user_domain_name = default_swift_reference.get('user_domain_name') self.project_domain_id = default_swift_reference.get( 'project_domain_id') self.project_domain_name = default_swift_reference.get( 'project_domain_name') if not (self.user or self.key): reason = _("A value for swift_store_ref_params is required.") LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) if self.backend_group: self._set_url_prefix() def _get_credstring(self): if self.user and self.key:
return '%s:%s' % (urllib.parse.quote(self.user), urllib.parse.quote(self.key)) return '' def _set_url_prefix(self, context=None): auth_or_store_url = self.auth_address if auth_or_store_url.startswith('http://'): auth_or_store_url = auth_or_store_url[len('http://'):] elif auth_or_store_url.startswith('https://'): auth_or_store_url = auth_or_store_url[len('https://'):] credstring = self._get_credstring() auth_or_store_url = auth_or_store_url.strip('/') container = self.container.strip('/') if sutils.is_multiple_swift_store_accounts_enabled( self.conf, backend=self.backend_group): include_creds = False else: include_creds = True if not include_creds: store = getattr(self.conf, self.backend_group).default_swift_reference self._url_prefix = '%s://%s/%s/' % ( 'swift+config', store, container) return if self.scheme == 'swift+config': if self.ssl_enabled: self.scheme = 'swift+https' else: self.scheme = 'swift+http' if credstring != '': credstring = "%s@" % credstring self._url_prefix = '%s://%s%s/%s/' % ( self.scheme, credstring, auth_or_store_url, container) def create_location(self, image_id, context=None): container_name = self.get_container_name(image_id, self.container) specs = {'scheme': self.scheme, 'container': container_name, 'obj': str(image_id), 'auth_or_store_url': self.auth_address, 'user': self.user, 'key': self.key} return StoreLocation(specs, self.conf, backend_group=self.backend_group) def get_container_name(self, image_id, default_image_container): """ Returns appropriate container name depending upon value of ``swift_store_multiple_containers_seed``. In single-container mode, which is a seed value of 0, simply returns default_image_container. In multiple-container mode, returns default_image_container as the prefix plus a suffix determined by the multiple container seed examples: single-container mode: 'glance' multiple-container mode: 'glance_3a1' for image uuid 3A1xxxxxxx... 
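A further worked example (hypothetical uuid): with seed 9 and image id 'fda97712-ab34-...', the first nine characters 'fda97712-' contain one dash, so one extra character is taken and the container becomes 'glance_fda97712-a'.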
:param image_id: UUID of image :param default_image_container: container name from ``swift_store_container`` """ if self.backend_group: seed_num_chars = getattr( self.conf, self.backend_group).swift_store_multiple_containers_seed else: seed_num_chars = \ self.conf.glance_store.swift_store_multiple_containers_seed if seed_num_chars is None \ or seed_num_chars < 0 or seed_num_chars > 32: reason = _("An integer value between 0 and 32 is required for" " swift_store_multiple_containers_seed.") LOG.error(reason) raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) elif seed_num_chars > 0: image_id = str(image_id).lower() num_dashes = image_id[:seed_num_chars].count('-') num_chars = seed_num_chars + num_dashes name_suffix = image_id[:num_chars] new_container_name = default_image_container + '_' + name_suffix return new_container_name else: return default_image_container def get_connection(self, location, context=None): if not location.user: reason = _("Location is missing user:password information.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) auth_url = location.swift_url if not auth_url.endswith('/'): auth_url += '/' if self.auth_version in ('2', '3'): try: tenant_name, user = location.user.split(':') except ValueError: reason = (_("Badly formed tenant:user '%(user)s' in " "Swift URI") % {'user': location.user}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) else: tenant_name = None user = location.user os_options = {} if self.region: os_options['region_name'] = self.region os_options['endpoint_type'] = self.endpoint_type os_options['service_type'] = self.service_type if self.user_domain_id: os_options['user_domain_id'] = self.user_domain_id if self.user_domain_name: os_options['user_domain_name'] = self.user_domain_name if self.project_domain_id: os_options['project_domain_id'] = self.project_domain_id if self.project_domain_name: os_options['project_domain_name'] = self.project_domain_name return swiftclient.Connection( auth_url, user, location.key, preauthurl=self.conf_endpoint, insecure=self.insecure, tenant_name=tenant_name, auth_version=self.auth_version, os_options=os_options, ssl_compression=self.ssl_compression, cacert=self.cacert) def init_client(self, location, context=None): """Initialize keystone client with swift service user credentials""" # prepare swift admin credentials if not location.user: reason = _("Location is missing user:password information.") LOG.info(reason) raise exceptions.BadStoreUri(message=reason) auth_url = location.swift_url if not auth_url.endswith('/'): auth_url += '/' try: tenant_name, user = location.user.split(':') except ValueError: reason = (_("Badly formed tenant:user '%(user)s' in " "Swift URI") % {'user': location.user}) LOG.info(reason) raise exceptions.BadStoreUri(message=reason) # initialize a keystone plugin for swift admin with creds password = ks_identity.V3Password( auth_url=auth_url, username=user, password=location.key, project_name=tenant_name, user_domain_id=self.user_domain_id, user_domain_name=self.user_domain_name, project_domain_id=self.project_domain_id, project_domain_name=self.project_domain_name) sess = ks_session.Session(auth=password, verify=self.ks_verify) return ks_client.Client(session=sess) def get_manager(self, store_location, context=None, allow_reauth=False): return connection_manager.SingleTenantConnectionManager(self, store_location, context, allow_reauth) class MultiTenantStore(BaseStore): EXAMPLE_URL = "swift://<SWIFT_URL>/<CONTAINER>/<FILE>" def _get_endpoint(self, context): if
self.backend_group: self.container = getattr(self.conf, self.backend_group).swift_store_container else: self.container = self.conf.glance_store.swift_store_container if context is None: reason = _("Multi-tenant Swift storage requires a context.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) if context.service_catalog is None: reason = _("Multi-tenant Swift storage requires " "a service catalog.") raise exceptions.BadStoreConfiguration(store_name="swift", reason=reason) self.storage_url = self.conf_endpoint if not self.storage_url: catalog = keystone_sc.ServiceCatalogV2(context.service_catalog) self.storage_url = catalog.url_for(service_type=self.service_type, region_name=self.region, interface=self.endpoint_type) if self.storage_url.startswith('http://'): self.scheme = 'swift+http' else: self.scheme = 'swift+https' return self.storage_url def delete(self, location, connection=None, context=None): if not connection: connection = self.get_connection(location.store_location, context=context) super(MultiTenantStore, self).delete(location, connection) connection.delete_container(location.store_location.container) def set_acls(self, location, public=False, read_tenants=None, write_tenants=None, connection=None, context=None): location = location.store_location if not connection: connection = self.get_connection(location, context=context) if read_tenants is None: read_tenants = [] if write_tenants is None: write_tenants = [] headers = {} if public: headers['X-Container-Read'] = "*:*" elif read_tenants: headers['X-Container-Read'] = ','.join('%s:*' % i for i in read_tenants) else: headers['X-Container-Read'] = '' write_tenants.extend(self.admin_tenants) if write_tenants: headers['X-Container-Write'] = ','.join('%s:*' % i for i in write_tenants) else: headers['X-Container-Write'] = '' try: connection.post_container(location.container, headers=headers) except swiftclient.ClientException as e: if e.http_status == http.client.NOT_FOUND: msg = _("Swift could not find image at URI.") raise exceptions.NotFound(message=msg) else: raise def create_location(self, image_id, context=None): ep = self._get_endpoint(context) specs = {'scheme': self.scheme, 'container': self.container + '_' + str(image_id), 'obj': str(image_id), 'auth_or_store_url': ep} return StoreLocation(specs, self.conf, backend_group=self.backend_group) def _set_url_prefix(self, context=None): ep = self._get_endpoint(context) self._url_prefix = "%s://%s:%s_" % ( self.scheme, ep, self.container) def get_connection(self, location, context=None): return swiftclient.Connection( preauthurl=location.swift_url, preauthtoken=context.auth_token, insecure=self.insecure, ssl_compression=self.ssl_compression, cacert=self.cacert) def init_client(self, location, context=None): # read client parameters from config files ref_params = sutils.SwiftParams(self.conf, backend=self.backend_group).params if self.backend_group: default_ref = getattr(self.conf, self.backend_group).default_swift_reference else: default_ref = self.conf.glance_store.default_swift_reference default_swift_reference = ref_params.get(default_ref) if not default_swift_reference: reason = _("default_swift_reference %s is " "required.") % default_ref LOG.error(reason) raise exceptions.BadStoreConfiguration(message=reason) auth_address = default_swift_reference.get('auth_address') user = default_swift_reference.get('user') key = default_swift_reference.get('key') user_domain_id = default_swift_reference.get('user_domain_id') user_domain_name =
default_swift_reference.get('user_domain_name') project_domain_id = default_swift_reference.get('project_domain_id') project_domain_name = default_swift_reference.get( 'project_domain_name') if self.backend_group: self._set_url_prefix(context=context) # create client for multitenant user(trustor) trustor_auth = ks_identity.V3Token(auth_url=auth_address, token=context.auth_token, project_id=context.project_id) trustor_sess = ks_session.Session(auth=trustor_auth, verify=self.ks_verify) trustor_client = ks_client.Client(session=trustor_sess) auth_ref = trustor_client.session.auth.get_auth_ref(trustor_sess) roles = [t['name'] for t in auth_ref['roles']] # create client for trustee - glance user specified in swift config tenant_name, user = user.split(':') password = ks_identity.V3Password( auth_url=auth_address, username=user, password=key, project_name=tenant_name, user_domain_id=user_domain_id, user_domain_name=user_domain_name, project_domain_id=project_domain_id, project_domain_name=project_domain_name) trustee_sess = ks_session.Session(auth=password, verify=self.ks_verify) trustee_client = ks_client.Client(session=trustee_sess) # request glance user id - we will use it as trustee user trustee_user_id = trustee_client.session.get_user_id() # create trust for trustee user trust_id = trustor_client.trusts.create( trustee_user=trustee_user_id, trustor_user=context.user_id, project=context.project_id, impersonation=True, role_names=roles ).id # initialize a new client with trust and trustee credentials # create client for glance trustee user client_password = ks_identity.V3Password( auth_url=auth_address, username=user, password=key, trust_id=trust_id, user_domain_id=user_domain_id, user_domain_name=user_domain_name, project_domain_id=project_domain_id, project_domain_name=project_domain_name ) # now we can authenticate against KS # as trustee of user who provided token client_sess = ks_session.Session(auth=client_password, verify=self.ks_verify) return ks_client.Client(session=client_sess) def get_manager(self, store_location, context=None, allow_reauth=False): # if global toggle is turned off then do not allow re-authentication # with trusts if self.backend_group: use_trusts = getattr(self.conf, self.backend_group).swift_store_use_trusts else: use_trusts = self.conf.glance_store.swift_store_use_trusts if not use_trusts: allow_reauth = False return connection_manager.MultiTenantConnectionManager(self, store_location, context, allow_reauth) class ChunkReader(object): def __init__(self, fd, checksum, os_hash_value, total, verifier=None, backend_group=None): self.fd = fd self.checksum = checksum self.os_hash_value = os_hash_value self.total = total self.verifier = verifier self.backend_group = backend_group self.bytes_read = 0 self.is_zero_size = False self.byteone = fd.read(1) if len(self.byteone) == 0: self.is_zero_size = True def do_read(self, i): if self.bytes_read == 0 and i > 0 and self.byteone is not None: return self.byteone + self.fd.read(i - 1) else: return self.fd.read(i) def read(self, i): left = self.total - self.bytes_read if i > left: i = left # Note(rosmaita): under some circumstances in py3, a zero-byte # read results in a non-byte value that then causes a "unicode # objects must be encoded before hashing" error when we do the # hash computations below. (At least that seems to be what's # happening in testing.) So just fake a zero-byte read and let # the current execution path continue. 
# See https://bugs.launchpad.net/glance-store/+bug/1805332 # TODO(rosmaita): find what in the execution path is returning # a native string instead of bytes and fix it. if i == 0: result = b'' else: result = self.do_read(i) self.bytes_read += len(result) self.checksum.update(result) self.os_hash_value.update(result) if self.verifier: self.verifier.update(result) return result def __enter__(self): return self def __exit__(self, type, value, traceback): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/_drivers/swift/utils.py0000664000175000017500000002025200000000000023345 0ustar00zuulzuul00000000000000# Copyright 2014 Rackspace # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import configparser import logging from oslo_config import cfg from glance_store import exceptions from glance_store.i18n import _, _LE swift_opts = [ cfg.StrOpt('default_swift_reference', default="ref1", help=""" Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is 'ref1'. This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: * A valid string value Related options: * None """), cfg.StrOpt('swift_store_auth_version', default='2', help='Version of the authentication service to use. ' 'Valid versions are 2 and 3 for keystone and 1 ' '(deprecated) for swauth and rackspace.', deprecated_for_removal=True, deprecated_reason=""" The option 'auth_version' in the Swift back-end configuration file is used instead. """), cfg.StrOpt('swift_store_auth_address', help='The address where the Swift authentication ' 'service is listening.', deprecated_for_removal=True, deprecated_reason=""" The option 'auth_address' in the Swift back-end configuration file is used instead. """), cfg.StrOpt('swift_store_user', secret=True, help='The user to authenticate against the Swift ' 'authentication service.', deprecated_for_removal=True, deprecated_reason=""" The option 'user' in the Swift back-end configuration file is set instead. """), cfg.StrOpt('swift_store_key', secret=True, help='Auth key for the user authenticating against the ' 'Swift authentication service.', deprecated_for_removal=True, deprecated_reason=""" The option 'key' in the Swift back-end configuration file is used to set the authentication key instead. """), cfg.StrOpt('swift_store_config_file', default=None, help=""" Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. 
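For illustration only (all values hypothetical), the file is plain INI with one section per account reference, e.g.: [ref1] auth_address = https://keystone.example.com:5000/v3 user = service_project:glance key = secret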
Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. NOTE: Please do not configure this option if you have set ``swift_store_multi_tenant`` to ``True``. Possible values: * String value representing an absolute path on the glance-api node Related options: * swift_store_multi_tenant """), ] class SwiftConfigParser(configparser.ConfigParser): def get(self, *args, **kwargs): value = super(configparser.ConfigParser, self).get(*args, **kwargs) return self._process_quotes(value) @staticmethod def _process_quotes(value): if value: if value[0] in "\"'": if len(value) == 1 or value[-1] != value[0]: raise ValueError('Non-closed quote: %s' % value) value = value[1:-1] return value CONFIG = SwiftConfigParser() LOG = logging.getLogger(__name__) def is_multiple_swift_store_accounts_enabled(conf, backend=None): if backend: cfg_file = getattr(conf, backend).swift_store_config_file else: cfg_file = conf.glance_store.swift_store_config_file if cfg_file is None: return False return True class SwiftParams(object): def __init__(self, conf, backend=None): self.conf = conf self.backend_group = backend if is_multiple_swift_store_accounts_enabled( self.conf, backend=backend): self.params = self._load_config() else: self.params = self._form_default_params() def _form_default_params(self): default = {} if self.backend_group: glance_store = getattr(self.conf, self.backend_group) else: glance_store = self.conf.glance_store if ( glance_store.swift_store_user and glance_store.swift_store_key and glance_store.swift_store_auth_address ): default['user'] = glance_store.swift_store_user default['key'] = glance_store.swift_store_key default['auth_address'] = glance_store.swift_store_auth_address default['project_domain_id'] = 'default' default['project_domain_name'] = None default['user_domain_id'] = 'default' default['user_domain_name'] = None default['auth_version'] = glance_store.swift_store_auth_version return {glance_store.default_swift_reference: default} return {} def _load_config(self): if self.backend_group: scf = getattr(self.conf, self.backend_group).swift_store_config_file else: scf = self.conf.glance_store.swift_store_config_file try: conf_file = self.conf.find_file(scf) CONFIG.read(conf_file) except Exception as e: msg = (_("swift config file " "%(conf)s not found: %(exc)s") % {'conf': scf, 'exc': e}) LOG.error(msg) raise exceptions.BadStoreConfiguration(store_name='swift', reason=msg) account_params = {} account_references = CONFIG.sections() for ref in account_references: reference = {} try: for param in ('auth_address', 'user', 'key'): reference[param] = CONFIG.get(ref, param) reference['project_domain_name'] = CONFIG.get( ref, 'project_domain_name', fallback=None) reference['project_domain_id'] = CONFIG.get( ref, 'project_domain_id', fallback=None) if (reference['project_domain_name'] is None and reference['project_domain_id'] is None): reference['project_domain_id'] = 'default' reference['user_domain_name'] = CONFIG.get( ref, 'user_domain_name', fallback=None) reference['user_domain_id'] = CONFIG.get( ref, 'user_domain_id', fallback=None) if (reference['user_domain_name'] is None and reference['user_domain_id'] is None): reference['user_domain_id'] = 'default' try: reference['auth_version'] = CONFIG.get(ref, 'auth_version') except configparser.NoOptionError: if self.backend_group: av = getattr( self.conf, self.backend_group).swift_store_auth_version else: av = self.conf.glance_store.swift_store_auth_version
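# The account reference did not specify auth_version; fall back to # the (deprecated) swift_store_auth_version option read above.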
reference['auth_version'] = av account_params[ref] = reference except (ValueError, SyntaxError, configparser.NoOptionError): LOG.exception(_LE("Invalid format of swift store config cfg")) return account_params ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/_drivers/vmware_datastore.py0000664000175000017500000007721000000000000024426 0ustar00zuulzuul00000000000000# Copyright 2014 OpenStack, LLC # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Storage backend for VMware Datastore""" import logging import os import urllib.parse from oslo_config import cfg from oslo_utils import excutils from oslo_utils import netutils from oslo_utils import units try: from oslo_vmware import api import oslo_vmware.exceptions as vexc from oslo_vmware.objects import datacenter as oslo_datacenter from oslo_vmware.objects import datastore as oslo_datastore from oslo_vmware import vim_util except ImportError: api = None import requests from requests import adapters from requests.packages.urllib3.util import retry import glance_store from glance_store import capabilities from glance_store.common import utils from glance_store import exceptions from glance_store.i18n import _, _LE from glance_store import location LOG = logging.getLogger(__name__) CHUNKSIZE = 1024 * 64 # 64kB MAX_REDIRECTS = 5 DEFAULT_STORE_IMAGE_DIR = '/openstack_glance' DS_URL_PREFIX = '/folder' STORE_SCHEME = 'vsphere' _VMWARE_OPTS = [ cfg.HostAddressOpt('vmware_server_host', sample_default='127.0.0.1', help=""" Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: * A valid IPv4 or IPv6 address * A valid DNS name Related options: * vmware_server_username * vmware_server_password """), cfg.StrOpt('vmware_server_username', sample_default='root', help=""" Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: * Any string that is the username for a user with appropriate privileges Related options: * vmware_server_host * vmware_server_password """), cfg.StrOpt('vmware_server_password', sample_default='vmware', help=""" Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: * Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: * vmware_server_host * vmware_server_username """, secret=True), cfg.IntOpt('vmware_api_retry_count', default=10, min=1, help=""" The number of VMware API retries. 
This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify 'retry forever'. Possible Values: * Any positive integer value Related options: * None """), cfg.IntOpt('vmware_task_poll_interval', default=5, min=1, help=""" Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: * Any positive integer value Related options: * None """), cfg.StrOpt('vmware_store_image_dir', default=DEFAULT_STORE_IMAGE_DIR, help=""" The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: * Any string that is a valid path to a directory Related options: * None """), cfg.BoolOpt('vmware_insecure', default=False, deprecated_name='vmware_api_insecure', help=""" Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option. Possible Values: * True * False Related options: * vmware_ca_file """), cfg.StrOpt('vmware_ca_file', sample_default='/etc/ssl/certs/ca-certificates.crt', help=""" Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: * Any string that is a valid absolute path to a CA file Related options: * vmware_insecure """), cfg.MultiStrOpt( 'vmware_datastores', help=""" The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: * Any string of the format: <datacenter_path>:<datastore_name>:<weight> Related options: * None """)] def http_response_iterator(conn, response, size): """Return an iterator for a file-like object.
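Used by Store.get() to stream the image in READ_CHUNKSIZE chunks; the connection is closed once the response is exhausted.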
:param conn: HTTP(S) Connection :param response: http_client.HTTPResponse object :param size: Chunk size to iterate with """ try: chunk = response.read(size) while chunk: yield chunk chunk = response.read(size) finally: conn.close() class _Reader(object): def __init__(self, data, hashing_algo, verifier=None): self._size = 0 self.data = data self.os_hash_value = utils.get_hasher(hashing_algo, False) self.checksum = utils.get_hasher('md5', False) self.verifier = verifier def read(self, size=None): result = self.data.read(size) self._size += len(result) self.checksum.update(result) self.os_hash_value.update(result) if self.verifier: self.verifier.update(result) return result @property def size(self): return self._size class StoreLocation(location.StoreLocation): """Class describing a VMware URI. A VMware URI can look like any of the following: vsphere://server_host/folder/file_path?dcPath=dc_path&dsName=ds_name """ def __init__(self, store_specs, conf, backend_group=None): super(StoreLocation, self).__init__(store_specs, conf, backend_group=backend_group) self.datacenter_path = None self.datastore_name = None self.backend_group = backend_group def process_specs(self): self.scheme = self.specs.get('scheme', STORE_SCHEME) self.server_host = self.specs.get('server_host') self.path = os.path.join(DS_URL_PREFIX, self.specs.get('image_dir').strip('/'), self.specs.get('image_id')) self.datacenter_path = self.specs.get('datacenter_path') self.datastore_name = self.specs.get('datastore_name') param_list = {'dsName': self.datastore_name} if self.datacenter_path: param_list['dcPath'] = self.datacenter_path self.query = urllib.parse.urlencode(param_list) def get_uri(self): if netutils.is_valid_ipv6(self.server_host): base_url = '%s://[%s]%s' % (self.scheme, self.server_host, self.path) else: base_url = '%s://%s%s' % (self.scheme, self.server_host, self.path) return '%s?%s' % (base_url, self.query) # NOTE(flaper87): Commenting out for now, it's probably better to do # it during image add/get. This validation relies on a config param # which doesn't make sense to have in the StoreLocation instance. # def _is_valid_path(self, path): # sdir = self.conf.glance_store.vmware_store_image_dir.strip('/') # return path.startswith(os.path.join(DS_URL_PREFIX, sdir)) def parse_uri(self, uri): self.validate_schemas(uri, valid_schemas=('%s://' % STORE_SCHEME,)) (self.scheme, self.server_host, path, params, query, fragment) = urllib.parse.urlparse(uri) if not query: path, query = path.split('?') self.path = path self.query = query # NOTE(flaper87): Read comment on `_is_valid_path` # reason = 'Badly formed VMware datastore URI %(uri)s.' % {'uri': uri} # LOG.debug(reason) # raise exceptions.BadStoreUri(reason) parts = urllib.parse.parse_qs(self.query) dc_path = parts.get('dcPath') if dc_path: self.datacenter_path = dc_path[0] ds_name = parts.get('dsName') if ds_name: self.datastore_name = ds_name[0] @property def https_url(self): """ Creates an https url that can be used to upload/download data from a vmware store.
""" parsed_url = urllib.parse.urlparse(self.get_uri()) new_url = parsed_url._replace(scheme='https') return urllib.parse.urlunparse(new_url) class Store(glance_store.Store): """An implementation of the VMware datastore adapter.""" _CAPABILITIES = (capabilities.BitMasks.RW_ACCESS | capabilities.BitMasks.DRIVER_REUSABLE) OPTIONS = _VMWARE_OPTS WRITE_CHUNKSIZE = units.Mi def __init__(self, conf, backend=None): super(Store, self).__init__(conf, backend=backend) self.datastores = {} LOG.warning("The VMWare Datastore has been deprecated because " "the vmwareapi driver in nova was marked experimental and " "may be removed in a future release.") def reset_session(self): self.session = api.VMwareAPISession( self.server_host, self.server_username, self.server_password, self.api_retry_count, self.tpoll_interval, cacert=self.ca_file, insecure=self.api_insecure) return self.session def get_schemes(self): return (STORE_SCHEME,) def _sanity_check(self): if self.backend_group: store_conf = getattr(self.conf, self.backend_group) else: store_conf = self.conf.glance_store if store_conf.vmware_api_retry_count <= 0: msg = _('vmware_api_retry_count should be greater than zero') LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) if store_conf.vmware_task_poll_interval <= 0: msg = _('vmware_task_poll_interval should be greater than zero') LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) def configure(self, re_raise_bsc=False): self._sanity_check() self.scheme = STORE_SCHEME self.server_host = self._option_get('vmware_server_host') self.server_username = self._option_get('vmware_server_username') self.server_password = self._option_get('vmware_server_password') if self.backend_group: store_conf = getattr(self.conf, self.backend_group) else: store_conf = self.conf.glance_store self.api_retry_count = store_conf.vmware_api_retry_count self.tpoll_interval = store_conf.vmware_task_poll_interval self.ca_file = store_conf.vmware_ca_file self.api_insecure = store_conf.vmware_insecure if api is None: msg = _("Missing dependencies: oslo_vmware") raise exceptions.BadStoreConfiguration( store_name="vmware_datastore", reason=msg) self.session = self.reset_session() super(Store, self).configure(re_raise_bsc=re_raise_bsc) def _get_datacenter(self, datacenter_path): search_index_moref = self.session.vim.service_content.searchIndex dc_moref = self.session.invoke_api( self.session.vim, 'FindByInventoryPath', search_index_moref, inventoryPath=datacenter_path) dc_name = datacenter_path.rsplit('/', 1)[-1] # TODO(sabari): Add datacenter_path attribute in oslo.vmware dc_obj = oslo_datacenter.Datacenter(ref=dc_moref, name=dc_name) dc_obj.path = datacenter_path return dc_obj def _get_datastore(self, datacenter_path, datastore_name): dc_obj = self._get_datacenter(datacenter_path) datastore_ret = self.session.invoke_api( vim_util, 'get_object_property', self.session.vim, dc_obj.ref, 'datastore') if datastore_ret: datastore_refs = datastore_ret.ManagedObjectReference for ds_ref in datastore_refs: ds_obj = oslo_datastore.get_datastore_by_ref(self.session, ds_ref) if ds_obj.name == datastore_name: ds_obj.datacenter = dc_obj return ds_obj def _get_freespace(self, ds_obj): # TODO(sabari): Move this function into oslo_vmware's datastore object. 
return self.session.invoke_api( vim_util, 'get_object_property', self.session.vim, ds_obj.ref, 'summary.freeSpace') def _parse_datastore_info_and_weight(self, datastore): weight = 0 parts = [part.strip() for part in datastore.rsplit(":", 2)] if len(parts) < 2: msg = _('vmware_datastores format must be ' 'datacenter_path:datastore_name:weight or ' 'datacenter_path:datastore_name') LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) if len(parts) == 3 and parts[2]: try: weight = int(parts[2]) except ValueError: msg = (_('Invalid weight value %(weight)s in ' 'vmware_datastores configuration') % {'weight': weight}) LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="vmware_datastore", reason=msg) datacenter_path, datastore_name = parts[0], parts[1] if not datacenter_path or not datastore_name: msg = _('Invalid datacenter_path or datastore_name specified ' 'in vmware_datastores configuration') LOG.exception(msg) raise exceptions.BadStoreConfiguration( store_name="vmware_datastore", reason=msg) return datacenter_path, datastore_name, weight def _build_datastore_weighted_map(self, datastores): """Build an ordered map where the key is a weight and the value is a Datastore object. :param: a list of datastores in the format datacenter_path:datastore_name:weight :return: a map with key-value : """ ds_map = {} for ds in datastores: dc_path, name, weight = self._parse_datastore_info_and_weight(ds) # Fetch the server side reference. ds_obj = self._get_datastore(dc_path, name) if not ds_obj: msg = (_("Could not find datastore %(ds_name)s " "in datacenter %(dc_path)s") % {'ds_name': name, 'dc_path': dc_path}) LOG.error(msg) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=msg) ds_map.setdefault(weight, []).append(ds_obj) return ds_map def configure_add(self): datastores = self._option_get('vmware_datastores') self.datastores = self._build_datastore_weighted_map(datastores) if self.backend_group: store_conf = getattr(self.conf, self.backend_group) else: store_conf = self.conf.glance_store self.store_image_dir = store_conf.vmware_store_image_dir if self.backend_group: self._set_url_prefix() def _set_url_prefix(self): path = os.path.join(DS_URL_PREFIX, self.store_image_dir) if netutils.is_valid_ipv6(self.server_host): self._url_prefix = '%s://[%s]%s' % (self.scheme, self.server_host, path) else: self._url_prefix = '%s://%s%s' % (self.scheme, self.server_host, path) def select_datastore(self, image_size): """Select a datastore with free space larger than image size.""" for k, v in sorted(self.datastores.items(), reverse=True): max_ds = None max_fs = 0 for ds in v: # Update with current freespace ds.freespace = self._get_freespace(ds) if ds.freespace > max_fs: max_ds = ds max_fs = ds.freespace if max_ds and max_ds.freespace >= image_size: return max_ds msg = _LE("No datastore found with enough free space to contain an " "image of size %d") % image_size LOG.error(msg) raise exceptions.StorageFull() def _option_get(self, param): if self.backend_group: store_conf = getattr(self.conf, self.backend_group) else: store_conf = self.conf.glance_store result = getattr(store_conf, param) if result is None: reason = (_("Could not find %(param)s in configuration " "options.") % {'param': param}) raise exceptions.BadStoreConfiguration( store_name='vmware_datastore', reason=reason) return result def _build_vim_cookie_header(self, verify_session=False): """Build ESX host session cookie header.""" if verify_session and not 
self.session.is_current_session_active(): self.reset_session() vim_cookies = self.session.vim.client.cookiejar if len(list(vim_cookies)) > 0: cookie = list(vim_cookies)[0] return cookie.name + '=' + cookie.value @glance_store.driver.back_compat_add @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param hashing_algo: A hashlib algorithm identifier (string) :param context: A context object :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already exists :raises: `glance.common.exceptions.UnexpectedStatus` if the upload request returned an unexpected status. The expected responses are 201 Created and 200 OK. """ ds = self.select_datastore(image_size) image_file = _Reader(image_file, hashing_algo, verifier) headers = {} if image_size > 0: headers.update({'Content-Length': str(image_size)}) data = image_file else: data = utils.chunkiter(image_file, CHUNKSIZE) loc = StoreLocation({'scheme': self.scheme, 'server_host': self.server_host, 'image_dir': self.store_image_dir, 'datacenter_path': ds.datacenter.path, 'datastore_name': ds.name, 'image_id': image_id}, self.conf, backend_group=self.backend_group) # NOTE(arnaud): use a decorator when the config is not tied to self cookie = self._build_vim_cookie_header(True) headers = dict(headers) headers.update({'Cookie': cookie}) session = new_session(self.api_insecure, self.ca_file) url = loc.https_url try: response = session.put(url, data=data, headers=headers) except IOError as e: # TODO(sigmavirus24): Figure out what the new exception type would # be in requests. # When a session is not authenticated, the socket is closed by # the server after sending the response. http_client has an open # issue with https that raises Broken Pipe # error instead of returning the response. # See http://bugs.python.org/issue16062. Here, we log the error # and continue to look into the response. msg = _LE('Communication error sending http %(method)s request ' 'to the url %(url)s.\n' 'Got IOError %(e)s') % {'method': 'PUT', 'url': url, 'e': e} LOG.error(msg) raise exceptions.BackendException(msg) except Exception: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Failed to upload content of image ' '%(image)s'), {'image': image_id}) res = response.raw if res.status == requests.codes.conflict: raise exceptions.Duplicate(_("Image file %(image_id)s already " "exists!") % {'image_id': image_id}) if res.status not in (requests.codes.created, requests.codes.ok): msg = (_LE('Failed to upload content of image %(image)s. ' 'The request returned an unexpected status: %(status)s.' 
'\nThe response body:\n%(body)s') % {'image': image_id, 'status': res.status, 'body': getattr(res, 'body', None)}) LOG.error(msg) raise exceptions.BackendException(msg) metadata = {} if self.backend_group: metadata['store'] = self.backend_group return (loc.get_uri(), image_file.size, image_file.checksum.hexdigest(), image_file.os_hash_value.hexdigest(), metadata) @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of generator (for reading the image file) and image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ conn, resp, content_length = self._query(location, 'GET') iterator = http_response_iterator(conn, resp, self.READ_CHUNKSIZE) class ResponseIndexable(glance_store.Indexable): def another(self): try: return next(self.wrapped) except StopIteration: return '' return (ResponseIndexable(iterator, content_length), content_length) def get_size(self, location, context=None): """Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() """ conn = None try: conn, resp, size = self._query(location, 'HEAD') return size finally: # NOTE(sabari): Close the connection as the request was made with # stream=True. if conn is not None: conn.close() @capabilities.check def delete(self, location, context=None): """Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: NotFound if image does not exist """ file_path = '[%s] %s' % ( location.store_location.datastore_name, location.store_location.path[len(DS_URL_PREFIX):]) dc_obj = self._get_datacenter(location.store_location.datacenter_path) delete_task = self.session.invoke_api( self.session.vim, 'DeleteDatastoreFile_Task', self.session.vim.service_content.fileManager, name=file_path, datacenter=dc_obj.ref) try: self.session.wait_for_task(delete_task) except vexc.FileNotFoundException: msg = _('Image file %s not found') % file_path LOG.warning(msg) raise exceptions.NotFound(message=msg) except Exception: with excutils.save_and_reraise_exception(): LOG.exception(_LE('Failed to delete image %(image)s ' 'content.') % {'image': location.image_id}) def _query(self, location, method): session = new_session(self.api_insecure, self.ca_file) loc = location.store_location redirects_followed = 0 # TODO(sabari): The redirect logic was added to handle cases when the # backend redirects http url's to https. But the store never makes a # http request and hence this can be safely removed. while redirects_followed < MAX_REDIRECTS: conn, resp = self._retry_request(session, method, location) # NOTE(sigmavirus24): _retry_request handles 4xx and 5xx errors so # if the response is not a redirect, we can return early. 
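# For illustration (hypothetical URL): a permitted 301/302 target such # as https://esx.example.com/folder/image-id?dcPath=dc-1&dsName=ds-1 is # rewritten by _new_location() below into # vsphere://esx.example.com/folder/image-id?dcPath=dc-1&dsName=ds-1 # before the loop issues the request again.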
            if not conn.is_redirect:
                break

            redirects_followed += 1

            location_header = conn.headers.get('location')
            if location_header:
                if resp.status not in (301, 302):
                    reason = (_("The HTTP URL %(path)s attempted to redirect "
                                "with an invalid %(status)s status code.")
                              % {'path': loc.path, 'status': resp.status})
                    LOG.info(reason)
                    raise exceptions.BadStoreUri(message=reason)
                conn.close()

                location = self._new_location(location, location_header)
        else:
            # NOTE(sigmavirus24): We exceeded the maximum number of redirects
            msg = ("The HTTP URL exceeded %(max_redirects)s maximum "
                   "redirects." % {'max_redirects': MAX_REDIRECTS})
            LOG.debug(msg)
            raise exceptions.MaxRedirectsExceeded(redirects=MAX_REDIRECTS)

        content_length = int(resp.getheader('content-length', 0))

        return (conn, resp, content_length)

    def _retry_request(self, session, method, location):
        loc = location.store_location
        # NOTE(arnaud): use a decorator when the config is not tied to self
        for i in range(self.api_retry_count + 1):
            cookie = self._build_vim_cookie_header()
            headers = {'Cookie': cookie}
            conn = session.request(method, loc.https_url, headers=headers,
                                   stream=True)
            resp = conn.raw

            if resp.status >= 400:
                if resp.status == requests.codes.unauthorized:
                    self.reset_session()
                    continue
                if resp.status == requests.codes.not_found:
                    reason = _('VMware datastore could not find image at '
                               'URI.')
                    LOG.info(reason)
                    raise exceptions.NotFound(message=reason)
                msg = ('HTTP request returned a %(status)s status code.'
                       % {'status': resp.status})
                LOG.debug(msg)
                raise exceptions.BadStoreUri(msg)
            break

        return conn, resp

    def _new_location(self, old_location, url):
        store_name = old_location.store_name
        store_class = old_location.store_location.__class__
        image_id = old_location.image_id
        store_specs = old_location.store_specs
        # Note(sabari): The redirect url will have a scheme 'http(s)', but
        # the store only accepts urls with the scheme 'vsphere'. Thus,
        # replace it with the store's scheme.
        parsed_url = urllib.parse.urlparse(url)
        new_url = parsed_url._replace(scheme='vsphere')
        vsphere_url = urllib.parse.urlunparse(new_url)
        return glance_store.location.Location(store_name, store_class,
                                              self.conf, uri=vsphere_url,
                                              image_id=image_id,
                                              store_specs=store_specs,
                                              backend=self.backend_group)


def new_session(insecure=False, ca_file=None, total_retries=None):
    session = requests.Session()
    if total_retries is not None:
        http_adapter = adapters.HTTPAdapter(
            max_retries=retry.Retry(total=total_retries))
        https_adapter = adapters.HTTPAdapter(
            max_retries=retry.Retry(total=total_retries))
        session.mount('http://', http_adapter)
        session.mount('https://', https_adapter)
    session.verify = ca_file if ca_file else not insecure
    return session


glance_store-4.8.1/glance_store/backend.py

# Copyright 2010-2011 OpenStack Foundation
# Copyright 2018 Verizon Wireless
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
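# A minimal usage sketch of this module's public API (hedged: the image
# identifier and data below are placeholders, not part of this module):
#
#     from oslo_config import cfg
#     from glance_store import backend
#
#     conf = cfg.CONF
#     backend.register_opts(conf)
#     backend.create_stores(conf)
#     backend.verify_default_store()
#
#     # Write image bytes to the default store, then stream them back.
#     uri, size, checksum, metadata = backend.add_to_backend(
#         conf, image_id, data, size)
#     chunks = backend.get_from_backend(uri)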
import hashlib import logging from oslo_config import cfg from oslo_utils import encodeutils from stevedore import driver from stevedore import extension from glance_store import capabilities from glance_store import exceptions from glance_store.i18n import _ from glance_store import location CONF = cfg.CONF LOG = logging.getLogger(__name__) _STORE_OPTS = [ cfg.ListOpt('stores', default=['file', 'http'], deprecated_for_removal=True, deprecated_since='Rocky', deprecated_reason=""" This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. """, help=""" List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are ``file`` and ``http``. Possible values: * A comma separated list that could include: * file * http * swift * rbd * cinder * vmware * s3 Related Options: * default_store """), cfg.StrOpt('default_store', default='file', choices=('file', 'filesystem', 'http', 'https', 'swift', 'swift+http', 'swift+https', 'swift+config', 'rbd', 'cinder', 'vsphere', 's3'), deprecated_for_removal=True, deprecated_since='Rocky', deprecated_reason=""" This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. """, help=""" The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses ``file`` as the default scheme to store images with the ``file`` store. NOTE: The value given for this configuration option must be a valid scheme for a store registered with the ``stores`` configuration option. Possible values: * file * filesystem * http * https * swift * swift+http * swift+https * swift+config * rbd * cinder * vsphere * s3 Related Options: * stores """), ] _STORE_CFG_GROUP = 'glance_store' def _list_opts(): driver_opts = [] mgr = extension.ExtensionManager('glance_store.drivers') # NOTE(zhiyan): Handle available drivers entry_points provided # NOTE(nikhil): Return a sorted list of drivers to ensure that the sample # configuration files generated by oslo config generator retain the order # in which the config opts appear across different runs. If this order of # config opts is not preserved, some downstream packagers may see a long # diff of the changes though not relevant as only order has changed. See # some more details at bug 1619487. drivers = sorted([ext.name for ext in mgr]) handled_drivers = [] # Used to handle backwards-compatible entries for store_entry in drivers: driver_cls = _load_store(None, store_entry, False) if driver_cls and driver_cls not in handled_drivers: if getattr(driver_cls, 'OPTIONS', None) is not None: driver_opts += driver_cls.OPTIONS handled_drivers.append(driver_cls) # NOTE(zhiyan): This separated approach could list # store options before all driver ones, which easier # to read and configure by operator. 
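    # The value returned below is a list of (group, opts) pairs, roughly
    # (illustrative):
    #   [('glance_store', [<stores opt>, <default_store opt>]),
    #    ('glance_store', [<per-driver opts>])]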
return ([(_STORE_CFG_GROUP, _STORE_OPTS)] + [(_STORE_CFG_GROUP, driver_opts)]) def register_opts(conf): opts = _list_opts() for group, opt_list in opts: LOG.debug("Registering options for group %s" % group) for opt in opt_list: conf.register_opt(opt, group=group) class Indexable(object): """Indexable for file-like objs iterators Wrapper that allows an iterator or filelike be treated as an indexable data structure. This is required in the case where the return value from Store.get() is passed to Store.add() when adding a Copy-From image to a Store where the client library relies on eventlet GreenSockets, in which case the data to be written is indexed over. """ def __init__(self, wrapped, size): """ Initialize the object :param wrappped: the wrapped iterator or filelike. :param size: the size of data available """ self.wrapped = wrapped self.size = int(size) if size else (wrapped.len if hasattr(wrapped, 'len') else 0) self.cursor = 0 self.chunk = None def __iter__(self): """ Delegate iteration to the wrapped instance. """ for self.chunk in self.wrapped: yield self.chunk def __getitem__(self, i): """ Index into the next chunk (or previous chunk in the case where the last data returned was not fully consumed). :param i: a slice-to-the-end """ start = (i.start or 0) if isinstance(i, slice) else i if start < self.cursor: return self.chunk[(start - self.cursor):] self.chunk = self.another() if self.chunk: self.cursor += len(self.chunk) return self.chunk def another(self): """Implemented by subclasses to return the next element.""" raise NotImplementedError def getvalue(self): """ Return entire string value... used in testing """ return self.wrapped.getvalue() def __len__(self): """ Length accessor. """ return self.size def _load_store(conf, store_entry, invoke_load=True): try: LOG.debug("Attempting to import store %s", store_entry) mgr = driver.DriverManager('glance_store.drivers', store_entry, invoke_args=[conf], invoke_on_load=invoke_load) return mgr.driver except RuntimeError as e: LOG.warning("Failed to load driver %(driver)s. The " "driver will be disabled" % dict(driver=str([driver, e]))) def _load_stores(conf): for store_entry in set(conf.glance_store.stores): try: # FIXME(flaper87): Don't hide BadStoreConfiguration # exceptions. These exceptions should be propagated # to the user of the library. store_instance = _load_store(conf, store_entry) if not store_instance: continue yield (store_entry, store_instance) except exceptions.BadStoreConfiguration: continue def create_stores(conf=CONF): """ Registers all store modules and all schemes from the given config. Duplicates are not re-registered. """ store_count = 0 for (store_entry, store_instance) in _load_stores(conf): try: schemes = store_instance.get_schemes() store_instance.configure(re_raise_bsc=False) except NotImplementedError: continue if not schemes: raise exceptions.BackendException('Unable to register store %s. ' 'No schemes associated with it.' 
% store_entry) else: LOG.debug("Registering store %s with schemes %s", store_entry, schemes) scheme_map = {} loc_cls = store_instance.get_store_location_class() for scheme in schemes: scheme_map[scheme] = { 'store': store_instance, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_map(scheme_map) store_count += 1 return store_count def verify_default_store(): scheme = CONF.glance_store.default_store try: get_store_from_scheme(scheme) except exceptions.UnknownScheme: msg = _("Store for scheme %s not found") % scheme raise RuntimeError(msg) def get_known_schemes(): """Returns list of known schemes.""" return location.SCHEME_TO_CLS_MAP.keys() def get_store_from_scheme(scheme): """ Given a scheme, return the appropriate store object for handling that scheme. """ if scheme not in location.SCHEME_TO_CLS_MAP: raise exceptions.UnknownScheme(scheme=scheme) scheme_info = location.SCHEME_TO_CLS_MAP[scheme] store = scheme_info['store'] if not store.is_capable(capabilities.BitMasks.DRIVER_REUSABLE): # Driver instance isn't stateless so it can't # be reused safely and need recreation. store_entry = scheme_info['store_entry'] store = _load_store(store.conf, store_entry, invoke_load=True) store.configure() try: scheme_map = {} loc_cls = store.get_store_location_class() for scheme in store.get_schemes(): scheme_map[scheme] = { 'store': store, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_map(scheme_map) except NotImplementedError: scheme_info['store'] = store return store def get_store_from_uri(uri): """ Given a URI, return the store object that would handle operations on the URI. :param uri: URI to analyze """ scheme = uri[0:uri.find('/') - 1] return get_store_from_scheme(scheme) def get_from_backend(uri, offset=0, chunk_size=None, context=None): """Yields chunks of data from backend specified by uri.""" loc = location.get_location_from_uri(uri, conf=CONF) store = get_store_from_uri(uri) return store.get(loc, offset=offset, chunk_size=chunk_size, context=context) def get_size_from_backend(uri, context=None): """Retrieves image size from backend specified by uri.""" loc = location.get_location_from_uri(uri, conf=CONF) store = get_store_from_uri(uri) return store.get_size(loc, context=context) def delete_from_backend(uri, context=None): """Removes chunks of data from backend specified by uri.""" loc = location.get_location_from_uri(uri, conf=CONF) store = get_store_from_uri(uri) return store.delete(loc, context=context) def get_store_from_location(uri): """ Given a location (assumed to be a URL), attempt to determine the store from the location. We use here a simple guess that the scheme of the parsed URL is the store... :param uri: Location to check for the store """ loc = location.get_location_from_uri(uri, conf=CONF) return loc.store_name def check_location_metadata(val, key=''): if isinstance(val, dict): for key in val: check_location_metadata(val[key], key=key) elif isinstance(val, list): ndx = 0 for v in val: check_location_metadata(v, key='%s[%d]' % (key, ndx)) ndx = ndx + 1 elif not isinstance(val, str): raise exceptions.BackendException(_("The image metadata key %(key)s " "has an invalid type of %(type)s. " "Only dict, list, and unicode are " "supported.") % dict(key=key, type=type(val))) def _check_metadata(store, metadata): if not isinstance(metadata, dict): msg = (_("The storage driver %(driver)s returned invalid " " metadata %(metadata)s. 
This must be a dictionary type") % dict(driver=str(store), metadata=str(metadata))) LOG.error(msg) raise exceptions.BackendException(msg) try: check_location_metadata(metadata) except exceptions.BackendException as e: e_msg = (_("A bad metadata structure was returned from the " "%(driver)s storage driver: %(metadata)s. %(e)s.") % dict(driver=encodeutils.exception_to_unicode(store), metadata=encodeutils.exception_to_unicode(metadata), e=encodeutils.exception_to_unicode(e))) LOG.error(e_msg) raise exceptions.BackendException(e_msg) def store_add_to_backend(image_id, data, size, store, context=None, verifier=None): """ A wrapper around a call to each stores add() method. This gives glance a common place to check the output :param image_id: The image add to which data is added :param data: The data to be stored :param size: The length of the data in bytes :param store: The store to which the data is being added :param context: The request context :param verifier: An object used to verify signatures for images :return: The url location of the file, the size amount of data, the checksum of the data the storage systems metadata dictionary for the location """ (location, size, checksum, metadata) = store.add(image_id, data, size, context=context, verifier=verifier) if metadata is not None: _check_metadata(store, metadata) return (location, size, checksum, metadata) def store_add_to_backend_with_multihash( image_id, data, size, hashing_algo, store, context=None, verifier=None): """ A wrapper around a call to each store's add() method that requires a hashing_algo identifier and returns a 5-tuple including the "multihash" computed using the specified hashing_algo. (This is an enhanced version of store_add_to_backend(), which is left as-is for backward compatibility.) 
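    A minimal illustrative call (the names below are placeholders, not
    part of this API):

        ret = store_add_to_backend_with_multihash(
            image_id, data, size, 'sha256', store)
        (url, size, checksum, multihash, metadata) = ret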
:param image_id: The image add to which data is added :param data: The data to be stored :param size: The length of the data in bytes :param store: The store to which the data is being added :param hashing_algo: A hashlib algorithm identifier (string) :param context: The request context :param verifier: An object used to verify signatures for images :return: The url location of the file, the size amount of data, the checksum of the data, the multihash of the data, the storage system's metadata dictionary for the location :raises: ``glance_store.exceptions.BackendException`` ``glance_store.exceptions.UnknownHashingAlgo`` """ if hashing_algo not in hashlib.algorithms_available: raise exceptions.UnknownHashingAlgo(algo=hashing_algo) (location, size, checksum, multihash, metadata) = store.add( image_id, data, size, hashing_algo, context=context, verifier=verifier) if metadata is not None: _check_metadata(store, metadata) return (location, size, checksum, multihash, metadata) def add_to_backend(conf, image_id, data, size, scheme=None, context=None, verifier=None): if scheme is None: scheme = conf['glance_store']['default_store'] store = get_store_from_scheme(scheme) return store_add_to_backend(image_id, data, size, store, context, verifier) def add_to_backend_with_multihash(conf, image_id, data, size, hashing_algo, scheme=None, context=None, verifier=None): if scheme is None: scheme = conf['glance_store']['default_store'] store = get_store_from_scheme(scheme) return store_add_to_backend_with_multihash( image_id, data, size, hashing_algo, store, context, verifier) def set_acls(location_uri, public=False, read_tenants=[], write_tenants=None, context=None): if write_tenants is None: write_tenants = [] loc = location.get_location_from_uri(location_uri, conf=CONF) scheme = get_store_from_location(location_uri) store = get_store_from_scheme(scheme) try: store.set_acls(loc, public=public, read_tenants=read_tenants, write_tenants=write_tenants, context=context) except NotImplementedError: LOG.debug(_("Skipping store.set_acls... not implemented.")) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/capabilities.py0000664000175000017500000001277600000000000021701 0ustar00zuulzuul00000000000000# Copyright (c) 2015 IBM, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
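# Capability bits compose by bitwise OR, so a "wider" capability mask
# contains the narrower ones. An illustrative check against the BitMasks
# values defined below:
#
#     READ_OFFSET = 0b00000011   # contains READ_ACCESS (0b00000001)
#     READ_OFFSET & READ_ACCESS == READ_ACCESS   # True; this is what
#                                                # StoreCapability.contains()
#                                                # evaluates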
"""Glance Store capability""" import logging import threading import enum from oslo_utils import reflection from glance_store import exceptions from glance_store.i18n import _LW _STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK = {} _STORE_CAPABILITES_UPDATE_SCHEDULING_LOCK = threading.Lock() LOG = logging.getLogger(__name__) class BitMasks(enum.IntEnum): NONE = 0b00000000 ALL = 0b11111111 READ_ACCESS = 0b00000001 # Included READ_ACCESS READ_OFFSET = 0b00000011 # Included READ_ACCESS READ_CHUNK = 0b00000101 # READ_OFFSET | READ_CHUNK READ_RANDOM = 0b00000111 WRITE_ACCESS = 0b00001000 # Included WRITE_ACCESS WRITE_OFFSET = 0b00011000 # Included WRITE_ACCESS WRITE_CHUNK = 0b00101000 # WRITE_OFFSET | WRITE_CHUNK WRITE_RANDOM = 0b00111000 # READ_ACCESS | WRITE_ACCESS RW_ACCESS = 0b00001001 # READ_OFFSET | WRITE_OFFSET RW_OFFSET = 0b00011011 # READ_CHUNK | WRITE_CHUNK RW_CHUNK = 0b00101101 # RW_OFFSET | RW_CHUNK RW_RANDOM = 0b00111111 # driver is stateless and can be reused safely DRIVER_REUSABLE = 0b01000000 class StoreCapability(object): def __init__(self): # Set static store capabilities base on # current driver implementation. self._capabilities = getattr(self.__class__, "_CAPABILITIES", 0) @property def capabilities(self): return self._capabilities @staticmethod def contains(x, y): return x & y == y def update_capabilities(self): """ Update dynamic storage capabilities based on current driver configuration and backend status when needed. As a hook, the function will be triggered in two cases: calling once after store driver get configured, it was used to update dynamic storage capabilities based on current driver configuration, or calling when the capabilities checking of an operation failed every time, this was used to refresh dynamic storage capabilities based on backend status then. This function shouldn't raise any exception out. """ LOG.debug(("Store %s doesn't support updating dynamic " "storage capabilities. Please overwrite " "'update_capabilities' method of the store to " "implement updating logics if needed.") % reflection.get_class_name(self)) def is_capable(self, *capabilities): """ Check if requested capability(s) are supported by current driver instance. :param capabilities: required capability(s). """ caps = 0 for cap in capabilities: caps |= int(cap) return self.contains(self.capabilities, caps) def set_capabilities(self, *dynamic_capabilites): """ Set dynamic storage capabilities based on current driver configuration and backend status. :param dynamic_capabilites: dynamic storage capability(s). """ for cap in dynamic_capabilites: self._capabilities |= int(cap) def unset_capabilities(self, *dynamic_capabilites): """ Unset dynamic storage capabilities. :param dynamic_capabilites: dynamic storage capability(s). """ caps = 0 for cap in dynamic_capabilites: caps |= int(cap) # TODO(zhiyan): Cascaded capability removal is # skipped currently, we can add it back later # when a concrete requirement comes out. # For example, when removing READ_ACCESS, all # read related capabilities need to be removed # together, e.g. READ_RANDOM. 
self._capabilities &= ~caps def check(store_op_fun): def op_checker(store, *args, **kwargs): get_capabilities = [ BitMasks.READ_ACCESS, BitMasks.READ_OFFSET if kwargs.get('offset') else BitMasks.NONE, BitMasks.READ_CHUNK if kwargs.get('chunk_size') else BitMasks.NONE ] op_cap_map = { 'get': get_capabilities, 'add': [BitMasks.WRITE_ACCESS], 'delete': [BitMasks.WRITE_ACCESS]} op_exec_map = { 'get': (exceptions.StoreRandomGetNotSupported if kwargs.get('offset') or kwargs.get('chunk_size') else exceptions.StoreGetNotSupported), 'add': exceptions.StoreAddDisabled, 'delete': exceptions.StoreDeleteNotSupported} op = store_op_fun.__name__.lower() try: req_cap = op_cap_map[op] except KeyError: LOG.warning(_LW('The capability of operation "%s" ' 'could not be checked.'), op) else: if not store.is_capable(*req_cap): kwargs.setdefault('offset', 0) kwargs.setdefault('chunk_size', None) raise op_exec_map[op](**kwargs) return store_op_fun(store, *args, **kwargs) return op_checker ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1732116 glance_store-4.8.1/glance_store/common/0000775000175000017500000000000000000000000020151 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/common/__init__.py0000664000175000017500000000000000000000000022250 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/common/attachment_state_manager.py0000664000175000017500000002244600000000000025555 0ustar00zuulzuul00000000000000# Copyright 2021 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import logging import socket import threading from oslo_config import cfg from glance_store.common import cinder_utils from glance_store import exceptions from glance_store.i18n import _LE, _LW LOG = logging.getLogger(__name__) HOST = socket.gethostname() CONF = cfg.CONF class AttachmentStateManagerMeta(type): _instance = {} def __call__(cls, *args, **kwargs): if cls not in cls._instance: cls._instance[cls] = super( AttachmentStateManagerMeta, cls).__call__(*args, **kwargs) return cls._instance[cls] class _AttachmentStateManager(metaclass=AttachmentStateManagerMeta): """A global manager of a volume's multiple attachments. _AttachmentStateManager manages a _AttachmentState object for the current glance node. Primarily it creates one on object initialization and returns it via get_state(). _AttachmentStateManager manages concurrency itself. Independent callers do not need to consider interactions between multiple _AttachmentStateManager calls when designing their own locking. 
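    A hedged usage sketch (the module-level attach()/detach() helpers
    defined below wrap exactly this pattern):

        with __manager__.get_state() as state:
            attachment = state.attach(client, volume_id, host, mode='rw')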
""" # Reset state of global _AttachmentStateManager state = None use_count = 0 # Guards both state and use_count cond = threading.Condition() def __init__(self, host): """Initialise a new _AttachmentState We will block before creating a new state until all operations using a previous state have completed. :param host: host """ # Wait until all operations using a previous state are # complete before initialising a new one. Note that self.state is # already None, set either by initialisation or by host_down. This # means the current state will not be returned to any new callers, # and use_count will eventually reach zero. # We do this to avoid a race between _AttachmentState initialisation # and an on-going attach/detach operation self.host = host while self.use_count != 0: self.cond.wait() # Another thread might have initialised state while we were # waiting if self.state is None: LOG.debug('Initialising _AttachmentStateManager') self.state = _AttachmentState() @contextlib.contextmanager def get_state(self): """Return the current attachment state. _AttachmentStateManager will not permit a new state object to be created while any previous state object is still in use. :rtype: _AttachmentState """ # We hold the instance lock here so that if a _AttachmentState is # currently initialising we'll wait for it to complete rather than # fail. with self.cond: state = self.state if state is None: LOG.error('Host not initialized') raise exceptions.HostNotInitialized(host=self.host) self.use_count += 1 try: LOG.debug('Got _AttachmentState') yield state finally: with self.cond: self.use_count -= 1 self.cond.notify_all() class _AttachmentState(object): """A data structure recording all managed attachments. _AttachmentState ensures that the glance node only attempts to a single multiattach volume in use by multiple attachments once, and that it is not disconnected until it is no longer in use by any attachments. Callers should not create a _AttachmentState directly, but should obtain it via: with attachment.get_manager().get_state() as state: state.attach(...) _AttachmentState manages concurrency itself. Independent callers do not need to consider interactions between multiple _AttachmentState calls when designing their own locking. """ class _Attachment(object): # A single multiattach volume, and the set of attachments in use # on it. def __init__(self): # A guard for operations on this volume self.lock = threading.Lock() # The set of attachments on this volume self.attachments = set() def add_attachment(self, attachment_id, host): self.attachments.add((attachment_id, host)) def remove_attachment(self, attachment_id, host): self.attachments.remove((attachment_id, host)) def in_use(self): return len(self.attachments) > 0 def __init__(self): """Initialise _AttachmentState""" self.volumes = collections.defaultdict(self._Attachment) self.volume_api = cinder_utils.API() @contextlib.contextmanager def _get_locked(self, volume): """Get a locked attachment object :param mountpoint: The path of the volume whose attachment we should return. 
:rtype: _AttachmentState._Attachment """ while True: vol = self.volumes[volume] with vol.lock: if self.volumes[volume] is vol: yield vol break def attach(self, client, volume_id, host, mode=None): """Ensure a volume is available for an attachment and create an attachment :param client: Cinderclient object :param volume_id: ID of the volume to attach :param host: The host the volume will be attached to :param mode: The attachment mode """ LOG.debug('_AttachmentState.attach(volume_id=%(volume_id)s, ' 'host=%(host)s, mode=%(mode)s)', {'volume_id': volume_id, 'host': host, 'mode': mode}) with self._get_locked(volume_id) as vol_attachment: try: attachment = self.volume_api.attachment_create( client, volume_id, mode=mode) except Exception: LOG.exception(_LE('Error attaching volume %(volume_id)s'), {'volume_id': volume_id}) del self.volumes[volume_id] raise vol_attachment.add_attachment(attachment['id'], host) LOG.debug('_AttachmentState.attach for volume_id=%(volume_id)s ' 'and attachment_id=%(attachment_id)s completed successfully', {'volume_id': volume_id, 'attachment_id': attachment['id']}) return attachment def detach(self, client, attachment_id, volume_id, host, conn, connection_info, device): """Delete the attachment no longer in use, and disconnect volume if necessary. :param client: Cinderclient object :param attachment_id: ID of the attachment between volume and host :param volume_id: ID of the volume to attach :param host: The host the volume was attached to :param conn: connector object :param connection_info: connection information of the volume we are detaching :device: device used to write image """ LOG.debug('_AttachmentState.detach(vol_id=%(volume_id)s, ' 'attachment_id=%(attachment_id)s)', {'volume_id': volume_id, 'attachment_id': attachment_id}) with self._get_locked(volume_id) as vol_attachment: try: vol_attachment.remove_attachment(attachment_id, host) except KeyError: LOG.warning(_LW("Request to remove attachment " "(%(volume_id)s, %(host)s) but we " "don't think it's in use."), {'volume_id': volume_id, 'host': host}) if not vol_attachment.in_use(): conn.disconnect_volume(device) del self.volumes[volume_id] self.volume_api.attachment_delete(client, attachment_id) LOG.debug('_AttachmentState.detach for volume %(volume_id)s ' 'and attachment_id=%(attachment_id)s completed ' 'successfully', {'volume_id': volume_id, 'attachment_id': attachment_id}) __manager__ = _AttachmentStateManager(HOST) def attach(client, volume_id, host, mode=None): """A convenience wrapper around _AttachmentState.attach()""" with __manager__.get_state() as attach_state: attachment = attach_state.attach(client, volume_id, host, mode=mode) return attachment def detach(client, attachment_id, volume_id, host, conn, connection_info, device): """A convenience wrapper around _AttachmentState.detach()""" with __manager__.get_state() as attach_state: attach_state.detach(client, attachment_id, volume_id, host, conn, connection_info, device) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/common/cinder_utils.py0000664000175000017500000002302000000000000023204 0ustar00zuulzuul00000000000000# Copyright 2021 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging from cinderclient.apiclient import exceptions as apiclient_exception from cinderclient import exceptions as cinder_exception from keystoneauth1 import exceptions as keystone_exc from oslo_utils import excutils import retrying from glance_store import exceptions from glance_store.i18n import _LE LOG = logging.getLogger(__name__) def handle_exceptions(method): """Transforms the exception for the volume but keeps its traceback intact. """ def wrapper(self, ctx, volume_id, *args, **kwargs): try: res = method(self, ctx, volume_id, *args, **kwargs) except (keystone_exc.NotFound, cinder_exception.NotFound, cinder_exception.OverLimit) as e: raise exceptions.BackendException(str(e)) return res return wrapper def _retry_on_internal_server_error(e): if isinstance(e, apiclient_exception.InternalServerError): return True return False def _retry_on_bad_request(e): if isinstance(e, cinder_exception.BadRequest): return True return False class API(object): """API for interacting with the cinder.""" @handle_exceptions def create(self, client, size, name, volume_type=None, metadata=None): kwargs = dict(volume_type=volume_type, metadata=metadata, name=name) volume = client.volumes.create(size, **kwargs) return volume def delete(self, client, volume_id): client.volumes.delete(volume_id) @retrying.retry(stop_max_attempt_number=5, retry_on_exception=_retry_on_bad_request, wait_exponential_multiplier=1000, wait_exponential_max=10000) @handle_exceptions def attachment_create(self, client, volume_id, connector=None, mountpoint=None, mode=None): """Create a volume attachment. This requires microversion >= 3.54. The attachment_create call was introduced in microversion 3.27. We need 3.54 as minimum here as we need attachment_complete to finish the attaching process and it which was introduced in version 3.44 and we also pass the attach mode which was introduced in version 3.54. :param client: cinderclient object :param volume_id: UUID of the volume on which to create the attachment. :param connector: host connector dict; if None, the attachment will be 'reserved' but not yet attached. :param mountpoint: Optional mount device name for the attachment, e.g. "/dev/vdb". This is only used if a connector is provided. :param mode: The mode in which the attachment is made i.e. read only(ro) or read/write(rw) :returns: a dict created from the cinderclient.v3.attachments.VolumeAttachment object with a backward compatible connection_info dict """ if connector and mountpoint and 'mountpoint' not in connector: connector['mountpoint'] = mountpoint try: attachment_ref = client.attachments.create( volume_id, connector, mode=mode) return attachment_ref except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): # While handling simultaneous requests, the volume can be # in different states and we retry on attachment_create # until the volume reaches a valid state for attachment. # Hence, it is better to not log 400 cases as no action # from users is needed in this case if getattr(ex, 'code', None) != 400: LOG.error(_LE('Create attachment failed for volume ' '%(volume_id)s. 
Error: %(msg)s ' 'Code: %(code)s'), {'volume_id': volume_id, 'msg': str(ex), 'code': getattr(ex, 'code', None)}) @handle_exceptions def attachment_get(self, client, attachment_id): """Gets a volume attachment. :param client: cinderclient object :param attachment_id: UUID of the volume attachment to get. :returns: a dict created from the cinderclient.v3.attachments.VolumeAttachment object with a backward compatible connection_info dict """ try: attachment_ref = client.attachments.show( attachment_id) return attachment_ref except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('Show attachment failed for attachment ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': str(ex), 'code': getattr(ex, 'code', None)}) @handle_exceptions def attachment_update(self, client, attachment_id, connector, mountpoint=None): """Updates the connector on the volume attachment. An attachment without a connector is considered reserved but not fully attached. :param client: cinderclient object :param attachment_id: UUID of the volume attachment to update. :param connector: host connector dict. This is required when updating a volume attachment. To terminate a connection, the volume attachment for that connection must be deleted. :param mountpoint: Optional mount device name for the attachment, e.g. "/dev/vdb". Theoretically this is optional per volume backend, but in practice it's normally required so it's best to always provide a value. :returns: a dict created from the cinderclient.v3.attachments.VolumeAttachment object with a backward compatible connection_info dict """ if mountpoint and 'mountpoint' not in connector: connector['mountpoint'] = mountpoint try: attachment_ref = client.attachments.update( attachment_id, connector) return attachment_ref except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('Update attachment failed for attachment ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': str(ex), 'code': getattr(ex, 'code', None)}) @handle_exceptions def attachment_complete(self, client, attachment_id): """Marks a volume attachment complete. This call should be used to inform Cinder that a volume attachment is fully connected on the host so Cinder can apply the necessary state changes to the volume info in its database. :param client: cinderclient object :param attachment_id: UUID of the volume attachment to update. """ try: client.attachments.complete(attachment_id) except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('Complete attachment failed for attachment ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': str(ex), 'code': getattr(ex, 'code', None)}) @handle_exceptions @retrying.retry(stop_max_attempt_number=5, retry_on_exception=_retry_on_internal_server_error) def attachment_delete(self, client, attachment_id): try: client.attachments.delete(attachment_id) except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('Delete attachment failed for attachment ' '%(id)s. 
Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': str(ex), 'code': getattr(ex, 'code', None)}) @handle_exceptions def extend_volume(self, client, volume, new_size): """Extend volume :param client: cinderclient object :param volume: UUID of the volume to extend :param new_size: new size of the volume after extend """ try: client.volumes.extend(volume, new_size) except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('Extend volume failed for volume ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': volume.id, 'msg': str(ex), 'code': getattr(ex, 'code', None)}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/common/fs_mount.py0000664000175000017500000003452500000000000022366 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import contextlib import logging import os import socket import threading from oslo_concurrency import processutils from oslo_config import cfg from glance_store import exceptions from glance_store.i18n import _LE, _LW LOG = logging.getLogger(__name__) HOST = socket.gethostname() CONF = cfg.CONF class HostMountStateManagerMeta(type): _instance = {} def __call__(cls, *args, **kwargs): if cls not in cls._instance: cls._instance[cls] = super( HostMountStateManagerMeta, cls).__call__(*args, **kwargs) return cls._instance[cls] class _HostMountStateManager(metaclass=HostMountStateManagerMeta): """A global manager of filesystem mounts. _HostMountStateManager manages a _HostMountState object for the current glance node. Primarily it creates one on object initialization and returns it via get_state(). _HostMountStateManager manages concurrency itself. Independent callers do not need to consider interactions between multiple _HostMountStateManager calls when designing their own locking. """ # Reset state of global _HostMountStateManager state = None use_count = 0 # Guards both state and use_count cond = threading.Condition() def __init__(self, host): """Initialise a new _HostMountState We will block before creating a new state until all operations using a previous state have completed. :param host: host """ # Wait until all operations using a previous state are # complete before initialising a new one. Note that self.state is # already None, set either by initialisation or by host_down. This # means the current state will not be returned to any new callers, # and use_count will eventually reach zero. 
# We do this to avoid a race between _HostMountState initialisation # and an on-going mount/unmount operation self.host = host while self.use_count != 0: self.cond.wait() # Another thread might have initialised state while we were # waiting if self.state is None: LOG.debug('Initialising _HostMountState') self.state = _HostMountState() backends = [] enabled_backends = CONF.enabled_backends if enabled_backends: for backend in enabled_backends: if enabled_backends[backend] == 'cinder': backends.append(backend) else: backends.append('glance_store') for backend in backends: mountpoint = getattr(CONF, backend).cinder_mount_point_base # This is currently designed for cinder nfs backend only. # Later can be modified to work with other *fs backends. mountpoint = os.path.join(mountpoint, 'nfs') # There will probably be the same rootwrap file for all stores, # generalizing this will be done in a later refactoring rootwrap = getattr(CONF, backend).rootwrap_config rootwrap = ('sudo glance-rootwrap %s' % rootwrap) dirs = [] # fetch the directories in the mountpoint path if os.path.isdir(mountpoint): dirs = os.listdir(mountpoint) else: continue if not dirs: return for dir in dirs: # for every directory in the mountpath, we # unmount it (if mounted) and remove it dir = os.path.join(mountpoint, dir) with self.get_state() as mount_state: if os.path.exists(dir) and not os.path.ismount(dir): try: os.rmdir(dir) except Exception as ex: LOG.debug( "Couldn't remove directory " "%(mountpoint)s: %(reason)s", {'mountpoint': mountpoint, 'reason': ex}) else: mount_state.umount(None, dir, HOST, rootwrap) @contextlib.contextmanager def get_state(self): """Return the current mount state. _HostMountStateManager will not permit a new state object to be created while any previous state object is still in use. :rtype: _HostMountState """ # We hold the instance lock here so that if a _HostMountState is # currently initialising we'll wait for it to complete rather than # fail. with self.cond: state = self.state if state is None: LOG.error('Host not initialized') raise exceptions.HostNotInitialized(host=self.host) self.use_count += 1 try: LOG.debug('Got _HostMountState') yield state finally: with self.cond: self.use_count -= 1 self.cond.notify_all() class _HostMountState(object): """A data structure recording all managed mountpoints and the attachments in use for each one. _HostMountState ensures that the glance node only attempts to mount a single mountpoint in use by multiple attachments once, and that it is not unmounted until it is no longer in use by any attachments. Callers should not create a _HostMountState directly, but should obtain it via: with mount.get_manager().get_state() as state: state.mount(...) _HostMountState manages concurrency itself. Independent callers do not need to consider interactions between multiple _HostMountState calls when designing their own locking. """ class _MountPoint(object): """A single mountpoint, and the set of attachments in use on it.""" def __init__(self): # A guard for operations on this mountpoint # N.B. Care is required using this lock, as it will be deleted # if the containing _MountPoint is deleted. self.lock = threading.Lock() # The set of attachments on this mountpoint. 
            self.attachments = set()

        def add_attachment(self, vol_name, host):
            self.attachments.add((vol_name, host))

        def remove_attachment(self, vol_name, host):
            self.attachments.remove((vol_name, host))

        def in_use(self):
            return len(self.attachments) > 0

    def __init__(self):
        """Initialise _HostMountState"""
        self.mountpoints = collections.defaultdict(self._MountPoint)

    @contextlib.contextmanager
    def _get_locked(self, mountpoint):
        """Get a locked mountpoint object

        :param mountpoint: The path of the mountpoint whose object we should
                           return.
        :rtype: _HostMountState._MountPoint
        """
        while True:
            mount = self.mountpoints[mountpoint]
            with mount.lock:
                if self.mountpoints[mountpoint] is mount:
                    yield mount
                    break

    def mount(self, fstype, export, vol_name, mountpoint, host,
              rootwrap_helper, options):
        """Ensure a mountpoint is available for an attachment, mounting it
        if necessary.

        If this is the first attachment on this mountpoint, we will mount it
        with:

          mount -t <fstype> <options> <export> <mountpoint>

        :param fstype: The filesystem type to be passed to the mount command.
        :param export: The type-specific identifier of the filesystem to be
                       mounted. e.g. for nfs 'host.example.com:/mountpoint'.
        :param vol_name: The name of the volume on the remote filesystem.
        :param mountpoint: The directory where the filesystem will be
                           mounted on the local compute host.
        :param host: The host the volume will be attached to.
        :param options: An arbitrary list of additional arguments to be
                        passed to the mount command immediately before export
                        and mountpoint.
        """
        LOG.debug('_HostMountState.mount(fstype=%(fstype)s, '
                  'export=%(export)s, vol_name=%(vol_name)s, %(mountpoint)s, '
                  'options=%(options)s)',
                  {'fstype': fstype, 'export': export,
                   'vol_name': vol_name, 'mountpoint': mountpoint,
                   'options': options})
        with self._get_locked(mountpoint) as mount:
            if not os.path.ismount(mountpoint):
                LOG.debug('Mounting %(mountpoint)s',
                          {'mountpoint': mountpoint})

                os.makedirs(mountpoint)

                mount_cmd = ['mount', '-t', fstype]
                if options is not None:
                    mount_cmd.extend(options)
                mount_cmd.extend([export, mountpoint])

                try:
                    processutils.execute(*mount_cmd, run_as_root=True,
                                         root_helper=rootwrap_helper)
                except Exception:
                    # Check to see if mountpoint is mounted despite the error
                    # eg it was already mounted
                    if os.path.ismount(mountpoint):
                        # We're not going to raise the exception because we're
                        # in the desired state anyway. However, this is still
                        # unusual so we'll log it.
                        LOG.exception(_LE('Error mounting %(fstype)s export '
                                          '%(export)s on %(mountpoint)s. '
                                          'Continuing because the mountpoint '
                                          'is mounted despite this.'),
                                      {'fstype': fstype, 'export': export,
                                       'mountpoint': mountpoint})
                    else:
                        # If the mount failed there's no reason for us to keep
                        # a record of it. It will be created again if the
                        # caller retries.
                        # Delete while holding lock
                        del self.mountpoints[mountpoint]
                        raise

            mount.add_attachment(vol_name, host)

        LOG.debug('_HostMountState.mount() for %(mountpoint)s '
                  'completed successfully', {'mountpoint': mountpoint})

    def umount(self, vol_name, mountpoint, host, rootwrap_helper):
        """Mark an attachment as no longer in use, and unmount its mountpoint
        if necessary.

        :param vol_name: The name of the volume on the remote filesystem.
        :param mountpoint: The directory where the filesystem is mounted on
                           the local compute host.
        :param host: The host the volume was attached to.
""" LOG.debug('_HostMountState.umount(vol_name=%(vol_name)s, ' 'mountpoint=%(mountpoint)s)', {'vol_name': vol_name, 'mountpoint': mountpoint}) with self._get_locked(mountpoint) as mount: try: mount.remove_attachment(vol_name, host) except KeyError: LOG.warning(_LW("Request to remove attachment " "(%(vol_name)s, %(host)s) from " "%(mountpoint)s, but we don't think it's in " "use."), {'vol_name': vol_name, 'host': host, 'mountpoint': mountpoint}) if not mount.in_use(): mounted = os.path.ismount(mountpoint) if mounted: mounted = self._real_umount(mountpoint, rootwrap_helper) # Delete our record entirely if it's unmounted if not mounted: del self.mountpoints[mountpoint] LOG.debug('_HostMountState.umount() for %(mountpoint)s ' 'completed successfully', {'mountpoint': mountpoint}) def _real_umount(self, mountpoint, rootwrap_helper): # Unmount and delete a mountpoint. # Return mount state after umount (i.e. True means still mounted) LOG.debug('Unmounting %(mountpoint)s', {'mountpoint': mountpoint}) try: processutils.execute('umount', mountpoint, run_as_root=True, attempts=3, delay_on_retry=True, root_helper=rootwrap_helper) except processutils.ProcessExecutionError as ex: LOG.error(_LE("Couldn't unmount %(mountpoint)s: %(reason)s"), {'mountpoint': mountpoint, 'reason': ex}) if not os.path.ismount(mountpoint): try: os.rmdir(mountpoint) except Exception as ex: LOG.error(_LE("Couldn't remove directory %(mountpoint)s: " "%(reason)s"), {'mountpoint': mountpoint, 'reason': ex}) return False return True __manager__ = _HostMountStateManager(HOST) def mount(fstype, export, vol_name, mountpoint, host, rootwrap_helper, options=None): """A convenience wrapper around _HostMountState.mount()""" with __manager__.get_state() as mount_state: mount_state.mount(fstype, export, vol_name, mountpoint, host, rootwrap_helper, options) def umount(vol_name, mountpoint, host, rootwrap_helper): """A convenience wrapper around _HostMountState.umount()""" with __manager__.get_state() as mount_state: mount_state.umount(vol_name, mountpoint, host, rootwrap_helper) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/common/utils.py0000664000175000017500000001117300000000000021666 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ System-level utilities and helper functions. """ import hashlib import logging import uuid from oslo_concurrency import lockutils from oslo_utils.secretutils import md5 try: from eventlet import sleep except ImportError: from time import sleep from glance_store.i18n import _ LOG = logging.getLogger(__name__) synchronized = lockutils.synchronized_with_prefix('glance_store-') def is_uuid_like(val): """Returns validation of a value as a UUID. 
For our purposes, a UUID is a canonical form string: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa """ try: return str(uuid.UUID(val)) == val except (TypeError, ValueError, AttributeError): return False def chunkreadable(iter, chunk_size=65536): """ Wrap a readable iterator with a reader yielding chunks of a preferred size, otherwise leave iterator unchanged. :param iter: an iter which may also be readable :param chunk_size: maximum size of chunk """ return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter def chunkiter(fp, chunk_size=65536): """ Return an iterator to a file-like obj which yields fixed size chunks :param fp: a file-like object :param chunk_size: maximum size of chunk """ while True: chunk = fp.read(chunk_size) if chunk: yield chunk else: break def cooperative_iter(iter): """ Return an iterator which schedules after each iteration. This can prevent eventlet thread starvation. :param iter: an iterator to wrap """ try: for chunk in iter: sleep(0) yield chunk except Exception as err: msg = _("Error: cooperative_iter exception %s") % err LOG.error(msg) raise def cooperative_read(fd): """ Wrap a file descriptor's read with a partial function which schedules after each read. This can prevent eventlet thread starvation. :param fd: a file descriptor to wrap """ def readfn(*args): result = fd.read(*args) sleep(0) return result return readfn def get_hasher(hash_algo, usedforsecurity=True): """ Returns the required hasher, given the hashing algorithm. This is primarily to ensure that the hash algorithm is correctly chosen when executed on a FIPS enabled system :param hash_algo: hash algorithm requested :param usedforsecurity: whether the hashes are used in a security context """ if str(hash_algo) == 'md5': return md5(usedforsecurity=usedforsecurity) else: return hashlib.new(str(hash_algo)) class CooperativeReader(object): """ An eventlet thread friendly class for reading in image data. When accessing data either through the iterator or the read method we perform a sleep to allow a co-operative yield. When there is more than one image being uploaded/downloaded this prevents eventlet thread starvation, ie allows all threads to be scheduled periodically rather than having the same thread be continuously active. """ def __init__(self, fd): """ :param fd: Underlying image file object """ self.fd = fd self.iterator = None # NOTE(markwash): if the underlying supports read(), overwrite the # default iterator-based implementation with cooperative_read which # is more straightforward if hasattr(fd, 'read'): self.read = cooperative_read(fd) def read(self, length=None): """Return the next chunk of the underlying iterator. This is replaced with cooperative_read in __init__ if the underlying fd already supports read(). """ if self.iterator is None: self.iterator = self.__iter__() try: return next(self.iterator) except StopIteration: return b'' def __iter__(self): return cooperative_iter(self.fd.__iter__()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/driver.py0000664000175000017500000002540600000000000020535 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 RedHat Inc. # Copyright 2018 Verizon Wireless # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base class for all storage backends""" from functools import wraps import logging from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils import importutils from oslo_utils import units from glance_store import capabilities from glance_store import exceptions from glance_store.i18n import _ LOG = logging.getLogger(__name__) _MULTI_BACKEND_OPTS = [ cfg.StrOpt('store_description', help=_(""" This option will be used to provide a constructive information about the store backend to end users. Using /v2/stores-info call user can seek more information on all available backends. """)), cfg.IntOpt('weight', help=_(""" This option is used to define a relative weight for this store over any others that are configured. The actual value of the weight is meaningless and only serves to provide a "sort order" compared to others. Any stores with the same weight will be treated as equivalent. """), default=0), ] class Store(capabilities.StoreCapability): OPTIONS = None MULTI_BACKEND_OPTIONS = _MULTI_BACKEND_OPTS READ_CHUNKSIZE = 4 * units.Mi # 4M WRITE_CHUNKSIZE = READ_CHUNKSIZE def __init__(self, conf, backend=None): """ Initialize the Store """ super(Store, self).__init__() self.conf = conf self.backend_group = backend self.store_location_class = None self._url_prefix = None try: if self.OPTIONS is not None: group = 'glance_store' if self.backend_group: group = self.backend_group if self.MULTI_BACKEND_OPTIONS is not None: self.conf.register_opts( self.MULTI_BACKEND_OPTIONS, group=group) self.conf.register_opts(self.OPTIONS, group=group) except cfg.DuplicateOptError: pass @property def url_prefix(self): return self._url_prefix @property def weight(self): if self.backend_group is None: # NOTE(danms): A backend with no config group can not have a # weight set, so just return the default return 0 else: return getattr(self.conf, self.backend_group).weight def configure(self, re_raise_bsc=False): """ Configure the store to use the stored configuration options and initialize capabilities based on current configuration. Any store that needs special configuration should implement this method. """ try: self.configure_add() except exceptions.BadStoreConfiguration as e: self.unset_capabilities(capabilities.BitMasks.WRITE_ACCESS) msg = (_("Failed to configure store correctly: %s " "Disabling add method.") % encodeutils.exception_to_unicode(e)) LOG.warning(msg) if re_raise_bsc: raise finally: self.update_capabilities() def get_schemes(self): """ Returns a tuple of schemes which this store can handle. """ raise NotImplementedError def get_store_location_class(self): """ Returns the store location class that is used by this store. """ if not self.store_location_class: class_name = "%s.StoreLocation" % (self.__module__) LOG.debug("Late loading location class %s", class_name) self.store_location_class = importutils.import_class(class_name) return self.store_location_class def configure_add(self): """ This is like `configure` except that it's specifically for configuring the store to accept objects. 
If the store was not able to successfully configure itself, it should raise `exceptions.BadStoreConfiguration`. """ # NOTE(flaper87): This should probably go away @capabilities.check def get(self, location, offset=0, chunk_size=None, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns a tuple of a generator (for reading the image file) and the image_size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ raise NotImplementedError def get_size(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file, and returns the size :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ raise NotImplementedError # NOTE(rosmaita): use the @glance_store.driver.back_compat_add # annotation on implementations for backward compatibility with # pre-0.26.0 add(). Need backcompat because pre-0.26.0 returned # a 4-tuple; this returns a 5-tuple @capabilities.check def add(self, image_id, image_file, image_size, hashing_algo, context=None, verifier=None): """ Stores an image file with supplied identifier to the backend storage system and returns a tuple containing information about the stored image. :param image_id: The opaque image identifier :param image_file: The image data to write, as a file-like object :param image_size: The size of the image data to write, in bytes :param hashing_algo: A hashlib algorithm identifier (string) :param context: A context object :param verifier: An object used to verify signatures for images :returns: tuple of: (1) URL in backing store, (2) bytes written, (3) checksum, (4) multihash value, and (5) a dictionary with storage system specific information :raises: `glance_store.exceptions.Duplicate` if the image already exists """ raise NotImplementedError @capabilities.check def delete(self, location, context=None): """ Takes a `glance_store.location.Location` object that indicates where to find the image file to delete :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :raises: `glance_store.exceptions.NotFound` if image does not exist """ raise NotImplementedError def set_acls(self, location, public=False, read_tenants=None, write_tenants=None, context=None): """ Sets the read and write access control list for an image in the backend store. :param location: `glance_store.location.Location` object, supplied from glance_store.location.get_location_from_uri() :param public: A boolean indicating whether the image should be public. :param read_tenants: A list of tenant strings which should be granted read access for an image. :param write_tenants: A list of tenant strings which should be granted write access for an image. """ raise NotImplementedError def back_compat_add(store_add_fun): """ Provides backward compatibility for the 0.26.0+ Store.add() function. In 0.26.0, the 'hashing_algo' parameter is introduced and Store.add() returns a 5-tuple containing a computed 'multihash' value.
This wrapper behaves as follows: If no hashing_algo identifier is supplied as an argument, the response is the pre-0.26.0 4-tuple of:: (backend_url, bytes_written, checksum, metadata_dict) If a hashing_algo is supplied, the response is a 5-tuple:: (backend_url, bytes_written, checksum, multihash, metadata_dict) The wrapper detects the presence of a 'hashing_algo' argument by examining both named and positional arguments. """ @wraps(store_add_fun) def add_adapter(*args, **kwargs): """ Wrapper for the store 'add' function. If no hashing_algo identifier is supplied, the response is the pre-0.26.0 4-tuple of:: (backend_url, bytes_written, checksum, metadata_dict) If a hashing_algo is supplied, the response is a 5-tuple:: (backend_url, bytes_written, checksum, multihash, metadata_dict) """ # strategy: assume this until we determine otherwise back_compat_required = True # specify info about 0.26.0 Store.add() call (can't introspect # this because the add method is wrapped by the capabilities # check) p_algo = 4 max_args = 7 num_args = len(args) num_kwargs = len(kwargs) if num_args + num_kwargs == max_args: # everything is present, including hashing_algo back_compat_required = False elif ('hashing_algo' in kwargs or (num_args >= p_algo + 1 and isinstance(args[p_algo], str))): # there is a hashing_algo argument present back_compat_required = False else: # this is a pre-0.26.0-style call, so let's figure out # whether to insert the hashing_algo in the args or kwargs if kwargs and 'image_' in ''.join(kwargs): # if any of the image_* is named, everything after it # must be named as well, so slap the algo into kwargs kwargs['hashing_algo'] = 'md5' else: args = args[:p_algo] + ('md5',) + args[p_algo:] # business time (backend_url, bytes_written, checksum, multihash, metadata_dict) = store_add_fun(*args, **kwargs) if back_compat_required: return (backend_url, bytes_written, checksum, metadata_dict) return (backend_url, bytes_written, checksum, multihash, metadata_dict) return add_adapter ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/exceptions.py0000664000175000017500000001243000000000000021414 0ustar00zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Glance Store exception subclasses""" import urllib.parse from glance_store.i18n import _ class BackendException(Exception): pass class UnsupportedBackend(BackendException): pass class RedirectException(Exception): def __init__(self, url): self.url = urllib.parse.urlparse(url) class GlanceStoreException(Exception): """ Base Glance Store Exception To correctly use this class, inherit from it and define a 'message' property. That message will get printf'd with the keyword arguments provided to the constructor.
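A minimal sketch of how a subclass is defined and raised (the class name and keyword below are illustrative, not part of the library)::

    class ImageUnavailable(GlanceStoreException):
        message = _("Image %(image_id)s is temporarily unavailable")

    # the keyword arguments are interpolated into 'message'
    raise ImageUnavailable(image_id='1b2c3d')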
""" message = _("An unknown exception occurred") def __init__(self, message=None, **kwargs): if not message: message = self.message try: if kwargs: message = message % kwargs except Exception: pass self.msg = message super(GlanceStoreException, self).__init__(message) class MissingCredentialError(GlanceStoreException): message = _("Missing required credential: %(required)s") class BadAuthStrategy(GlanceStoreException): message = _("Incorrect auth strategy, expected \"%(expected)s\" but " "received \"%(received)s\"") class AuthorizationRedirect(GlanceStoreException): message = _("Redirecting to %(uri)s for authorization.") class NotFound(GlanceStoreException): message = _("Image %(image)s not found") class UnknownHashingAlgo(GlanceStoreException): message = _("Unknown hashing algorithm identifier: %(algo)s") class UnknownScheme(GlanceStoreException): message = _("Unknown scheme '%(scheme)s' found in URI") class BadStoreUri(GlanceStoreException): message = _("The Store URI was malformed: %(uri)s") class Duplicate(GlanceStoreException): message = _("Image %(image)s already exists") class StorageFull(GlanceStoreException): message = _("There is not enough disk space on the image storage media.") class StorageWriteDenied(GlanceStoreException): message = _("Permission to write image storage media denied.") class AuthBadRequest(GlanceStoreException): message = _("Connect error/bad request to Auth service at URL %(url)s.") class AuthUrlNotFound(GlanceStoreException): message = _("Auth service at URL %(url)s not found.") class AuthorizationFailure(GlanceStoreException): message = _("Authorization failed.") class NotAuthenticated(GlanceStoreException): message = _("You are not authenticated.") class Forbidden(GlanceStoreException): message = _("You are not authorized to complete this action.") class Invalid(GlanceStoreException): # NOTE(NiallBunting) This could be deprecated however the debtcollector # seems to have problems deprecating this as well as the subclasses. message = _("Data supplied was not valid.") class BadStoreConfiguration(GlanceStoreException): message = _("Store %(store_name)s could not be configured correctly. " "Reason: %(reason)s") class DriverLoadFailure(GlanceStoreException): message = _("Driver %(driver_name)s could not be loaded.") class StoreDeleteNotSupported(GlanceStoreException): message = _("Deleting images from this store is not supported.") class StoreGetNotSupported(GlanceStoreException): message = _("Getting images from this store is not supported.") class StoreRandomGetNotSupported(StoreGetNotSupported): message = _("Getting images randomly from this store is not supported. " "Offset: %(offset)s, length: %(chunk_size)s") class StoreAddDisabled(GlanceStoreException): message = _("Configuration for store failed. Adding images to this " "store is disabled.") class MaxRedirectsExceeded(GlanceStoreException): message = _("Maximum redirects (%(redirects)s) was exceeded.") class NoServiceEndpoint(GlanceStoreException): message = _("Response from Keystone does not contain a Glance endpoint.") class RegionAmbiguity(GlanceStoreException): message = _("Multiple 'image' service matches for region %(region)s. 
This " "generally means that a region is required and you have not " "supplied one.") class RemoteServiceUnavailable(GlanceStoreException): message = _("Remote server where the image is present is unavailable.") class HasSnapshot(GlanceStoreException): message = _("The image cannot be deleted because it has snapshot(s).") class InUseByStore(GlanceStoreException): message = _("The image cannot be deleted because it is in use through " "the backend store outside of Glance.") class HostNotInitialized(GlanceStoreException): message = _("The glance cinder store host %(host)s which will used to " "perform nfs mount/umount operations isn't initialized.") ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/i18n.py0000664000175000017500000000213100000000000020007 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import oslo_i18n as i18n _translators = i18n.TranslatorFactory(domain='glance_store') # The primary translation function using the well-known name "_" _ = _translators.primary # Translators for log levels. # # The abbreviated names are meant to reflect the usual use of a short # name like '_'. The "L" is for "log" and the other letter comes from # the level. _LI = _translators.log_info _LW = _translators.log_warning _LE = _translators.log_error _LC = _translators.log_critical ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/glance_store/locale/0000775000175000017500000000000000000000000020120 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/glance_store/locale/en_GB/0000775000175000017500000000000000000000000021072 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1732116 glance_store-4.8.1/glance_store/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000000000000000022657 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/locale/en_GB/LC_MESSAGES/glance_store.po0000664000175000017500000002152200000000000025666 0ustar00zuulzuul00000000000000# Andi Chandler , 2016. #zanata # Andreas Jaeger , 2016. #zanata # Andi Chandler , 2018. #zanata # Andi Chandler , 2019. #zanata # Andi Chandler , 2020. #zanata # Andi Chandler , 2023. 
#zanata msgid "" msgstr "" "Project-Id-Version: glance_store VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2023-06-23 19:07+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2023-07-07 09:08+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en_GB\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "" "\n" "The store identifier for the default backend in which data will be\n" "stored.\n" "\n" "The value must be defined as one of the keys in the dict defined\n" "by the ``enabled_backends`` configuration option in the DEFAULT\n" "configuration group.\n" "\n" "If a value is not defined for this option:\n" "\n" "* the consuming service may refuse to start\n" "* store_add calls that do not specify a specific backend will\n" " raise a ``glance_store.exceptions.UnknownScheme`` exception\n" "\n" "Related Options:\n" " * enabled_backends\n" "\n" msgstr "" "\n" "The store identifier for the default backend in which data will be\n" "stored.\n" "\n" "The value must be defined as one of the keys in the dict defined\n" "by the ``enabled_backends`` configuration option in the DEFAULT\n" "configuration group.\n" "\n" "If a value is not defined for this option:\n" "\n" "* the consuming service may refuse to start\n" "* store_add calls that do not specify a specific backend will\n" " raise a ``glance_store.exceptions.UnknownScheme`` exception\n" "\n" "Related Options:\n" " * enabled_backends\n" "\n" msgid "" "\n" "This option is used to define a relative weight for this store over\n" "any others that are configured. The actual value of the weight is " "meaningless\n" "and only serves to provide a \"sort order\" compared to others. Any stores\n" "with the same weight will be treated as equivalent.\n" msgstr "" "\n" "This option is used to define a relative weight for this store over\n" "any others that are configured. The actual value of the weight is " "meaningless\n" "and only serves to provide a \"sort order\" compared to others. Any stores\n" "with the same weight will be treated as equivalent.\n" msgid "" "\n" "This option will be used to provide a constructive information about\n" "the store backend to end users. Using /v2/stores-info call user can\n" "seek more information on all available backends.\n" "\n" msgstr "" "\n" "This option will be used to provide a constructive information about\n" "the store backend to end users. Using /v2/stores-info call user can\n" "seek more information on all available backends.\n" "\n" msgid "'default_backend' config option is not set." msgstr "'default_backend' config option is not set." #, python-format msgid "" "A bad metadata structure was returned from the %(driver)s storage driver: " "%(metadata)s. %(e)s." msgstr "" "A bad metadata structure was returned from the %(driver)s storage driver: " "%(metadata)s. %(e)s." msgid "An unknown exception occurred" msgstr "An unknown exception occurred" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "Auth service at URL %(url)s not found." msgid "Authorization failed." msgstr "Authorisation failed." msgid "" "Configuration for store failed. Adding images to this store is disabled." msgstr "" "Configuration for store failed. Adding images to this store is disabled." #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." 
msgstr "Connect error/bad request to Auth service at URL %(url)s." msgid "Data supplied was not valid." msgstr "Data supplied was not valid." msgid "Deleting images from this store is not supported." msgstr "Deleting images from this store is not supported." #, python-format msgid "Driver %(driver_name)s could not be loaded." msgstr "Driver %(driver_name)s could not be loaded." #, python-format msgid "Error: cooperative_iter exception %s" msgstr "Error: cooperative_iter exception %s" #, python-format msgid "Failed to configure store correctly: %s Disabling add method." msgstr "Failed to configure store correctly: %s Disabling add method." msgid "Getting images from this store is not supported." msgstr "Getting images from this store is not supported." #, python-format msgid "" "Getting images randomly from this store is not supported. Offset: " "%(offset)s, length: %(chunk_size)s" msgstr "" "Getting images randomly from this store is not supported. Offset: " "%(offset)s, length: %(chunk_size)s" #, python-format msgid "Image %(image)s already exists" msgstr "Image %(image)s already exists" #, python-format msgid "Image %(image)s not found" msgstr "Image %(image)s not found" msgid "Image not found in any configured backend" msgstr "Image not found in any configured backend" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" #, python-format msgid "Location URI must start with one of the following schemas: %s" msgstr "Location URI must start with one of the following schemas: %s" #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "Maximum redirects (%(redirects)s) was exceeded." #, python-format msgid "Missing required credential: %(required)s" msgstr "Missing required credential: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgid "Permission to write image storage media denied." msgstr "Permission to write image storage media denied." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "Redirecting to %(uri)s for authorisation." msgid "Remote server where the image is present is unavailable." msgstr "Remote server where the image is present is unavailable." msgid "Response from Keystone does not contain a Glance endpoint." msgstr "Response from Keystone does not contain a Glance endpoint." msgid "Skipping store.set_acls... not implemented." msgstr "Skipping store.set_acls... not implemented." #, python-format msgid "" "Store %(store_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "Store %(store_name)s could not be configured correctly. Reason: %(reason)s" #, python-format msgid "Store for identifier %s not found" msgstr "Store for identifier %s not found" #, python-format msgid "Store for scheme %s not found" msgstr "Store for scheme %s not found" #, python-format msgid "The Store URI was malformed: %(uri)s" msgstr "The Store URI was malformed: %(uri)s" #, python-format msgid "" "The glance cinder store host %(host)s which will used to perform nfs mount/" "umount operations isn't initialized." 
msgstr "" "The Glance Cinder store host %(host)s which will used to perform NFS mount/" "umount operations isn't initialised." msgid "The image cannot be deleted because it has snapshot(s)." msgstr "The image cannot be deleted because it has snapshot(s)." msgid "" "The image cannot be deleted because it is in use through the backend store " "outside of Glance." msgstr "" "The image cannot be deleted because it is in use through the backend store " "outside of Glance." #, python-format msgid "" "The image metadata key %(key)s has an invalid type of %(type)s. Only dict, " "list, and unicode are supported." msgstr "" "The image metadata key %(key)s has an invalid type of %(type)s. Only dict, " "list, and unicode are supported." #, python-format msgid "" "The storage driver %(driver)s returned invalid metadata %(metadata)s. This " "must be a dictionary type" msgstr "" "The storage driver %(driver)s returned invalid metadata %(metadata)s. This " "must be a dictionary type" msgid "There is not enough disk space on the image storage media." msgstr "There is not enough disk space on the image storage media." #, python-format msgid "Unable to register store %s. No schemes associated with it." msgstr "Unable to register store %s. No schemes associated with it." #, python-format msgid "Unknown hashing algorithm identifier: %(algo)s" msgstr "Unknown hashing algorithm identifier: %(algo)s" #, python-format msgid "Unknown scheme '%(scheme)s' found in URI" msgstr "Unknown scheme '%(scheme)s' found in URI" msgid "You are not authenticated." msgstr "You are not authenticated." msgid "You are not authorized to complete this action." msgstr "You are not authorised to complete this action." ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/glance_store/locale/ko_KR/0000775000175000017500000000000000000000000021125 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1732116 glance_store-4.8.1/glance_store/locale/ko_KR/LC_MESSAGES/0000775000175000017500000000000000000000000022712 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/locale/ko_KR/LC_MESSAGES/glance_store.po0000664000175000017500000001432000000000000025717 0ustar00zuulzuul00000000000000# Andreas Jaeger , 2016. #zanata # Jongwoo Han , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: glance_store VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-28 18:24+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-08-12 04:24+0000\n" "Last-Translator: Jongwoo Han \n" "Language-Team: Korean (South Korea)\n" "Language: ko_KR\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=1; plural=0\n" #, python-format msgid "" "A bad metadata structure was returned from the %(driver)s storage driver: " "%(metadata)s. %(e)s." msgstr "" "스토리지 드라이버 %(driver)s 가 잘못된 메타데이터 structure 를 반환했습니" "다 : %(metadata)s. %(e)s." msgid "An unknown exception occurred" msgstr "알 수 없는 예외가 발생했음" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "URL %(url)s의 Auth 서비스를 찾을 수 없습니다." msgid "Authorization failed." msgstr "권한 부여에 실패했습니다. " msgid "" "Configuration for store failed. Adding images to this store is disabled." msgstr "" "저장소 설정이 실패했습니다. 
이 저장소에 이미지를 추가하는 기능은 꺼집니다." #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "연결 오류/URL %(url)s에서 Auth 서비스에 대한 잘못된 요청입니다." msgid "Data supplied was not valid." msgstr "제공된 데이터가 올바르지 않습니다." msgid "Deleting images from this store is not supported." msgstr "이 저장소에서 이미지를 지우는 것은 지원되지 않습니다." #, python-format msgid "Driver %(driver_name)s could not be loaded." msgstr " %(driver_name)s 드라이버를 로드하지 못했습니다." #, python-format msgid "Error: cooperative_iter exception %s" msgstr "오류: cooperative_iter 예외 %s" #, python-format msgid "Failed to configure store correctly: %s Disabling add method." msgstr "" "저장소를 제대로 설정하지 못했습니다. : %s 에서 add method를 제외합니다." msgid "Getting images from this store is not supported." msgstr "이 저장소에서 이미지를 가져오는 것은 지원되지 않습니다." #, python-format msgid "" "Getting images randomly from this store is not supported. Offset: " "%(offset)s, length: %(chunk_size)s" msgstr "" "이 저장소에서 이미지를 랜덤하게 가져오는 것은 지원되지 않습니다. 위치: " "%(offset)s, 길이: %(chunk_size)s" #, python-format msgid "Image %(image)s already exists" msgstr "이미지 %(image)s 가 이미 있습니다." #, python-format msgid "Image %(image)s not found" msgstr "이미지 %(image)s 가 없습니다." #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "인증 전략이 올바르지 않음. 예상: \"%(expected)s\", 수신: \"%(received)s\"" #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "최대 경로 재지정(%(redirects)s)에 도달했습니다." #, python-format msgid "Missing required credential: %(required)s" msgstr "필수 신임 정보 누락: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "다중 '이미지' 서비스가 %(region)s 리젼에 일치합니다. 이는 일반적으로 리젼이 " "필요하지만 아직 리젼을 제공하지 않은 경우 발생합니다." msgid "Permission to write image storage media denied." msgstr "이미지 스토리지 미디어에 쓰기 권한이 거부되었습니다." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "권한 부여를 위해 %(uri)s(으)로 경로 재지정 중입니다." msgid "Remote server where the image is present is unavailable." msgstr "이 이미지가 있는 원격 서버에 접속할 수 없습니다." msgid "Response from Keystone does not contain a Glance endpoint." msgstr "Keystone의 응답에 Glance 엔드포인트가 들어있지 않습니다." msgid "Skipping store.set_acls... not implemented." msgstr " store.set_acls... 는 구현되지 않았으므로 skip합니다." #, python-format msgid "" "Store %(store_name)s could not be configured correctly. Reason: %(reason)s" msgstr "저장소 %(store_name)s 가 제대로 설정되지 않습니다. 원인: %(reason)s" #, python-format msgid "Store for scheme %s not found" msgstr "%s 스키마에 대한 저장소를 찾을 수 없음" #, python-format msgid "The Store URI was malformed: %(uri)s" msgstr "저장소 URI가 잘못된 형식입니다: %(uri)s" msgid "The image cannot be deleted because it has snapshot(s)." msgstr "스냅샷이 있으므로 이 이미지를 지울 수 없습니다." msgid "" "The image cannot be deleted because it is in use through the backend store " "outside of Glance." msgstr "" "이미지는 Glance가 사용하지 않는 백엔드 저장소에서 사용중이므로 삭제할 수 없습" "니다." #, python-format msgid "" "The image metadata key %(key)s has an invalid type of %(type)s. Only dict, " "list, and unicode are supported." msgstr "" "이미지 메타데이터 키 %(key)s 가 잘못된 타입인 %(type)s 타입입니다. dict, " "list, unicode만이 지원됩니다." #, python-format msgid "" "The storage driver %(driver)s returned invalid metadata %(metadata)s. This " "must be a dictionary type" msgstr "" "스토리지 드라이버 %(driver)s 가 잘못된 메타데이터 %(metadata)s 를 반환했습니" "다. 이 값은 반드시 dict 타입이어야 합니다." msgid "There is not enough disk space on the image storage media." msgstr "이미지 스토리지 매체에 충분한 저장 공간이 부족합니다." 
#, python-format msgid "Unknown scheme '%(scheme)s' found in URI" msgstr "URI에 알 수 없는 스킴인 '%(scheme)s' 가 있습니다." msgid "You are not authenticated." msgstr "인증되지 않은 사용자입니다." msgid "You are not authorized to complete this action." msgstr "이 조치를 완료할 권한이 없습니다. " ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/location.py0000664000175000017500000002071700000000000021052 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A class that describes the location of an image in Glance. In Glance, an image can either be **stored** in Glance, or it can be **registered** in Glance but actually be stored somewhere else. We needed a class that could support the various ways that Glance describes where exactly an image is stored. An image in Glance has two location properties: the image URI and the image storage URI. The image URI is essentially the permalink identifier for the image. It is displayed in the output of various Glance API calls and, while read-only, is entirely user-facing. It shall **not** contain any security credential information at all. The Glance image URI shall be the host:port of that Glance API server along with /images/<IMAGE_ID>. The Glance storage URI is an internal URI structure that Glance uses to maintain critical information about how to access the images that it stores in its storage backends. It **may contain** security credentials and is **not** user-facing. """ import logging import urllib.parse from oslo_config import cfg from glance_store import exceptions from glance_store.i18n import _ CONF = cfg.CONF LOG = logging.getLogger(__name__) SCHEME_TO_CLS_MAP = {} SCHEME_TO_CLS_BACKEND_MAP = {} def get_location_from_uri(uri, conf=CONF): """ Given a URI, return a Location object that has had an appropriate store parse the URI. :param uri: A URI that could come from the end-user in the Location attribute/header. :param conf: The global configuration. Example URIs: https://user:pass@example.com:80/images/some-id http://example.com/123456 swift://example.com/container/obj-id swift://user:account:pass@authurl.com/container/obj-id swift+http://user:account:pass@authurl.com/container/obj-id file:///var/lib/glance/images/1 cinder://volume-id s3://accesskey:secretkey@s3.amazonaws.com/bucket/key-id s3+https://accesskey:secretkey@s3.amazonaws.com/bucket/key-id """ pieces = urllib.parse.urlparse(uri) if pieces.scheme not in SCHEME_TO_CLS_MAP.keys(): raise exceptions.UnknownScheme(scheme=pieces.scheme) scheme_info = SCHEME_TO_CLS_MAP[pieces.scheme] return Location(pieces.scheme, scheme_info['location_class'], conf, uri=uri) def get_location_from_uri_and_backend(uri, backend, conf=CONF): """Extract backend location from a URI. Given a URI, return a Location object that has had an appropriate store parse the URI. :param uri: A URI that could come from the end-user in the Location attribute/header.
:param backend: A backend name for the store. :param conf: The global configuration. Example URIs: https://user:pass@example.com:80/images/some-id http://example.com/123456 swift://example.com/container/obj-id swift://user:account:pass@authurl.com/container/obj-id swift+http://user:account:pass@authurl.com/container/obj-id file:///var/lib/glance/images/1 cinder://volume-id s3://accesskey:secretkey@s3.amazonaws.com/bucket/key-id s3+https://accesskey:secretkey@s3.amazonaws.com/bucket/key-id """ pieces = urllib.parse.urlparse(uri) if pieces.scheme not in SCHEME_TO_CLS_BACKEND_MAP.keys(): raise exceptions.UnknownScheme(scheme=pieces.scheme) try: scheme_info = SCHEME_TO_CLS_BACKEND_MAP[pieces.scheme][backend] except KeyError: raise exceptions.UnknownScheme(scheme=backend) return Location(pieces.scheme, scheme_info['location_class'], conf, uri=uri, backend=backend) def register_scheme_backend_map(scheme_map): """Registers a mapping between a scheme and a backend. Given a mapping of 'scheme' to store_name, adds the mapping to the known list of schemes. This function overrides existing stores. """ for (k, v) in scheme_map.items(): if k not in SCHEME_TO_CLS_BACKEND_MAP: SCHEME_TO_CLS_BACKEND_MAP[k] = {} LOG.debug("Registering scheme %s with %s", k, v) for key, value in v.items(): SCHEME_TO_CLS_BACKEND_MAP[k][key] = value def register_scheme_map(scheme_map): """ Given a mapping of 'scheme' to store_name, adds the mapping to the known list of schemes. This function overrides existing stores. """ for (k, v) in scheme_map.items(): LOG.debug("Registering scheme %s with %s", k, v) SCHEME_TO_CLS_MAP[k] = v class Location(object): """ Class describing the location of an image that Glance knows about """ def __init__(self, store_name, store_location_class, conf, uri=None, image_id=None, store_specs=None, backend=None): """ Create a new Location object. :param store_name: The string identifier/scheme of the storage backend :param store_location_class: The store location class to use for this location instance. :param image_id: The identifier of the image in whatever storage backend is used. :param uri: Optional URI to construct location from :param store_specs: Dictionary of information about the location of the image that is dependent on the backend store :param backend: Name of store backend """ self.store_name = store_name self.image_id = image_id self.store_specs = store_specs or {} self.conf = conf self.backend_group = backend self.store_location = store_location_class( self.store_specs, conf, backend_group=backend) if uri: self.store_location.parse_uri(uri) def get_store_uri(self): """ Returns the Glance image URI, which is the host:port of the API server along with /images/<IMAGE_ID> """ return self.store_location.get_uri() def get_uri(self): return None class StoreLocation(object): """ Base class that must be implemented by each store """ def __init__(self, store_specs, conf, backend_group=None): self.conf = conf self.specs = store_specs self.backend_group = backend_group if self.specs: self.process_specs() def process_specs(self): """ Subclasses should implement any processing of the self.specs collection such as storing credentials and possibly establishing connections. """ pass def get_uri(self): """ Subclasses should implement a method that returns an internal URI that, when supplied to the StoreLocation instance, can be interpreted by the StoreLocation's parse_uri() method.
The URI returned from this method shall never be public and is only used internally within Glance, so it is fine to encode credentials in this URI. """ raise NotImplementedError("StoreLocation subclass must implement " "get_uri()") def parse_uri(self, uri): """ Subclasses should implement a method that accepts a string URI and sets appropriate internal fields such that a call to get_uri() will return a proper internal URI """ raise NotImplementedError("StoreLocation subclass must implement " "parse_uri()") @staticmethod def validate_schemas(uri, valid_schemas): """Check if the URI scheme is one of valid_schemas; raise an exception otherwise """ for valid_schema in valid_schemas: if uri.startswith(valid_schema): return reason = _("Location URI must start with one of the following " "schemas: %s") % ', '.join(valid_schemas) LOG.warning(reason) raise exceptions.BadStoreUri(message=reason) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/multi_backend.py0000664000175000017500000005412300000000000022041 0ustar00zuulzuul00000000000000# Copyright 2018 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import hashlib import logging from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils import units from stevedore import driver from stevedore import extension from glance_store import capabilities from glance_store import exceptions from glance_store.i18n import _ from glance_store import location CONF = cfg.CONF LOG = logging.getLogger(__name__) _STORE_OPTS = [ cfg.StrOpt('default_backend', help=_(""" The store identifier for the default backend in which data will be stored. The value must be defined as one of the keys in the dict defined by the ``enabled_backends`` configuration option in the DEFAULT configuration group. If a value is not defined for this option: * the consuming service may refuse to start * store_add calls that do not specify a specific backend will raise a ``glance_store.exceptions.UnknownScheme`` exception Related Options: * enabled_backends """)), ] FS_CONF_DATADIR_HELP = """ Directory that the reserved store {} uses. Possible values: * A valid path to a directory Refer to [glance_store]/filesystem store config opts for more details. """ FS_CONF_CHUNKSIZE_HELP = """ Chunk size, in bytes, to be used by reserved store {}. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: * Any positive integer value """ _STORE_CFG_GROUP = 'glance_store' _RESERVED_STORES = {} def _list_config_opts(): # NOTE(abhishekk): This separated approach could list # store options before all driver ones, which makes it easier # to generate the sample config file.
driver_opts = _list_driver_opts() sample_opts = [(_STORE_CFG_GROUP, _STORE_OPTS)] for store_entry in driver_opts: # NOTE(abhishekk): Do not include no_conf store if store_entry == "no_conf": continue sample_opts.append((store_entry, driver_opts[store_entry])) return sample_opts def _list_driver_opts(): driver_opts = {} mgr = extension.ExtensionManager('glance_store.drivers') # NOTE(zhiyan): Handle available drivers entry_points provided # NOTE(nikhil): Return a sorted list of drivers to ensure that the sample # configuration files generated by oslo config generator retain the order # in which the config opts appear across different runs. If this order of # config opts is not preserved, some downstream packagers may see a long # diff of the changes though not relevant as only order has changed. See # some more details at bug 1619487. drivers = sorted([ext.name for ext in mgr]) handled_drivers = [] # Used to handle backwards-compatible entries for store_entry in drivers: driver_cls = _load_multi_store(None, store_entry, False) if driver_cls and driver_cls not in handled_drivers: if getattr(driver_cls, 'OPTIONS', None) is not None: driver_opts[store_entry] = driver_cls.OPTIONS handled_drivers.append(driver_cls) # NOTE(zhiyan): This separated approach could list # store options before all driver ones, which is easier # for an operator to read and configure. return driver_opts def register_store_opts(conf, reserved_stores=None): LOG.debug("Registering options for group %s", _STORE_CFG_GROUP) conf.register_opts(_STORE_OPTS, group=_STORE_CFG_GROUP) configured_backends = copy.deepcopy(conf.enabled_backends) if reserved_stores: conf.enabled_backends.update(reserved_stores) for key in reserved_stores.keys(): fs_conf_template = [ cfg.StrOpt('filesystem_store_datadir', default='/var/lib/glance/{}'.format(key), help=FS_CONF_DATADIR_HELP.format(key)), cfg.MultiStrOpt('filesystem_store_datadirs', help="""Not used"""), cfg.StrOpt('filesystem_store_metadata_file', help="""Not used"""), cfg.IntOpt('filesystem_store_file_perm', default=0, help="""Not used"""), cfg.IntOpt('filesystem_store_chunk_size', default=64 * units.Ki, min=1, help=FS_CONF_CHUNKSIZE_HELP.format(key)), cfg.BoolOpt('filesystem_thin_provisioning', default=False, help="""Not used""")] LOG.debug("Registering options for reserved store: {}".format(key)) conf.register_opts(fs_conf_template, group=key) driver_opts = _list_driver_opts() for backend in configured_backends: for opt_list in driver_opts: if configured_backends[backend] not in opt_list: continue LOG.debug("Registering options for group %s", backend) conf.register_opts(driver_opts[opt_list], group=backend) def _load_multi_store(conf, store_entry, invoke_load=True, backend=None): if backend: invoke_args = [conf, backend] else: invoke_args = [conf] try: LOG.debug("Attempting to import store %s", store_entry) mgr = driver.DriverManager('glance_store.drivers', store_entry, invoke_args=invoke_args, invoke_on_load=invoke_load) return mgr.driver except RuntimeError as e: LOG.warning("Failed to load driver %(driver)s. The " "driver will be disabled", dict(driver=str([store_entry, e]))) def _load_multi_stores(conf, reserved_stores=None): enabled_backends = conf.enabled_backends if reserved_stores: enabled_backends.update(reserved_stores) _RESERVED_STORES.update(reserved_stores) for backend, store_entry in enabled_backends.items(): try: # FIXME(flaper87): Don't hide BadStoreConfiguration # exceptions. These exceptions should be propagated # to the user of the library.
store_instance = _load_multi_store(conf, store_entry, backend=backend) if not store_instance: continue yield (store_entry, store_instance, backend) except exceptions.BadStoreConfiguration: continue def create_multi_stores(conf=CONF, reserved_stores=None): """ Registers all store modules and all schemes from the given configuration object. :param conf: An oslo_config (or compatible) object :param reserved_stores: A dict of stores for the consuming service's internal use. The dict must have the same format as the ``enabled_backends`` configuration setting. The default value is None :return: The number of stores configured :raises: ``glance_store.exceptions.BackendException`` *Configuring Multiple Backends* The backends to be configured are expected to be found in the ``enabled_backends`` configuration variable in the DEFAULT group of the object. The format for the variable is a dictionary of key:value pairs where the key is an arbitrary store identifier and the value is the store type identifier for the store. The type identifiers must be defined in the ``[entry points]`` section of the glance_store ``setup.cfg`` file as values for the ``glance_store.drivers`` configuration. (See the default ``setup.cfg`` file for an example.) The store type identifiers for the currently supported drivers are already defined in the file. Thus an example value for ``enabled_backends`` is:: {'store_one': 'rbd', 'store_two': 'file', 'store_three': 'http'} The ``reserved_stores`` parameter, if included, must have the same format. There is no difference between the ``enabled_backends`` and ``reserved_stores`` from the glance_store point of view: the reserved stores are a convenience for the consuming service, which may wish to handle the two sets of stores differently. *The Default Store* If you wish to set a default store, its store identifier should be defined as the value of the ``default_backend`` configuration option in the ``glance_store`` group of the ``conf`` parameter. The store identifier, of course, should be specified as one of the keys in the ``enabled_backends`` dict. It is recommended that a default store be set. *Configuring Individual Backends* To configure each store mentioned in the ``enabled_backends`` configuration option, you must define an option group with the same name as the store identifier. The options defined for that backend will depend upon the store type; consult the documentation for the appropriate backend driver to determine what these are. For example, given the ``enabled_backends`` example above, you would put the following in the configuration file that loads the ``conf`` object:: [DEFAULT] enabled_backends = store_one:rbd,store_two:file,store_three:http [store_one] store_description = "A human-readable string aimed at end users" rbd_store_chunk_size = 8 rbd_store_pool = images rbd_store_user = admin rbd_store_ceph_conf = /etc/ceph/ceph.conf [store_two] store_description = "Human-readable description of this store" filesystem_store_datadir = /opt/stack/data/glance/store_two [store_three] store_description = "A read-only store" https_ca_certificates_file = /opt/stack/certs/gs.cert [glance_store] default_backend = store_two The ``store_description`` options may be used by a consuming service. As recommended above, this file also defines a default backend.
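A minimal bootstrapping sketch (assuming the consuming service has already registered the ``enabled_backends`` option in the DEFAULT group and that the configuration file shown above has been loaded into ``CONF``)::

    from oslo_config import cfg

    from glance_store import multi_backend

    CONF = cfg.CONF
    multi_backend.register_store_opts(CONF)
    count = multi_backend.create_multi_stores(CONF)
    # e.g. retrieve the default backend configured above
    store = multi_backend.get_store_from_store_identifier('store_two')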
""" store_count = 0 scheme_map = {} for (store_entry, store_instance, store_identifier) in _load_multi_stores( conf, reserved_stores=reserved_stores): try: schemes = store_instance.get_schemes() store_instance.configure(re_raise_bsc=False) except NotImplementedError: continue if not schemes: raise exceptions.BackendException( _('Unable to register store %s. No schemes associated ' 'with it.') % store_entry) else: LOG.debug("Registering store %s with schemes %s", store_entry, schemes) loc_cls = store_instance.get_store_location_class() for scheme in schemes: if scheme not in scheme_map: scheme_map[scheme] = {} scheme_map[scheme][store_identifier] = { 'store': store_instance, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_backend_map(scheme_map) store_count += 1 return store_count def verify_store(): store_id = CONF.glance_store.default_backend if not store_id: msg = _("'default_backend' config option is not set.") raise RuntimeError(msg) try: get_store_from_store_identifier(store_id) except exceptions.UnknownScheme: msg = _("Store for identifier %s not found") % store_id raise RuntimeError(msg) def get_store_from_store_identifier(store_identifier): """Determine backing store from identifier. Given a store identifier, return the appropriate store object for handling that scheme. """ scheme_map = {} enabled_backends = CONF.enabled_backends enabled_backends.update(_RESERVED_STORES) try: scheme = enabled_backends[store_identifier] except KeyError: msg = _("Store for identifier %s not found") % store_identifier raise exceptions.UnknownScheme(msg) if scheme not in location.SCHEME_TO_CLS_BACKEND_MAP: raise exceptions.UnknownScheme(scheme=scheme) scheme_info = location.SCHEME_TO_CLS_BACKEND_MAP[scheme][store_identifier] store = scheme_info['store'] if not store.is_capable(capabilities.BitMasks.DRIVER_REUSABLE): # Driver instance isn't stateless so it can't # be reused safely and need recreation. store_entry = scheme_info['store_entry'] store = _load_multi_store(store.conf, store_entry, invoke_load=True, backend=store_identifier) store.configure() try: loc_cls = store.get_store_location_class() for new_scheme in store.get_schemes(): if new_scheme not in scheme_map: scheme_map[new_scheme] = {} scheme_map[new_scheme][store_identifier] = { 'store': store, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_backend_map(scheme_map) except NotImplementedError: scheme_info['store'] = store return store def add(conf, image_id, data, size, backend, context=None, verifier=None): if not backend: backend = conf.glance_store.default_backend store = get_store_from_store_identifier(backend) return store_add_to_backend(image_id, data, size, store, context, verifier) def add_with_multihash(conf, image_id, data, size, backend, hashing_algo, scheme=None, context=None, verifier=None): if not backend: backend = conf.glance_store.default_backend store = get_store_from_store_identifier(backend) return store_add_to_backend_with_multihash( image_id, data, size, hashing_algo, store, context, verifier) def _check_metadata(store, metadata): if not isinstance(metadata, dict): msg = (_("The storage driver %(driver)s returned invalid " " metadata %(metadata)s. 
This must be a dictionary type") % dict(driver=str(store), metadata=str(metadata))) LOG.error(msg) raise exceptions.BackendException(msg) try: check_location_metadata(metadata) except exceptions.BackendException as e: e_msg = (_("A bad metadata structure was returned from the " "%(driver)s storage driver: %(metadata)s. %(e)s.") % dict(driver=encodeutils.exception_to_unicode(store), metadata=encodeutils.exception_to_unicode(metadata), e=encodeutils.exception_to_unicode(e))) LOG.error(e_msg) raise exceptions.BackendException(e_msg) def store_add_to_backend(image_id, data, size, store, context=None, verifier=None): """A wrapper around a call to each store's add() method. This gives glance a common place to check the output. :param image_id: The id of the image to which data is added :param data: The data to be stored :param size: The length of the data in bytes :param store: The store to which the data is being added :param context: The request context :param verifier: An object used to verify signatures for images :return: The url location of the file, the size of the data in bytes, the checksum of the data, and the storage system's metadata dictionary for the location """ (location, size, checksum, metadata) = store.add(image_id, data, size, context=context, verifier=verifier) if metadata is not None: _check_metadata(store, metadata) return (location, size, checksum, metadata) def store_add_to_backend_with_multihash( image_id, data, size, hashing_algo, store, context=None, verifier=None): """ A wrapper around a call to each store's add() method that requires a hashing_algo identifier and returns a 5-tuple including the "multihash" computed using the specified hashing_algo. (This is an enhanced version of store_add_to_backend(), which is left as-is for backward compatibility.) :param image_id: The id of the image to which data is added :param data: The data to be stored :param size: The length of the data in bytes :param store: The store to which the data is being added :param hashing_algo: A hashlib algorithm identifier (string) :param context: The request context :param verifier: An object used to verify signatures for images :return: The url location of the file, the size of the data in bytes, the checksum of the data, the multihash of the data, and the storage system's metadata dictionary for the location :raises: ``glance_store.exceptions.BackendException`` ``glance_store.exceptions.UnknownHashingAlgo`` """ if hashing_algo not in hashlib.algorithms_available: raise exceptions.UnknownHashingAlgo(algo=hashing_algo) (location, size, checksum, multihash, metadata) = store.add( image_id, data, size, hashing_algo, context=context, verifier=verifier) if metadata is not None: _check_metadata(store, metadata) return (location, size, checksum, multihash, metadata) def check_location_metadata(val, key=''): if isinstance(val, dict): for key in val: check_location_metadata(val[key], key=key) elif isinstance(val, list): ndx = 0 for v in val: check_location_metadata(v, key='%s[%d]' % (key, ndx)) ndx = ndx + 1 elif not isinstance(val, str): raise exceptions.BackendException(_("The image metadata key %(key)s " "has an invalid type of %(type)s. 
" "Only dict, list, and unicode are " "supported.") % dict(key=key, type=type(val))) def delete(uri, backend, context=None): """Removes chunks of data from backend specified by uri.""" if backend: loc = location.get_location_from_uri_and_backend( uri, backend, conf=CONF) store = get_store_from_store_identifier(backend) return store.delete(loc, context=context) LOG.warning('Backend is not set to image, searching all backends based on ' 'location URI.') backends = CONF.enabled_backends for backend in backends: try: if not uri.startswith(backends[backend]): continue loc = location.get_location_from_uri_and_backend( uri, backend, conf=CONF) store = get_store_from_store_identifier(backend) return store.delete(loc, context=context) except (exceptions.NotFound, exceptions.UnknownScheme): continue raise exceptions.NotFound(_("Image not found in any configured backend")) def set_acls_for_multi_store(location_uri, backend, public=False, read_tenants=[], write_tenants=None, context=None): if write_tenants is None: write_tenants = [] loc = location.get_location_from_uri_and_backend( location_uri, backend, conf=CONF) store = get_store_from_store_identifier(backend) try: store.set_acls(loc, public=public, read_tenants=read_tenants, write_tenants=write_tenants, context=context) except NotImplementedError: LOG.debug("Skipping store.set_acls... not implemented") def get(uri, backend, offset=0, chunk_size=None, context=None): """Yields chunks of data from backend specified by uri.""" if backend: loc = location.get_location_from_uri_and_backend(uri, backend, conf=CONF) store = get_store_from_store_identifier(backend) return store.get(loc, offset=offset, chunk_size=chunk_size, context=context) LOG.warning('Backend is not set to image, searching all backends based on ' 'location URI.') backends = CONF.enabled_backends for backend in backends: try: if not uri.startswith(backends[backend]): continue loc = location.get_location_from_uri_and_backend( uri, backend, conf=CONF) store = get_store_from_store_identifier(backend) data, size = store.get(loc, offset=offset, chunk_size=chunk_size, context=context) if data: return data, size except (exceptions.NotFound, exceptions.UnknownScheme): continue raise exceptions.NotFound(_("Image not found in any configured backend")) def get_known_schemes_for_multi_store(): """Returns list of known schemes.""" return location.SCHEME_TO_CLS_BACKEND_MAP.keys() def get_size_from_uri_and_backend(uri, backend, context=None): """Retrieves image size from backend specified by uri.""" loc = location.get_location_from_uri_and_backend( uri, backend, conf=CONF) store = get_store_from_store_identifier(backend) return store.get_size(loc, context=context) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1732116 glance_store-4.8.1/glance_store/tests/0000775000175000017500000000000000000000000020023 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/__init__.py0000664000175000017500000000000000000000000022122 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/base.py0000664000175000017500000001021200000000000021303 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2014 Red Hat, Inc # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import shutil import fixtures from oslo_config import cfg from oslotest import base import glance_store as store from glance_store import location class StoreBaseTest(base.BaseTestCase): # NOTE(flaper87): temporary until we # can move to a fully-local lib. # (Swift store's fault) _CONF = cfg.ConfigOpts() def setUp(self): super(StoreBaseTest, self).setUp() self.conf = self._CONF self.conf(args=[]) store.register_opts(self.conf) self.config(stores=[]) # Ensure stores + locations cleared location.SCHEME_TO_CLS_MAP = {} store.create_stores(self.conf) self.addCleanup(setattr, location, 'SCHEME_TO_CLS_MAP', dict()) self.test_dir = self.useFixture(fixtures.TempDir()).path self.addCleanup(self.conf.reset) def copy_data_file(self, file_name, dst_dir): src_file_name = os.path.join('glance_store/tests/etc', file_name) shutil.copy(src_file_name, dst_dir) dst_file_name = os.path.join(dst_dir, file_name) return dst_file_name def config(self, **kw): """Override some configuration values. The keyword arguments are the names of configuration options to override and their values. If a group argument is supplied, the overrides are applied to the specified configuration option group. All overrides are automatically cleared at the end of the current test by the fixtures cleanup process. """ group = kw.pop('group', 'glance_store') for k, v in kw.items(): self.conf.set_override(k, v, group) def register_store_schemes(self, store, store_entry): schemes = store.get_schemes() scheme_map = {} loc_cls = store.get_store_location_class() for scheme in schemes: scheme_map[scheme] = { 'store': store, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_map(scheme_map) class MultiStoreBaseTest(base.BaseTestCase): def copy_data_file(self, file_name, dst_dir): src_file_name = os.path.join('glance_store/tests/etc', file_name) shutil.copy(src_file_name, dst_dir) dst_file_name = os.path.join(dst_dir, file_name) return dst_file_name def config(self, **kw): """Override some configuration values. The keyword arguments are the names of configuration options to override and their values. If a group argument is supplied, the overrides are applied to the specified configuration option group. All overrides are automatically cleared at the end of the current test by the fixtures cleanup process. 
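For example, a multi-store test may point one backend at its scratch directory (the group name and option shown are illustrative)::

    self.config(filesystem_store_datadir=self.test_dir,
                group='store_one')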
""" group = kw.pop('group', None) for k, v in kw.items(): if group: self.conf.set_override(k, v, group) else: self.conf.set_override(k, v) def register_store_backend_schemes(self, store, store_entry, store_identifier): schemes = store.get_schemes() scheme_map = {} loc_cls = store.get_store_location_class() for scheme in schemes: scheme_map[scheme] = {} scheme_map[scheme][store_identifier] = { 'store': store, 'location_class': loc_cls, 'store_entry': store_entry } location.register_scheme_backend_map(scheme_map) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1732116 glance_store-4.8.1/glance_store/tests/etc/0000775000175000017500000000000000000000000020576 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/etc/glance-swift.conf0000664000175000017500000000167200000000000024036 0ustar00zuulzuul00000000000000[ref1] user = tenant:user1 key = key1 auth_address = example.com [ref2] user = user2 key = key2 user_domain_id = default project_domain_id = default auth_version = 3 auth_address = http://example.com [ref3] user = "user3" key = "key3" auth_address = "http://example.com" [ref4] user = user4 key = key4 user_domain_id = userdomainid project_domain_id = projdomainid auth_version = 3 auth_address = "http://example.com" [ref5] user = user5 key = key5 user_domain_name = userdomain project_domain_name = projdomain auth_version = 3 auth_address = "http://example.com" [store_2] user = tenant:user1 key = key1 auth_address= https://localhost:8080 [store_3] user= tenant:user2 key= key2 auth_address= https://localhost:8080 [store_4] user = tenant:user1 key = key1 auth_address = http://localhost:80 [store_5] user = tenant:user1 key = key1 auth_address = http://localhost [store_6] user = tenant:user1 key = key1 auth_address = https://localhost/v1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/fakes.py0000664000175000017500000000150200000000000021464 0ustar00zuulzuul00000000000000# Copyright 2014 Red hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 

glance_store-4.8.1/glance_store/tests/fakes.py

# Copyright 2014 Red hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance_store import driver
from glance_store import exceptions


class UnconfigurableStore(driver.Store):
    def configure(self, re_raise_bsc=False):
        raise exceptions.BadStoreConfiguration()

glance_store-4.8.1/glance_store/tests/functional/README.rst

===============================
glance_store functional testing
===============================

Writing functional tests for glance_store
-----------------------------------------

The functional tests verify glance_store against a "live" backend.  The
tests are isolated so that a development environment doesn't have to have
all the backends available, just the particular backend whose driver the
developer is working on.

To add tests for a driver:

1. Create a new module in ``glance_store/tests/functional`` with the
   driver name.

2. Create a submodule ``test_functional_{driver-name}`` containing a class
   that inherits from ``glance_store.tests.functional.BaseFunctionalTests``.
   The actual tests are in the ``BaseFunctionalTests`` class.  The test
   classes for each driver do any extra setup/teardown necessary for that
   particular driver.  (The idea is that all the backends should be able
   to pass the same tests.)  A skeleton is sketched after this README.

3. Add a testenv to ``tox.ini`` named ``functional-{driver-name}`` so that
   tox can run the tests for your driver.  (Use the other functional
   testenvs as examples.)

4. If your driver is well-supported by devstack, it shouldn't be too hard
   to set up a gate job for the functional tests in ``.zuul.yaml``.  (Use
   the other jobs defined in that file as examples.)

Configuration
-------------

The functional tests have been designed to work well with devstack so that
we can run them in the gate.  Thus the tests expect to find a yaml file
containing valid credentials just like the ``clouds.yaml`` file created by
devstack in the ``/etc/openstack`` directory.  The test code knows where to
find it, so if you're using devstack, you should be all set.

If you are not using devstack you should create a yaml file with the
following format::

    clouds:
      devstack-admin:
        auth:
          auth_url: https://172.16.132.143/identity
          password: example
          project_domain_id: default
          project_name: admin
          user_domain_id: default
          username: admin
        identity_api_version: '3'
        region_name: RegionOne
        volume_api_version: '3'

The clouds.yaml format allows for a set of credentials to be defined for
each named cloud.  By default, the tests will use the credentials for the
cloud named **devstack-admin** (that's the cloud shown in the example
above).  You can change which cloud is read from ``clouds.yaml`` by
exporting the environment variable ``OS_TEST_GLANCE_STORE_FUNC_TEST_CLOUD``
set to the name of the cloud you want used.

Where to put clouds.yaml
------------------------

The tests will look for a file named ``clouds.yaml`` in the following
locations (in this order, first found wins):

* current directory
* ~/.config/openstack
* /etc/openstack

You may also set the environment variable ``OS_CLIENT_CONFIG_FILE`` to the
absolute pathname of a file and that location will be inserted at the
front of the search list.
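
A skeleton for step 2 above, for a hypothetical ``foo`` driver (the driver
name and module path are illustrative, not an existing backend)::

    # glance_store/tests/functional/foo/test_functional_foo.py
    from glance_store.tests.functional import base


    class TestFoo(base.BaseFunctionalTests):

        def __init__(self, *args, **kwargs):
            super(TestFoo, self).__init__('foo', *args, **kwargs)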

glance_store-4.8.1/glance_store/tests/functional/__init__.py

glance_store-4.8.1/glance_store/tests/functional/base.py

# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import io
from os import environ

import glance_store
import os_client_config
from oslo_config import cfg
import testtools

CONF = cfg.CONF

UUID1 = '961973d8-3360-4364-919e-2c197825dbb4'
UUID2 = 'e03cf3b1-3070-4497-a37d-9703edfb615b'
UUID3 = '0d7f89b2-e236-45e9-b081-561cd3102e92'
UUID4 = '165e9681-ea56-46b0-a84c-f148c752ef8b'
IMAGE_BITS = b'I am a bootable image, I promise'


class Base(testtools.TestCase):

    def __init__(self, driver_name, *args, **kwargs):
        super(Base, self).__init__(*args, **kwargs)
        self.driver_name = driver_name

        # check whether a particular cloud should be used
        cloud = environ.get('OS_TEST_GLANCE_STORE_FUNC_TEST_CLOUD',
                            'devstack-admin')
        creds = os_client_config.OpenStackConfig().get_one_cloud(
            cloud=cloud)
        auth = creds.get_auth_args()
        self.username = auth["username"]
        self.password = auth["password"]
        self.project_name = auth["project_name"]
        self.user_domain_id = auth["user_domain_id"]
        self.project_domain_id = auth["project_domain_id"]
        self.keystone_version = creds.get_api_version('identity')
        self.cinder_version = creds.get_api_version('volume')
        self.region_name = creds.get_region_name()
        # auth_url in devstack clouds.yaml is unversioned
        if auth["auth_url"].endswith('/v3'):
            self.auth_url = auth["auth_url"]
        else:
            self.auth_url = '{}/v3'.format(auth["auth_url"])

        # finally, load the configuration options
        glance_store.register_opts(CONF)

    def setUp(self):
        super(Base, self).setUp()
        CONF.set_override('stores', [self.driver_name], group='glance_store')
        CONF.set_override('default_store',
                          self.driver_name,
                          group='glance_store')
        glance_store.create_stores()
        self.store = glance_store.backend._load_store(CONF, self.driver_name)
        self.store.configure()


class BaseFunctionalTests(Base):

    def test_add(self):
        image_file = io.BytesIO(IMAGE_BITS)
        loc, written, _, _ = self.store.add(UUID1, image_file,
                                            len(IMAGE_BITS))
        self.assertEqual(len(IMAGE_BITS), written)

    def test_delete(self):
        image_file = io.BytesIO(IMAGE_BITS)
        loc, written, _, _ = self.store.add(UUID2, image_file,
                                            len(IMAGE_BITS))
        location = glance_store.location.get_location_from_uri(loc)
        self.store.delete(location)

    def test_get_size(self):
        image_file = io.BytesIO(IMAGE_BITS)
        loc, written, _, _ = self.store.add(UUID3, image_file,
                                            len(IMAGE_BITS))
        location = glance_store.location.get_location_from_uri(loc)
        size = self.store.get_size(location)
        self.assertEqual(len(IMAGE_BITS), size)

    def test_get(self):
        image_file = io.BytesIO(IMAGE_BITS)
        loc, written, _, _ = self.store.add(UUID3, image_file,
                                            len(IMAGE_BITS))
        location = glance_store.location.get_location_from_uri(loc)
        image, size = self.store.get(location)
        self.assertEqual(len(IMAGE_BITS), size)
        data = b''
        for chunk in image:
            data += chunk
        self.assertEqual(IMAGE_BITS, data)
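
# Base pulls its credentials with os_client_config exactly like other
# OpenStack tooling.  A sketch of the lookup it performs (the cloud name is
# the devstack default; override it via the environment variable described
# in the README):
#
#     import os_client_config
#
#     creds = os_client_config.OpenStackConfig().get_one_cloud(
#         cloud='devstack-admin')
#     auth = creds.get_auth_args()
#     print(auth['username'], auth['auth_url'])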

glance_store-4.8.1/glance_store/tests/functional/filesystem/__init__.py

glance_store-4.8.1/glance_store/tests/functional/filesystem/test_functional_filesystem.py

# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import shutil
import tempfile

from oslo_config import cfg

from glance_store.tests.functional import base

CONF = cfg.CONF

logging.basicConfig()


class TestFilesystem(base.BaseFunctionalTests):

    def __init__(self, *args, **kwargs):
        super(TestFilesystem, self).__init__('file', *args, **kwargs)

    def setUp(self):
        self.tmp_image_dir = tempfile.mkdtemp(prefix='glance_store_')
        CONF.set_override('filesystem_store_datadir',
                          self.tmp_image_dir,
                          group='glance_store')
        super(TestFilesystem, self).setUp()

    def tearDown(self):
        shutil.rmtree(self.tmp_image_dir)
        super(TestFilesystem, self).tearDown()

glance_store-4.8.1/glance_store/tests/functional/swift/__init__.py

glance_store-4.8.1/glance_store/tests/functional/swift/test_functional_swift.py

# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import logging
import random
import time

from keystoneauth1.identity import v3
from keystoneauth1 import session
from oslo_config import cfg
import swiftclient

from glance_store.tests.functional import base

CONF = cfg.CONF

logging.basicConfig()


class TestSwift(base.BaseFunctionalTests):

    def __init__(self, *args, **kwargs):
        super(TestSwift, self).__init__('swift', *args, **kwargs)
        CONF.set_override('swift_store_user',
                          '{1}:{0}'.format(self.username,
                                           self.project_name),
                          group='glance_store')
        CONF.set_override('swift_store_auth_address', self.auth_url,
                          group='glance_store')
        CONF.set_override('swift_store_auth_version', self.keystone_version,
                          group='glance_store')
        CONF.set_override('swift_store_key', self.password,
                          group='glance_store')
        CONF.set_override('swift_store_region', self.region_name,
                          group='glance_store')
        CONF.set_override('swift_store_create_container_on_put', True,
                          group='glance_store')

    def setUp(self):
        self.container = ("glance_store_container_" +
                          str(int(random.random() * 1000)))
        CONF.set_override('swift_store_container', self.container,
                          group='glance_store')
        super(TestSwift, self).setUp()

    def tearDown(self):
        auth = v3.Password(auth_url=self.auth_url,
                           username=self.username,
                           password=self.password,
                           project_name=self.project_name,
                           user_domain_id=self.user_domain_id,
                           project_domain_id=self.project_domain_id)
        sess = session.Session(auth=auth)
        swift = swiftclient.client.Connection(session=sess)
        for x in range(1, 4):
            time.sleep(x)
            try:
                _, objects = swift.get_container(self.container)
                for obj in objects:
                    swift.delete_object(self.container, obj.get('name'))
                swift.delete_container(self.container)
            except Exception:
                if x < 3:
                    pass
                else:
                    raise
            else:
                break
        super(TestSwift, self).tearDown()
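
# The tearDown above retries container cleanup because Swift can briefly
# report objects that were just deleted.  The pattern generalizes; a compact
# sketch (the delay schedule and cleanup() callable are illustrative):
#
#     import time
#
#     for attempt in range(1, 4):
#         time.sleep(attempt)
#         try:
#             cleanup()
#             break
#         except Exception:
#             if attempt == 3:
#                 raise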

glance_store-4.8.1/glance_store/tests/unit/__init__.py

glance_store-4.8.1/glance_store/tests/unit/cinder/__init__.py

glance_store-4.8.1/glance_store/tests/unit/cinder/test_base.py

# Copyright 2023 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys
from unittest import mock

import ddt

from glance_store._drivers.cinder import base
from glance_store._drivers.cinder import scaleio
from glance_store.tests import base as test_base

sys.modules['glance_store.common.fs_mount'] = mock.Mock()
from glance_store._drivers.cinder import store as cinder  # noqa
from glance_store._drivers.cinder import nfs  # noqa


@ddt.ddt
class TestConnectorBase(test_base.StoreBaseTest):

    @ddt.data(
        ('iscsi', base.BaseBrickConnectorInterface),
        ('nfs', nfs.NfsBrickConnector),
        ('scaleio', scaleio.ScaleIOBrickConnector),
    )
    @ddt.unpack
    def test_factory(self, protocol, expected_class):
        connector_class = base.factory(
            connection_info={'driver_volume_type': protocol})
        self.assertIsInstance(connector_class, expected_class)


class TestBaseBrickConnectorInterface(test_base.StoreBaseTest):

    def get_connection_info(self):
        """Return iSCSI connection information"""
        return {
            'target_discovered': False,
            'target_portal': '0.0.0.0:3260',
            'target_iqn': 'iqn.2010-10.org.openstack:volume-fake-vol',
            'target_lun': 0,
            'volume_id': '007dedb8-ddc0-445c-88f1-d07acbe4efcb',
            'auth_method': 'CHAP',
            'auth_username': '2ttANgVaDRqxtMNK3hUj',
            'auth_password': 'fake-password',
            'encrypted': False,
            'qos_specs': None,
            'access_mode': 'rw',
            'cacheable': False,
            'driver_volume_type': 'iscsi',
            'attachment_id': '7f45b2fe-111a-42df-be3e-f02b312ad8ea'}

    def setUp(self, connection_info={}, **kwargs):
        super().setUp()
        self.connection_info = connection_info or self.get_connection_info()
        self.root_helper = 'fake_rootwrap'
        self.use_multipath = False
        self.properties = {
            'connection_info': self.connection_info,
            'root_helper': self.root_helper,
            'use_multipath': self.use_multipath}
        self.properties.update(kwargs)
        self.mock_object(base.connector.InitiatorConnector, 'factory')
        self.connector = base.factory(**self.properties)

    def mock_object(self, obj, attr_name, *args, **kwargs):
        """Use python mock to mock an object attribute

        Mocks the specified object's attribute with the given value.
        Automatically performs 'addCleanup' for the mock.
        """
        patcher = mock.patch.object(obj, attr_name, *args, **kwargs)
        result = patcher.start()
        self.addCleanup(patcher.stop)
        return result

    def test_connect_volume(self):
        if self.connection_info['driver_volume_type'] == 'nfs':
            self.skip('NFS tests have custom implementation of this method.')
        fake_vol = mock.MagicMock()
        fake_path = {'path': 'fake_dev_path'}
        self.mock_object(self.connector.conn, 'connect_volume',
                         return_value=fake_path)
        fake_dev_path = self.connector.connect_volume(fake_vol)
        self.connector.conn.connect_volume.assert_called_once_with(
            self.connector.connection_info)
        self.assertEqual(fake_path['path'], fake_dev_path['path'])

    def test_disconnect_volume(self):
        fake_device = 'fake_dev_path'
        self.mock_object(self.connector.conn, 'disconnect_volume')
        self.connector.disconnect_volume(fake_device)
        self.connector.conn.disconnect_volume.assert_called_once_with(
            self.connection_info, fake_device, force=True)

    def test_extend_volume(self):
        self.mock_object(self.connector.conn, 'extend_volume')
        self.connector.extend_volume()
        self.connector.conn.extend_volume.assert_called_once_with(
            self.connection_info)

    def test_yield_path(self):
        fake_vol = mock.MagicMock()
        fake_device = 'fake_dev_path'
        fake_dev_path = self.connector.yield_path(fake_vol, fake_device)
        self.assertEqual(fake_device, fake_dev_path)
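
# base.factory dispatches on connection_info['driver_volume_type'], as the
# ddt cases above show.  A usage sketch (the protocol value and keyword
# arguments mirror the properties built in setUp and are illustrative):
#
#     from glance_store._drivers.cinder import base
#
#     connector = base.factory(
#         connection_info={'driver_volume_type': 'nfs'},
#         root_helper='sudo',
#         use_multipath=False)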
""" patcher = mock.patch.object(obj, attr_name, *args, **kwargs) result = patcher.start() self.addCleanup(patcher.stop) return result def test_connect_volume(self): if self.connection_info['driver_volume_type'] == 'nfs': self.skip('NFS tests have custom implementation of this method.') fake_vol = mock.MagicMock() fake_path = {'path': 'fake_dev_path'} self.mock_object(self.connector.conn, 'connect_volume', return_value=fake_path) fake_dev_path = self.connector.connect_volume(fake_vol) self.connector.conn.connect_volume.assert_called_once_with( self.connector.connection_info) self.assertEqual(fake_path['path'], fake_dev_path['path']) def test_disconnect_volume(self): fake_device = 'fake_dev_path' self.mock_object(self.connector.conn, 'disconnect_volume') self.connector.disconnect_volume(fake_device) self.connector.conn.disconnect_volume.assert_called_once_with( self.connection_info, fake_device, force=True) def test_extend_volume(self): self.mock_object(self.connector.conn, 'extend_volume') self.connector.extend_volume() self.connector.conn.extend_volume.assert_called_once_with( self.connection_info) def test_yield_path(self): fake_vol = mock.MagicMock() fake_device = 'fake_dev_path' fake_dev_path = self.connector.yield_path(fake_vol, fake_device) self.assertEqual(fake_device, fake_dev_path) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/unit/cinder/test_cinder_base.py0000664000175000017500000012063500000000000026124 0ustar00zuulzuul00000000000000# Copyright 2022 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import contextlib
import hashlib
import io
import math
import os
from unittest import mock

import socket
import sys
import tempfile
import time
import uuid

from keystoneauth1 import exceptions as keystone_exc
from os_brick.initiator import connector
from oslo_concurrency import processutils
from oslo_utils.secretutils import md5
from oslo_utils import units

from glance_store._drivers.cinder import scaleio
from glance_store.common import attachment_state_manager
from glance_store.common import cinder_utils
from glance_store import exceptions
from glance_store import location

sys.modules['glance_store.common.fs_mount'] = mock.Mock()
from glance_store._drivers.cinder import store as cinder  # noqa
from glance_store._drivers.cinder import nfs  # noqa


class TestCinderStoreBase(object):

    def test_get_cinderclient(self):
        cc = self.store.get_cinderclient(self.context)
        self.assertEqual('fake_token', cc.client.auth.token)
        self.assertEqual('http://foo/public_url', cc.client.auth.endpoint)

    def _get_cinderclient_with_user_overriden(self, group='glance_store',
                                              **kwargs):
        cinderclient_opts = {
            'cinder_store_user_name': 'test_user',
            'cinder_store_password': 'test_password',
            'cinder_store_project_name': 'test_project',
            'cinder_store_auth_address': 'test_address'}
        cinderclient_opts.update(kwargs)
        self.config(**cinderclient_opts, group=group)
        cc = self.store.get_cinderclient(self.context)
        return cc

    def _test_get_cinderclient_with_user_overriden(
            self, group='glance_store'):
        cc = self._get_cinderclient_with_user_overriden(group=group)
        self.assertEqual('test_project',
                         cc.client.session.auth.project_name)
        self.assertEqual('Default',
                         cc.client.session.auth.project_domain_name)

    def _test_get_cinderclient_with_user_overriden_and_region(
            self, group='glance_store'):
        cc = self._get_cinderclient_with_user_overriden(
            group=group, **{'cinder_os_region_name': 'test_region'})
        self.assertEqual('test_region', cc.client.region_name)

    def _test_get_cinderclient_with_api_insecure(
            self, group='glance_store'):
        self.config()
        with mock.patch.object(
                cinder.ksa_session, 'Session') as fake_session, \
                mock.patch.object(
                    cinder.ksa_identity, 'V3Password') as fake_auth_method:
            fake_auth = fake_auth_method()
            self._get_cinderclient_with_user_overriden(
                group=group, **{'cinder_api_insecure': True})
            fake_session.assert_called_once_with(auth=fake_auth, verify=False)

    def _test_get_cinderclient_with_ca_certificates(
            self, group='glance_store'):
        fake_cert_path = 'fake_cert_path'
        with mock.patch.object(
                cinder.ksa_session, 'Session') as fake_session, \
                mock.patch.object(
                    cinder.ksa_identity, 'V3Password') as fake_auth_method:
            fake_auth = fake_auth_method()
            self._get_cinderclient_with_user_overriden(
                group=group,
                **{'cinder_ca_certificates_file': fake_cert_path})
            fake_session.assert_called_once_with(auth=fake_auth,
                                                 verify=fake_cert_path)

    def _test_get_cinderclient_cinder_endpoint_template(
            self, group='glance_store'):
        fake_endpoint = 'http://cinder.openstack.example.com/v2/fake_project'
        self.config(cinder_endpoint_template=fake_endpoint, group=group)
        with mock.patch.object(
                cinder.ksa_token_endpoint, 'Token') as fake_token:
            self.store.get_cinderclient(self.context)
            fake_token.assert_called_once_with(endpoint=fake_endpoint,
                                               token=self.context.auth_token)
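
    # The cinder_store_* options exercised above let the store authenticate
    # as a fixed service user instead of the request context.  A sketch of
    # the equivalent deployment configuration (values illustrative):
    #
    #     [glance_store]
    #     cinder_store_user_name = glance
    #     cinder_store_password = secret
    #     cinder_store_project_name = service
    #     cinder_store_auth_address = http://keystone:5000/v3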
    def test_get_cinderclient_endpoint_exception(self):
        with mock.patch.object(cinder.ksa_session, 'Session'), \
                mock.patch.object(cinder.ksa_identity, 'V3Password'), \
                mock.patch.object(
                    cinder.Store, 'is_user_overriden',
                    return_value=False), \
                mock.patch.object(
                    cinder.keystone_sc,
                    'ServiceCatalogV2') as service_catalog:
            service_catalog.side_effect = keystone_exc.EndpointNotFound
            self.assertRaises(
                exceptions.BadStoreConfiguration,
                self.store.get_cinderclient, self.context)

    def test__check_context(self):
        with mock.patch.object(cinder.Store, 'is_user_overriden',
                               return_value=True) as fake_overriden:
            self.store._check_context(self.context)
            fake_overriden.assert_called_once()

    def test__check_context_no_context(self):
        with mock.patch.object(
                cinder.Store, 'is_user_overriden', return_value=False):
            self.assertRaises(
                exceptions.BadStoreConfiguration,
                self.store._check_context, None)

    def test__check_context_no_service_catalog(self):
        with mock.patch.object(
                cinder.Store, 'is_user_overriden', return_value=False):
            fake_context = mock.MagicMock(service_catalog=None)
            self.assertRaises(
                exceptions.BadStoreConfiguration,
                self.store._check_context, fake_context)

    def test_temporary_chown(self):
        fake_stat = mock.MagicMock(st_uid=1)

        with mock.patch.object(os, 'stat', return_value=fake_stat), \
                mock.patch.object(os, 'getuid', return_value=2), \
                mock.patch.object(processutils, 'execute') as mock_execute, \
                mock.patch.object(cinder.Store, 'get_root_helper',
                                  return_value='sudo'):
            with self.store.temporary_chown('test'):
                pass
            expected_calls = [mock.call('chown', 2, 'test', run_as_root=True,
                                        root_helper='sudo'),
                              mock.call('chown', 1, 'test', run_as_root=True,
                                        root_helper='sudo')]
            self.assertEqual(expected_calls, mock_execute.call_args_list)

    @mock.patch.object(time, 'sleep')
    def test_wait_volume_status(self, mock_sleep):
        fake_manager = mock.MagicMock(get=mock.Mock())
        volume_available = mock.MagicMock(manager=fake_manager,
                                          id='fake-id',
                                          status='available')
        volume_in_use = mock.MagicMock(manager=fake_manager,
                                       id='fake-id',
                                       status='in-use')
        fake_manager.get.side_effect = [volume_available, volume_in_use]
        self.assertEqual(volume_in_use,
                         self.store._wait_volume_status(
                             volume_available, 'available', 'in-use'))
        fake_manager.get.assert_called_with('fake-id')
        mock_sleep.assert_called_once_with(0.5)

    @mock.patch.object(time, 'sleep')
    def test_wait_volume_status_unexpected(self, mock_sleep):
        fake_manager = mock.MagicMock(get=mock.Mock())
        volume_available = mock.MagicMock(manager=fake_manager,
                                          id='fake-id',
                                          status='error')
        fake_manager.get.return_value = volume_available
        self.assertRaises(exceptions.BackendException,
                          self.store._wait_volume_status,
                          volume_available, 'available', 'in-use')
        fake_manager.get.assert_called_with('fake-id')

    @mock.patch.object(time, 'sleep')
    def test_wait_volume_status_timeout(self, mock_sleep):
        fake_manager = mock.MagicMock(get=mock.Mock())
        volume_available = mock.MagicMock(manager=fake_manager,
                                          id='fake-id',
                                          status='available')
        fake_manager.get.return_value = volume_available
        self.assertRaises(exceptions.BackendException,
                          self.store._wait_volume_status,
                          volume_available, 'available', 'in-use')
        fake_manager.get.assert_called_with('fake-id')
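
    # The three tests above pin down the status-polling contract of
    # _wait_volume_status: return the volume once it reaches the target
    # status, fail fast on an unexpected status, and time out otherwise.
    # The loop under test is essentially (interval and retry count
    # illustrative):
    #
    #     import time
    #
    #     def wait(volume, from_status, to_status, tries=10):
    #         for _ in range(tries):
    #             volume = volume.manager.get(volume.id)
    #             if volume.status == to_status:
    #                 return volume
    #             if volume.status not in (from_status, to_status):
    #                 raise RuntimeError('unexpected status')
    #             time.sleep(0.5)
    #         raise RuntimeError('timed out')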
    def _test_open_cinder_volume(self, open_mode, attach_mode, error,
                                 multipath_supported=False,
                                 enforce_multipath=False,
                                 encrypted_nfs=False, qcow2_vol=False,
                                 multiattach=False,
                                 update_attachment_error=None):
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available',
                                     multiattach=multiattach)
        fake_volume.manager.get.return_value = fake_volume
        fake_attachment_id = str(uuid.uuid4())
        fake_attachment_create = {'id': fake_attachment_id}
        if encrypted_nfs or qcow2_vol:
            fake_attachment_update = mock.MagicMock(
                id=fake_attachment_id,
                connection_info={'driver_volume_type': 'nfs'})
        else:
            fake_attachment_update = mock.MagicMock(
                id=fake_attachment_id,
                connection_info={'driver_volume_type': 'fake'})
        fake_conn_info = mock.MagicMock(connector={})
        fake_volumes = mock.MagicMock(get=lambda id: fake_volume)
        fake_client = mock.MagicMock(volumes=fake_volumes)
        _, fake_dev_path = tempfile.mkstemp(dir=self.test_dir)
        fake_devinfo = {'path': fake_dev_path}
        fake_connector = mock.MagicMock(
            connect_volume=mock.Mock(return_value=fake_devinfo),
            disconnect_volume=mock.Mock())

        @contextlib.contextmanager
        def fake_chown(path):
            yield

        def do_open():
            if multiattach:
                with mock.patch.object(
                        attachment_state_manager._AttachmentStateManager,
                        'get_state') as mock_get_state:
                    mock_get_state.return_value.__enter__.return_value = (
                        attachment_state_manager._AttachmentState())
                    with self.store._open_cinder_volume(
                            fake_client, fake_volume, open_mode):
                        pass
            else:
                with self.store._open_cinder_volume(
                        fake_client, fake_volume, open_mode):
                    if error:
                        raise error

        def fake_factory(protocol, root_helper, **kwargs):
            return fake_connector

        root_helper = "sudo glance-rootwrap /etc/glance/rootwrap.conf"
        with mock.patch.object(cinder.Store, '_wait_volume_status',
                               return_value=fake_volume), \
                mock.patch.object(cinder.Store, 'temporary_chown',
                                  side_effect=fake_chown), \
                mock.patch.object(cinder.Store, 'get_root_helper',
                                  return_value=root_helper), \
                mock.patch.object(connector.InitiatorConnector, 'factory',
                                  side_effect=fake_factory
                                  ) as fake_conn_obj, \
                mock.patch.object(cinder_utils.API,
                                  'attachment_create',
                                  return_value=fake_attachment_create
                                  ) as attach_create, \
                mock.patch.object(cinder_utils.API,
                                  'attachment_update',
                                  return_value=fake_attachment_update
                                  ) as attach_update, \
                mock.patch.object(cinder_utils.API,
                                  'attachment_delete') as attach_delete, \
                mock.patch.object(cinder_utils.API,
                                  'attachment_get') as attach_get, \
                mock.patch.object(cinder_utils.API,
                                  'attachment_complete') as attach_complete, \
                mock.patch.object(socket, 'gethostname') as mock_get_host, \
                mock.patch.object(socket, 'getaddrinfo') as mock_get_host_ip, \
                mock.patch.object(cinder.strutils, 'mask_dict_password'):

            if update_attachment_error:
                attach_update.side_effect = update_attachment_error

            fake_host = 'fake_host'
            fake_addr_info = [[0, 1, 2, 3, ['127.0.0.1']]]
            fake_ip = fake_addr_info[0][4][0]
            mock_get_host.return_value = fake_host
            mock_get_host_ip.return_value = fake_addr_info

            with mock.patch.object(connector,
                                   'get_connector_properties',
                                   return_value=fake_conn_info) as mock_conn:
                if error:
                    self.assertRaises(error, do_open)
                elif encrypted_nfs or qcow2_vol:
                    fake_volume.encrypted = False
                    if encrypted_nfs:
                        fake_volume.encrypted = True
                    elif qcow2_vol:
                        attach_get.return_value = mock.MagicMock(
                            connection_info={'format': 'qcow2'})
                    try:
                        with self.store._open_cinder_volume(
                                fake_client, fake_volume, open_mode):
                            pass
                    except exceptions.BackendException:
                        attach_delete.assert_called_once_with(
                            fake_client, fake_attachment_id)
                elif update_attachment_error:
                    self.assertRaises(type(update_attachment_error), do_open)
                else:
                    do_open()

                if update_attachment_error:
                    attach_delete.assert_called_once_with(
                        fake_client, fake_attachment_id)
                elif not (encrypted_nfs or qcow2_vol):
                    mock_conn.assert_called_once_with(
                        root_helper, fake_ip,
                        multipath_supported, enforce_multipath,
                        host=fake_host)
                    fake_connector.connect_volume.assert_called_once_with(
                        mock.ANY)
                    fake_connector.disconnect_volume.assert_called_once_with(
                        mock.ANY, fake_devinfo, force=True)
                    fake_conn_obj.assert_called_once_with(
                        mock.ANY, root_helper, conn=mock.ANY,
                        use_multipath=multipath_supported)
                    attach_create.assert_called_once_with(
                        fake_client, fake_volume.id, mode=attach_mode)
                    attach_update.assert_called_once_with(
                        fake_client, fake_attachment_id,
                        fake_conn_info, mountpoint='glance_store')
                    attach_complete.assert_called_once_with(
                        fake_client, fake_attachment_id)
                    attach_delete.assert_called_once_with(fake_client,
                                                          fake_attachment_id)
                else:
                    mock_conn.assert_called_once_with(
                        root_helper, fake_ip,
                        multipath_supported, enforce_multipath,
                        host=fake_host)
                    fake_connector.connect_volume.assert_not_called()
                    fake_connector.disconnect_volume.assert_not_called()
                    attach_create.assert_called_once_with(
                        fake_client, fake_volume.id, mode=attach_mode)
                    attach_update.assert_called_once_with(
                        fake_client, fake_attachment_id,
                        fake_conn_info, mountpoint='glance_store')
                    attach_delete.assert_called_once_with(
                        fake_client, fake_attachment_id)
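
    # The helper above mocks the full cinder attachment workflow the store
    # drives when it opens a volume.  In API terms the happy-path sequence
    # being asserted is (method names are the real cinder_utils.API calls
    # mocked above; arguments trimmed for readability):
    #
    #     attachment = attachment_create(client, volume.id, mode=attach_mode)
    #     attachment_update(client, attachment['id'], connector_info,
    #                       mountpoint='glance_store')
    #     attachment_complete(client, attachment['id'])
    #     ...read or write the attached device...
    #     attachment_delete(client, attachment['id'])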
    def test_open_cinder_volume_rw(self):
        self._test_open_cinder_volume('wb', 'rw', None)

    def test_open_cinder_volume_ro(self):
        self._test_open_cinder_volume('rb', 'ro', None)

    def test_open_cinder_volume_update_attachment_error(self):
        err = Exception("update attachment fake error")
        self._test_open_cinder_volume('rb', 'ro', None,
                                      update_attachment_error=err)

    def test_open_cinder_volume_nfs_encrypted(self):
        self._test_open_cinder_volume('rb', 'ro', None, encrypted_nfs=True)

    def test_open_cinder_volume_nfs_qcow2_volume(self):
        self._test_open_cinder_volume('rb', 'ro', None, qcow2_vol=True)

    def test_open_cinder_volume_multiattach_volume(self):
        self._test_open_cinder_volume('rb', 'ro', None, multiattach=True)

    def _fake_volume_type_check(self, name):
        if name != 'some_type':
            raise cinder.cinder_exception.NotFound(code=404)

    def _test_configure_add_valid_type(self):
        with mock.patch.object(self.store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(
                volume_types=mock.MagicMock(
                    find=self._fake_volume_type_check))
            # If volume type exists, no exception is raised
            self.store.configure_add()

    def _test_configure_add_invalid_type(self):
        with mock.patch.object(self.store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(
                volume_types=mock.MagicMock(
                    find=self._fake_volume_type_check))
            with mock.patch.object(cinder, 'LOG') as mock_log:
                self.store.configure_add()
                mock_log.warning.assert_called_with(
                    "Invalid `cinder_volume_type some_random_type`")

    def _get_uri_loc(self, fake_volume_uuid, is_multi_store=False):
        if is_multi_store:
            uri = "cinder://cinder1/%s" % fake_volume_uuid
            loc = location.get_location_from_uri_and_backend(
                uri, "cinder1", conf=self.conf)
        else:
            uri = "cinder://%s" % fake_volume_uuid
            loc = location.get_location_from_uri(uri, conf=self.conf)
        return loc
    def _test_cinder_get(self, is_multi_store=False):
        expected_size = 5 * units.Ki
        expected_file_contents = b"*" * expected_size
        volume_file = io.BytesIO(expected_file_contents)
        fake_client = mock.MagicMock(auth_token=None, management_url=None)
        fake_volume_uuid = str(uuid.uuid4())
        fake_volume = mock.MagicMock(id=fake_volume_uuid,
                                     metadata={'image_size': expected_size},
                                     status='available')
        fake_volume.manager.get.return_value = fake_volume
        fake_volumes = mock.MagicMock(get=lambda id: fake_volume)

        @contextlib.contextmanager
        def fake_open(client, volume, mode):
            self.assertEqual('rb', mode)
            yield volume_file

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc, \
                mock.patch.object(self.store, '_open_cinder_volume',
                                  side_effect=fake_open):
            mock_cc.return_value = mock.MagicMock(client=fake_client,
                                                  volumes=fake_volumes)
            loc = self._get_uri_loc(fake_volume_uuid,
                                    is_multi_store=is_multi_store)

            (image_file, image_size) = self.store.get(loc,
                                                      context=self.context)

            expected_num_chunks = 2
            data = b""
            num_chunks = 0

            for chunk in image_file:
                num_chunks += 1
                data += chunk
            self.assertEqual(expected_num_chunks, num_chunks)
            self.assertEqual(expected_file_contents, data)

    def _test_cinder_volume_not_found(self, method_call, mock_method):
        fake_volume_uuid = str(uuid.uuid4())
        loc = mock.MagicMock(volume_id=fake_volume_uuid)
        mock_not_found = {mock_method: mock.MagicMock(
            side_effect=cinder.cinder_exception.NotFound(code=404))}
        fake_volumes = mock.MagicMock(**mock_not_found)

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(volumes=fake_volumes)
            self.assertRaises(exceptions.NotFound, method_call, loc,
                              context=self.context)

    def test_cinder_get_volume_not_found(self):
        self._test_cinder_volume_not_found(self.store.get, 'get')

    def test_cinder_get_size_volume_not_found(self):
        self._test_cinder_volume_not_found(self.store.get_size, 'get')

    def test_cinder_delete_volume_not_found(self):
        self._test_cinder_volume_not_found(self.store.delete, 'delete')

    def test_cinder_get_client_exception(self):
        fake_volume_uuid = str(uuid.uuid4())
        loc = mock.MagicMock(volume_id=fake_volume_uuid)

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc:
            mock_cc.side_effect = (
                cinder.cinder_exception.ClientException(code=500))
            self.assertRaises(exceptions.BackendException,
                              self.store.get, loc, context=self.context)

    def _test_cinder_get_size(self, is_multi_store=False):
        fake_client = mock.MagicMock(auth_token=None, management_url=None)
        fake_volume_uuid = str(uuid.uuid4())
        fake_volume = mock.MagicMock(size=5, metadata={})
        fake_volumes = mock.MagicMock(get=lambda fake_volume_uuid:
                                      fake_volume)

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(client=fake_client,
                                                    volumes=fake_volumes)
            loc = self._get_uri_loc(fake_volume_uuid,
                                    is_multi_store=is_multi_store)

            image_size = self.store.get_size(loc, context=self.context)
            self.assertEqual(fake_volume.size * units.Gi, image_size)

    def _test_cinder_get_size_with_metadata(self, is_multi_store=False):
        fake_client = mock.MagicMock(auth_token=None, management_url=None)
        fake_volume_uuid = str(uuid.uuid4())
        expected_image_size = 4500 * units.Mi
        fake_volume = mock.MagicMock(
            size=5, metadata={'image_size': expected_image_size})
        fake_volumes = {fake_volume_uuid: fake_volume}

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(client=fake_client,
                                                    volumes=fake_volumes)
            loc = self._get_uri_loc(fake_volume_uuid,
                                    is_multi_store=is_multi_store)

            image_size = self.store.get_size(loc, context=self.context)
            self.assertEqual(expected_image_size, image_size)

    def test_cinder_get_size_generic_exception(self):
        fake_volume_uuid = str(uuid.uuid4())
        loc = mock.MagicMock(volume_id=fake_volume_uuid)
        fake_volumes = mock.MagicMock(
            get=mock.MagicMock(side_effect=Exception()))

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(volumes=fake_volumes)
            image_size = self.store.get_size(loc, context=self.context)
            self.assertEqual(0, image_size)
    def _test_cinder_add(self, fake_volume, volume_file, size_kb=5,
                         verifier=None, backend='glance_store',
                         is_multi_store=False):
        expected_image_id = str(uuid.uuid4())
        expected_size = size_kb * units.Ki
        expected_file_contents = b"*" * expected_size
        image_file = io.BytesIO(expected_file_contents)
        expected_checksum = md5(expected_file_contents,
                                usedforsecurity=False).hexdigest()
        expected_multihash = hashlib.sha256(expected_file_contents).hexdigest()
        expected_location = 'cinder://%s' % fake_volume.id
        if is_multi_store:
            # Default backend is 'glance_store' for single store but in case
            # of multi store, if the backend option is not passed, we should
            # assign it to the default i.e. 'cinder1'
            if backend == 'glance_store':
                backend = 'cinder1'
            expected_location = 'cinder://%s/%s' % (backend, fake_volume.id)
        self.config(cinder_volume_type='some_type', group=backend)
        fake_client = mock.MagicMock(auth_token=None, management_url=None)
        fake_volume.manager.get.return_value = fake_volume
        fake_volumes = mock.MagicMock(create=mock.Mock(
            return_value=fake_volume))

        @contextlib.contextmanager
        def fake_open(client, volume, mode):
            self.assertEqual('wb', mode)
            yield volume_file

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc, \
                mock.patch.object(self.store, '_open_cinder_volume',
                                  side_effect=fake_open):
            mock_cc.return_value = mock.MagicMock(client=fake_client,
                                                  volumes=fake_volumes)
            loc, size, checksum, multihash, metadata = self.store.add(
                expected_image_id, image_file, expected_size, self.hash_algo,
                self.context, verifier)
            self.assertEqual(expected_location, loc)
            self.assertEqual(expected_size, size)
            self.assertEqual(expected_checksum, checksum)
            self.assertEqual(expected_multihash, multihash)
            fake_volumes.create.assert_called_once_with(
                1,
                name='image-%s' % expected_image_id,
                metadata={'image_owner': self.context.project_id,
                          'glance_image_id': expected_image_id,
                          'image_size': str(expected_size)},
                volume_type='some_type')
            if is_multi_store:
                self.assertEqual(backend, metadata["store"])

    def test_cinder_add_volume_not_found(self):
        image_file = mock.MagicMock()
        fake_image_id = str(uuid.uuid4())
        expected_size = 0
        fake_volumes = mock.MagicMock(create=mock.MagicMock(
            side_effect=cinder.cinder_exception.NotFound(code=404)))

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc:
            mock_cc.return_value = mock.MagicMock(volumes=fake_volumes)
            self.assertRaises(
                exceptions.BackendException, self.store.add,
                fake_image_id, image_file, expected_size, self.hash_algo,
                self.context, None)
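
    # _test_cinder_add checks both the legacy md5 checksum and the newer
    # multihash.  For a streamed image the store computes them roughly like
    # this (chunk size illustrative; the real driver uses its configured
    # WRITE_CHUNKSIZE):
    #
    #     import hashlib
    #     from oslo_utils.secretutils import md5
    #
    #     checksum = md5(usedforsecurity=False)
    #     multihash = hashlib.new('sha256')
    #     for chunk in iter(lambda: image_file.read(65536), b''):
    #         checksum.update(chunk)
    #         multihash.update(chunk)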
    def _test_cinder_add_extend(self, is_multi_store=False, online=False):
        expected_volume_size = 2 * units.Gi
        expected_multihash = 'fake_hash'

        fakebuffer = mock.MagicMock()
        # CPython implementation detail: __len__ cannot return > sys.maxsize,
        # which on a 32-bit system is 2*units.Gi - 1
        # https://docs.python.org/3/reference/datamodel.html#object.__len__
        fakebuffer.__len__.return_value = int(expected_volume_size / 2)

        def get_fake_hash(type, secure=False):
            if type == 'md5':
                return mock.MagicMock(hexdigest=lambda: expected_checksum)
            else:
                return mock.MagicMock(hexdigest=lambda: expected_multihash)

        expected_image_id = str(uuid.uuid4())
        expected_volume_id = str(uuid.uuid4())
        expected_size = 0
        image_file = mock.MagicMock(
            read=mock.MagicMock(side_effect=[fakebuffer, fakebuffer, None]))
        fake_volume = mock.MagicMock(id=expected_volume_id,
                                     status='available',
                                     size=1)
        expected_checksum = 'fake_checksum'
        verifier = None
        backend = 'glance_store'
        expected_location = 'cinder://%s' % fake_volume.id
        if is_multi_store:
            # Default backend is 'glance_store' for single store but in case
            # of multi store, if the backend option is not passed, we should
            # assign it to the default i.e. 'cinder1'
            backend = 'cinder1'
            expected_location = 'cinder://%s/%s' % (backend, fake_volume.id)
        self.config(cinder_volume_type='some_type', group=backend)
        if online:
            self.config(cinder_do_extend_attached=True, group=backend)
            fake_connector = mock.MagicMock()
            fake_vol_connector_map = {expected_volume_id: fake_connector}
            self.store.volume_connector_map = fake_vol_connector_map

        fake_client = mock.MagicMock(auth_token=None, management_url=None)
        fake_volume.manager.get.return_value = fake_volume
        fake_volumes = mock.MagicMock(create=mock.Mock(
            return_value=fake_volume))

        @contextlib.contextmanager
        def fake_open(client, volume, mode):
            self.assertEqual('wb', mode)
            yield mock.MagicMock()

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc, \
                mock.patch.object(self.store, '_open_cinder_volume',
                                  side_effect=fake_open), \
                mock.patch.object(cinder.utils, 'get_hasher') as fake_hasher, \
                mock.patch.object(cinder.Store, '_wait_volume_status',
                                  return_value=fake_volume) as mock_wait, \
                mock.patch.object(cinder_utils.API,
                                  'extend_volume') as extend_vol:
            mock_cc_return_val = mock.MagicMock(client=fake_client,
                                                volumes=fake_volumes)
            mock_cc.return_value = mock_cc_return_val
            fake_hasher.side_effect = get_fake_hash
            loc, size, checksum, multihash, metadata = self.store.add(
                expected_image_id, image_file, expected_size, self.hash_algo,
                self.context, verifier)
            self.assertEqual(expected_location, loc)
            self.assertEqual(expected_volume_size, size)
            self.assertEqual(expected_checksum, checksum)
            self.assertEqual(expected_multihash, multihash)
            fake_volumes.create.assert_called_once_with(
                1,
                name='image-%s' % expected_image_id,
                metadata={'image_owner': self.context.project_id,
                          'glance_image_id': expected_image_id,
                          'image_size': str(expected_volume_size)},
                volume_type='some_type')
            if is_multi_store:
                self.assertEqual(backend, metadata["store"])

            if online:
                extend_vol.assert_called_once_with(
                    mock_cc_return_val, fake_volume,
                    expected_volume_size // units.Gi)
                mock_wait.assert_has_calls(
                    [mock.call(fake_volume, 'creating', 'available'),
                     mock.call(fake_volume, 'extending', 'in-use')])
            else:
                fake_volume.extend.assert_called_once_with(
                    fake_volume, expected_volume_size // units.Gi)
                mock_wait.assert_has_calls(
                    [mock.call(fake_volume, 'creating', 'available'),
                     mock.call(fake_volume, 'extending', 'available')])

    def test_cinder_add_extend_storage_full(self):
        expected_volume_size = 2 * units.Gi

        fakebuffer = mock.MagicMock()
        # CPython implementation detail: __len__ cannot return > sys.maxsize,
        # which on a 32-bit system is 2*units.Gi - 1
        # https://docs.python.org/3/reference/datamodel.html#object.__len__
        fakebuffer.__len__.return_value = int(expected_volume_size / 2)

        expected_image_id = str(uuid.uuid4())
        expected_volume_id = str(uuid.uuid4())
        expected_size = 0
        image_file = mock.MagicMock(
            read=mock.MagicMock(side_effect=[fakebuffer, fakebuffer, None]))
        fake_volume = mock.MagicMock(id=expected_volume_id,
                                     status='available',
                                     size=1)
        verifier = None
        fake_client = mock.MagicMock()
        fake_volume.manager.get.return_value = fake_volume
        fake_volumes = mock.MagicMock(create=mock.Mock(
            return_value=fake_volume))

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc, \
                mock.patch.object(self.store, '_open_cinder_volume'), \
                mock.patch.object(cinder.utils, 'get_hasher'), \
                mock.patch.object(
                    cinder.Store, '_wait_volume_status') as mock_wait:
            mock_cc.return_value = mock.MagicMock(client=fake_client,
                                                  volumes=fake_volumes)
            mock_wait.side_effect = [fake_volume, exceptions.BackendException]
            self.assertRaises(
                exceptions.StorageFull, self.store.add, expected_image_id,
                image_file, expected_size, self.hash_algo, self.context,
                verifier)
    def test_cinder_add_extend_volume_delete_exception(self):
        expected_volume_size = 2 * units.Gi

        fakebuffer = mock.MagicMock()
        # CPython implementation detail: __len__ cannot return > sys.maxsize,
        # which on a 32-bit system is 2*units.Gi - 1
        # https://docs.python.org/3/reference/datamodel.html#object.__len__
        fakebuffer.__len__.return_value = int(expected_volume_size / 2)

        expected_image_id = str(uuid.uuid4())
        expected_volume_id = str(uuid.uuid4())
        expected_size = 0
        image_file = mock.MagicMock(
            read=mock.MagicMock(side_effect=[fakebuffer, fakebuffer, None]))
        fake_volume = mock.MagicMock(
            id=expected_volume_id, status='available', size=1,
            delete=mock.MagicMock(side_effect=Exception()))
        fake_client = mock.MagicMock()
        fake_volume.manager.get.return_value = fake_volume
        fake_volumes = mock.MagicMock(create=mock.Mock(
            return_value=fake_volume))
        verifier = None

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mock_cc, \
                mock.patch.object(self.store, '_open_cinder_volume'), \
                mock.patch.object(cinder.utils, 'get_hasher'), \
                mock.patch.object(
                    cinder.Store, '_wait_volume_status') as mock_wait:
            mock_cc.return_value = mock.MagicMock(client=fake_client,
                                                  volumes=fake_volumes)
            mock_wait.side_effect = [fake_volume, exceptions.BackendException]
            self.assertRaises(
                exceptions.StorageFull, self.store.add, expected_image_id,
                image_file, expected_size, self.hash_algo, self.context,
                verifier)
            fake_volume.delete.assert_called_once()

    def _test_cinder_delete(self, is_multi_store=False):
        fake_client = mock.MagicMock(auth_token=None, management_url=None)
        fake_volume_uuid = str(uuid.uuid4())
        fake_volumes = mock.MagicMock(delete=mock.Mock())

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(client=fake_client,
                                                    volumes=fake_volumes)
            loc = self._get_uri_loc(fake_volume_uuid,
                                    is_multi_store=is_multi_store)

            self.store.delete(loc, context=self.context)
            fake_volumes.delete.assert_called_once_with(fake_volume_uuid)

    def test_cinder_delete_client_exception(self):
        fake_volume_uuid = str(uuid.uuid4())
        loc = mock.MagicMock(volume_id=fake_volume_uuid)
        fake_volumes = mock.MagicMock(delete=mock.MagicMock(
            side_effect=cinder.cinder_exception.ClientException(code=500)))

        with mock.patch.object(cinder.Store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(volumes=fake_volumes)
            self.assertRaises(exceptions.BackendException, self.store.delete,
                              loc, context=self.context)

    def test__get_device_size(self):
        fake_data = b"fake binary data"
        fake_len = int(math.ceil(float(len(fake_data)) / units.Gi))
        fake_file = io.BytesIO(fake_data)
        dev_size = scaleio.ScaleIOBrickConnector._get_device_size(fake_file)
        self.assertEqual(fake_len, dev_size)

    @mock.patch.object(time, 'sleep')
    def test__wait_resize_device_resized(self, mock_sleep):
        fake_vol = mock.MagicMock()
        fake_vol.size = 2
        fake_file = io.BytesIO(b"fake binary data")
        with mock.patch.object(
                scaleio.ScaleIOBrickConnector,
                '_get_device_size') as mock_get_dev_size:
            mock_get_dev_size.side_effect = [1, 2]
            scaleio.ScaleIOBrickConnector._wait_resize_device(
                fake_vol, fake_file)

    @mock.patch.object(time, 'sleep')
    def test__wait_resize_device_fails(self, mock_sleep):
        fake_vol = mock.MagicMock()
        fake_vol.size = 2
        fake_file = io.BytesIO(b"fake binary data")
        with mock.patch.object(
                scaleio.ScaleIOBrickConnector, '_get_device_size',
                return_value=1):
            self.assertRaises(
                exceptions.BackendException,
                scaleio.ScaleIOBrickConnector._wait_resize_device,
                fake_vol, fake_file)
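
    # The ScaleIO backend reports new capacity asynchronously, so the
    # connector measures the attached device and rounds up to whole GiB to
    # decide when a resize has landed.  In outline (a sketch inferred from
    # the assertions above, not the driver's literal implementation):
    #
    #     import math, os
    #     from oslo_utils import units
    #
    #     def device_size_gb(dev):
    #         dev.seek(0, os.SEEK_END)
    #         return int(math.ceil(float(dev.tell()) / units.Gi))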
    def test_process_specs(self):
        self.location.process_specs()
        self.assertEqual('cinder', self.location.scheme)
        self.assertEqual(self.volume_id, self.location.volume_id)

    def _test_get_uri(self, expected_uri):
        uri = self.location.get_uri()
        self.assertEqual(expected_uri, uri)

    def _test_parse_uri_invalid(self, uri):
        self.assertRaises(
            exceptions.BadStoreUri, self.location.parse_uri, uri)

    def _test_get_root_helper(self, group='glance_store'):
        fake_rootwrap = 'fake_rootwrap'
        expected = 'sudo glance-rootwrap %s' % fake_rootwrap
        self.config(rootwrap_config=fake_rootwrap, group=group)
        res = self.store.get_root_helper()
        self.assertEqual(expected, res)

    def test_get_hash_str(self):
        nfs_conn = nfs.NfsBrickConnector()
        test_str = 'test_str'
        with mock.patch.object(nfs.hashlib, 'sha256') as fake_hashlib:
            nfs_conn.get_hash_str(test_str)
            test_str = test_str.encode('utf-8')
            fake_hashlib.assert_called_once_with(test_str)

    def test__get_mount_path(self):
        nfs_conn = nfs.NfsBrickConnector(mountpoint_base='fake_mount_path')
        fake_hex = 'fake_hex_digest'
        fake_share = 'fake_share'
        fake_path = 'fake_mount_path'
        expected_path = os.path.join(fake_path, fake_hex)
        with mock.patch.object(
                nfs.NfsBrickConnector, 'get_hash_str') as fake_hash:
            fake_hash.return_value = fake_hex
            res = nfs_conn._get_mount_path(fake_share, fake_path)
            self.assertEqual(expected_path, res)

    def test__get_host_ip_v6(self):
        fake_ipv6 = '2001:0db8:85a3:0000:0000:8a2e:0370'
        fake_socket_return = [[0, 1, 2, 3, [fake_ipv6]]]
        with mock.patch.object(cinder.socket, 'getaddrinfo') as fake_socket:
            fake_socket.return_value = fake_socket_return
            res = self.store._get_host_ip('fake_host')
            self.assertEqual(fake_ipv6, res)

    def test__get_host_ip_v4(self):
        fake_ip = '127.0.0.1'
        fake_socket_return = [[0, 1, 2, 3, [fake_ip]]]
        with mock.patch.object(cinder.socket, 'getaddrinfo') as fake_socket:
            fake_socket.side_effect = [socket.gaierror, fake_socket_return]
            res = self.store._get_host_ip('fake_host')
            self.assertEqual(fake_ip, res)

glance_store-4.8.1/glance_store/tests/unit/cinder/test_cinder_store.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import errno
import io
from unittest import mock

import sys
import uuid

from oslo_utils import units

from glance_store import exceptions
from glance_store.tests import base
from glance_store.tests.unit.cinder import test_cinder_base
from glance_store.tests.unit import test_store_capabilities

sys.modules['glance_store.common.fs_mount'] = mock.Mock()
from glance_store._drivers.cinder import store as cinder  # noqa


class TestCinderStore(base.StoreBaseTest,
                      test_store_capabilities.TestStoreCapabilitiesChecking,
                      test_cinder_base.TestCinderStoreBase):

    def setUp(self):
        super(TestCinderStore, self).setUp()
        self.store = cinder.Store(self.conf)
        self.store.configure()
        self.register_store_schemes(self.store, 'cinder')
        self.store.READ_CHUNKSIZE = 4096
        self.store.WRITE_CHUNKSIZE = 4096

        fake_sc = [{'endpoints': [{'publicURL': 'http://foo/public_url'}],
                    'endpoints_links': [],
                    'name': 'cinder',
                    'type': 'volumev3'}]
        self.context = mock.MagicMock(service_catalog=fake_sc,
                                      user_id='fake_user',
                                      auth_token='fake_token',
                                      project_id='fake_project')
        self.hash_algo = 'sha256'
        cinder._reset_cinder_session()
        self.config(cinder_mount_point_base=None)
        self.volume_id = str(uuid.uuid4())
        specs = {'scheme': 'cinder', 'volume_id': self.volume_id}
        self.location = cinder.StoreLocation(specs, self.conf)

    def test_get_cinderclient_with_user_overriden(self):
        self._test_get_cinderclient_with_user_overriden()

    def test_get_cinderclient_with_user_overriden_and_region(self):
        self._test_get_cinderclient_with_user_overriden_and_region()

    def test_get_cinderclient_with_api_insecure(self):
        self._test_get_cinderclient_with_api_insecure()

    def test_get_cinderclient_with_ca_certificates(self):
        self._test_get_cinderclient_with_ca_certificates()

    def test_open_cinder_volume_multipath_enabled(self):
        self.config(cinder_use_multipath=True)
        self._test_open_cinder_volume('wb', 'rw', None,
                                      multipath_supported=True)

    def test_open_cinder_volume_multipath_disabled(self):
        self.config(cinder_use_multipath=False)
        self._test_open_cinder_volume('wb', 'rw', None,
                                      multipath_supported=False)

    def test_open_cinder_volume_enforce_multipath(self):
        self.config(cinder_use_multipath=True)
        self.config(cinder_enforce_multipath=True)
        self._test_open_cinder_volume('wb', 'rw', None,
                                      multipath_supported=True,
                                      enforce_multipath=True)

    def test_cinder_configure_add(self):
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._check_context, None)
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._check_context,
                          mock.MagicMock(service_catalog=None))
        self.store._check_context(mock.MagicMock(service_catalog='fake'))

    def test_cinder_get(self):
        self._test_cinder_get()

    def test_cinder_get_size(self):
        self._test_cinder_get_size()

    def test_cinder_get_size_with_metadata(self):
        self._test_cinder_get_size_with_metadata()

    def test_cinder_add(self):
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available',
                                     size=1)
        volume_file = io.BytesIO()
        self._test_cinder_add(fake_volume, volume_file)

    def test_cinder_add_with_verifier(self):
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available',
                                     size=1)
        volume_file = io.BytesIO()
        verifier = mock.MagicMock()
        self._test_cinder_add(fake_volume, volume_file, 1, verifier)
        verifier.update.assert_called_with(b"*" * units.Ki)

    def test_cinder_add_volume_full(self):
        e = IOError()
        volume_file = io.BytesIO()
        e.errno = errno.ENOSPC
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available',
                                     size=1)
        with mock.patch.object(volume_file, 'write', side_effect=e):
            self.assertRaises(exceptions.StorageFull,
                              self._test_cinder_add, fake_volume, volume_file)
        fake_volume.delete.assert_called_once_with()
    def test_cinder_add_extend(self):
        self._test_cinder_add_extend()

    def test_cinder_add_extend_online(self):
        self._test_cinder_add_extend(online=True)

    def test_cinder_delete(self):
        self._test_cinder_delete()

    def test_set_url_prefix(self):
        self.assertEqual('cinder://', self.store._url_prefix)

    def test_configure_add_valid_type(self):
        self.config(cinder_volume_type='some_type')
        self._test_configure_add_valid_type()

    def test_configure_add_invalid_type(self):
        # setting cinder_volume_type to non-existent value will log a
        # warning
        self.config(cinder_volume_type='some_random_type')
        self._test_configure_add_invalid_type()

    def test_get_uri(self):
        expected_uri = 'cinder://%s' % self.volume_id
        self._test_get_uri(expected_uri)

    def test_parse_uri_valid(self):
        expected_uri = 'cinder://%s' % self.volume_id
        self.location.parse_uri(expected_uri)

    def test_parse_uri_invalid(self):
        uri = 'cinder://%s' % 'fake_volume'
        self._test_parse_uri_invalid(uri)

    def test_get_root_helper(self):
        self._test_get_root_helper()

    def test_get_cinderclient_cinder_endpoint_template(self):
        self._test_get_cinderclient_cinder_endpoint_template()

glance_store-4.8.1/glance_store/tests/unit/cinder/test_multistore_cinder.py

# Copyright 2018-2019 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import errno
import io
from unittest import mock

import sys
import uuid

import fixtures
from oslo_config import cfg
from oslo_utils import units

import glance_store as store
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit.cinder import test_cinder_base
from glance_store.tests.unit import test_store_capabilities as test_cap

sys.modules['glance_store.common.fs_mount'] = mock.Mock()
from glance_store._drivers.cinder import store as cinder  # noqa


class TestMultiCinderStore(base.MultiStoreBaseTest,
                           test_cap.TestStoreCapabilitiesChecking,
                           test_cinder_base.TestCinderStoreBase):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.

# ---- File: glance_store-4.8.1/glance_store/tests/unit/cinder/test_multistore_cinder.py ----

# Copyright 2018-2019 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import errno
import io
from unittest import mock
import sys
import uuid

import fixtures
from oslo_config import cfg
from oslo_utils import units

import glance_store as store
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit.cinder import test_cinder_base
from glance_store.tests.unit import test_store_capabilities as test_cap

sys.modules['glance_store.common.fs_mount'] = mock.Mock()
from glance_store._drivers.cinder import store as cinder  # noqa


class TestMultiCinderStore(base.MultiStoreBaseTest,
                           test_cap.TestStoreCapabilitiesChecking,
                           test_cinder_base.TestCinderStoreBase):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        super(TestMultiCinderStore, self).setUp()
        enabled_backends = {
            "cinder1": "cinder",
            "cinder2": "cinder"
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='cinder1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        self.addCleanup(self.conf.reset)

        self.store = cinder.Store(self.conf, backend="cinder1")
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'cinder', 'cinder1')
        self.store.READ_CHUNKSIZE = 4096
        self.store.WRITE_CHUNKSIZE = 4096

        fake_sc = [{'endpoints': [{'publicURL': 'http://foo/public_url'}],
                    'endpoints_links': [],
                    'name': 'cinder',
                    'type': 'volumev3'}]
        self.context = mock.MagicMock(service_catalog=fake_sc,
                                      user_id='fake_user',
                                      auth_token='fake_token',
                                      project_id='fake_project')
        self.hash_algo = 'sha256'
        self.fake_admin_context = mock.MagicMock()
        self.fake_admin_context.elevated.return_value = mock.MagicMock(
            service_catalog=fake_sc,
            user_id='admin_user',
            auth_token='admin_token',
            project_id='admin_project')
        cinder._reset_cinder_session()
        self.config(cinder_mount_point_base=None, group='cinder1')
        self.volume_id = str(uuid.uuid4())
        specs = {'scheme': 'cinder', 'volume_id': self.volume_id}
        self.location = cinder.StoreLocation(specs, self.conf,
                                             backend_group='cinder1')

    def test_location_url_prefix_is_set(self):
        self.assertEqual("cinder://cinder1", self.store.url_prefix)

    def test_get_cinderclient_with_user_overriden(self):
        self._test_get_cinderclient_with_user_overriden(group='cinder1')

    def test_get_cinderclient_with_user_overriden_and_region(self):
        self._test_get_cinderclient_with_user_overriden_and_region(
            group='cinder1')

    def test_get_cinderclient_with_api_insecure(self):
        self._test_get_cinderclient_with_api_insecure(group='cinder1')

    def test_get_cinderclient_with_ca_certificates(self):
        self._test_get_cinderclient_with_ca_certificates(group='cinder1')

    def test_get_cinderclient_legacy_update(self):
        fake_endpoint = 'http://cinder.openstack.example.com/v2/fake_project'
        self.config(cinder_endpoint_template=fake_endpoint, group='cinder1')
        cc = self.store.get_cinderclient(self.context)
        self.assertEqual(self.context.auth_token, cc.client.auth.token)
        self.assertEqual(fake_endpoint, cc.client.auth.endpoint)

    def test_open_cinder_volume_multipath_enabled(self):
        self.config(cinder_use_multipath=True, group='cinder1')
        self._test_open_cinder_volume('wb', 'rw', None,
                                      multipath_supported=True)

    def test_open_cinder_volume_multipath_disabled(self):
        self.config(cinder_use_multipath=False, group='cinder1')
        self._test_open_cinder_volume('wb', 'rw', None,
                                      multipath_supported=False)

    def test_open_cinder_volume_enforce_multipath(self):
        self.config(cinder_use_multipath=True, group='cinder1')
        self.config(cinder_enforce_multipath=True, group='cinder1')
        self._test_open_cinder_volume('wb', 'rw', None,
                                      multipath_supported=True,
                                      enforce_multipath=True)

    def test_cinder_check_context(self):
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._check_context, None)
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._check_context,
                          mock.MagicMock(service_catalog=None))
        self.store._check_context(mock.MagicMock(service_catalog='fake'))
    def test_configure_add_valid_type(self):
        self.config(cinder_volume_type='some_type',
                    group=self.store.backend_group)
        self._test_configure_add_valid_type()

    def test_configure_add_invalid_type(self):
        # setting cinder_volume_type to non-existent value will log a
        # warning
        self.config(cinder_volume_type='some_random_type',
                    group=self.store.backend_group)
        self._test_configure_add_invalid_type()

    def test_configure_add_cinder_service_down(self):

        def fake_volume_type_check(name):
            raise cinder.cinder_exception.ClientException(code=503)

        self.config(cinder_volume_type='some_type',
                    group=self.store.backend_group)
        with mock.patch.object(self.store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(
                volume_types=mock.MagicMock(
                    find=fake_volume_type_check))
            # We handle the ClientException to pass so no exception is raised
            # in this case
            self.store.configure_add()

    def test_configure_add_authorization_failed(self):

        def fake_volume_type_check(name):
            raise cinder.exceptions.AuthorizationFailure(code=401)

        self.config(cinder_volume_type='some_type',
                    group=self.store.backend_group)
        with mock.patch.object(self.store, 'get_cinderclient') as mocked_cc:
            mocked_cc.return_value = mock.MagicMock(
                volume_types=mock.MagicMock(
                    find=fake_volume_type_check))
            # Anything apart from invalid volume type or cinder service
            # down will raise an exception
            self.assertRaises(cinder.exceptions.AuthorizationFailure,
                              self.store.configure_add)

    def test_is_image_associated_with_store(self):
        with mock.patch.object(self.store, 'get_cinderclient') as mocked_cc:
            mock_default = mock.MagicMock()
            # The 'name' attribute is set separately since 'name' is a
            # property of MagicMock and it can't be set during initialization
            # of the MagicMock object
            mock_default.name = 'some_type'
            mocked_cc.return_value = mock.MagicMock(
                volumes=mock.MagicMock(
                    get=lambda volume_id: mock.MagicMock(
                        volume_type='some_type')),
                volume_types=mock.MagicMock(
                    default=lambda: mock_default))
            # When cinder_volume_type is set and is same as volume's type
            self.config(cinder_volume_type='some_type',
                        group=self.store.backend_group)
            fake_vol_id = str(uuid.uuid4())
            type_match = self.store.is_image_associated_with_store(
                self.context, fake_vol_id)
            self.assertTrue(type_match)
            # When cinder_volume_type is not set and volume's type is same as
            # set default volume type
            self.config(cinder_volume_type=None,
                        group=self.store.backend_group)
            type_match = self.store.is_image_associated_with_store(
                self.context, fake_vol_id)
            self.assertTrue(type_match)
            # When cinder_volume_type is not set and volume's type does not
            # match with default volume type
            mocked_cc.return_value.volume_types = mock.MagicMock(
                default=lambda: {'name': 'random_type'})
            type_match = self.store.is_image_associated_with_store(
                self.context, fake_vol_id)
            self.assertFalse(type_match)
            # When the Image-Volume is not found
            mocked_cc.return_value.volumes.get = mock.MagicMock(
                side_effect=cinder.cinder_exception.NotFound(code=404))
            with mock.patch.object(cinder, 'LOG') as mock_log:
                type_match = self.store.is_image_associated_with_store(
                    self.context, fake_vol_id)
                mock_log.warning.assert_called_with(
                    "Image-Volume %s not found. If you have "
                    "upgraded your environment from single store "
                    "to multi store, transfer all your "
                    "Image-Volumes from user projects to service "
                    "project."
                    % fake_vol_id)
            self.assertFalse(type_match)

    def test_cinder_get(self):
        self._test_cinder_get(is_multi_store=True)

    def test_cinder_get_size(self):
        self._test_cinder_get_size(is_multi_store=True)

    def test_cinder_get_size_with_metadata(self):
        self._test_cinder_get_size_with_metadata(is_multi_store=True)

    def test_cinder_add(self):
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available', size=1)
        volume_file = io.BytesIO()
        self._test_cinder_add(fake_volume, volume_file, is_multi_store=True)

    def test_cinder_add_with_verifier(self):
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available', size=1)
        volume_file = io.BytesIO()
        verifier = mock.MagicMock()
        self._test_cinder_add(fake_volume, volume_file, 1, verifier,
                              is_multi_store=True)
        verifier.update.assert_called_with(b"*" * units.Ki)

    def test_cinder_add_volume_full(self):
        e = IOError()
        volume_file = io.BytesIO()
        e.errno = errno.ENOSPC
        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available', size=1)
        with mock.patch.object(volume_file, 'write', side_effect=e):
            self.assertRaises(exceptions.StorageFull,
                              self._test_cinder_add, fake_volume,
                              volume_file, is_multi_store=True)
        fake_volume.delete.assert_called_once_with()

    def test_cinder_add_different_backend(self):
        self.store = cinder.Store(self.conf, backend="cinder2")
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'cinder', 'cinder2')

        fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
                                     status='available', size=1)
        volume_file = io.BytesIO()
        self._test_cinder_add(fake_volume, volume_file, backend="cinder2",
                              is_multi_store=True)

    def test_cinder_add_extend(self):
        self._test_cinder_add_extend(is_multi_store=True)

    def test_cinder_add_extend_online(self):
        self._test_cinder_add_extend(is_multi_store=True, online=True)

    def test_cinder_delete(self):
        self._test_cinder_delete(is_multi_store=True)

    def test_set_url_prefix(self):
        self.assertEqual('cinder://cinder1', self.store._url_prefix)

    def test_get_uri(self):
        expected_uri = 'cinder://cinder1/%s' % self.volume_id
        self._test_get_uri(expected_uri)

    def test_parse_uri_valid(self):
        expected_uri = 'cinder://cinder1/%s' % self.volume_id
        self.location.parse_uri(expected_uri)

    def test_parse_uri_invalid(self):
        uri = 'cinder://cinder1/%s' % 'fake_volume'
        self._test_parse_uri_invalid(uri)

    def test_get_root_helper(self):
        self._test_get_root_helper(group='cinder1')

    def test_get_cinderclient_cinder_endpoint_template(self):
        self._test_get_cinderclient_cinder_endpoint_template(
            group='cinder1')
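
# [Editor's illustration; not part of the original archive.] The setUp above
# shows the multi-store bootstrap order this suite relies on: register
# 'enabled_backends', register the store options, pick a default backend,
# then create the stores. A condensed sketch of just that sequence, reusing
# only the calls that appear in the setUp code (the function name is
# hypothetical):

def _multi_store_bootstrap_sketch():
    from oslo_config import cfg

    import glance_store as store

    conf = cfg.ConfigOpts()
    conf(args=[])
    conf.register_opt(cfg.DictOpt('enabled_backends'))
    conf.set_override('enabled_backends',
                      {'cinder1': 'cinder', 'cinder2': 'cinder'})
    store.register_store_opts(conf)
    conf.set_override('default_backend', 'cinder1', group='glance_store')
    store.create_multi_stores(conf)
    return conf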

# ---- File: glance_store-4.8.1/glance_store/tests/unit/cinder/test_nfs.py ----

# Copyright 2023 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys
from unittest import mock

import ddt

from glance_store import exceptions
from glance_store.tests.unit.cinder import test_base as test_base_connector

sys.modules['glance_store.common.fs_mount'] = mock.Mock()
from glance_store._drivers.cinder import store as cinder  # noqa
from glance_store._drivers.cinder import nfs  # noqa


@ddt.ddt
class TestNfsBrickConnector(
        test_base_connector.TestBaseBrickConnectorInterface):

    def setUp(self):
        self.connection_info = {
            'export': 'localhost:/srv/fake-nfs-path',
            'name': 'volume-1fa96ca8-9e07-4dad-a0ed-990c6e86b938',
            'options': None,
            'format': 'raw',
            'qos_specs': None,
            'access_mode': 'rw',
            'encrypted': False,
            'cacheable': False,
            'driver_volume_type': 'nfs',
            'mount_point_base': '/opt/stack/data/cinder/mnt',
            'attachment_id': '7eb574ce-f32d-4173-a68b-870ead29fd84'}
        fake_attachment = mock.MagicMock(id='fake_attachment_uuid')
        self.mountpath = 'fake_mount_path'
        super().setUp(connection_info=self.connection_info,
                      attachment_obj=fake_attachment,
                      mountpoint_base=self.mountpath)

    @ddt.data(
        (False, 'raw'),
        (False, 'qcow2'),
        (True, 'raw'),
        (True, 'qcow2'))
    @ddt.unpack
    def test_connect_volume(self, encrypted, file_format):
        fake_vol = mock.MagicMock(id='fake_vol_uuid', encrypted=encrypted)
        fake_attachment = mock.MagicMock(
            id='fake_attachment_uuid',
            connection_info={'format': file_format})
        self.mock_object(self.connector.volume_api, 'attachment_get',
                         return_value=fake_attachment)
        if encrypted or file_format == 'qcow2':
            self.assertRaises(exceptions.BackendException,
                              self.connector.connect_volume,
                              fake_vol)
        else:
            fake_hash = 'fake_hash'
            fake_path = {'path': os.path.join(
                self.mountpath, fake_hash, self.connection_info['name'])}
            self.mock_object(nfs.NfsBrickConnector, 'get_hash_str',
                             return_value=fake_hash)
            fake_dev_path = self.connector.connect_volume(fake_vol)
            nfs.mount.mount.assert_called_once_with(
                'nfs', self.connection_info['export'],
                self.connection_info['name'],
                os.path.join(self.mountpath, fake_hash),
                self.connector.host, self.connector.root_helper,
                self.connection_info['options'])
            self.assertEqual(fake_path['path'], fake_dev_path['path'])

    def test_disconnect_volume(self):
        fake_hash = 'fake_hash'
        fake_path = {'path': os.path.join(
            self.mountpath, fake_hash, self.connection_info['name'])}
        mount_path, vol_name = fake_path['path'].rsplit('/', 1)
        self.connector.disconnect_volume(fake_path)
        nfs.mount.umount.assert_called_once_with(
            vol_name, mount_path, self.connector.host,
            self.connector.root_helper)

    def test_extend_volume(self):
        self.assertRaises(NotImplementedError,
                          self.connector.extend_volume)
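
# [Editor's illustration; not part of the original archive.] The connect test
# above pins the NFS attachment layout: the export is mounted under
# <mountpoint_base>/<hash-of-export>/ and the returned device path is that
# directory plus the volume name. A dependency-free sketch of the path
# computation; the md5-based hash is a hypothetical stand-in for
# NfsBrickConnector.get_hash_str:

import hashlib as _hashlib
import os as _os


def _nfs_device_path_sketch(mountpoint_base, export, volume_name):
    export_hash = _hashlib.md5(export.encode('utf-8')).hexdigest()
    return {'path': _os.path.join(mountpoint_base, export_hash, volume_name)}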

# ---- File: glance_store-4.8.1/glance_store/tests/unit/cinder/test_scaleio.py ----

# Copyright 2023 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import io
from unittest import mock

from glance_store.tests.unit.cinder import test_base as test_base_connector


class TestScaleioBrickConnector(
        test_base_connector.TestBaseBrickConnectorInterface):

    def setUp(self):
        connection_info = {
            'scaleIO_volname': 'TZpPr43ISgmNSgpo0LP2uw==',
            'hostIP': None,
            'serverIP': 'l4-pflex154gw',
            'serverPort': 443,
            'serverUsername': 'admin',
            'iopsLimit': None,
            'bandwidthLimit': None,
            'scaleIO_volume_id': '3b2f23b00000000d',
            'config_group': 'powerflex1',
            'failed_over': False,
            'discard': True,
            'qos_specs': None,
            'access_mode': 'rw',
            'encrypted': False,
            'cacheable': False,
            'driver_volume_type': 'scaleio',
            'attachment_id': '22914c3a-5818-4840-9188-2ac9833b9f7b'}
        super().setUp(connection_info=connection_info)

    def test_yield_path(self):
        fake_vol = mock.MagicMock(size=1)
        fake_device = io.BytesIO(b"fake binary data")
        fake_dev_path = self.connector.yield_path(fake_vol, fake_device)
        self.assertEqual(fake_device, fake_dev_path)
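
# [Editor's illustration; not part of the original archive.] test_yield_path
# above documents that, for this connector, yield_path is effectively an
# identity: the attached device object is handed back to the caller
# unchanged, with no size probing or path rewriting. A trivial stand-in:

def _scaleio_yield_path_sketch(volume, device_file):
    # the caller receives the very object the attach produced
    return device_file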

# ---- Directory: glance_store-4.8.1/glance_store/tests/unit/common/ ----
# ---- File: glance_store-4.8.1/glance_store/tests/unit/common/__init__.py (empty) ----

# ---- File: glance_store-4.8.1/glance_store/tests/unit/common/test_attachment_state_manager.py ----

# Copyright 2021 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

from oslo_config import cfg
from oslotest import base

from cinderclient import exceptions as cinder_exception

from glance_store.common import attachment_state_manager as attach_manager
from glance_store.common import cinder_utils
from glance_store import exceptions

CONF = cfg.CONF


class AttachmentStateManagerTestCase(base.BaseTestCase):

    class FakeAttachmentState:
        def __init__(self):
            self.attachments = {mock.sentinel.attachments}

    def setUp(self):
        super(AttachmentStateManagerTestCase, self).setUp()
        self.__manager__ = attach_manager.__manager__

    def get_state(self):
        with self.__manager__.get_state() as state:
            return state

    def test_get_state_host_not_initialized(self):
        self.__manager__.state = None
        self.assertRaises(exceptions.HostNotInitialized,
                          self.get_state)

    def test_get_state(self):
        self.__manager__.state = self.FakeAttachmentState()
        state = self.get_state()
        self.assertEqual({mock.sentinel.attachments}, state.attachments)


class AttachmentStateTestCase(base.BaseTestCase):

    def setUp(self):
        super(AttachmentStateTestCase, self).setUp()
        self.attachments = set()
        self.m = attach_manager._AttachmentState()
        self.attach_call_1 = [mock.sentinel.client, mock.sentinel.volume_id]
        self.attach_call_2 = {'mode': mock.sentinel.mode}
        self.disconnect_vol_call = [mock.sentinel.device]
        self.detach_call = [mock.sentinel.client, mock.sentinel.attachment_id]
        self.attachment_dict = {'id': mock.sentinel.attachment_id}

    def _sentinel_attach(self):
        attachment_id = self.m.attach(
            mock.sentinel.client, mock.sentinel.volume_id,
            mock.sentinel.host, mode=mock.sentinel.mode)
        return attachment_id

    def _sentinel_detach(self, conn):
        self.m.detach(mock.sentinel.client, mock.sentinel.attachment_id,
                      mock.sentinel.volume_id, mock.sentinel.host,
                      conn, mock.sentinel.connection_info,
                      mock.sentinel.device)

    @mock.patch.object(cinder_utils.API, 'attachment_create')
    def test_attach(self, mock_attach_create):
        mock_attach_create.return_value = self.attachment_dict
        attachment = self._sentinel_attach()
        mock_attach_create.assert_called_once_with(
            *self.attach_call_1, **self.attach_call_2)
        self.assertEqual(mock.sentinel.attachment_id, attachment['id'])

    @mock.patch.object(cinder_utils.API, 'attachment_delete')
    def test_detach_without_attach(self, mock_attach_delete):
        ex = exceptions.BackendException
        conn = mock.MagicMock()
        mock_attach_delete.side_effect = ex()
        self.assertRaises(ex, self._sentinel_detach, conn)
        conn.disconnect_volume.assert_called_once_with(
            *self.disconnect_vol_call)

    @mock.patch.object(cinder_utils.API, 'attachment_create')
    @mock.patch.object(cinder_utils.API, 'attachment_delete')
    def test_detach_with_attach(self, mock_attach_delete,
                                mock_attach_create):
        conn = mock.MagicMock()
        mock_attach_create.return_value = self.attachment_dict
        attachment = self._sentinel_attach()
        self._sentinel_detach(conn)
        mock_attach_create.assert_called_once_with(
            *self.attach_call_1, **self.attach_call_2)
        self.assertEqual(mock.sentinel.attachment_id, attachment['id'])
        conn.disconnect_volume.assert_called_once_with(
            *self.disconnect_vol_call)
        mock_attach_delete.assert_called_once_with(
            *self.detach_call)

    @mock.patch.object(cinder_utils.API, 'attachment_create')
    def test_attach_fails(self, mock_attach_create):
        mock_attach_create.side_effect = cinder_exception.BadRequest(code=400)
        self.assertRaises(
            cinder_exception.BadRequest, self.m.attach,
            mock.sentinel.client, mock.sentinel.volume_id,
            mock.sentinel.host, mode=mock.sentinel.mode)
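
# [Editor's illustration; not part of the original archive.] Both test cases
# above rely on the same guarded-accessor shape: get_state() is a context
# manager that refuses to yield until the host state has been initialized.
# A simplified sketch of that guard; only HostNotInitialized is taken from
# this package, and the class is a stand-in, not the real manager:

import contextlib as _contextlib

from glance_store import exceptions as _exceptions


class _StateManagerSketch(object):
    def __init__(self, state=None):
        self.state = state

    @_contextlib.contextmanager
    def get_state(self):
        if self.state is None:
            # mirrors test_get_state_host_not_initialized
            raise _exceptions.HostNotInitialized()
        yield self.state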

# ---- File: glance_store-4.8.1/glance_store/tests/unit/common/test_cinder_utils.py ----

# Copyright 2021 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock
import uuid

from cinderclient.apiclient import exceptions as apiclient_exception
from cinderclient import exceptions as cinder_exception
from oslo_config import cfg
from oslotest import base

from glance_store.common import cinder_utils

CONF = cfg.CONF


class FakeObject(object):
    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            setattr(self, name, value)


class CinderUtilsTestCase(base.BaseTestCase):

    def setUp(self):
        super(CinderUtilsTestCase, self).setUp()
        CONF.register_opt(cfg.DictOpt('enabled_backends'))
        CONF.set_override('enabled_backends', 'fake:cinder')
        self.volume_api = cinder_utils.API()
        self.fake_client = FakeObject(attachments=FakeObject(
            create=mock.MagicMock(), delete=mock.MagicMock(),
            complete=mock.MagicMock(), update=mock.MagicMock(),
            show=mock.MagicMock()))
        self.fake_vol_id = uuid.uuid4()
        self.fake_attach_id = uuid.uuid4()
        self.fake_connector = {
            'platform': 'x86_64', 'os_type': 'linux', 'ip': 'fake_ip',
            'host': 'fake_host', 'multipath': False,
            'initiator': 'fake_initiator', 'do_local_attach': False,
            'uuid': '3e1a7217-104e-41c1-b177-a37c491129a0',
            'system uuid': '98755544-c749-40ed-b30a-a1cb27b2a46d',
            'nqn': 'fake_nqn'}

    def test_attachment_create(self):
        self.volume_api.attachment_create(self.fake_client, self.fake_vol_id)
        self.fake_client.attachments.create.assert_called_once_with(
            self.fake_vol_id, None, mode=None)

    def test_attachment_create_with_connector_and_mountpoint(self):
        self.volume_api.attachment_create(
            self.fake_client, self.fake_vol_id,
            connector=self.fake_connector,
            mountpoint='fake_mountpoint')
        self.fake_connector['mountpoint'] = 'fake_mountpoint'
        self.fake_client.attachments.create.assert_called_once_with(
            self.fake_vol_id, self.fake_connector, mode=None)

    def test_attachment_create_client_exception(self):
        self.fake_client.attachments.create.side_effect = (
            cinder_exception.ClientException(code=1))
        self.assertRaises(
            cinder_exception.ClientException,
            self.volume_api.attachment_create,
            self.fake_client, self.fake_vol_id)

    @mock.patch('time.sleep', new=mock.Mock())
    def test_attachment_create_retries(self):
        fake_attach_id = 'fake-attach-id'
        # Make create fail two times and succeed on the third attempt.
        self.fake_client.attachments.create.side_effect = [
            cinder_exception.BadRequest(400),
            cinder_exception.BadRequest(400),
            fake_attach_id]
        # Make sure we get a clean result.
        fake_attachment_id = self.volume_api.attachment_create(
            self.fake_client, self.fake_vol_id)
        self.assertEqual(fake_attach_id, fake_attachment_id)
        # Assert that we called attachment create three times due to the
        # retry decorator.
        self.fake_client.attachments.create.assert_has_calls([
            mock.call(self.fake_vol_id, None, mode=None),
            mock.call(self.fake_vol_id, None, mode=None),
            mock.call(self.fake_vol_id, None, mode=None)])

    def test_attachment_get(self):
        self.volume_api.attachment_get(self.fake_client, self.fake_attach_id)
        self.fake_client.attachments.show.assert_called_once_with(
            self.fake_attach_id)

    def test_attachment_get_client_exception(self):
        self.fake_client.attachments.show.side_effect = (
            cinder_exception.ClientException(code=1))
        self.assertRaises(
            cinder_exception.ClientException,
            self.volume_api.attachment_get,
            self.fake_client, self.fake_attach_id)

    def test_attachment_update(self):
        self.volume_api.attachment_update(self.fake_client,
                                          self.fake_attach_id,
                                          self.fake_connector)
        self.fake_client.attachments.update.assert_called_once_with(
            self.fake_attach_id, self.fake_connector)

    def test_attachment_update_with_connector_and_mountpoint(self):
        self.volume_api.attachment_update(
            self.fake_client, self.fake_attach_id, self.fake_connector,
            mountpoint='fake_mountpoint')
        self.fake_connector['mountpoint'] = 'fake_mountpoint'
        self.fake_client.attachments.update.assert_called_once_with(
            self.fake_attach_id, self.fake_connector)

    def test_attachment_update_client_exception(self):
        self.fake_client.attachments.update.side_effect = (
            cinder_exception.ClientException(code=1))
        self.assertRaises(
            cinder_exception.ClientException,
            self.volume_api.attachment_update,
            self.fake_client, self.fake_attach_id, self.fake_connector)

    def test_attachment_complete(self):
        self.volume_api.attachment_complete(self.fake_client,
                                            self.fake_attach_id)
        self.fake_client.attachments.complete.assert_called_once_with(
            self.fake_attach_id)

    def test_attachment_complete_client_exception(self):
        self.fake_client.attachments.complete.side_effect = (
            cinder_exception.ClientException(code=1))
        self.assertRaises(
            cinder_exception.ClientException,
            self.volume_api.attachment_complete,
            self.fake_client, self.fake_attach_id)

    def test_attachment_delete(self):
        self.volume_api.attachment_delete(self.fake_client,
                                          self.fake_attach_id)
        self.fake_client.attachments.delete.assert_called_once_with(
            self.fake_attach_id)

    def test_attachment_delete_client_exception(self):
        self.fake_client.attachments.delete.side_effect = (
            cinder_exception.ClientException(code=1))
        self.assertRaises(
            cinder_exception.ClientException,
            self.volume_api.attachment_delete,
            self.fake_client, self.fake_attach_id)

    def test_attachment_delete_retries(self):
        # Make delete fail two times and succeed on the third attempt.
        self.fake_client.attachments.delete.side_effect = [
            apiclient_exception.InternalServerError(),
            apiclient_exception.InternalServerError(),
            lambda aid: 'foo']
        # Make sure we get a clean result.
        self.assertIsNone(self.volume_api.attachment_delete(
            self.fake_client, self.fake_attach_id))
        # Assert that we called delete three times due to the retry
        # decorator.
        self.fake_client.attachments.delete.assert_has_calls([
            mock.call(self.fake_attach_id),
            mock.call(self.fake_attach_id),
            mock.call(self.fake_attach_id)])
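
# [Editor's illustration; not part of the original archive.] The *_retries
# tests above assert exactly three attempts: two failures followed by a
# success. The real code achieves this with a retry decorator; the loop
# below is a simplified, dependency-free stand-in for that call pattern:

import time as _time


def _retry_sketch(func, retriable_exc, attempts=3, delay=1):
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except retriable_exc:
            if attempt == attempts:
                # out of attempts: let the last failure propagate
                raise
            _time.sleep(delay)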

# ---- File: glance_store-4.8.1/glance_store/tests/unit/common/test_fs_mount.py ----

# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys
from unittest import mock

import fixtures
from oslo_concurrency import processutils
from oslo_config import cfg
from oslotest import base

from glance_store import exceptions

CONF = cfg.CONF


class HostMountManagerTestCase(base.BaseTestCase):

    class FakeHostMountState:
        def __init__(self):
            self.mountpoints = {mock.sentinel.mountpoint}

    def setUp(self):
        super(HostMountManagerTestCase, self).setUp()
        CONF.register_opt(cfg.DictOpt('enabled_backends'))
        CONF.set_override('enabled_backends', 'fake:file')
        # Since this is mocked in other tests, we unmock it here
        if 'glance_store.common.fs_mount' in sys.modules:
            sys.modules.pop('glance_store.common.fs_mount')
        # Since the _HostMountStateManager class instantiates on its
        # import, this import is done here to register the enabled_backends
        # config option before it is used during initialization
        from glance_store.common import fs_mount as mount  # noqa
        self.__manager__ = mount.__manager__

    def get_state(self):
        with self.__manager__.get_state() as state:
            return state

    def test_get_state_host_not_initialized(self):
        self.__manager__.state = None
        self.assertRaises(exceptions.HostNotInitialized,
                          self.get_state)

    def test_get_state(self):
        self.__manager__.state = self.FakeHostMountState()
        state = self.get_state()
        self.assertEqual({mock.sentinel.mountpoint}, state.mountpoints)


class HostMountStateTestCase(base.BaseTestCase):

    def setUp(self):
        super(HostMountStateTestCase, self).setUp()
        CONF.register_opt(cfg.DictOpt('enabled_backends'))
        CONF.set_override('enabled_backends', 'fake:file')
        # Since this is mocked in other tests, we unmock it here
        if 'glance_store.common.fs_mount' in sys.modules:
            sys.modules.pop('glance_store.common.fs_mount')
        # Since the _HostMountStateManager class instantiates on its
        # import, this import is done here to register the enabled_backends
        # config option before it is used during initialization
        from glance_store.common import fs_mount as mount  # noqa
        self.mounted = set()
        self.m = mount._HostMountState()

        def fake_execute(cmd, *args, **kwargs):
            if cmd == 'mount':
                path = args[-1]
                if path in self.mounted:
                    raise processutils.ProcessExecutionError(
                        'Already mounted')
                self.mounted.add(path)
            elif cmd == 'umount':
                path = args[-1]
                if path not in self.mounted:
                    raise processutils.ProcessExecutionError('Not mounted')
                self.mounted.remove(path)

        def fake_ismount(path):
            return path in self.mounted

        mock_execute = mock.MagicMock(side_effect=fake_execute)

        self.useFixture(fixtures.MonkeyPatch(
            'oslo_concurrency.processutils.execute',
            mock_execute))
        self.useFixture(fixtures.MonkeyPatch('os.path.ismount',
                                             fake_ismount))

    @staticmethod
    def _expected_sentinel_mount_calls(mountpoint=mock.sentinel.mountpoint):
        return [mock.call('mount', '-t', mock.sentinel.fstype,
                          mock.sentinel.option1, mock.sentinel.option2,
                          mock.sentinel.export, mountpoint,
                          root_helper=mock.sentinel.rootwrap_helper,
                          run_as_root=True)]

    @staticmethod
    def _expected_sentinel_umount_calls(mountpoint=mock.sentinel.mountpoint):
        return [mock.call('umount', mountpoint, attempts=3,
                          delay_on_retry=True,
                          root_helper=mock.sentinel.rootwrap_helper,
                          run_as_root=True)]

    def _sentinel_mount(self):
        self.m.mount(mock.sentinel.fstype, mock.sentinel.export,
                     mock.sentinel.vol, mock.sentinel.mountpoint,
                     mock.sentinel.host, mock.sentinel.rootwrap_helper,
                     [mock.sentinel.option1, mock.sentinel.option2])

    def _sentinel_umount(self):
        self.m.umount(mock.sentinel.vol, mock.sentinel.mountpoint,
                      mock.sentinel.host, mock.sentinel.rootwrap_helper)

    @mock.patch('os.makedirs')
    def test_mount(self, mock_makedirs):
        self._sentinel_mount()
        mock_makedirs.assert_called_once()
        processutils.execute.assert_has_calls(
            self._expected_sentinel_mount_calls())

    def test_unmount_without_mount(self):
        self._sentinel_umount()
        processutils.execute.assert_not_called()

    @mock.patch('os.rmdir')
    @mock.patch('os.makedirs')
    def test_umount_with_mount(self, mock_makedirs, mock_rmdir):
        self._sentinel_mount()
        self._sentinel_umount()
        mock_makedirs.assert_called_once()
        mock_rmdir.assert_called_once()
        processutils.execute.assert_has_calls(
            self._expected_sentinel_mount_calls() +
            self._expected_sentinel_umount_calls())


# ---- File: glance_store-4.8.1/glance_store/tests/unit/common/test_utils.py ----

# Copyright 2018 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import tempfile

from oslotest import base

from glance_store.common import utils


class TestUtils(base.BaseTestCase):

    def test_cooperative_reader_returns_bytes(self):
        with tempfile.TemporaryFile() as fd:
            reader = utils.CooperativeReader(fd)
            # Make sure CooperativeReader does not use cooperative_read
            # instead of its own read method.
            reader.read = utils.CooperativeReader.read
            out = reader.read(reader)
            self.assertEqual(out, b'')


# ---- File: glance_store-4.8.1/glance_store/tests/unit/test_backend.py ----

# Copyright 2016 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests the backend store API's"""

from unittest import mock

from glance_store import backend
from glance_store import exceptions
from glance_store.tests import base


class TestStoreAddToBackend(base.StoreBaseTest):

    def setUp(self):
        super(TestStoreAddToBackend, self).setUp()
        self.image_id = "animage"
        self.data = "dataandstuff"
        self.size = len(self.data)
        self.location = "file:///ab/cde/fgh"
        self.checksum = "md5"
        self.multihash = 'multihash'
        self.default_hash_algo = 'md5'
        self.hash_algo = 'sha256'

    def _bad_metadata(self, in_metadata):
        mstore = mock.Mock()
        mstore.add.return_value = (self.location, self.size,
                                   self.checksum, in_metadata)
        mstore.__str__ = lambda self: "hello"
        mstore.__unicode__ = lambda self: "hello"

        self.assertRaises(exceptions.BackendException,
                          backend.store_add_to_backend,
                          self.image_id,
                          self.data,
                          self.size,
                          mstore)
        mstore.add.assert_called_once_with(self.image_id, mock.ANY,
                                           self.size, context=None,
                                           verifier=None)

        newstore = mock.Mock()
        newstore.add.return_value = (self.location, self.size,
                                     self.checksum, self.multihash,
                                     in_metadata)
        newstore.__str__ = lambda self: "hello"
        newstore.__unicode__ = lambda self: "hello"

        self.assertRaises(exceptions.BackendException,
                          backend.store_add_to_backend_with_multihash,
                          self.image_id,
                          self.data,
                          self.size,
                          self.hash_algo,
                          newstore)
        newstore.add.assert_called_once_with(self.image_id, mock.ANY,
                                             self.size, self.hash_algo,
                                             context=None, verifier=None)

    def _good_metadata(self, in_metadata):
        mstore = mock.Mock()
        mstore.add.return_value = (self.location, self.size,
                                   self.checksum, in_metadata)

        (location, size, checksum,
         metadata) = backend.store_add_to_backend(self.image_id,
                                                  self.data,
                                                  self.size,
                                                  mstore)
        mstore.add.assert_called_once_with(self.image_id, mock.ANY,
                                           self.size, context=None,
                                           verifier=None)
        self.assertEqual(self.location, location)
        self.assertEqual(self.size, size)
        self.assertEqual(self.checksum, checksum)
        self.assertEqual(in_metadata, metadata)

        newstore = mock.Mock()
        newstore.add.return_value = (self.location, self.size,
                                     self.checksum, self.multihash,
                                     in_metadata)
        (location, size, checksum, multihash,
         metadata) = backend.store_add_to_backend_with_multihash(
             self.image_id, self.data, self.size, self.hash_algo, newstore)

        newstore.add.assert_called_once_with(self.image_id, mock.ANY,
                                             self.size, self.hash_algo,
                                             context=None, verifier=None)
        self.assertEqual(self.location, location)
        self.assertEqual(self.size, size)
        self.assertEqual(self.checksum, checksum)
        self.assertEqual(self.multihash, multihash)
        self.assertEqual(in_metadata, metadata)

    def test_empty(self):
        metadata = {}
        self._good_metadata(metadata)

    def test_string(self):
        metadata = {'key': 'somevalue'}
        self._good_metadata(metadata)

    def test_list(self):
        m = {'key': ['somevalue', '2']}
        self._good_metadata(m)

    def test_unicode_dict(self):
        inner = {'key1': 'somevalue', 'key2': 'somevalue'}
        m = {'topkey': inner}
        self._good_metadata(m)

    def test_unicode_dict_list(self):
        inner = {'key1': 'somevalue', 'key2': 'somevalue'}
        m = {'topkey': inner, 'list': ['somevalue', '2'], 'u': '2'}
        self._good_metadata(m)

    def test_nested_dict(self):
        inner = {'key1': 'somevalue', 'key2': 'somevalue'}
        inner = {'newkey': inner}
        inner = {'anotherkey': inner}
        m = {'topkey': inner}
        self._good_metadata(m)

    def test_bad_top_level_nonunicode(self):
        metadata = {'key': b'a string'}
        self._bad_metadata(metadata)

    def test_bad_nonunicode_dict_list(self):
        inner = {'key1': 'somevalue', 'key2': 'somevalue',
                 'k3': [1, object()]}
        m = {'topkey': inner, 'list': ['somevalue', '2'], 'u': '2'}
        self._bad_metadata(m)
    def test_bad_metadata_not_dict(self):
        self._bad_metadata([])
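
# [Editor's illustration; not part of the original archive.] The
# _bad_metadata cases above hinge on the backend wrapper rejecting driver
# metadata that is not a dict of unicode-clean values (bytes anywhere in the
# structure must fail). A dependency-free sketch of such a check; this is a
# stand-in, not the real validation in glance_store/backend.py:

def _metadata_ok_sketch(metadata):
    def _unicode_clean(value):
        if isinstance(value, str):
            return True
        if isinstance(value, list):
            return all(_unicode_clean(v) for v in value)
        if isinstance(value, dict):
            return all(isinstance(k, str) and _unicode_clean(v)
                       for k, v in value.items())
        return False

    # the top level must itself be a dict (see test_bad_metadata_not_dict)
    return isinstance(metadata, dict) and _unicode_clean(metadata)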

# ---- File: glance_store-4.8.1/glance_store/tests/unit/test_connection_manager.py ----

# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

from glance_store._drivers.swift import connection_manager
from glance_store._drivers.swift import store as swift_store
from glance_store import exceptions
from glance_store.tests import base


class TestConnectionManager(base.StoreBaseTest):

    def setUp(self):
        super(TestConnectionManager, self).setUp()
        self.client = mock.MagicMock()
        self.client.session.get_auth_headers.return_value = {
            connection_manager.SwiftConnectionManager.AUTH_HEADER_NAME:
                "fake_token"}
        self.location = mock.create_autospec(swift_store.StoreLocation)
        self.context = mock.MagicMock()
        self.conf = mock.MagicMock()

    def prepare_store(self, multi_tenant=False):
        if multi_tenant:
            store = mock.create_autospec(swift_store.MultiTenantStore,
                                         conf=self.conf)
        else:
            store = mock.create_autospec(swift_store.SingleTenantStore,
                                         service_type="swift",
                                         endpoint_type="internal",
                                         region=None,
                                         conf=self.conf,
                                         auth_version='3')

        store.backend_group = None
        store.conf_endpoint = None
        store.init_client.return_value = self.client
        return store

    def test_basic_single_tenant_cm_init(self):
        store = self.prepare_store()
        manager = connection_manager.SingleTenantConnectionManager(
            store=store,
            store_location=self.location
        )
        store.init_client.assert_called_once_with(self.location, None)
        self.client.session.get_endpoint.assert_called_once_with(
            service_type=store.service_type,
            interface=store.endpoint_type,
            region_name=store.region
        )
        store.get_store_connection.assert_called_once_with(
            "fake_token", manager.storage_url
        )

    def test_basic_multi_tenant_cm_init(self):
        store = self.prepare_store(multi_tenant=True)
        manager = connection_manager.MultiTenantConnectionManager(
            store=store,
            store_location=self.location,
            context=self.context
        )
        store.get_store_connection.assert_called_once_with(
            self.context.auth_token, manager.storage_url)

    def test_basis_multi_tenant_no_context(self):
        store = self.prepare_store(multi_tenant=True)
        self.assertRaises(exceptions.BadStoreConfiguration,
                          connection_manager.MultiTenantConnectionManager,
                          store=store, store_location=self.location)

    def test_multi_tenant_client_cm_with_client_creation_fails(self):
        store = self.prepare_store(multi_tenant=True)
        store.init_client.side_effect = [Exception]
        manager = connection_manager.MultiTenantConnectionManager(
            store=store,
            store_location=self.location,
            context=self.context,
            allow_reauth=True
        )
        store.init_client.assert_called_once_with(self.location,
                                                  self.context)
        store.get_store_connection.assert_called_once_with(
            self.context.auth_token, manager.storage_url)
        self.assertFalse(manager.allow_reauth)

    def test_multi_tenant_client_cm_with_no_expiration(self):
        store = self.prepare_store(multi_tenant=True)
        manager = connection_manager.MultiTenantConnectionManager(
            store=store,
            store_location=self.location,
            context=self.context,
            allow_reauth=True
        )
        store.init_client.assert_called_once_with(self.location,
                                                  self.context)
        # return the same connection because it should not be expired
        auth_ref = mock.MagicMock()
        self.client.session.auth.auth_ref = auth_ref
        auth_ref.will_expire_soon.return_value = False
        manager.get_connection()
        # check that we don't update the connection
        store.get_store_connection.assert_called_once_with(
            "fake_token", manager.storage_url)
        self.client.session.get_auth_headers.assert_called_once_with()

    def test_multi_tenant_client_cm_with_expiration(self):
        store = self.prepare_store(multi_tenant=True)
        manager = connection_manager.MultiTenantConnectionManager(
            store=store,
            store_location=self.location,
            context=self.context,
            allow_reauth=True
        )
        store.init_client.assert_called_once_with(self.location,
                                                  self.context)
        # the token is about to expire, so the connection must be refreshed
        auth_ref = mock.MagicMock()
        self.client.session.auth.get_auth_ref.return_value = auth_ref
        auth_ref.will_expire_soon.return_value = True
        manager.get_connection()
        # check that the connection was rebuilt with fresh auth headers
        self.assertEqual(2, store.get_store_connection.call_count)
        self.assertEqual(2, self.client.session.get_auth_headers.call_count)

    def test_single_tenant_client_cm_with_no_expiration(self):
        store = self.prepare_store()
        manager = connection_manager.SingleTenantConnectionManager(
            store=store,
            store_location=self.location,
            allow_reauth=True
        )
        store.init_client.assert_called_once_with(self.location, None)
        self.client.session.get_endpoint.assert_called_once_with(
            service_type=store.service_type,
            interface=store.endpoint_type,
            region_name=store.region
        )
        # return the same connection because it should not be expired
        auth_ref = mock.MagicMock()
        self.client.session.auth.auth_ref = auth_ref
        auth_ref.will_expire_soon.return_value = False
        manager.get_connection()
        # check that we don't update the connection
        store.get_store_connection.assert_called_once_with(
            "fake_token", manager.storage_url)
        self.client.session.get_auth_headers.assert_called_once_with()

    def test_single_tenant_client_cm_with_expiration(self):
        store = self.prepare_store()
        manager = connection_manager.SingleTenantConnectionManager(
            store=store,
            store_location=self.location,
            allow_reauth=True
        )
        store.init_client.assert_called_once_with(self.location, None)
        self.client.session.get_endpoint.assert_called_once_with(
            service_type=store.service_type,
            interface=store.endpoint_type,
            region_name=store.region
        )
        # the token is about to expire, so the connection must be refreshed
        auth_ref = mock.MagicMock()
        self.client.session.auth.get_auth_ref.return_value = auth_ref
        auth_ref.will_expire_soon.return_value = True
        manager.get_connection()
        # check that the connection was rebuilt with fresh auth headers
        self.assertEqual(2, store.get_store_connection.call_count)
        self.assertEqual(2, self.client.session.get_auth_headers.call_count)
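
# [Editor's illustration; not part of the original archive.] The
# *_with_expiration tests above pin the reauth behaviour: each
# get_connection() consults the auth reference and, if the token is about to
# expire, fetches fresh auth headers and rebuilds the swift connection. A
# simplified sketch of that decision; attribute names follow the mocks used
# above, and the 'X-Auth-Token' header name is an assumption here:

def _get_connection_sketch(manager):
    session = manager.client.session
    auth_ref = session.auth.get_auth_ref(session)
    if manager.allow_reauth and auth_ref.will_expire_soon():
        # token is close to expiry: reauthenticate and swap the connection
        token = session.get_auth_headers()['X-Auth-Token']
        manager.connection = manager.store.get_store_connection(
            token, manager.storage_url)
    return manager.connection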

# ---- File: glance_store-4.8.1/glance_store/tests/unit/test_driver.py ----

# Copyright 2018 Verizon Wireless
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import hashlib

from oslo_utils.secretutils import md5
from oslotest import base

import glance_store.driver as driver


class _FakeStore(object):

    @driver.back_compat_add
    def add(self, image_id, image_file, image_size, hashing_algo,
            context=None, verifier=None):
        """This is a 0.26.0+ add, returns a 5-tuple"""
        if hashing_algo == 'md5':
            hasher = md5(usedforsecurity=False)
        else:
            hasher = hashlib.new(str(hashing_algo))
        # assume 'image_file' will be bytes for these tests
        hasher.update(image_file)
        backend_url = "backend://%s" % image_id
        bytes_written = len(image_file)
        checksum = md5(image_file, usedforsecurity=False).hexdigest()
        multihash = hasher.hexdigest()
        metadata_dict = {"verifier_obj":
                         verifier.name if verifier else None,
                         "context_obj":
                         context.name if context else None}
        return (backend_url, bytes_written, checksum, multihash,
                metadata_dict)


class _FakeContext(object):
    name = 'context'


class _FakeVerifier(object):
    name = 'verifier'


class TestBackCompatWrapper(base.BaseTestCase):

    def setUp(self):
        super(TestBackCompatWrapper, self).setUp()
        self.fake_store = _FakeStore()
        self.fake_context = _FakeContext()
        self.fake_verifier = _FakeVerifier()
        self.img_id = '1234'
        self.img_file = b'0123456789'
        self.img_size = 10
        self.img_checksum = md5(self.img_file,
                                usedforsecurity=False).hexdigest()
        self.hashing_algo = 'sha256'
        self.img_sha256 = hashlib.sha256(self.img_file).hexdigest()

    def test_old_style_3_args(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertIsNone(x[3]['context_obj'])
        self.assertIsNone(x[3]['verifier_obj'])

    def test_old_style_4_args(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.fake_context)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertIsNone(x[3]['verifier_obj'])

    def test_old_style_5_args(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.fake_context, self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertEqual('verifier', x[3]['verifier_obj'])

    def test_old_style_3_args_kw_context(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                context=self.fake_context)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertIsNone(x[3]['verifier_obj'])

    def test_old_style_3_args_kw_verifier(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertIsNone(x[3]['context_obj'])
        self.assertEqual('verifier', x[3]['verifier_obj'])

    def test_old_style_4_args_kw_verifier(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertEqual('verifier', x[3]['verifier_obj'])

    def test_old_style_3_args_kws_context_verifier(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                context=self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertEqual('verifier', x[3]['verifier_obj'])

    def test_old_style_all_kw_in_order(self):
        x = self.fake_store.add(image_id=self.img_id,
                                image_file=self.img_file,
                                image_size=self.img_size,
                                context=self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertEqual('verifier', x[3]['verifier_obj'])

    def test_old_style_all_kw_random_order(self):
        x = self.fake_store.add(image_file=self.img_file,
                                context=self.fake_context,
                                image_size=self.img_size,
                                verifier=self.fake_verifier,
                                image_id=self.img_id)
        self.assertEqual(tuple, type(x))
        self.assertEqual(4, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertIsInstance(x[3], dict)
        self.assertEqual('context', x[3]['context_obj'])
        self.assertEqual('verifier', x[3]['verifier_obj'])

    def test_new_style_6_args(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.hashing_algo, self.fake_context,
                                self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_3_args_kw_hash(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                hashing_algo=self.hashing_algo)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertIsNone(x[4]['context_obj'])
        self.assertIsNone(x[4]['verifier_obj'])

    def test_new_style_3_args_kws_context_hash(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                context=self.fake_context,
                                hashing_algo=self.hashing_algo)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertIsNone(x[4]['verifier_obj'])

    def test_new_style_3_args_kws_verifier_hash(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                hashing_algo=self.hashing_algo,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertIsNone(x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_3_args_kws_hash_context_verifier(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                hashing_algo=self.hashing_algo,
                                context=self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_4_args(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.hashing_algo)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertIsNone(x[4]['context_obj'])
        self.assertIsNone(x[4]['verifier_obj'])

    def test_new_style_4_args_kw_context(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.hashing_algo,
                                context=self.fake_context)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertIsNone(x[4]['verifier_obj'])

    def test_new_style_4_args_kws_verifier_context(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.hashing_algo,
                                context=self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_5_args_kw_verifier(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.hashing_algo, self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_6_args_no_kw(self):
        x = self.fake_store.add(self.img_id, self.img_file, self.img_size,
                                self.hashing_algo, self.fake_context,
                                self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_all_kw_in_order(self):
        x = self.fake_store.add(image_id=self.img_id,
                                image_file=self.img_file,
                                image_size=self.img_size,
                                hashing_algo=self.hashing_algo,
                                context=self.fake_context,
                                verifier=self.fake_verifier)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_new_style_all_kw_random_order(self):
        x = self.fake_store.add(hashing_algo=self.hashing_algo,
                                image_file=self.img_file,
                                context=self.fake_context,
                                image_size=self.img_size,
                                verifier=self.fake_verifier,
                                image_id=self.img_id)
        self.assertEqual(tuple, type(x))
        self.assertEqual(5, len(x))
        self.assertIn(self.img_id, x[0])
        self.assertEqual(self.img_size, x[1])
        self.assertEqual(self.img_checksum, x[2])
        self.assertEqual(self.img_sha256, x[3])
        self.assertIsInstance(x[4], dict)
        self.assertEqual('context', x[4]['context_obj'])
        self.assertEqual('verifier', x[4]['verifier_obj'])

    def test_neg_too_few_args(self):
        self.assertRaises(TypeError,
                          self.fake_store.add,
                          self.img_id,
                          self.img_file)

    def test_neg_too_few_kw_args(self):
        self.assertRaises(TypeError,
                          self.fake_store.add,
                          self.img_file,
                          self.img_size,
                          self.fake_context,
                          self.fake_verifier,
                          image_id=self.img_id)

    def test_neg_bogus_kw_args(self):
        self.assertRaises(TypeError,
                          self.fake_store.add,
                          thrashing_algo=self.hashing_algo,
                          image_file=self.img_file,
                          context=self.fake_context,
                          image_size=self.img_size,
                          verifier=self.fake_verifier,
                          image_id=self.img_id)
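
# [Editor's illustration; not part of the original archive.] The suite above
# pins down driver.back_compat_add: old-style calls (no hashing algorithm)
# must keep receiving the legacy 4-tuple, while new-style calls get the
# 5-tuple including the multihash. The sketch below shows only the
# keyword-style dispatch; the real decorator in glance_store/driver.py also
# disambiguates the positional forms exercised above, and the 'md5' default
# here is an assumption:

import functools as _functools


def _back_compat_add_sketch(new_add):
    @_functools.wraps(new_add)
    def wrapper(self, image_id, image_file, image_size, **kwargs):
        if 'hashing_algo' in kwargs:
            # new-style call: return the 5-tuple unchanged
            return new_add(self, image_id, image_file, image_size, **kwargs)
        # old-style call: default the algorithm, then drop the multihash
        # element so the caller still sees (url, size, checksum, metadata)
        url, size, checksum, _multihash, meta = new_add(
            self, image_id, image_file, image_size, hashing_algo='md5',
            **kwargs)
        return url, size, checksum, meta
    return wrapper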
from oslo_utils import encodeutils from oslotest import base import glance_store class TestExceptions(base.BaseTestCase): """Test routines in glance_store.common.utils.""" def test_backend_exception(self): msg = glance_store.BackendException() self.assertIn('', encodeutils.exception_to_unicode(msg)) def test_unsupported_backend_exception(self): msg = glance_store.UnsupportedBackend() self.assertIn('', encodeutils.exception_to_unicode(msg)) def test_redirect_exception(self): # Just checks imports work ok glance_store.RedirectException(url='http://localhost') def test_exception_no_message(self): msg = glance_store.NotFound() self.assertIn('Image %(image)s not found', encodeutils.exception_to_unicode(msg)) def test_exception_not_found_with_image(self): msg = glance_store.NotFound(image='123') self.assertIn('Image 123 not found', encodeutils.exception_to_unicode(msg)) def test_exception_with_message(self): msg = glance_store.NotFound('Some message') self.assertIn('Some message', encodeutils.exception_to_unicode(msg)) def test_exception_with_kwargs(self): msg = glance_store.NotFound('Message: %(foo)s', foo='bar') self.assertIn('Message: bar', encodeutils.exception_to_unicode(msg)) def test_non_unicode_error_msg(self): exc = glance_store.NotFound(str('test')) self.assertIsInstance(encodeutils.exception_to_unicode(exc), str) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/unit/test_filesystem_store.py0000664000175000017500000010527000000000000026020 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
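# NOTE (editorial): an illustrative sketch of the behaviour the
# TestExceptions cases above verify; it is not part of the test suite.
# glance_store exceptions interpolate keyword arguments into their message
# templates, and fall back to the class default when no message is given.

import glance_store
from oslo_utils import encodeutils

exc = glance_store.NotFound(image='123')
assert 'Image 123 not found' in encodeutils.exception_to_unicode(exc)

exc = glance_store.NotFound('Message: %(foo)s', foo='bar')
assert 'Message: bar' in encodeutils.exception_to_unicode(exc)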
"""Tests the filesystem backend store""" import builtins import errno import hashlib import io import json import os import stat from unittest import mock import uuid import fixtures from oslo_utils.secretutils import md5 from oslo_utils import units from glance_store._drivers import filesystem from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class TestStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestStore, self).setUp() self.store = filesystem.Store(self.conf) self.config(filesystem_store_datadir=self.test_dir, filesystem_store_chunk_size=10, stores=['glance.store.filesystem.Store'], group="glance_store") self.store.configure() self.register_store_schemes(self.store, 'file') self.hash_algo = 'sha256' def _create_metadata_json_file(self, metadata): expected_image_id = str(uuid.uuid4()) jsonfilename = os.path.join(self.test_dir, "storage_metadata.%s" % expected_image_id) self.config(filesystem_store_metadata_file=jsonfilename, group="glance_store") with open(jsonfilename, 'w') as fptr: json.dump(metadata, fptr) def _store_image(self, in_metadata): expected_image_id = str(uuid.uuid4()) expected_file_size = 10 expected_file_contents = b"*" * expected_file_size image_file = io.BytesIO(expected_file_contents) self.store.FILESYSTEM_STORE_METADATA = in_metadata return self.store.add(expected_image_id, image_file, expected_file_size, self.hash_algo) def test_get(self): """Test a "normal" retrieval of an image in chunks.""" # First add an image... image_id = str(uuid.uuid4()) file_contents = b"chunk00000remainder" image_file = io.BytesIO(file_contents) loc, size, checksum, multihash, _ = self.store.add( image_id, image_file, len(file_contents), self.hash_algo) # Now read it back... uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri(uri, conf=self.conf) (image_file, image_size) = self.store.get(loc) expected_data = b"chunk00000remainder" expected_num_chunks = 2 data = b"" num_chunks = 0 for chunk in image_file: num_chunks += 1 data += chunk self.assertEqual(expected_data, data) self.assertEqual(expected_num_chunks, num_chunks) def test_get_random_access(self): """Test a "normal" retrieval of an image in chunks.""" # First add an image... image_id = str(uuid.uuid4()) file_contents = b"chunk00000remainder" image_file = io.BytesIO(file_contents) loc, size, checksum, multihash, _ = self.store.add( image_id, image_file, len(file_contents), self.hash_algo) # Now read it back... 
uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri(uri, conf=self.conf) data = b"" for offset in range(len(file_contents)): (image_file, image_size) = self.store.get(loc, offset=offset, chunk_size=1) for chunk in image_file: data += chunk self.assertEqual(file_contents, data) data = b"" chunk_size = 5 (image_file, image_size) = self.store.get(loc, offset=chunk_size, chunk_size=chunk_size) for chunk in image_file: data += chunk self.assertEqual(b'00000', data) self.assertEqual(chunk_size, image_size) def test_get_non_existing(self): """ Test that trying to retrieve a file that doesn't exist raises an error """ loc = location.get_location_from_uri( "file:///%s/non-existing" % self.test_dir, conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def _do_test_add(self, enable_thin_provisoning): """Test that we can add an image via the filesystem backend.""" self.config(filesystem_store_chunk_size=units.Ki, filesystem_thin_provisioning=enable_thin_provisoning, group='glance_store') self.store.configure() filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (self.test_dir, expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_file, expected_file_size, self.hash_algo) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) uri = "file:///%s/%s" % (self.test_dir, expected_image_id) loc = location.get_location_from_uri(uri, conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_thin_provisioning_is_disabled_by_default(self): self.assertEqual(self.store.thin_provisioning, False) def test_add_with_thick_provisioning(self): self._do_test_add(enable_thin_provisoning=False) def test_add_with_thin_provisioning(self): self._do_test_add(enable_thin_provisoning=True) def test_add_thick_provisioning_with_holes_in_file(self): """ Tests that a file which contains null bytes chunks is fully written with a thick provisioning configuration. """ chunk_size = units.Ki # 1K content = b"*" * chunk_size + b"\x00" * chunk_size + b"*" * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 0, 3, False) def test_add_thin_provisioning_with_holes_in_file(self): """ Tests that a file which contains null bytes chunks is sparsified with a thin provisioning configuration. """ chunk_size = units.Ki # 1K content = b"*" * chunk_size + b"\x00" * chunk_size + b"*" * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 1, 2, True) def test_add_thick_provisioning_without_holes_in_file(self): """ Tests that a file which not contain null bytes chunks is fully written with a thick provisioning configuration. 
""" chunk_size = units.Ki # 1K content = b"*" * 3 * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 0, 3, False) def test_add_thin_provisioning_without_holes_in_file(self): """ Tests that a file which not contain null bytes chunks is fully written with a thin provisioning configuration. """ chunk_size = units.Ki # 1K content = b"*" * 3 * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 0, 3, True) def test_add_thick_provisioning_with_partial_holes_in_file(self): """ Tests that a file which contains null bytes not aligned with chunk size is fully written with a thick provisioning configuration. """ chunk_size = units.Ki # 1K my_chunk = int(chunk_size * 1.5) content = b"*" * my_chunk + b"\x00" * my_chunk + b"*" * my_chunk self._do_test_thin_provisioning(content, 3 * my_chunk, 0, 5, False) def test_add_thin_provisioning_with_partial_holes_in_file(self): """ Tests that a file which contains null bytes not aligned with chunk size is sparsified with a thin provisioning configuration. """ chunk_size = units.Ki # 1K my_chunk = int(chunk_size * 1.5) content = b"*" * my_chunk + b"\x00" * my_chunk + b"*" * my_chunk self._do_test_thin_provisioning(content, 3 * my_chunk, 1, 4, True) def _do_test_thin_provisioning(self, content, size, truncate, write, thin): self.config(filesystem_store_chunk_size=units.Ki, filesystem_thin_provisioning=thin, group='glance_store') self.store.configure() image_file = io.BytesIO(content) image_id = str(uuid.uuid4()) with mock.patch.object(builtins, 'open') as popen: self.store.add(image_id, image_file, size, self.hash_algo) write_count = popen.return_value.__enter__().write.call_count truncate_count = popen.return_value.__enter__().truncate.call_count self.assertEqual(write_count, write) self.assertEqual(truncate_count, truncate) def test_add_with_verifier(self): """Test that 'verifier.update' is called when verifier is provided.""" verifier = mock.MagicMock(name='mock_verifier') self.config(filesystem_store_chunk_size=units.Ki, group='glance_store') self.store.configure() image_id = str(uuid.uuid4()) file_size = units.Ki # 1K file_contents = b"*" * file_size image_file = io.BytesIO(file_contents) self.store.add(image_id, image_file, file_size, self.hash_algo, verifier=verifier) verifier.update.assert_called_with(file_contents) def test_add_check_metadata_with_invalid_mountpoint_location(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}] location, size, checksum, multihash, metadata = self._store_image( in_metadata) self.assertEqual({}, metadata) def test_add_check_metadata_list_with_invalid_mountpoint_locations(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] location, size, checksum, multihash, metadata = self._store_image( in_metadata) self.assertEqual({}, metadata) def test_add_check_metadata_list_with_valid_mountpoint_locations(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/tmp'}, {'id': 'xyz1234', 'mountpoint': '/xyz'}] location, size, checksum, multihash, metadata = self._store_image( in_metadata) self.assertEqual(in_metadata[0], metadata) def test_add_check_metadata_bad_nosuch_file(self): expected_image_id = str(uuid.uuid4()) jsonfilename = os.path.join(self.test_dir, "storage_metadata.%s" % expected_image_id) self.config(filesystem_store_metadata_file=jsonfilename, group="glance_store") expected_file_size = 10 expected_file_contents = b"*" * expected_file_size image_file = io.BytesIO(expected_file_contents) location, size, 
        checksum, multihash, metadata = self.store.add(
            expected_image_id, image_file, expected_file_size,
            self.hash_algo)
        self.assertEqual(metadata, {})

    def test_add_already_existing(self):
        """
        Tests that adding an image with an existing identifier
        raises an appropriate exception
        """
        filesystem.ChunkedFile.CHUNKSIZE = units.Ki
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        image_file = io.BytesIO(file_contents)
        location, size, checksum, multihash, _ = self.store.add(
            image_id, image_file, file_size, self.hash_algo)
        image_file = io.BytesIO(b"nevergonnamakeit")
        self.assertRaises(exceptions.Duplicate,
                          self.store.add,
                          image_id, image_file, 0, self.hash_algo)

    def _do_test_add_write_failure(self, errno, exception):
        filesystem.ChunkedFile.CHUNKSIZE = units.Ki
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        path = os.path.join(self.test_dir, image_id)
        image_file = io.BytesIO(file_contents)

        with mock.patch.object(builtins, 'open') as popen:
            e = IOError()
            e.errno = errno
            popen.side_effect = e
            self.assertRaises(exception,
                              self.store.add,
                              image_id, image_file, 0, self.hash_algo)
            self.assertFalse(os.path.exists(path))

    def test_add_storage_full(self):
        """
        Tests that adding an image without enough space on disk
        raises an appropriate exception
        """
        self._do_test_add_write_failure(errno.ENOSPC, exceptions.StorageFull)

    def test_add_file_too_big(self):
        """
        Tests that adding an excessively large image file
        raises an appropriate exception
        """
        self._do_test_add_write_failure(errno.EFBIG, exceptions.StorageFull)

    def test_add_storage_write_denied(self):
        """
        Tests that adding an image with insufficient filestore permissions
        raises an appropriate exception
        """
        self._do_test_add_write_failure(errno.EACCES,
                                        exceptions.StorageWriteDenied)

    def test_add_other_failure(self):
        """
        Tests that a non-space-related IOError does not raise a
        StorageFull exception.
        """
        self._do_test_add_write_failure(errno.ENOTDIR, IOError)

    def test_add_cleanup_on_read_failure(self):
        """
        Tests the partial image file is cleaned up after a read
        failure.
        """
        filesystem.ChunkedFile.CHUNKSIZE = units.Ki
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        path = os.path.join(self.test_dir, image_id)
        image_file = io.BytesIO(file_contents)

        def fake_Error(size):
            raise AttributeError()

        with mock.patch.object(image_file, 'read') as mock_read:
            mock_read.side_effect = fake_Error

            self.assertRaises(AttributeError,
                              self.store.add,
                              image_id, image_file, 0, self.hash_algo)
            self.assertFalse(os.path.exists(path))

    def test_delete(self):
        """
        Test we can delete an existing image in the filesystem store
        """
        # First add an image
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        image_file = io.BytesIO(file_contents)

        loc, size, checksum, multihash, _ = self.store.add(
            image_id, image_file, file_size, self.hash_algo)

        # Now check that we can delete it
        uri = "file:///%s/%s" % (self.test_dir, image_id)
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.store.delete(loc)

        self.assertRaises(exceptions.NotFound, self.store.get, loc)

    def test_delete_non_existing(self):
        """
        Test that trying to delete a file that doesn't exist
        raises an error
        """
        loc = location.get_location_from_uri(
            "file:///tmp/glance-tests/non-existing", conf=self.conf)
        self.assertRaises(exceptions.NotFound,
                          self.store.delete,
                          loc)

    def test_delete_forbidden(self):
        """
        Tests that trying to delete a file without permissions
        raises the correct error
        """
        # First add an image
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        image_file = io.BytesIO(file_contents)

        loc, size, checksum, multihash, _ = self.store.add(
            image_id, image_file, file_size, self.hash_algo)

        uri = "file:///%s/%s" % (self.test_dir, image_id)
        loc = location.get_location_from_uri(uri, conf=self.conf)

        # Mock unlink to raise an OSError for lack of permissions
        # and make sure we can't delete the image
        with mock.patch.object(os, 'unlink') as unlink:
            e = OSError()
            # The store maps EACCES/EPERM to Forbidden; assign a real
            # errno code rather than the errno module itself.
            e.errno = errno.EACCES
            unlink.side_effect = e

            self.assertRaises(exceptions.Forbidden,
                              self.store.delete,
                              loc)

            # Make sure the image didn't get deleted
            self.store.get(loc)

    def test_configure_add_with_multi_datadirs(self):
        """
        Tests multiple filesystem directories specified by
        filesystem_store_datadirs are parsed correctly.
""" store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='glance_store') self.store.configure_add() expected_priority_map = {100: [store_map[0]], 200: [store_map[1]]} expected_priority_list = [200, 100] self.assertEqual(expected_priority_map, self.store.priority_data_map) self.assertEqual(expected_priority_list, self.store.priority_list) def test_configure_add_with_metadata_file_success(self): metadata = {'id': 'asdf1234', 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_list_of_dicts_success(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 'xyz1234', 'mountpoint': '/tmp/'}] self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual(metadata, self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_success_list_val_for_some_key(self): metadata = {'akey': ['value1', 'value2'], 'id': 'asdf1234', 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_bad_data(self): metadata = {'akey': 10, 'id': 'asdf1234', 'mountpoint': '/tmp'} # only unicode is allowed self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_with_no_id_or_mountpoint(self): metadata = {'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = {'id': 'asdfg1234'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_id_or_mountpoint_is_not_string(self): metadata = {'id': 10, 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = {'id': 'asdf1234', 'mountpoint': 12345} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_list_with_no_id_or_mountpoint(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = [{'id': 'abcdefg'}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_add_check_metadata_list_id_or_mountpoint_is_not_string(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 1234, 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = [{'id': 'abcdefg', 'mountpoint': 1234}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_same_dir_multiple_times(self): """ Tests BadStoreConfiguration 
exception is raised if same directory is specified multiple times in filesystem_store_datadirs. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.clear_override('filesystem_store_datadir', group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200", store_map[0] + ":300"], group='glance_store') self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_same_dir_multiple_times_same_priority(self): """ Tests BadStoreConfiguration exception is raised if same directory is specified multiple times in filesystem_store_datadirs. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200", store_map[0] + ":100"], group='glance_store') try: self.store.configure() except exceptions.BadStoreConfiguration: self.fail("configure() raised BadStoreConfiguration unexpectedly!") # Test that we can add an image via the filesystem backend filesystem.ChunkedFile.CHUNKSIZE = 1024 expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (store_map[1], expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_file, expected_file_size, self.hash_algo) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_with_multiple_dirs(self): """Test adding multiple filesystem directories.""" store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='glance_store') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='glance_store') self.store.configure() # Test that we can add an image via the filesystem backend filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_file_contents).hexdigest() expected_location = "file://%s/%s" % (store_map[1], expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_file, expected_file_size, self.hash_algo) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) 
        self.assertEqual(expected_multihash, multihash)

        loc = location.get_location_from_uri(expected_location,
                                             conf=self.conf)
        (new_image_file, new_image_size) = self.store.get(loc)
        new_image_contents = b""
        new_image_file_size = 0

        for chunk in new_image_file:
            new_image_file_size += len(chunk)
            new_image_contents += chunk

        self.assertEqual(expected_file_contents, new_image_contents)
        self.assertEqual(expected_file_size, new_image_file_size)

    def test_add_with_multiple_dirs_storage_full(self):
        """
        Test StorageFull exception is raised if no filesystem directory
        is found that can store an image.
        """
        store_map = [self.useFixture(fixtures.TempDir()).path,
                     self.useFixture(fixtures.TempDir()).path]
        self.conf.set_override('filesystem_store_datadir',
                               override=None,
                               group='glance_store')
        self.conf.set_override('filesystem_store_datadirs',
                               [store_map[0] + ":100",
                                store_map[1] + ":200"],
                               group='glance_store')

        self.store.configure_add()

        def fake_get_capacity_info(mount_point):
            return 0

        with mock.patch.object(self.store, '_get_capacity_info') as capacity:
            capacity.return_value = 0
            filesystem.ChunkedFile.CHUNKSIZE = units.Ki
            expected_image_id = str(uuid.uuid4())
            expected_file_size = 5 * units.Ki  # 5K
            expected_file_contents = b"*" * expected_file_size
            image_file = io.BytesIO(expected_file_contents)

            self.assertRaises(exceptions.StorageFull, self.store.add,
                              expected_image_id, image_file,
                              expected_file_size, self.hash_algo)

    def test_configure_add_with_file_perm(self):
        """
        Tests file permissions specified by filesystem_store_file_perm
        are parsed correctly.
        """
        store = self.useFixture(fixtures.TempDir()).path
        self.conf.set_override('filesystem_store_datadir', store,
                               group='glance_store')
        self.conf.set_override('filesystem_store_file_perm',
                               700,  # -rwx------
                               group='glance_store')
        self.store.configure_add()
        self.assertEqual(self.store.datadir, store)

    def test_configure_add_with_inaccessible_file_perm(self):
        """
        Tests BadStoreConfiguration exception is raised if an invalid
        file permission is specified in filesystem_store_file_perm.
        """
        store = self.useFixture(fixtures.TempDir()).path
        self.conf.set_override('filesystem_store_datadir', store,
                               group='glance_store')
        self.conf.set_override('filesystem_store_file_perm',
                               7,  # -------rwx
                               group='glance_store')
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store.configure_add)

    def test_add_with_file_perm_for_group_other_users_access(self):
        """
        Test that we can add an image via the filesystem backend with a
        required image file permission.
        """
        store = self.useFixture(fixtures.TempDir()).path
        self.conf.set_override('filesystem_store_datadir', store,
                               group='glance_store')
        self.conf.set_override('filesystem_store_file_perm',
                               744,  # -rwxr--r--
                               group='glance_store')

        # -rwx------
        os.chmod(store, 0o700)
        self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))

        self.store.configure_add()

        filesystem.Store.WRITE_CHUNKSIZE = units.Ki
        expected_image_id = str(uuid.uuid4())
        expected_file_size = 5 * units.Ki  # 5K
        expected_file_contents = b"*" * expected_file_size
        expected_checksum = md5(expected_file_contents,
                                usedforsecurity=False).hexdigest()
        expected_multihash = hashlib.sha256(expected_file_contents).hexdigest()
        expected_location = "file://%s/%s" % (store, expected_image_id)
        image_file = io.BytesIO(expected_file_contents)

        location, size, checksum, multihash, _ = self.store.add(
            expected_image_id, image_file, expected_file_size,
            self.hash_algo)

        self.assertEqual(expected_location, location)
        self.assertEqual(expected_file_size, size)
        self.assertEqual(expected_checksum, checksum)
        self.assertEqual(expected_multihash, multihash)

        # -rwx--x--x for store directory
        self.assertEqual(0o711, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))
        # -rwxr--r-- for image file
        mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE]
        perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8)
        self.assertEqual(perm, stat.S_IMODE(mode))

    def test_add_with_file_perm_for_owner_users_access(self):
        """
        Test that we can add an image via the filesystem backend with a
        required image file permission.
        """
        store = self.useFixture(fixtures.TempDir()).path
        self.conf.set_override('filesystem_store_datadir', store,
                               group='glance_store')
        self.conf.set_override('filesystem_store_file_perm',
                               600,  # -rw-------
                               group='glance_store')

        # -rwx------
        os.chmod(store, 0o700)
        self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))

        self.store.configure_add()

        filesystem.Store.WRITE_CHUNKSIZE = units.Ki
        expected_image_id = str(uuid.uuid4())
        expected_file_size = 5 * units.Ki  # 5K
        expected_file_contents = b"*" * expected_file_size
        expected_checksum = md5(expected_file_contents,
                                usedforsecurity=False).hexdigest()
        expected_multihash = hashlib.sha256(expected_file_contents).hexdigest()
        expected_location = "file://%s/%s" % (store, expected_image_id)
        image_file = io.BytesIO(expected_file_contents)

        location, size, checksum, multihash, _ = self.store.add(
            expected_image_id, image_file, expected_file_size,
            self.hash_algo)

        self.assertEqual(expected_location, location)
        self.assertEqual(expected_file_size, size)
        self.assertEqual(expected_checksum, checksum)
        self.assertEqual(expected_multihash, multihash)

        # -rwx------ for store directory
        self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))
        # -rw------- for image file
        mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE]
        perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8)
        self.assertEqual(perm, stat.S_IMODE(mode))

    def test_configure_add_chunk_size(self):
        # This definitely won't be the default
        chunk_size = units.Gi
        self.config(filesystem_store_chunk_size=chunk_size,
                    group="glance_store")
        self.store.configure_add()

        self.assertEqual(chunk_size, self.store.chunk_size)
        self.assertEqual(chunk_size, self.store.READ_CHUNKSIZE)
        self.assertEqual(chunk_size, self.store.WRITE_CHUNKSIZE)
glance_store-4.8.1/glance_store/tests/unit/test_http_store.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

import requests

import glance_store
from glance_store._drivers import http
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
from glance_store.tests import utils


class TestHttpStore(base.StoreBaseTest,
                    test_store_capabilities.TestStoreCapabilitiesChecking):

    def setUp(self):
        super(TestHttpStore, self).setUp()
        self.config(default_store='http', group='glance_store')
        http.Store.READ_CHUNKSIZE = 2
        self.store = http.Store(self.conf)
        self.register_store_schemes(self.store, 'http')

    def _mock_requests(self):
        """Mock requests session object.

        Should be called when we need to mock request/response objects.
        """
        request = mock.patch('requests.Session.request')
        self.request = request.start()
        self.addCleanup(request.stop)

    def test_http_get(self):
        self._mock_requests()
        self.request.return_value = utils.fake_response()

        uri = "http://netloc/path/to/file.tar.gz"
        expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s',
                            'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n']

        loc = location.get_location_from_uri(uri, conf=self.conf)
        (image_file, image_size) = self.store.get(loc)
        self.assertEqual(31, image_size)
        chunks = [c for c in image_file]
        self.assertEqual(expected_returns, chunks)

    def test_http_partial_get(self):
        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.StoreRandomGetNotSupported,
                          self.store.get, loc, chunk_size=1)

    def test_http_get_redirect(self):
        # Add two layers of redirects to the response stack, which will
        # return the default 200 OK with the expected data after resolving
        # both redirects.
        self._mock_requests()
        redirect1 = {"location": "http://example.com/teapot.img"}
        redirect2 = {"location": "http://example.com/teapot_real.img"}
        responses = [utils.fake_response(),
                     utils.fake_response(status_code=301, headers=redirect2),
                     utils.fake_response(status_code=302, headers=redirect1)]

        def getresponse(*args, **kwargs):
            return responses.pop()
        self.request.side_effect = getresponse

        uri = "http://netloc/path/to/file.tar.gz"
        expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s',
                            'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n']

        loc = location.get_location_from_uri(uri, conf=self.conf)
        (image_file, image_size) = self.store.get(loc)
        self.assertEqual(0, len(responses))
        self.assertEqual(31, image_size)

        chunks = [c for c in image_file]
        self.assertEqual(expected_returns, chunks)

    def test_http_get_max_redirects(self):
        self._mock_requests()
        redirect = {"location": "http://example.com/teapot.img"}
        responses = ([utils.fake_response(status_code=302, headers=redirect)]
                     * (http.MAX_REDIRECTS + 2))

        def getresponse(*args, **kwargs):
            return responses.pop()
        self.request.side_effect = getresponse

        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get,
                          loc)

    def test_http_get_redirect_invalid(self):
        self._mock_requests()
        redirect = {"location": "http://example.com/teapot.img"}
        redirect_resp = utils.fake_response(status_code=307,
                                            headers=redirect)
        self.request.return_value = redirect_resp

        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.BadStoreUri, self.store.get, loc)

    def test_http_get_not_found(self):
        self._mock_requests()
        fake = utils.fake_response(status_code=404, content="404 Not Found")
        self.request.return_value = fake

        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.NotFound, self.store.get, loc)

    def test_http_delete_raise_error(self):
        self._mock_requests()
        self.request.return_value = utils.fake_response()

        uri = "https://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.StoreDeleteNotSupported,
                          self.store.delete, loc)
        self.assertRaises(exceptions.StoreDeleteNotSupported,
                          glance_store.delete_from_backend, uri, {})

    def test_http_add_raise_error(self):
        self.assertRaises(exceptions.StoreAddDisabled,
                          self.store.add, None, None, None, None)
        self.assertRaises(exceptions.StoreAddDisabled,
                          glance_store.add_to_backend, None, None, None,
                          None, 'http')

    def test_http_get_size_with_non_existent_image_raises_Not_Found(self):
        self._mock_requests()
        self.request.return_value = utils.fake_response(
            status_code=404, content='404 Not Found')

        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.NotFound, self.store.get_size, loc)
        self.request.assert_called_once_with('HEAD', uri, stream=True,
                                             allow_redirects=False)

    def test_http_get_size_bad_status_line(self):
        self._mock_requests()
        # Note(sabari): Low-level httplib.BadStatusLine will be raised as
        # ConnectionError after migrating to requests.
        self.request.side_effect = requests.exceptions.ConnectionError

        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.BadStoreUri, self.store.get_size, loc)

    def test_http_store_location_initialization(self):
        """Test store location initialization from valid uris"""
        uris = [
            "http://127.0.0.1:8000/ubuntu.iso",
            "http://openstack.com:80/ubuntu.iso",
            "http://[1080::8:800:200C:417A]:80/ubuntu.iso"
        ]
        for uri in uris:
            location.get_location_from_uri(uri)

    def test_http_store_location_initialization_with_invalid_url(self):
        """Test store location initialization from incorrect uris."""
        incorrect_uris = [
            "http://127.0.0.1:~/ubuntu.iso",
            "http://openstack.com:some_text/ubuntu.iso",
            "http://[1080::8:800:200C:417A]:some_text/ubuntu.iso"
        ]
        for uri in incorrect_uris:
            self.assertRaises(exceptions.BadStoreUri,
                              location.get_location_from_uri, uri)

    def test_http_store_location_get_uri(self):
        """Test for HTTP URI with and without query"""
        uris = ["http://netloc/path/to/file.tar.gz",
                "http://netloc/path/to/file.tar.gz?query=text",
                ]
        for uri in uris:
            loc = location.get_location_from_uri(uri, conf=self.conf)
            self.assertEqual(uri, loc.store_location.get_uri())

    def test_http_get_raises_remote_service_unavailable(self):
        """Test http store raises RemoteServiceUnavailable."""
        uri = "http://netloc/path/to/file.tar.gz"
        loc = location.get_location_from_uri(uri, conf=self.conf)
        self.assertRaises(exceptions.RemoteServiceUnavailable,
                          self.store.get, loc)

glance_store-4.8.1/glance_store/tests/unit/test_location.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance_store import exceptions
from glance_store import location
from glance_store.tests import base


class TestStoreLocation(base.StoreBaseTest):

    def setUp(self):
        super(TestStoreLocation, self).setUp()

    def test_scheme_validation(self):
        valid_schemas = ("file://", "http://")
        correct_uri = "file://test"
        location.StoreLocation.validate_schemas(correct_uri, valid_schemas)
        incorrect_uri = "fake://test"
        self.assertRaises(exceptions.BadStoreUri,
                          location.StoreLocation.validate_schemas,
                          incorrect_uri, valid_schemas)

glance_store-4.8.1/glance_store/tests/unit/test_multistore_filesystem.py

# Copyright 2018 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the filesystem backend store""" import builtins import errno import io import json import os import stat from unittest import mock import uuid import fixtures from oslo_config import cfg from oslo_utils.secretutils import md5 from oslo_utils import units import glance_store as store from glance_store._drivers import filesystem from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class TestMultiStore(base.MultiStoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): # NOTE(flaper87): temporary until we # can move to a fully-local lib. # (Swift store's fault) _CONF = cfg.ConfigOpts() def setUp(self): """Establish a clean test environment.""" super(TestMultiStore, self).setUp() self.enabled_backends = { "file1": "file", "file2": "file", } self.conf = self._CONF self.conf(args=[]) self.conf.register_opt(cfg.DictOpt('enabled_backends')) self.config(enabled_backends=self.enabled_backends) store.register_store_opts(self.conf) self.config(default_backend='file1', group='glance_store') # Ensure stores + locations cleared location.SCHEME_TO_CLS_BACKEND_MAP = {} store.create_multi_stores(self.conf) self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP', dict()) self.test_dir = self.useFixture(fixtures.TempDir()).path self.addCleanup(self.conf.reset) self.store = filesystem.Store(self.conf, backend='file1') self.config(filesystem_store_datadir=self.test_dir, filesystem_store_chunk_size=10, group="file1") self.store.configure() self.register_store_backend_schemes(self.store, 'file', 'file1') def _create_metadata_json_file(self, metadata): expected_image_id = str(uuid.uuid4()) jsonfilename = os.path.join(self.test_dir, "storage_metadata.%s" % expected_image_id) self.config(filesystem_store_metadata_file=jsonfilename, group="file1") with open(jsonfilename, 'w') as fptr: json.dump(metadata, fptr) def _store_image(self, in_metadata): expected_image_id = str(uuid.uuid4()) expected_file_size = 10 expected_file_contents = b"*" * expected_file_size image_file = io.BytesIO(expected_file_contents) self.store.FILESYSTEM_STORE_METADATA = in_metadata return self.store.add(expected_image_id, image_file, expected_file_size) def test_location_url_prefix_is_set(self): expected_url_prefix = "file://%s" % self.test_dir self.assertEqual(expected_url_prefix, self.store.url_prefix) def test_get(self): """Test a "normal" retrieval of an image in chunks.""" # First add an image... image_id = str(uuid.uuid4()) file_contents = b"chunk00000remainder" image_file = io.BytesIO(file_contents) loc, size, checksum, metadata = self.store.add( image_id, image_file, len(file_contents)) # Check metadata contains 'file1' as a store self.assertEqual("file1", metadata['store']) # Now read it back... 
uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri_and_backend(uri, 'file1', conf=self.conf) (image_file, image_size) = self.store.get(loc) expected_data = b"chunk00000remainder" expected_num_chunks = 2 data = b"" num_chunks = 0 for chunk in image_file: num_chunks += 1 data += chunk self.assertEqual(expected_data, data) self.assertEqual(expected_num_chunks, num_chunks) def test_get_random_access(self): """Test a "normal" retrieval of an image in chunks.""" # First add an image... image_id = str(uuid.uuid4()) file_contents = b"chunk00000remainder" image_file = io.BytesIO(file_contents) loc, size, checksum, metadata = self.store.add(image_id, image_file, len(file_contents)) # Check metadata contains 'file1' as a store self.assertEqual("file1", metadata['store']) # Now read it back... uri = "file:///%s/%s" % (self.test_dir, image_id) loc = location.get_location_from_uri_and_backend(uri, 'file1', conf=self.conf) data = b"" for offset in range(len(file_contents)): (image_file, image_size) = self.store.get(loc, offset=offset, chunk_size=1) for chunk in image_file: data += chunk self.assertEqual(file_contents, data) data = b"" chunk_size = 5 (image_file, image_size) = self.store.get(loc, offset=chunk_size, chunk_size=chunk_size) for chunk in image_file: data += chunk self.assertEqual(b'00000', data) self.assertEqual(chunk_size, image_size) def test_get_non_existing(self): """Test trying to retrieve a file that doesn't exist raises error.""" loc = location.get_location_from_uri_and_backend( "file:///%s/non-existing" % self.test_dir, 'file1', conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_get_non_existing_identifier(self): """Test trying to retrieve a store that doesn't exist raises error.""" self.assertRaises(exceptions.UnknownScheme, location.get_location_from_uri_and_backend, "file:///%s/non-existing" % self.test_dir, 'file3', conf=self.conf) def test_add(self): """Test that we can add an image via the filesystem backend.""" filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_location = "file://%s/%s" % (self.test_dir, expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual("file1", metadata['store']) uri = "file:///%s/%s" % (self.test_dir, expected_image_id) loc = location.get_location_from_uri_and_backend( uri, 'file1', conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_to_different_backned(self): """Test that we can add an image via the filesystem backend.""" self.store = filesystem.Store(self.conf, backend='file2') self.config(filesystem_store_datadir=self.test_dir, group="file2") self.store.configure() self.register_store_backend_schemes(self.store, 'file', 'file2') filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * 
units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_location = "file://%s/%s" % (self.test_dir, expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual("file2", metadata['store']) uri = "file:///%s/%s" % (self.test_dir, expected_image_id) loc = location.get_location_from_uri_and_backend( uri, 'file2', conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_check_metadata_with_invalid_mountpoint_location(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}] location, size, checksum, metadata = self._store_image(in_metadata) self.assertEqual({'store': 'file1'}, metadata) def test_add_check_metadata_list_with_invalid_mountpoint_locations(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] location, size, checksum, metadata = self._store_image(in_metadata) self.assertEqual({'store': 'file1'}, metadata) def test_add_check_metadata_list_with_valid_mountpoint_locations(self): in_metadata = [{'id': 'abcdefg', 'mountpoint': '/tmp'}, {'id': 'xyz1234', 'mountpoint': '/xyz'}] location, size, checksum, metadata = self._store_image(in_metadata) self.assertEqual(in_metadata[0], metadata) self.assertEqual("file1", metadata["store"]) def test_add_check_metadata_bad_nosuch_file(self): expected_image_id = str(uuid.uuid4()) jsonfilename = os.path.join(self.test_dir, "storage_metadata.%s" % expected_image_id) self.config(filesystem_store_metadata_file=jsonfilename, group="file1") expected_file_size = 10 expected_file_contents = b"*" * expected_file_size image_file = io.BytesIO(expected_file_contents) location, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual({'store': 'file1'}, metadata) def test_add_already_existing(self): """ Tests that adding an image with an existing identifier raises an appropriate exception """ filesystem.ChunkedFile.CHUNKSIZE = units.Ki image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = io.BytesIO(file_contents) location, size, checksum, metadata = self.store.add(image_id, image_file, file_size) self.assertEqual("file1", metadata["store"]) image_file = io.BytesIO(b"nevergonnamakeit") self.assertRaises(exceptions.Duplicate, self.store.add, image_id, image_file, 0) def _do_test_add_write_failure(self, errno, exception): filesystem.ChunkedFile.CHUNKSIZE = units.Ki image_id = str(uuid.uuid4()) file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size path = os.path.join(self.test_dir, image_id) image_file = io.BytesIO(file_contents) with mock.patch.object(builtins, 'open') as popen: e = IOError() e.errno = errno popen.side_effect = e self.assertRaises(exception, self.store.add, image_id, image_file, 0) self.assertFalse(os.path.exists(path)) def test_add_storage_full(self): """Tests adding an image without enough space. 
        Tests that adding an image without enough space on disk
        raises an appropriate exception.
        """
        self._do_test_add_write_failure(errno.ENOSPC, exceptions.StorageFull)

    def test_add_file_too_big(self):
        """Tests adding a very large image.

        Tests that adding an excessively large image file
        raises an appropriate exception.
        """
        self._do_test_add_write_failure(errno.EFBIG, exceptions.StorageFull)

    def test_add_storage_write_denied(self):
        """Tests adding an image without store permissions.

        Tests that adding an image with insufficient filestore permissions
        raises an appropriate exception.
        """
        self._do_test_add_write_failure(errno.EACCES,
                                        exceptions.StorageWriteDenied)

    def test_add_other_failure(self):
        """Tests other IOErrors do not raise a StorageFull exception."""
        self._do_test_add_write_failure(errno.ENOTDIR, IOError)

    def test_add_cleanup_on_read_failure(self):
        """Tests partial image is cleaned up after a read failure."""
        filesystem.ChunkedFile.CHUNKSIZE = units.Ki
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        path = os.path.join(self.test_dir, image_id)
        image_file = io.BytesIO(file_contents)

        def fake_Error(size):
            raise AttributeError()

        with mock.patch.object(image_file, 'read') as mock_read:
            mock_read.side_effect = fake_Error

            self.assertRaises(AttributeError,
                              self.store.add,
                              image_id, image_file, 0)
            self.assertFalse(os.path.exists(path))

    def test_delete(self):
        """Test we can delete an existing image in the filesystem store."""
        # First add an image
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        image_file = io.BytesIO(file_contents)

        loc, size, checksum, metadata = self.store.add(image_id,
                                                       image_file,
                                                       file_size)
        self.assertEqual("file1", metadata["store"])

        # Now check that we can delete it
        uri = "file:///%s/%s" % (self.test_dir, image_id)
        loc = location.get_location_from_uri_and_backend(uri, "file1",
                                                         conf=self.conf)
        self.store.delete(loc)

        self.assertRaises(exceptions.NotFound, self.store.get, loc)

    def test_delete_non_existing(self):
        """Test deleting file that doesn't exist raises an error."""
        loc = location.get_location_from_uri_and_backend(
            "file:///tmp/glance-tests/non-existing", "file1", conf=self.conf)
        self.assertRaises(exceptions.NotFound,
                          self.store.delete,
                          loc)

    def test_delete_forbidden(self):
        """Tests deleting file without permissions raises the correct
        error."""
        # First add an image
        image_id = str(uuid.uuid4())
        file_size = 5 * units.Ki  # 5K
        file_contents = b"*" * file_size
        image_file = io.BytesIO(file_contents)

        loc, size, checksum, metadata = self.store.add(image_id,
                                                       image_file,
                                                       file_size)
        self.assertEqual("file1", metadata["store"])

        uri = "file:///%s/%s" % (self.test_dir, image_id)
        loc = location.get_location_from_uri_and_backend(uri, "file1",
                                                         conf=self.conf)

        # Mock unlink to raise an OSError for lack of permissions
        # and make sure we can't delete the image
        with mock.patch.object(os, 'unlink') as unlink:
            e = OSError()
            # The store maps EACCES/EPERM to Forbidden; assign a real
            # errno code rather than the errno module itself.
            e.errno = errno.EACCES
            unlink.side_effect = e

            self.assertRaises(exceptions.Forbidden, self.store.delete, loc)

            # Make sure the image didn't get deleted
            loc = location.get_location_from_uri_and_backend(uri, "file1",
                                                             conf=self.conf)
            self.store.get(loc)

    def test_configure_add_with_multi_datadirs(self):
        """Test multiple filesystems are parsed correctly."""
        store_map = [self.useFixture(fixtures.TempDir()).path,
                     self.useFixture(fixtures.TempDir()).path]
        self.conf.set_override('filesystem_store_datadir',
                               override=None,
                               group='file1')
        self.conf.set_override('filesystem_store_datadirs',
                               [store_map[0] +
":100", store_map[1] + ":200"], group='file1') self.store.configure_add() expected_priority_map = {100: [store_map[0]], 200: [store_map[1]]} expected_priority_list = [200, 100] self.assertEqual(expected_priority_map, self.store.priority_data_map) self.assertEqual(expected_priority_list, self.store.priority_list) def test_configure_add_with_metadata_file_success(self): metadata = {'id': 'asdf1234', 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_list_of_dicts_success(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 'xyz1234', 'mountpoint': '/tmp/'}] self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual(metadata, self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_success_list_val_for_some_key(self): metadata = {'akey': ['value1', 'value2'], 'id': 'asdf1234', 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.store.configure_add() self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA) def test_configure_add_check_metadata_bad_data(self): metadata = {'akey': 10, 'id': 'asdf1234', 'mountpoint': '/tmp'} # only unicode is allowed self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_with_no_id_or_mountpoint(self): metadata = {'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = {'id': 'asdfg1234'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_id_or_mountpoint_is_not_string(self): metadata = {'id': 10, 'mountpoint': '/tmp'} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = {'id': 'asdf1234', 'mountpoint': 12345} self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_check_metadata_list_with_no_id_or_mountpoint(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = [{'id': 'abcdefg'}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_add_check_metadata_list_id_or_mountpoint_is_not_string(self): metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'}, {'id': 1234, 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) metadata = [{'id': 'abcdefg', 'mountpoint': 1234}, {'id': 'xyz1234', 'mountpoint': '/pqr/images'}] self._create_metadata_json_file(metadata) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_same_dir_multiple_times(self): """Tests handling of same dir in config multiple times. Tests BadStoreConfiguration exception is raised if same directory is specified multiple times in filesystem_store_datadirs with different priorities. 
""" store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.clear_override('filesystem_store_datadir', group='file1') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200", store_map[0] + ":300"], group='file1') self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_configure_add_same_dir_multiple_times_same_priority(self): """Tests handling of same dir in config multiple times. Tests BadStoreConfiguration exception is raised if same directory is specified multiple times in filesystem_store_datadirs with the same priority. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='file1') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200", store_map[0] + ":100"], group='file1') try: self.store.configure() except exceptions.BadStoreConfiguration: self.fail("configure() raised BadStoreConfiguration unexpectedly!") # Test that we can add an image via the filesystem backend filesystem.ChunkedFile.CHUNKSIZE = 1024 expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_location = "file://%s/%s" % (store_map[1], expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual("file1", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) loc = location.get_location_from_uri_and_backend( expected_location, "file1", conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_with_multiple_dirs(self): """Test adding multiple filesystem directories.""" store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='file1') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='file1') self.store.configure() # Test that we can add an image via the filesystem backend filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_location = "file://%s/%s" % (store_map[1], expected_image_id) image_file = io.BytesIO(expected_file_contents) loc, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual("file1", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) loc = location.get_location_from_uri_and_backend( expected_location, "file1", conf=self.conf) (new_image_file, new_image_size) = self.store.get(loc) new_image_contents = b"" new_image_file_size = 0 for chunk in 
new_image_file: new_image_file_size += len(chunk) new_image_contents += chunk self.assertEqual(expected_file_contents, new_image_contents) self.assertEqual(expected_file_size, new_image_file_size) def test_add_with_multiple_dirs_storage_full(self): """Tests adding dirs with storage full. Test StorageFull exception is raised if no filesystem directory is found that can store an image. """ store_map = [self.useFixture(fixtures.TempDir()).path, self.useFixture(fixtures.TempDir()).path] self.conf.set_override('filesystem_store_datadir', override=None, group='file1') self.conf.set_override('filesystem_store_datadirs', [store_map[0] + ":100", store_map[1] + ":200"], group='file1') self.store.configure_add() with mock.patch.object(self.store, '_get_capacity_info') as capacity: capacity.return_value = 0 filesystem.ChunkedFile.CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size image_file = io.BytesIO(expected_file_contents) self.assertRaises(exceptions.StorageFull, self.store.add, expected_image_id, image_file, expected_file_size) def test_configure_add_with_file_perm(self): """Tests adding with permissions. Tests that file permissions specified by filesystem_store_file_perm are parsed correctly. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='file1') self.conf.set_override('filesystem_store_file_perm', 700, # -rwx------ group='file1') self.store.configure_add() self.assertEqual(self.store.datadir, store) def test_configure_add_with_inaccessible_file_perm(self): """Tests adding with inaccessible file permissions. Tests BadStoreConfiguration exception is raised if an invalid file permission is specified in filesystem_store_file_perm. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='file1') self.conf.set_override('filesystem_store_file_perm', 7, # -------rwx group='file1') self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure_add) def test_add_with_file_perm_for_group_other_users_access(self): """Tests adding image with file permissions. Test that we can add an image via the filesystem backend with a required image file permission.
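With filesystem_store_file_perm set to 744, the store directory is expected to end up with the matching execute bits (0o711) so that group and other users can traverse it and read the image file.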
""" store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='file1') self.conf.set_override('filesystem_store_file_perm', 744, # -rwxr--r-- group='file1') # -rwx------ os.chmod(store, 0o700) self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) self.store.configure_add() filesystem.Store.WRITE_CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_location = "file://%s/%s" % (store, expected_image_id) image_file = io.BytesIO(expected_file_contents) location, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual("file1", metadata["store"]) self.assertEqual(expected_location, location) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) # -rwx--x--x for store directory self.assertEqual(0o711, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) # -rwxr--r-- for image file mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE] perm = int(str(getattr(self.conf, "file1").filesystem_store_file_perm), 8) self.assertEqual(perm, stat.S_IMODE(mode)) def test_add_with_file_perm_for_owner_users_access(self): """Tests adding image with file permissions. Test that we can add an image via the filesystem backend with a required image file permission. """ store = self.useFixture(fixtures.TempDir()).path self.conf.set_override('filesystem_store_datadir', store, group='file1') self.conf.set_override('filesystem_store_file_perm', 600, # -rw------- group='file1') # -rwx------ os.chmod(store, 0o700) self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) self.store.configure_add() filesystem.Store.WRITE_CHUNKSIZE = units.Ki expected_image_id = str(uuid.uuid4()) expected_file_size = 5 * units.Ki # 5K expected_file_contents = b"*" * expected_file_size expected_checksum = md5(expected_file_contents, usedforsecurity=False).hexdigest() expected_location = "file://%s/%s" % (store, expected_image_id) image_file = io.BytesIO(expected_file_contents) location, size, checksum, metadata = self.store.add(expected_image_id, image_file, expected_file_size) self.assertEqual("file1", metadata["store"]) self.assertEqual(expected_location, location) self.assertEqual(expected_file_size, size) self.assertEqual(expected_checksum, checksum) # -rwx------ for store directory self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE])) # -rw------- for image file mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE] perm = int(str(getattr(self.conf, "file1").filesystem_store_file_perm), 8) self.assertEqual(perm, stat.S_IMODE(mode)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/unit/test_multistore_rbd.py0000664000175000017500000004210600000000000025454 0ustar00zuulzuul00000000000000# Copyright 2018 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import io from unittest import mock from oslo_config import cfg from oslo_utils import units import glance_store as store from glance_store._drivers import rbd as rbd_store from glance_store import exceptions from glance_store import location as g_location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities class TestException(Exception): pass class MockRados(object): class Error(Exception): pass class ObjectNotFound(Exception): pass class ioctx(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): return False def close(self, *args, **kwargs): pass class Rados(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): return False def connect(self, *args, **kwargs): pass def open_ioctx(self, *args, **kwargs): return MockRados.ioctx() def shutdown(self, *args, **kwargs): pass def conf_get(self, *args, **kwargs): pass class MockRBD(object): class ImageExists(Exception): pass class ImageHasSnapshots(Exception): pass class ImageBusy(Exception): pass class ImageNotFound(Exception): pass class InvalidArgument(Exception): pass class NoSpace(Exception): pass class Image(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): pass def create_snap(self, *args, **kwargs): pass def remove_snap(self, *args, **kwargs): pass def set_snap(self, *args, **kwargs): pass def list_children(self, *args, **kwargs): pass def protect_snap(self, *args, **kwargs): pass def unprotect_snap(self, *args, **kwargs): pass def read(self, *args, **kwargs): raise NotImplementedError() def write(self, *args, **kwargs): raise NotImplementedError() def resize(self, *args, **kwargs): raise NotImplementedError() def discard(self, offset, length): raise NotImplementedError() def close(self): pass def list_snaps(self): raise NotImplementedError() def parent_info(self): raise NotImplementedError() def size(self): raise NotImplementedError() class RBD(object): def __init__(self, *args, **kwargs): pass def __enter__(self, *args, **kwargs): return self def __exit__(self, *args, **kwargs): return False def create(self, *args, **kwargs): pass def remove(self, *args, **kwargs): pass def list(self, *args, **kwargs): raise NotImplementedError() def clone(self, *args, **kwargs): raise NotImplementedError() def trash_move(self, *args, **kwargs): pass RBD_FEATURE_LAYERING = 1 class TestMultiStore(base.MultiStoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): # NOTE(flaper87): temporary until we # can move to a fully-local lib. 
# (Swift store's fault) _CONF = cfg.ConfigOpts() def setUp(self): """Establish a clean test environment.""" super(TestMultiStore, self).setUp() enabled_backends = { "ceph1": "rbd", "ceph2": "rbd" } self.conf = self._CONF self.conf(args=[]) self.conf.register_opt(cfg.DictOpt('enabled_backends')) self.config(enabled_backends=enabled_backends) store.register_store_opts(self.conf) self.config(default_backend='ceph1', group='glance_store') # Ensure stores + locations cleared g_location.SCHEME_TO_CLS_BACKEND_MAP = {} with mock.patch.object(rbd_store.Store, '_set_url_prefix'): store.create_multi_stores(self.conf) self.addCleanup(setattr, g_location, 'SCHEME_TO_CLS_BACKEND_MAP', dict()) self.addCleanup(self.conf.reset) rbd_store.rados = MockRados rbd_store.rbd = MockRBD self.store = rbd_store.Store(self.conf, backend="ceph1") self.store.configure() self.store.chunk_size = 2 self.called_commands_actual = [] self.called_commands_expected = [] self.store_specs = {'pool': 'fake_pool', 'image': 'fake_image', 'snapshot': 'fake_snapshot'} self.location = rbd_store.StoreLocation(self.store_specs, self.conf) # Provide enough data to get more than one chunk iteration. self.data_len = 3 * units.Ki self.data_iter = io.BytesIO(b'*' * self.data_len) def test_location_url_prefix_is_set(self): expected_url_prefix = "rbd://" self.assertEqual(expected_url_prefix, self.store.url_prefix) def test_add_w_image_size_zero(self): """Assert that correct size is returned even though 0 was provided.""" self.store.chunk_size = units.Ki with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize: with mock.patch.object(rbd_store.rbd.Image, 'write') as write: ret = self.store.add('fake_image_id', self.data_iter, 0) self.assertTrue(resize.called) self.assertTrue(write.called) self.assertEqual(ret[1], self.data_len) self.assertEqual("ceph1", ret[3]['store']) def test_add_w_image_size_zero_to_different_backend(self): """Assert that correct size is returned even though 0 was provided.""" self.store = rbd_store.Store(self.conf, backend="ceph2") self.store.configure() self.called_commands_actual = [] self.called_commands_expected = [] self.store_specs = {'pool': 'fake_pool_1', 'image': 'fake_image_1', 'snapshot': 'fake_snapshot_1'} self.location = rbd_store.StoreLocation(self.store_specs, self.conf) # Provide enough data to get more than one chunk iteration. 
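# (chunk_size is set to units.Ki just below, so 3 KiB of data spans multiple chunks.)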
self.data_len = 3 * units.Ki self.data_iter = io.BytesIO(b'*' * self.data_len) self.store.chunk_size = units.Ki with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize: with mock.patch.object(rbd_store.rbd.Image, 'write') as write: ret = self.store.add('fake_image_id', self.data_iter, 0) self.assertTrue(resize.called) self.assertTrue(write.called) self.assertEqual(ret[1], self.data_len) self.assertEqual("ceph2", ret[3]['store']) @mock.patch.object(MockRBD.Image, '__enter__') @mock.patch.object(rbd_store.Store, '_create_image') @mock.patch.object(rbd_store.Store, '_delete_image') def test_add_w_rbd_image_exception(self, delete, create, enter): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') return self.location def _fake_delete_image(target_pool, image_name, snapshot_name=None): self.assertEqual(self.location.pool, target_pool) self.assertEqual(self.location.image, image_name) self.assertEqual(self.location.snapshot, snapshot_name) self.called_commands_actual.append('delete') def _fake_enter(*args, **kwargs): raise exceptions.NotFound(image="fake_image_id") create.side_effect = _fake_create_image delete.side_effect = _fake_delete_image enter.side_effect = _fake_enter self.assertRaises(exceptions.NotFound, self.store.add, 'fake_image_id', self.data_iter, self.data_len) self.called_commands_expected = ['create', 'delete'] def test_add_duplicate_image(self): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') raise MockRBD.ImageExists() with mock.patch.object(self.store, '_create_image') as create_image: create_image.side_effect = _fake_create_image self.assertRaises(exceptions.Duplicate, self.store.add, 'fake_image_id', self.data_iter, self.data_len) self.called_commands_expected = ['create'] def test_delete(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') with mock.patch.object(MockRBD.RBD, 'remove') as remove_image: remove_image.side_effect = _fake_remove self.store.delete(g_location.Location('test_rbd_store', rbd_store.StoreLocation, self.conf, uri=self.location.get_uri())) self.called_commands_expected = ['remove'] def test_delete_image(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') with mock.patch.object(MockRBD.RBD, 'remove') as remove_image: remove_image.side_effect = _fake_remove self.store._delete_image('fake_pool', self.location.image) self.called_commands_expected = ['remove'] def test_delete_image_exc_image_not_found(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageNotFound() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.assertRaises(exceptions.NotFound, self.store._delete_image, 'fake_pool', self.location.image) self.called_commands_expected = ['remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_snap(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.store._delete_image('fake_pool', self.location.image, 
snapshot_name='snap') self.called_commands_expected = ['unprotect_snap', 'remove_snap', 'remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_unprotected_snap(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise MockRBD.InvalidArgument() def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap', 'remove_snap', 'remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_snap_with_error(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise TestException() def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.assertRaises(TestException, self.store._delete_image, 'fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap'] def test_delete_image_w_snap_exc_image_busy(self): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise MockRBD.ImageBusy() with mock.patch.object(MockRBD.Image, 'unprotect_snap') as mocked: mocked.side_effect = _fake_unprotect_snap self.assertRaises(exceptions.InUseByStore, self.store._delete_image, 'fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap'] def test_delete_image_snap_has_external_references(self): with mock.patch.object(MockRBD.Image, 'list_children') as mocked: mocked.return_value = True self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') def test_delete_image_w_snap_exc_image_has_snap(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageHasSnapshots() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.store._delete_image('fake_pool', self.location.image) self.called_commands_expected = ['remove'] def test_get_partial_image(self): loc = g_location.Location('test_rbd_store', rbd_store.StoreLocation, self.conf, store_specs=self.store_specs) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) @mock.patch.object(MockRados.Rados, 'connect', side_effect=MockRados.Error) def test_rados_connect_error(self, _): rbd_store.rados.Error = MockRados.Error rbd_store.rados.ObjectNotFound = MockRados.ObjectNotFound def test(): with self.store.get_connection('conffile', 'rados_id'): pass self.assertRaises(exceptions.BadStoreConfiguration, test) def test_create_image_conf_features(self): # Tests that we use non-0 features from ceph.conf and cast to int. 
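# conn.conf_get() returns the string '3' here; _create_image() is expected to cast it to int and pass features=3 through to rbd.RBD().create(), as the assertion on create_mock below checks.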
fsid = 'fake' features = '3' conf_get_mock = mock.Mock(return_value=features) conn = mock.Mock(conf_get=conf_get_mock) ioctxt = mock.sentinel.ioctxt name = '1' size = 1024 order = 3 with mock.patch.object(rbd_store.rbd.RBD, 'create') as create_mock: location = self.store._create_image( fsid, conn, ioctxt, name, size, order) self.assertEqual(fsid, location.specs['fsid']) self.assertEqual(rbd_store.DEFAULT_POOL, location.specs['pool']) self.assertEqual(name, location.specs['image']) self.assertEqual(rbd_store.DEFAULT_SNAPNAME, location.specs['snapshot']) create_mock.assert_called_once_with(ioctxt, name, size, order, old_format=False, features=3) def tearDown(self): self.assertEqual(self.called_commands_expected, self.called_commands_actual) super(TestMultiStore, self).tearDown() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/unit/test_multistore_s3.py0000664000175000017500000006022500000000000025234 0ustar00zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the Multiple S3 backend store""" import hashlib import io from unittest import mock import uuid import boto3 import botocore from botocore import exceptions as boto_exceptions from botocore import stub from oslo_config import cfg from oslo_utils.secretutils import md5 from oslo_utils import units import glance_store as store from glance_store._drivers import s3 from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities FAKE_UUID = str(uuid.uuid4()) FIVE_KB = 5 * units.Ki S3_CONF = { 's3_store_access_key': 'user', 's3_store_secret_key': 'key', 's3_store_host': 'https://s3-region1.com', 's3_store_bucket': 'glance', 's3_store_large_object_size': 9, # over 9MB is large 's3_store_large_object_chunk_size': 6, # part size is 6MB } def format_s3_location(user, key, authurl, bucket, obj): """Helper method that returns a S3 store URI given the component pieces.""" scheme = 's3' if authurl.startswith('https://'): scheme = 's3+https' authurl = authurl[8:] elif authurl.startswith('http://'): authurl = authurl[7:] authurl = authurl.strip('/') return "%s://%s:%s@%s/%s/%s" % (scheme, user, key, authurl, bucket, obj) class TestMultiS3Store(base.MultiStoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): # NOTE(flaper87): temporary until we # can move to a fully-local lib. 
# (Swift store's fault) _CONF = cfg.ConfigOpts() def setUp(self): """Establish a clean test environment.""" super(TestMultiS3Store, self).setUp() enabled_backends = { "s3_region1": "s3", "s3_region2": "s3" } self.hash_algo = 'sha256' self.conf = self._CONF self.conf(args=[]) self.conf.register_opt(cfg.DictOpt('enabled_backends')) self.config(enabled_backends=enabled_backends) store.register_store_opts(self.conf) self.config(default_backend='s3_region1', group='glance_store') # set s3 related config options self.config(group='s3_region1', s3_store_access_key='user', s3_store_secret_key='key', s3_store_host='https://s3-region1.com', s3_store_region_name='custom_region_name', s3_store_bucket='glance', s3_store_large_object_size=S3_CONF[ 's3_store_large_object_size' ], s3_store_large_object_chunk_size=6) self.config(group='s3_region2', s3_store_access_key='user', s3_store_secret_key='key', s3_store_host='http://s3-region2.com', s3_store_bucket='glance', s3_store_large_object_size=S3_CONF[ 's3_store_large_object_size' ], s3_store_large_object_chunk_size=6) # Ensure stores + locations cleared location.SCHEME_TO_CLS_BACKEND_MAP = {} store.create_multi_stores(self.conf) self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP', dict()) self.addCleanup(self.conf.reset) self.store = s3.Store(self.conf, backend="s3_region1") self.store.configure() self.register_store_backend_schemes(self.store, 's3', 's3_region1') def test_location_url_prefix_is_set(self): expected_url_prefix = "s3+https://user:key@s3-region1.com/glance" self.assertEqual(expected_url_prefix, self.store.url_prefix) def test_get_invalid_bucket_name(self): self.config(s3_store_bucket_url_format='virtual', group='s3_region1') invalid_buckets = ['not.dns.compliant', 'aa', 'bucket-'] for bucket in invalid_buckets: loc = location.get_location_from_uri_and_backend( "s3+https://user:key@auth_address/%s/key" % bucket, 's3_region1', conf=self.conf) self.assertRaises(boto_exceptions.InvalidDNSNameError, self.store.get, loc) @mock.patch('glance_store.location.Location') @mock.patch.object(boto3.session.Session, "client") def test_client_custom_region_name(self, mock_client, mock_loc): """Test a custom s3_store_region_name in config""" mock_loc.accesskey = 'abcd' mock_loc.secretkey = 'efgh' mock_loc.bucket = 'bucket1' self.store._create_s3_client(mock_loc) mock_client.assert_called_with( config=mock.ANY, endpoint_url='https://s3-region1.com', region_name='custom_region_name', service_name='s3', use_ssl=False, ) @mock.patch.object(boto3.session.Session, "client") def test_get(self, mock_client): """Test a "normal" retrieval of an image in chunks.""" bucket, key = 'glance', FAKE_UUID fixture_object = { 'Body': io.BytesIO(b"*" * FIVE_KB), 'ContentLength': FIVE_KB } fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_object', service_response={}, expected_params={ 'Bucket': bucket, 'Key': key }) stubber.add_response(method='get_object', service_response=fixture_object, expected_params={ 'Bucket': bucket, 'Key': key }) mock_client.return_value = fake_s3_client loc = location.get_location_from_uri_and_backend( "s3+https://user:key@auth_address/%s/%s" % (bucket, key), 's3_region1', conf=self.conf) (image_s3, image_size) = self.store.get(loc) self.assertEqual(FIVE_KB, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_s3: data += chunk self.assertEqual(expected_data, data) def test_partial_get(self): loc = 
location.get_location_from_uri_and_backend( "s3+https://user:key@auth_address/glance/%s" % FAKE_UUID, 's3_region1', conf=self.conf) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) @mock.patch.object(boto3.session.Session, "client") def test_get_non_existing(self, mock_client): """Test that trying to retrieve a s3 that doesn't exist raises an error """ bucket, key = 'glance', 'no_exist' fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_client_error(method='head_object', service_error_code='404', service_message=''' The specified key does not exist. ''', expected_params={ 'Bucket': bucket, 'Key': key }) mock_client.return_value = fake_s3_client uri = "s3+https://user:key@auth_address/%s/%s" % (bucket, key) loc = location.get_location_from_uri_and_backend(uri, 's3_region1', conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch.object(boto3.session.Session, "client") def test_add_singlepart(self, mock_client): """Test that we can add an image via the s3 backend.""" expected_image_id = str(uuid.uuid4()) # 5KiB is smaller than WRITE_CHUNKSIZE expected_s3_size = FIVE_KB expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], S3_CONF['s3_store_host'], S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='put_object', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id, 'Body': botocore.stub.ANY }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, metadata = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual("s3_region1", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_singlepart_bigger_than_write_chunk(self, mock_client): """Test that we can add an image via the s3 backend.""" expected_image_id = str(uuid.uuid4()) # 8 MiB is bigger than WRITE_CHUNKSIZE(=5MiB), # but smaller than s3_store_large_object_size expected_s3_size = 8 * units.Mi expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], S3_CONF['s3_store_host'], S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', 
service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='put_object', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id, 'Body': botocore.stub.ANY }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, metadata = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual("s3_region1", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_different_backend(self, mock_client): self.store = s3.Store(self.conf, backend="s3_region2") self.store.configure() self.register_store_backend_schemes(self.store, 's3', 's3_region2') expected_image_id = str(uuid.uuid4()) expected_s3_size = FIVE_KB expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], 'http://s3-region2.com', S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='put_object', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id, 'Body': botocore.stub.ANY }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, metadata = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual("s3_region2", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_with_verifier(self, mock_client): """Assert 'verifier.update' is called when verifier is provided""" expected_image_id = str(uuid.uuid4()) expected_s3_size = FIVE_KB expected_s3_contents = b"*" * expected_s3_size image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') verifier = mock.MagicMock(name='mock_verifier') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}) stubber.add_client_error(method='head_object', service_error_code='404', service_message='') stubber.add_response(method='put_object', service_response={}) mock_client.return_value = fake_s3_client self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo, verifier=verifier) verifier.update.assert_called_with(expected_s3_contents) @mock.patch.object(boto3.session.Session, "client") def test_add_multipart(self, mock_client): """Test that we can add an 
image via the s3 backend.""" expected_image_id = str(uuid.uuid4()) expected_s3_size = 16 * units.Mi expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], S3_CONF['s3_store_host'], S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') num_parts = 3 # image size = 16MB and chunk size is 6MB with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='create_multipart_upload', service_response={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, "UploadId": 'UploadId' }, expected_params={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, }) parts = [] remaining_image_size = expected_s3_size chunk_size = S3_CONF['s3_store_large_object_chunk_size'] * units.Mi for i in range(num_parts): part_number = i + 1 stubber.add_response(method='upload_part', service_response={ 'ETag': 'ETag' }, expected_params={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, "Body": botocore.stub.ANY, 'ContentLength': chunk_size, "PartNumber": part_number, "UploadId": 'UploadId' }) parts.append({'ETag': 'ETag', 'PartNumber': part_number}) remaining_image_size -= chunk_size if remaining_image_size < chunk_size: chunk_size = remaining_image_size stubber.add_response(method='complete_multipart_upload', service_response={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, 'ETag': 'ETag' }, expected_params={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, "MultipartUpload": { "Parts": parts }, "UploadId": 'UploadId' }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, metadata = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual("s3_region1", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_already_existing(self, mock_client): """Tests that adding an image with an existing identifier raises an appropriate exception """ image_s3 = io.BytesIO(b"never_gonna_make_it") fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}) stubber.add_response(method='head_object', service_response={}) mock_client.return_value = fake_s3_client self.assertRaises(exceptions.Duplicate, self.store.add, FAKE_UUID, image_s3, 0, self.hash_algo) @mock.patch.object(boto3.session.Session, "client") def test_delete_non_existing(self, mock_client): """Test that trying to delete a s3 that doesn't exist raises an error """ bucket, key = 'glance', 'no_exist' fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_client_error(method='head_object', service_error_code='404', 
service_message=''' The specified key does not exist. ''', expected_params={ 'Bucket': bucket, 'Key': key }) fake_s3_client.head_bucket = mock.MagicMock() mock_client.return_value = fake_s3_client uri = "s3+https://user:key@auth_address/%s/%s" % (bucket, key) loc = location.get_location_from_uri_and_backend(uri, 's3_region1', conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.delete, loc) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/unit/test_multistore_vmware.py0000664000175000017500000007226300000000000026215 0ustar00zuulzuul00000000000000# Copyright 2018 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the Multiple VMware Datastore backend store""" import hashlib import io from unittest import mock import uuid from oslo_config import cfg from oslo_utils import secretutils from oslo_utils import units from oslo_vmware import api from oslo_vmware import exceptions as vmware_exceptions from oslo_vmware.objects import datacenter as oslo_datacenter from oslo_vmware.objects import datastore as oslo_datastore import glance_store as store import glance_store._drivers.vmware_datastore as vm_store from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities from glance_store.tests import utils FAKE_UUID = str(uuid.uuid4()) FIVE_KB = 5 * units.Ki VMWARE_DS = { 'debug': True, 'vmware_server_host': '127.0.0.1', 'vmware_server_username': 'username', 'vmware_server_password': 'password', 'vmware_store_image_dir': '/openstack_glance', 'vmware_insecure': 'True', 'vmware_datastores': ['a:b:0'], } def format_location(host_ip, folder_name, image_id, datastores): """ Helper method that returns a VMware Datastore store URI given the component pieces. """ scheme = 'vsphere' (datacenter_path, datastore_name, weight) = datastores[0].split(':') return ("%s://%s/folder%s/%s?dcPath=%s&dsName=%s" % (scheme, host_ip, folder_name, image_id, datacenter_path, datastore_name)) def fake_datastore_obj(*args, **kwargs): dc_obj = oslo_datacenter.Datacenter(ref='fake-ref', name='fake-name') dc_obj.path = args[0] return oslo_datastore.Datastore(ref='fake-ref', datacenter=dc_obj, name=args[1]) class TestMultiStore(base.MultiStoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): # NOTE(flaper87): temporary until we # can move to a fully-local lib. 
# (Swift store's fault) _CONF = cfg.ConfigOpts() @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch('oslo_vmware.api.VMwareAPISession') def setUp(self, mock_api_session, mock_get_datastore): """Establish a clean test environment.""" super(TestMultiStore, self).setUp() enabled_backends = { "vmware1": "vmware", "vmware2": "vmware" } self.hash_algo = 'sha256' self.conf = self._CONF self.conf(args=[]) self.conf.register_opt(cfg.DictOpt('enabled_backends')) self.config(enabled_backends=enabled_backends) store.register_store_opts(self.conf) self.config(default_backend='vmware1', group='glance_store') # set vmware related config options self.config(group='vmware1', vmware_server_username='admin', vmware_server_password='admin', vmware_server_host='127.0.0.1', vmware_insecure='True', vmware_datastores=['a:b:0'], vmware_store_image_dir='/openstack_glance') self.config(group='vmware2', vmware_server_username='admin', vmware_server_password='admin', vmware_server_host='127.0.0.1', vmware_insecure='True', vmware_datastores=['a:b:1'], vmware_store_image_dir='/openstack_glance_1') # Ensure stores + locations cleared location.SCHEME_TO_CLS_BACKEND_MAP = {} store.create_multi_stores(self.conf) self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP', dict()) self.addCleanup(self.conf.reset) vm_store.Store.CHUNKSIZE = 2 mock_get_datastore.side_effect = fake_datastore_obj self.store = vm_store.Store(self.conf, backend="vmware1") self.store.configure() def _mock_http_connection(self): return mock.patch('http.client.HTTPConnection') def test_location_url_prefix_is_set(self): expected_url_prefix = "vsphere://127.0.0.1/openstack_glance" self.assertEqual(expected_url_prefix, self.store.url_prefix) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get(self, mock_api_session): """Test a "normal" retrieval of an image in chunks.""" expected_image_size = 31 expected_returns = ['I am a teapot, short and stout\n'] loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() (image_file, image_size) = self.store.get(loc) self.assertEqual(expected_image_size, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_non_existing(self, mock_api_session): """ Test that trying to retrieve an image that doesn't exist raises an error """ loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glan" "ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch.object(vm_store.Store, '_build_vim_cookie_header') @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(vm_store._Reader, 'size') @mock.patch.object(api, 'VMwareAPISession') def test_add(self, fake_api_session, fake_size, fake_select_datastore, fake_cookie): """Test that we can add an image via the VMware backend.""" fake_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size hash_code = secretutils.md5(expected_contents, usedforsecurity=False) expected_checksum = hash_code.hexdigest() 
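# hashlib.md5 is patched further below to return this exact digest object, so the checksum reported by the store must equal expected_checksum.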
fake_size.__get__ = mock.Mock(return_value=expected_size) expected_cookie = 'vmware_soap_session=fake-uuid' fake_cookie.return_value = expected_cookie expected_headers = {'Content-Length': str(expected_size), 'Cookie': expected_cookie} with mock.patch('hashlib.md5') as md5: md5.return_value = hash_code expected_location = format_location( VMWARE_DS['vmware_server_host'], VMWARE_DS['vmware_store_image_dir'], expected_image_id, VMWARE_DS['vmware_datastores']) image = io.BytesIO(expected_contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, metadata = self.store.add( expected_image_id, image, expected_size) _, kwargs = HttpConn.call_args self.assertEqual(expected_headers, kwargs['headers']) self.assertEqual("vmware1", metadata["store"]) self.assertEqual(utils.sort_url_by_qs_keys(expected_location), utils.sort_url_by_qs_keys(location)) self.assertEqual(expected_size, size) self.assertEqual(expected_checksum, checksum) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(vm_store._Reader, 'size') @mock.patch('oslo_vmware.api.VMwareAPISession') def test_add_size_zero(self, mock_api_session, fake_size, fake_select_datastore): """ Test that when specifying size zero for the image to add, the actual size of the image is returned. """ fake_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size hash_code = secretutils.md5(expected_contents, usedforsecurity=False) expected_checksum = hash_code.hexdigest() fake_size.__get__ = mock.Mock(return_value=expected_size) with mock.patch('hashlib.md5') as md5: md5.return_value = hash_code expected_location = format_location( VMWARE_DS['vmware_server_host'], VMWARE_DS['vmware_store_image_dir'], expected_image_id, VMWARE_DS['vmware_datastores']) image = io.BytesIO(expected_contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, metadata = self.store.add( expected_image_id, image, 0) self.assertEqual("vmware1", metadata["store"]) self.assertEqual(utils.sort_url_by_qs_keys(expected_location), utils.sort_url_by_qs_keys(location)) self.assertEqual(expected_size, size) self.assertEqual(expected_checksum, checksum) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch('glance_store._drivers.vmware_datastore._Reader') def test_add_with_verifier(self, fake_reader, fake_select_datastore): """Test that the verifier is passed to the _Reader during add.""" verifier = mock.MagicMock(name='mock_verifier') image_id = str(uuid.uuid4()) size = FIVE_KB contents = b"*" * size image = io.BytesIO(contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, multihash, metadata = self.store.add( image_id, image, size, self.hash_algo, verifier=verifier) self.assertEqual("vmware1", metadata["store"]) fake_reader.assert_called_with(image, self.hash_algo, verifier) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch('glance_store._drivers.vmware_datastore._Reader') def test_add_with_verifier_size_zero(self, fake_reader, fake_select_ds): """Test that the verifier is passed to the _ChunkReader during add.""" verifier = mock.MagicMock(name='mock_verifier') image_id = str(uuid.uuid4()) size = FIVE_KB contents = b"*" * size image = io.BytesIO(contents) with mock.patch('requests.Session.request') as 
HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, multihash, metadata = self.store.add( image_id, image, 0, self.hash_algo, verifier=verifier) self.assertEqual("vmware1", metadata["store"]) fake_reader.assert_called_with(image, self.hash_algo, verifier) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_delete(self, mock_api_session): """Test we can delete an existing image in the VMware store.""" loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s?" "dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() vm_store.Store._service_content = mock.Mock() self.store.delete(loc) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_delete_non_existing(self, mock_api_session): """ Test that trying to delete an image that doesn't exist raises an error """ loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s?" "dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch.object(self.store.session, 'wait_for_task') as mock_task: mock_task.side_effect = vmware_exceptions.FileNotFoundException self.assertRaises(exceptions.NotFound, self.store.delete, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_size(self, mock_api_session): """ Test we can get the size of an existing image in the VMware store """ loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() image_size = self.store.get_size(loc) self.assertEqual(image_size, 31) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_size_non_existing(self, mock_api_session): """ Test that trying to retrieve an image size that doesn't exist raises an error """ loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glan" "ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get_size, loc) def test_reader_full(self): content = b'XXX' image = io.BytesIO(content) expected_checksum = secretutils.md5(content, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(content).hexdigest() reader = vm_store._Reader(image, self.hash_algo) ret = reader.read() self.assertEqual(content, ret) self.assertEqual(expected_checksum, reader.checksum.hexdigest()) self.assertEqual(expected_multihash, reader.os_hash_value.hexdigest()) self.assertEqual(len(content), reader.size) def test_reader_partial(self): content = b'XXX' image = io.BytesIO(content) expected_checksum = secretutils.md5(b'X', usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(b'X').hexdigest() reader = vm_store._Reader(image, self.hash_algo) ret = reader.read(1) self.assertEqual(b'X', ret) self.assertEqual(expected_checksum, reader.checksum.hexdigest()) self.assertEqual(expected_multihash, reader.os_hash_value.hexdigest()) self.assertEqual(1, reader.size) def test_reader_with_verifier(self): content = 
b'XXX' image = io.BytesIO(content) verifier = mock.MagicMock(name='mock_verifier') reader = vm_store._Reader(image, self.hash_algo, verifier) reader.read() verifier.update.assert_called_with(content) def test_sanity_check_multiple_datastores(self): self.config(group='vmware1', vmware_api_retry_count=1) self.config(group='vmware1', vmware_task_poll_interval=1) self.config(group='vmware1', vmware_datastores=['a:b:0', 'a:d:0']) try: self.store._sanity_check() except exceptions.BadStoreConfiguration: self.fail() def test_parse_datastore_info_and_weight_less_opts(self): datastore = 'a' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) def test_parse_datastore_info_and_weight_invalid_weight(self): datastore = 'a:b:c' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) def test_parse_datastore_info_and_weight_empty_opts(self): datastore = 'a: :0' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) datastore = ':b:0' self.assertRaises(exceptions.BadStoreConfiguration, self.store._parse_datastore_info_and_weight, datastore) def test_parse_datastore_info_and_weight(self): datastore = 'a:b:100' parts = self.store._parse_datastore_info_and_weight(datastore) self.assertEqual('a', parts[0]) self.assertEqual('b', parts[1]) self.assertEqual(100, parts[2]) def test_parse_datastore_info_and_weight_default_weight(self): datastore = 'a:b' parts = self.store._parse_datastore_info_and_weight(datastore) self.assertEqual('a', parts[0]) self.assertEqual('b', parts[1]) self.assertEqual(0, parts[2]) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_unexpected_status(self, mock_api_session, mock_select_datastore): expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size image = io.BytesIO(expected_contents) self.session = mock.Mock() with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=401) self.assertRaises(exceptions.BackendException, self.store.add, expected_image_id, image, expected_size) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_unexpected_status_no_response_body(self, mock_api_session, mock_select_datastore): expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size image = io.BytesIO(expected_contents) self.session = mock.Mock() with self._mock_http_connection() as HttpConn: HttpConn.return_value = utils.fake_response(status_code=500, no_response_body=True) self.assertRaises(exceptions.BackendException, self.store.add, expected_image_id, image, expected_size) @mock.patch.object(api, 'VMwareAPISession') def test_reset_session(self, mock_api_session): self.store.reset_session() self.assertTrue(mock_api_session.called) @mock.patch.object(api, 'VMwareAPISession') def test_build_vim_cookie_header_active(self, mock_api_session): self.store.session.is_current_session_active = mock.Mock() self.store.session.is_current_session_active.return_value = True self.store._build_vim_cookie_header(True) self.assertFalse(mock_api_session.called) @mock.patch.object(api, 'VMwareAPISession') def test_build_vim_cookie_header_expired(self, mock_api_session): self.store.session.is_current_session_active = mock.Mock() self.store.session.is_current_session_active.return_value = False 
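# With an inactive session and verify_session=True, rebuilding the cookie header should establish a fresh VMwareAPISession, which the assertion below verifies.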
self.store._build_vim_cookie_header(True) self.assertTrue(mock_api_session.called) @mock.patch.object(api, 'VMwareAPISession') def test_build_vim_cookie_header_expired_noverify(self, mock_api_session): self.store.session.is_current_session_active = mock.Mock() self.store.session.is_current_session_active.return_value = False self.store._build_vim_cookie_header() self.assertFalse(mock_api_session.called) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_add_ioerror(self, mock_api_session, mock_select_datastore): mock_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size image = io.BytesIO(expected_contents) self.session = mock.Mock() with mock.patch('requests.Session.request') as HttpConn: HttpConn.request.side_effect = IOError self.assertRaises(exceptions.BackendException, self.store.add, expected_image_id, image, expected_size) def test_qs_sort_with_literal_question_mark(self): url = 'scheme://example.com/path?key2=val2&key1=val1?sort=true' exp_url = 'scheme://example.com/path?key1=val1%3Fsort%3Dtrue&key2=val2' self.assertEqual(exp_url, utils.sort_url_by_qs_keys(url)) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_build_datastore_weighted_map(self, mock_api_session, mock_ds_obj): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] mock_ds_obj.side_effect = fake_datastore_obj ret = self.store._build_datastore_weighted_map(datastores) ds = ret[200] self.assertEqual('e', ds[0].datacenter.path) self.assertEqual('f', ds[0].name) ds = ret[100] self.assertEqual(2, len(ds)) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_build_datastore_weighted_map_equal_weight(self, mock_api_session, mock_ds_obj): datastores = ['a:b:200', 'a:b:200'] mock_ds_obj.side_effect = fake_datastore_obj ret = self.store._build_datastore_weighted_map(datastores) ds = ret[200] self.assertEqual(2, len(ds)) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(api, 'VMwareAPISession') def test_build_datastore_weighted_map_empty_list(self, mock_api_session, mock_ds_ref): datastores = [] ret = self.store._build_datastore_weighted_map(datastores) self.assertEqual({}, ret) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_insufficient_freespace(self, mock_get_freespace, mock_ds_ref): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] image_size = 10 self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [5, 5, 5] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp self.assertRaises(exceptions.StorageFull, self.store.select_datastore, image_size) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_insufficient_fs_one_ds(self, mock_get_freespace, mock_ds_ref): # Tests if fs is updated with just one datastore. 
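# A single datastore whose reported free space (5) is below the image size (10) leaves select_datastore() no fallback, so StorageFull is expected.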
datastores = ['a:b:100'] image_size = 10 self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [5] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp self.assertRaises(exceptions.StorageFull, self.store.select_datastore, image_size) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_equal_freespace(self, mock_get_freespace, mock_ds_obj): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] image_size = 10 mock_ds_obj.side_effect = fake_datastore_obj self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [11, 11, 11] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp ds = self.store.select_datastore(image_size) self.assertEqual('e', ds.datacenter.path) self.assertEqual('f', ds.name) @mock.patch.object(vm_store.Store, '_get_datastore') @mock.patch.object(vm_store.Store, '_get_freespace') def test_select_datastore_contention(self, mock_get_freespace, mock_ds_obj): datastores = ['a:b:100', 'c:d:100', 'e:f:200'] image_size = 10 mock_ds_obj.side_effect = fake_datastore_obj self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) freespaces = [5, 11, 12] def fake_get_fp(*args, **kwargs): return freespaces.pop(0) mock_get_freespace.side_effect = fake_get_fp ds = self.store.select_datastore(image_size) self.assertEqual('c', ds.datacenter.path) self.assertEqual('d', ds.name) def test_select_datastore_empty_list(self): datastores = [] self.store.datastores = ( self.store._build_datastore_weighted_map(datastores)) self.assertRaises(exceptions.StorageFull, self.store.select_datastore, 10) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_datacenter_ref(self, mock_api_session): datacenter_path = 'Datacenter1' self.store._get_datacenter(datacenter_path) self.store.session.invoke_api.assert_called_with( self.store.session.vim, 'FindByInventoryPath', self.store.session.vim.service_content.searchIndex, inventoryPath=datacenter_path) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_redirect(self, mock_api_session): # Add two layers of redirects to the response stack, which will # return the default 200 OK with the expected data after resolving # both redirects. 
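# responses.pop() serves this list from the end: the 301 is returned first, then the 302, and finally the 200 carrying the image payload.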
redirect1 = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} redirect2 = {"location": "https://example.com?dsName=ds2&dcPath=dc2"} responses = [utils.fake_response(), utils.fake_response(status_code=302, headers=redirect1), utils.fake_response(status_code=301, headers=redirect2)] def getresponse(*args, **kwargs): return responses.pop() expected_image_size = 31 expected_returns = ['I am a teapot, short and stout\n'] loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.side_effect = getresponse (image_file, image_size) = self.store.get(loc) self.assertEqual(expected_image_size, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_max_redirects(self, mock_api_session): redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} responses = ([utils.fake_response(status_code=302, headers=redirect)] * (vm_store.MAX_REDIRECTS + 1)) def getresponse(*args, **kwargs): return responses.pop() loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.side_effect = getresponse self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_redirect_invalid(self, mock_api_session): redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} loc = location.get_location_from_uri_and_backend( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, "vmware1", conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=307, headers=redirect) self.assertRaises(exceptions.BadStoreUri, self.store.get, loc) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/unit/test_opts.py0000664000175000017500000001301300000000000023376 0ustar00zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import stevedore from testtools import matchers from glance_store import backend from glance_store.tests import base def on_load_failure_callback(*args, **kwargs): raise class OptsTestCase(base.StoreBaseTest): def _check_opt_groups(self, opt_list, expected_opt_groups): self.assertThat(opt_list, matchers.HasLength(len(expected_opt_groups))) groups = [g for (g, _l) in opt_list] self.assertThat(groups, matchers.HasLength(len(expected_opt_groups))) for idx, group in enumerate(groups): self.assertEqual(expected_opt_groups[idx], group) def _check_opt_names(self, opt_list, expected_opt_names): opt_names = [o.name for (g, l) in opt_list for o in l] self.assertThat(opt_names, matchers.HasLength(len(expected_opt_names))) for opt in opt_names: self.assertIn(opt, expected_opt_names) def _test_entry_point(self, namespace, expected_opt_groups, expected_opt_names): opt_list = None mgr = stevedore.NamedExtensionManager( 'oslo.config.opts', names=[namespace], invoke_on_load=False, on_load_failure_callback=on_load_failure_callback, ) for ext in mgr: list_fn = ext.plugin opt_list = list_fn() break self.assertIsNotNone(opt_list) self._check_opt_groups(opt_list, expected_opt_groups) self._check_opt_names(opt_list, expected_opt_names) def test_list_api_opts(self): opt_list = backend._list_opts() expected_opt_groups = ['glance_store', 'glance_store'] expected_opt_names = [ 'default_store', 'stores', 'cinder_api_insecure', 'cinder_ca_certificates_file', 'cinder_catalog_info', 'cinder_endpoint_template', 'cinder_http_retries', 'cinder_mount_point_base', 'cinder_os_region_name', 'cinder_state_transition_timeout', 'cinder_store_auth_address', 'cinder_store_user_name', 'cinder_store_user_domain_name', 'cinder_store_password', 'cinder_store_project_name', 'cinder_store_project_domain_name', 'cinder_volume_type', 'cinder_use_multipath', 'cinder_enforce_multipath', 'cinder_do_extend_attached', 'default_swift_reference', 'https_insecure', 'filesystem_store_chunk_size', 'filesystem_store_datadir', 'filesystem_store_datadirs', 'filesystem_store_file_perm', 'filesystem_store_metadata_file', 'filesystem_thin_provisioning', 'http_proxy_information', 'https_ca_certificates_file', 'rbd_store_ceph_conf', 'rbd_store_chunk_size', 'rbd_store_pool', 'rbd_store_user', 'rbd_thin_provisioning', 'rados_connect_timeout', 'rootwrap_config', 's3_store_access_key', 's3_store_bucket', 's3_store_bucket_url_format', 's3_store_create_bucket_on_put', 's3_store_host', 's3_store_region_name', 's3_store_secret_key', 's3_store_large_object_size', 's3_store_large_object_chunk_size', 's3_store_thread_pools', 'swift_store_expire_soon_interval', 'swift_store_admin_tenants', 'swift_store_auth_address', 'swift_store_cacert', 'swift_store_auth_insecure', 'swift_store_auth_version', 'swift_store_config_file', 'swift_store_container', 'swift_store_create_container_on_put', 'swift_store_endpoint', 'swift_store_endpoint_type', 'swift_store_key', 'swift_store_large_object_chunk_size', 'swift_store_large_object_size', 'swift_store_multi_tenant', 'swift_store_multiple_containers_seed', 'swift_store_region', 'swift_store_retry_get_count', 'swift_store_service_type', 'swift_store_ssl_compression', 'swift_store_use_trusts', 'swift_store_user', 'swift_buffer_on_upload', 'swift_upload_buffer_dir', 'vmware_insecure', 'vmware_ca_file', 'vmware_api_retry_count', 'vmware_datastores', 'vmware_server_host', 'vmware_server_password', 'vmware_server_username', 'vmware_store_image_dir', 'vmware_task_poll_interval' ] self._check_opt_groups(opt_list, expected_opt_groups) 
        self._check_opt_names(opt_list, expected_opt_names)

        self._test_entry_point('glance.store',
                               expected_opt_groups, expected_opt_names)

# ----------------------------------------------------------------------------
# glance_store-4.8.1/glance_store/tests/unit/test_rbd_store.py
# ----------------------------------------------------------------------------
# Copyright 2013 OpenStack Foundation.  All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# see http://www.apache.org/licenses/LICENSE-2.0 for the full text.

import hashlib
import io
from unittest import mock

from oslo_utils.secretutils import md5
from oslo_utils import units

from glance_store._drivers import rbd as rbd_store
from glance_store import exceptions
from glance_store import location as g_location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
from glance_store.tests import utils as test_utils


class TestException(Exception):
    pass


class MockRados(object):

    class Error(Exception):
        pass

    class ObjectNotFound(Exception):
        pass

    class ioctx(object):
        def __init__(self, *args, **kwargs):
            pass
        def __enter__(self, *args, **kwargs):
            return self
        def __exit__(self, *args, **kwargs):
            return False
        def close(self, *args, **kwargs):
            pass

    class Rados(object):
        def __init__(self, *args, **kwargs):
            pass
        def __enter__(self, *args, **kwargs):
            return self
        def __exit__(self, *args, **kwargs):
            return False
        def connect(self, *args, **kwargs):
            pass
        def open_ioctx(self, *args, **kwargs):
            return MockRados.ioctx()
        def shutdown(self, *args, **kwargs):
            pass
        def conf_get(self, *args, **kwargs):
            pass


class MockRBD(object):

    class ImageExists(Exception):
        pass

    class ImageHasSnapshots(Exception):
        pass

    class ImageBusy(Exception):
        pass

    class ImageNotFound(Exception):
        pass

    class InvalidArgument(Exception):
        pass

    class NoSpace(Exception):
        pass

    class Image(object):
        def __init__(self, *args, **kwargs):
            pass
        def __enter__(self, *args, **kwargs):
            return self
        def __exit__(self, *args, **kwargs):
            pass
        def create_snap(self, *args, **kwargs):
            pass
        def remove_snap(self, *args, **kwargs):
            pass
        def set_snap(self, *args, **kwargs):
            pass
        def list_children(self, *args, **kwargs):
            pass
        def protect_snap(self, *args, **kwargs):
            pass
        def unprotect_snap(self, *args, **kwargs):
            pass
        def read(self, *args, **kwargs):
            raise NotImplementedError()
        def write(self, *args, **kwargs):
            raise NotImplementedError()
        def resize(self, *args, **kwargs):
            pass
        def discard(self, offset, length):
            raise NotImplementedError()
        def close(self):
            pass
        def list_snaps(self):
            raise NotImplementedError()
        def parent_info(self):
            raise NotImplementedError()
        def size(self):
            raise NotImplementedError()

    class RBD(object):
        def __init__(self, *args, **kwargs):
            pass
        def __enter__(self, *args, **kwargs):
            return self
        def __exit__(self, *args, **kwargs):
            return False
        def create(self, *args, **kwargs):
            pass
        def remove(self, *args, **kwargs):
            pass
        def list(self, *args, **kwargs):
            raise NotImplementedError()
        def clone(self, *args, **kwargs):
            raise NotImplementedError()
        def trash_move(self, *args, **kwargs):
            pass
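# NOTE: the test classes below install these in-memory fakes in place of
# the real Ceph bindings before building the store, essentially:
#
#     rbd_store.rados = MockRados
#     rbd_store.rbd = MockRBD
#     store = rbd_store.Store(conf)
#     store.configure()
#
# so no cluster is needed; behaviour a test must observe is then patched
# per-test with mock.patch.object().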
RBD_FEATURE_LAYERING = 1 class TestReSize(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestReSize, self).setUp() rbd_store.rados = MockRados rbd_store.rbd = MockRBD self.store = rbd_store.Store(self.conf) self.store.configure() self.store_specs = {'pool': 'fake_pool', 'image': 'fake_image', 'snapshot': 'fake_snapshot'} self.location = rbd_store.StoreLocation(self.store_specs, self.conf) self.hash_algo = 'sha256' def test_add_w_image_size_zero_less_resizes(self): """Assert that correct size is returned even though 0 was provided.""" data_len = 57 * units.Mi data_iter = test_utils.FakeData(data_len) with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize: with mock.patch.object(rbd_store.rbd.Image, 'write') as write: ret = self.store.add( 'fake_image_id', data_iter, 0, self.hash_algo) # We expect to trim at the end so +1 expected = 1 expected_calls = [] data_len_temp = data_len resize_amount = self.store.WRITE_CHUNKSIZE while data_len_temp > 0: resize_amount *= 2 expected_calls.append(resize_amount + (data_len - data_len_temp)) data_len_temp -= resize_amount expected += 1 self.assertEqual(expected, resize.call_count) resize.assert_has_calls([mock.call(call) for call in expected_calls]) expected = ([self.store.WRITE_CHUNKSIZE for i in range(int( data_len / self.store.WRITE_CHUNKSIZE))] + [(data_len % self.store.WRITE_CHUNKSIZE)]) actual = ([len(args[0]) for args, kwargs in write.call_args_list]) self.assertEqual(expected, actual) self.assertEqual(data_len, resize.call_args_list[-1][0][0]) self.assertEqual(data_len, ret[1]) def test_resize_on_write_ceiling(self): image = mock.MagicMock() # image, size, written, chunk # Non-zero image size means no resize ret = self.store._resize_on_write(image, 32, 16, 16) self.assertEqual(0, ret) image.resize.assert_not_called() # Current size is smaller than we need self.store.size = 8 ret = self.store._resize_on_write(image, 0, 16, 16) self.assertEqual(8 + self.store.WRITE_CHUNKSIZE * 2, ret) self.assertEqual(self.store.WRITE_CHUNKSIZE * 2, self.store.resize_amount) image.resize.assert_called_once_with(ret) # More reads under the limit do not require a resize image.resize.reset_mock() self.store.size = ret ret = self.store._resize_on_write(image, 0, 64, 16) self.assertEqual(8 + self.store.WRITE_CHUNKSIZE * 2, ret) image.resize.assert_not_called() # Read past the limit triggers another resize ret = self.store._resize_on_write(image, 0, ret + 1, 16) self.assertEqual(8 + self.store.WRITE_CHUNKSIZE * 6, ret) image.resize.assert_called_once_with(ret) self.assertEqual(self.store.WRITE_CHUNKSIZE * 4, self.store.resize_amount) # Check that we do not resize past the 8G ceiling. 
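        # (resize_amount doubles on every resize but is clamped at 8G, so
        # starting from a 1G image the expected growth is +4G -> 5G, then
        # +8G -> 13G, 21G and 29G, which is exactly what the assertions
        # below verify.)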
# Start with resize_amount at 2G, 1G read so far image.resize.reset_mock() self.store.resize_amount = 2 * units.Gi self.store.size = 1 * units.Gi # First resize happens and we get to 5G, # resize_amount goes to limit of 4G ret = self.store._resize_on_write(image, 0, 4097 * units.Mi, 16) self.assertEqual(4 * units.Gi, self.store.resize_amount) self.assertEqual((1 + 4) * units.Gi, ret) self.store.size = ret # Second resize happens and we stay at 13, no resize # resize amount stays at limit of 8G ret = self.store._resize_on_write(image, 0, 6144 * units.Mi, 16) self.assertEqual(8 * units.Gi, self.store.resize_amount) self.assertEqual((1 + 4 + 8) * units.Gi, ret) self.store.size = ret # Third resize happens and we get to 21, # resize amount stays at limit of 8G ret = self.store._resize_on_write(image, 0, 14336 * units.Mi, 16) self.assertEqual(8 * units.Gi, self.store.resize_amount) self.assertEqual((1 + 4 + 8 + 8) * units.Gi, ret) self.store.size = ret # Fourth resize happens and we get to 29, # resize amount stays at limit of 8G ret = self.store._resize_on_write(image, 0, 22528 * units.Mi, 16) self.assertEqual(8 * units.Gi, self.store.resize_amount) self.assertEqual((1 + 4 + 8 + 8 + 8) * units.Gi, ret) image.resize.assert_has_calls([ mock.call(5 * units.Gi), mock.call(13 * units.Gi), mock.call(21 * units.Gi), mock.call(29 * units.Gi)]) class TestStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestStore, self).setUp() rbd_store.rados = MockRados rbd_store.rbd = MockRBD self.store = rbd_store.Store(self.conf) self.store.configure() self.store.chunk_size = 2 self.called_commands_actual = [] self.called_commands_expected = [] self.store_specs = {'pool': 'fake_pool', 'image': 'fake_image', 'snapshot': 'fake_snapshot'} self.location = rbd_store.StoreLocation(self.store_specs, self.conf) # Provide enough data to get more than one chunk iteration. 
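        # (setUp() above overrides the store's chunk_size to a tiny value,
        # so 3 KiB of data is guaranteed to span many chunked writes.)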
self.data_len = 3 * units.Ki self.data_iter = io.BytesIO(b'*' * self.data_len) self.hash_algo = 'sha256' def test_thin_provisioning_is_disabled_by_default(self): self.assertEqual(self.store.thin_provisioning, False) def test_add_w_image_size_zero(self): """Assert that correct size is returned even though 0 was provided.""" self.store.chunk_size = units.Ki with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize: with mock.patch.object(rbd_store.rbd.Image, 'write') as write: ret = self.store.add( 'fake_image_id', self.data_iter, 0, self.hash_algo) self.assertTrue(resize.called) self.assertTrue(write.called) self.assertEqual(ret[1], self.data_len) @mock.patch.object(MockRBD.Image, '__enter__') @mock.patch.object(rbd_store.Store, '_create_image') @mock.patch.object(rbd_store.Store, '_delete_image') def test_add_w_rbd_image_exception(self, delete, create, enter): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') return self.location def _fake_delete_image(target_pool, image_name, snapshot_name=None): self.assertEqual(self.location.pool, target_pool) self.assertEqual(self.location.image, image_name) self.assertEqual(self.location.snapshot, snapshot_name) self.called_commands_actual.append('delete') def _fake_enter(*args, **kwargs): raise exceptions.NotFound(image="fake_image_id") create.side_effect = _fake_create_image delete.side_effect = _fake_delete_image enter.side_effect = _fake_enter self.assertRaises(exceptions.NotFound, self.store.add, 'fake_image_id', self.data_iter, self.data_len, self.hash_algo) self.called_commands_expected = ['create', 'delete'] @mock.patch.object(MockRBD.Image, 'resize') @mock.patch.object(rbd_store.Store, '_create_image') @mock.patch.object(rbd_store.Store, '_delete_image') def test_add_w_rbd_no_space_exception(self, delete, create, resize): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') return self.location def _fake_delete_image(target_pool, image_name, snapshot_name=None): self.assertEqual(self.location.pool, target_pool) self.assertEqual(self.location.image, image_name) self.assertEqual(self.location.snapshot, snapshot_name) self.called_commands_actual.append('delete') def _fake_resize(*args, **kwargs): raise MockRBD.NoSpace() create.side_effect = _fake_create_image delete.side_effect = _fake_delete_image resize.side_effect = _fake_resize self.assertRaises(exceptions.StorageFull, self.store.add, 'fake_image_id', self.data_iter, 0, self.hash_algo) self.called_commands_expected = ['create', 'delete'] def test_add_duplicate_image(self): def _fake_create_image(*args, **kwargs): self.called_commands_actual.append('create') raise MockRBD.ImageExists() with mock.patch.object(self.store, '_create_image') as create_image: create_image.side_effect = _fake_create_image self.assertRaises(exceptions.Duplicate, self.store.add, 'fake_image_id', self.data_iter, self.data_len, self.hash_algo) self.called_commands_expected = ['create'] def test_add_with_verifier(self): """Assert 'verifier.update' is called when verifier is provided.""" self.store.chunk_size = units.Ki verifier = mock.MagicMock(name='mock_verifier') image_id = 'fake_image_id' file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = io.BytesIO(file_contents) with mock.patch.object(rbd_store.rbd.Image, 'write'): self.store.add(image_id, image_file, file_size, self.hash_algo, verifier=verifier) verifier.update.assert_called_with(file_contents) def test_add_checksums(self): self.store.chunk_size = units.Ki image_id 
= 'fake_image_id' file_size = 5 * units.Ki # 5K file_contents = b"*" * file_size image_file = io.BytesIO(file_contents) expected_checksum = md5(file_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(file_contents).hexdigest() with mock.patch.object(rbd_store.rbd.Image, 'write'): loc, size, checksum, multihash, _ = self.store.add( image_id, image_file, file_size, self.hash_algo) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) def test_add_thick_provisioning_with_holes_in_file(self): """ Tests that a file which contains null bytes chunks is fully written to rbd backend in a thick provisioning configuration. """ chunk_size = units.Mi content = b"*" * chunk_size + b"\x00" * chunk_size + b"*" * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 3, False) def test_add_thin_provisioning_with_holes_in_file(self): """ Tests that a file which contains null bytes chunks is sparsified in rbd backend with a thin provisioning configuration. """ chunk_size = units.Mi content = b"*" * chunk_size + b"\x00" * chunk_size + b"*" * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 2, True) def test_add_thick_provisioning_without_holes_in_file(self): """ Tests that a file which not contain null bytes chunks is fully written to rbd backend in a thick provisioning configuration. """ chunk_size = units.Mi content = b"*" * 3 * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 3, False) def test_add_thin_provisioning_without_holes_in_file(self): """ Tests that a file which not contain null bytes chunks is fully written to rbd backend in a thin provisioning configuration. """ chunk_size = units.Mi content = b"*" * 3 * chunk_size self._do_test_thin_provisioning(content, 3 * chunk_size, 3, True) def test_add_thick_provisioning_with_partial_holes_in_file(self): """ Tests that a file which contains null bytes not aligned with chunk size is fully written with a thick provisioning configuration. """ chunk_size = units.Mi my_chunk = int(chunk_size * 1.5) content = b"*" * my_chunk + b"\x00" * my_chunk + b"*" * my_chunk self._do_test_thin_provisioning(content, 3 * my_chunk, 5, False) def test_add_thin_provisioning_with_partial_holes_in_file(self): """ Tests that a file which contains null bytes not aligned with chunk size is sparsified with a thin provisioning configuration. 
""" chunk_size = units.Mi my_chunk = int(chunk_size * 1.5) content = b"*" * my_chunk + b"\x00" * my_chunk + b"*" * my_chunk self._do_test_thin_provisioning(content, 3 * my_chunk, 4, True) def _do_test_thin_provisioning(self, content, size, write, thin): self.config(rbd_store_chunk_size=1, rbd_thin_provisioning=thin) self.store.configure() image_id = 'fake_image_id' image_file = io.BytesIO(content) expected_checksum = md5(content, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(content).hexdigest() with mock.patch.object(rbd_store.rbd.Image, 'write') as mock_write: loc, size, checksum, multihash, _ = self.store.add( image_id, image_file, size, self.hash_algo) self.assertEqual(mock_write.call_count, write) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) def test_delete(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') with mock.patch.object(MockRBD.RBD, 'remove') as remove_image: remove_image.side_effect = _fake_remove self.store.delete(g_location.Location('test_rbd_store', rbd_store.StoreLocation, self.conf, uri=self.location.get_uri())) self.called_commands_expected = ['remove'] def test_delete_image(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') with mock.patch.object(MockRBD.RBD, 'remove') as remove_image: remove_image.side_effect = _fake_remove self.store._delete_image('fake_pool', self.location.image) self.called_commands_expected = ['remove'] def test_delete_image_exc_image_not_found(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageNotFound() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.assertRaises(exceptions.NotFound, self.store._delete_image, 'fake_pool', self.location.image) self.called_commands_expected = ['remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_snap(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap', 'remove_snap', 'remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_unprotected_snap(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise MockRBD.InvalidArgument() def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap', 'remove_snap', 'remove'] @mock.patch.object(MockRBD.RBD, 'remove') @mock.patch.object(MockRBD.Image, 
'remove_snap') @mock.patch.object(MockRBD.Image, 'unprotect_snap') def test_delete_image_w_snap_with_error(self, unprotect, remove_snap, remove): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise TestException() def _fake_remove_snap(*args, **kwargs): self.called_commands_actual.append('remove_snap') def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') remove.side_effect = _fake_remove unprotect.side_effect = _fake_unprotect_snap remove_snap.side_effect = _fake_remove_snap self.assertRaises(TestException, self.store._delete_image, 'fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap'] def test_delete_image_w_snap_exc_image_busy(self): def _fake_unprotect_snap(*args, **kwargs): self.called_commands_actual.append('unprotect_snap') raise MockRBD.ImageBusy() with mock.patch.object(MockRBD.Image, 'unprotect_snap') as mocked: mocked.side_effect = _fake_unprotect_snap self.assertRaises(exceptions.InUseByStore, self.store._delete_image, 'fake_pool', self.location.image, snapshot_name='snap') self.called_commands_expected = ['unprotect_snap'] def test_delete_image_snap_has_external_references(self): with mock.patch.object(MockRBD.Image, 'list_children') as mocked: mocked.return_value = True self.store._delete_image('fake_pool', self.location.image, snapshot_name='snap') def test_delete_image_w_snap_exc_image_has_snap(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageHasSnapshots() mock.patch.object(MockRBD.RBD, 'trash_move').start() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.store._delete_image('fake_pool', self.location.image) self.called_commands_expected = ['remove'] MockRBD.RBD.trash_move.assert_called_once_with(mock.ANY, 'fake_image') def test_delete_image_w_snap_exc_image_has_snap_2(self): def _fake_remove(*args, **kwargs): self.called_commands_actual.append('remove') raise MockRBD.ImageHasSnapshots() mock.patch.object(MockRBD.RBD, 'trash_move', side_effect=MockRBD.ImageBusy).start() with mock.patch.object(MockRBD.RBD, 'remove') as remove: remove.side_effect = _fake_remove self.assertRaises(exceptions.InUseByStore, self.store._delete_image, 'fake_pool', self.location.image) self.called_commands_expected = ['remove'] MockRBD.RBD.trash_move.assert_called_once_with(mock.ANY, 'fake_image') def test_get_partial_image(self): loc = g_location.Location('test_rbd_store', rbd_store.StoreLocation, self.conf, store_specs=self.store_specs) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) @mock.patch.object(MockRados.Rados, 'connect', side_effect=MockRados.Error) def test_rados_connect_error(self, _): rbd_store.rados.Error = MockRados.Error rbd_store.rados.ObjectNotFound = MockRados.ObjectNotFound def test(): with self.store.get_connection('conffile', 'rados_id'): pass self.assertRaises(exceptions.BackendException, test) def test_create_image_conf_features(self): # Tests that we use non-0 features from ceph.conf and cast to int. 
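        # (A sketch of what the driver does here, with the conf key quoted
        # from memory rather than from this excerpt:
        #
        #     features = conn.conf_get('rbd_default_features')
        #     RBD().create(ioctx, name, size, order, old_format=False,
        #                  features=int(features))
        # )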
        fsid = 'fake'
        features = '3'
        conf_get_mock = mock.Mock(return_value=features)
        conn = mock.Mock(conf_get=conf_get_mock)
        ioctxt = mock.sentinel.ioctxt
        name = '1'
        size = 1024
        order = 3
        with mock.patch.object(rbd_store.rbd.RBD, 'create') as create_mock:
            location = self.store._create_image(
                fsid, conn, ioctxt, name, size, order)
            self.assertEqual(fsid, location.specs['fsid'])
            self.assertEqual(rbd_store.DEFAULT_POOL, location.specs['pool'])
            self.assertEqual(name, location.specs['image'])
            self.assertEqual(rbd_store.DEFAULT_SNAPNAME,
                             location.specs['snapshot'])
        create_mock.assert_called_once_with(ioctxt, name, size, order,
                                            old_format=False, features=3)

    def tearDown(self):
        self.assertEqual(self.called_commands_expected,
                         self.called_commands_actual)
        super(TestStore, self).tearDown()

    @mock.patch('oslo_utils.eventletutils.is_monkey_patched')
    def test_create_image_in_native_thread(self, mock_patched):
        mock_patched.return_value = True
        # Tests that we use non-0 features from ceph.conf and cast to int.
        fsid = 'fake'
        features = '3'
        conf_get_mock = mock.Mock(return_value=features)
        conn = mock.Mock(conf_get=conf_get_mock)
        ioctxt = mock.sentinel.ioctxt
        name = '1'
        size = 1024
        order = 3
        fake_proxy = mock.MagicMock()
        fake_rbd = mock.MagicMock()
        with mock.patch.object(rbd_store.tpool, 'Proxy') as tpool_mock, \
                mock.patch.object(rbd_store.rbd, 'RBD') as rbd_mock:
            tpool_mock.return_value = fake_proxy
            rbd_mock.return_value = fake_rbd
            location = self.store._create_image(
                fsid, conn, ioctxt, name, size, order)
            self.assertEqual(fsid, location.specs['fsid'])
            self.assertEqual(rbd_store.DEFAULT_POOL, location.specs['pool'])
            self.assertEqual(name, location.specs['image'])
            self.assertEqual(rbd_store.DEFAULT_SNAPNAME,
                             location.specs['snapshot'])
            tpool_mock.assert_called_once_with(fake_rbd)
            fake_proxy.create.assert_called_once_with(ioctxt, name, size,
                                                      order,
                                                      old_format=False,
                                                      features=3)

    @mock.patch('oslo_utils.eventletutils.is_monkey_patched')
    def test_delete_image_in_native_thread(self, mock_patched):
        mock_patched.return_value = True
        fake_proxy = mock.MagicMock()
        fake_rbd = mock.MagicMock()
        fake_ioctx = mock.MagicMock()
        with mock.patch.object(rbd_store.tpool, 'Proxy') as tpool_mock, \
                mock.patch.object(rbd_store.rbd, 'RBD') as rbd_mock, \
                mock.patch.object(self.store, 'get_connection') as mock_conn:
            mock_get_conn = mock_conn.return_value.__enter__.return_value
            mock_ioctx = mock_get_conn.open_ioctx.return_value.__enter__
            mock_ioctx.return_value = fake_ioctx
            tpool_mock.return_value = fake_proxy
            rbd_mock.return_value = fake_rbd
            self.store._delete_image('fake_pool', self.location.image)
            tpool_mock.assert_called_once_with(fake_rbd)
            fake_proxy.remove.assert_called_once_with(fake_ioctx,
                                                      self.location.image)

    @mock.patch.object(rbd_store, 'rbd')
    @mock.patch.object(rbd_store, 'tpool')
    @mock.patch('oslo_utils.eventletutils.is_monkey_patched')
    def test_rbd_proxy(self, mock_patched, mock_tpool, mock_rbd):
        mock_patched.return_value = False
        self.assertEqual(mock_rbd.RBD(), self.store.RBDProxy())

        mock_patched.return_value = True
        self.assertEqual(mock_tpool.Proxy.return_value,
                         self.store.RBDProxy())

# ----------------------------------------------------------------------------
# glance_store-4.8.1/glance_store/tests/unit/test_s3_store.py
# ----------------------------------------------------------------------------
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the S3 backend store""" import hashlib import io from unittest import mock import uuid import boto3 import botocore from botocore import exceptions as boto_exceptions from botocore import stub from oslo_utils.secretutils import md5 from oslo_utils import units from glance_store._drivers import s3 from glance_store import capabilities from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities FAKE_UUID = str(uuid.uuid4()) FIVE_KB = 5 * units.Ki S3_CONF = { 's3_store_access_key': 'user', 's3_store_secret_key': 'key', 's3_store_region_name': '', 's3_store_host': 'localhost', 's3_store_bucket': 'glance', 's3_store_large_object_size': 9, # over 9MB is large 's3_store_large_object_chunk_size': 6, # part size is 6MB } def format_s3_location(user, key, authurl, bucket, obj): """Helper method that returns a S3 store URI given the component pieces.""" scheme = 's3' if authurl.startswith('https://'): scheme = 's3+https' authurl = authurl[len('https://'):] elif authurl.startswith('http://'): authurl = authurl[len('http://'):] authurl = authurl.strip('/') return "%s://%s:%s@%s/%s/%s" % (scheme, user, key, authurl, bucket, obj) class TestStore(base.StoreBaseTest, test_store_capabilities.TestStoreCapabilitiesChecking): def setUp(self): """Establish a clean test environment.""" super(TestStore, self).setUp() self.store = s3.Store(self.conf) self.config(**S3_CONF) self.store.configure() self.register_store_schemes(self.store, 's3') self.hash_algo = 'sha256' def test_get_invalid_bucket_name(self): self.config(s3_store_bucket_url_format='virtual') invalid_buckets = ['not.dns.compliant', 'aa', 'bucket-'] for bucket in invalid_buckets: loc = location.get_location_from_uri( "s3://user:key@auth_address/%s/key" % bucket, conf=self.conf) self.assertRaises(boto_exceptions.InvalidDNSNameError, self.store.get, loc) @mock.patch('glance_store.location.Location') @mock.patch.object(boto3.session.Session, "client") def test_client_custom_region_name(self, mock_client, mock_loc): """Test a custom s3_store_region_name in config""" self.config(s3_store_host='http://example.com') self.config(s3_store_region_name='regionOne') self.config(s3_store_bucket_url_format='path') self.store.configure() mock_loc.accesskey = 'abcd' mock_loc.secretkey = 'efgh' mock_loc.bucket = 'bucket1' self.store._create_s3_client(mock_loc) mock_client.assert_called_with( config=mock.ANY, endpoint_url='http://example.com', region_name='regionOne', service_name='s3', use_ssl=False, ) @mock.patch.object(boto3.session.Session, "client") def test_get(self, mock_client): """Test a "normal" retrieval of an image in chunks.""" bucket, key = 'glance', FAKE_UUID fixture_object = { 'Body': io.BytesIO(b"*" * FIVE_KB), 'ContentLength': FIVE_KB } fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_object', service_response={}, expected_params={ 'Bucket': bucket, 'Key': key }) stubber.add_response(method='get_object', service_response=fixture_object, 
expected_params={ 'Bucket': bucket, 'Key': key }) mock_client.return_value = fake_s3_client loc = location.get_location_from_uri( "s3://user:key@auth_address/%s/%s" % (bucket, key), conf=self.conf) (image_s3, image_size) = self.store.get(loc) self.assertEqual(FIVE_KB, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_s3: data += chunk self.assertEqual(expected_data, data) def test_partial_get(self): loc = location.get_location_from_uri( "s3://user:key@auth_address/glance/%s" % FAKE_UUID, conf=self.conf) self.assertRaises(exceptions.StoreRandomGetNotSupported, self.store.get, loc, chunk_size=1) @mock.patch.object(boto3.session.Session, "client") def test_get_non_existing(self, mock_client): """Test that trying to retrieve a s3 that doesn't exist raises an error """ bucket, key = 'glance', 'no_exist' fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_client_error(method='head_object', service_error_code='404', service_message=''' The specified key does not exist. ''', expected_params={ 'Bucket': bucket, 'Key': key }) mock_client.return_value = fake_s3_client uri = "s3://user:key@auth_address/%s/%s" % (bucket, key) loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch.object(boto3.session.Session, "client") def test_add_singlepart(self, mock_client): """Test that we can add an image via the s3 backend.""" expected_image_id = str(uuid.uuid4()) # 5KiB is smaller than WRITE_CHUNKSIZE expected_s3_size = FIVE_KB expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], S3_CONF['s3_store_host'], S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='put_object', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id, 'Body': botocore.stub.ANY }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, _ = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_singlepart_bigger_than_write_chunk(self, mock_client): """Test that we can add a large image via the s3 backend.""" expected_image_id = str(uuid.uuid4()) # 8 MiB is bigger than WRITE_CHUNKSIZE(=5MiB), # but smaller than s3_store_large_object_size expected_s3_size = 8 * units.Mi expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( 
S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], S3_CONF['s3_store_host'], S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='put_object', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id, 'Body': botocore.stub.ANY }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, _ = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_with_verifier(self, mock_client): """Assert 'verifier.update' is called when verifier is provided""" expected_image_id = str(uuid.uuid4()) expected_s3_size = FIVE_KB expected_s3_contents = b"*" * expected_s3_size image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') verifier = mock.MagicMock(name='mock_verifier') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}) stubber.add_client_error(method='head_object', service_error_code='404', service_message='') stubber.add_response(method='put_object', service_response={}) mock_client.return_value = fake_s3_client self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo, verifier=verifier) verifier.update.assert_called_with(expected_s3_contents) @mock.patch.object(boto3.session.Session, "client") def test_add_multipart(self, mock_client): """Test that we can add an image via the s3 backend.""" expected_image_id = str(uuid.uuid4()) expected_s3_size = 16 * units.Mi expected_s3_contents = b"*" * expected_s3_size expected_checksum = md5(expected_s3_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(expected_s3_contents).hexdigest() expected_location = format_s3_location( S3_CONF['s3_store_access_key'], S3_CONF['s3_store_secret_key'], S3_CONF['s3_store_host'], S3_CONF['s3_store_bucket'], expected_image_id) image_s3 = io.BytesIO(expected_s3_contents) fake_s3_client = botocore.session.get_session().create_client('s3') num_parts = 3 # image size is 16MB and chunk size is 6MB with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}, expected_params={ 'Bucket': S3_CONF['s3_store_bucket'] }) stubber.add_client_error(method='head_object', service_error_code='404', service_message='', expected_params={ 'Bucket': S3_CONF['s3_store_bucket'], 'Key': expected_image_id }) stubber.add_response(method='create_multipart_upload', service_response={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, "UploadId": 'UploadId' }, expected_params={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, }) parts = [] remaining_image_size = expected_s3_size chunk_size = S3_CONF['s3_store_large_object_chunk_size'] * units.Mi for i in range(num_parts): part_number = i + 1 
stubber.add_response(method='upload_part', service_response={ 'ETag': 'ETag' }, expected_params={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, "Body": botocore.stub.ANY, 'ContentLength': chunk_size, "PartNumber": part_number, "UploadId": 'UploadId' }) parts.append({'ETag': 'ETag', 'PartNumber': part_number}) remaining_image_size -= chunk_size if remaining_image_size < chunk_size: chunk_size = remaining_image_size stubber.add_response(method='complete_multipart_upload', service_response={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, 'ETag': 'ETag' }, expected_params={ "Bucket": S3_CONF['s3_store_bucket'], "Key": expected_image_id, "MultipartUpload": { "Parts": parts }, "UploadId": 'UploadId' }) mock_client.return_value = fake_s3_client loc, size, checksum, multihash, _ = \ self.store.add(expected_image_id, image_s3, expected_s3_size, self.hash_algo) self.assertEqual(expected_location, loc) self.assertEqual(expected_s3_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(boto3.session.Session, "client") def test_add_already_existing(self, mock_client): """Tests that adding an image with an existing identifier raises an appropriate exception """ image_s3 = io.BytesIO(b"never_gonna_make_it") fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_response(method='head_bucket', service_response={}) stubber.add_response(method='head_object', service_response={}) mock_client.return_value = fake_s3_client self.assertRaises(exceptions.Duplicate, self.store.add, FAKE_UUID, image_s3, 0, self.hash_algo) def _option_required(self, key): conf = S3_CONF.copy() conf[key] = None try: self.config(**conf) self.store = s3.Store(self.conf) self.store.configure() return not self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS) except Exception: return False def test_no_access_key(self): """Tests that options without access key disables the add method""" self.assertTrue(self._option_required('s3_store_access_key')) def test_no_secret_key(self): """Tests that options without secret key disables the add method""" self.assertTrue(self._option_required('s3_store_secret_key')) def test_no_host(self): """Tests that options without host disables the add method""" self.assertTrue(self._option_required('s3_store_host')) def test_no_bucket(self): """Tests that options without bucket name disables the add method""" self.assertTrue(self._option_required('s3_store_bucket')) @mock.patch.object(boto3.session.Session, "client") def test_delete_non_existing(self, mock_client): """Test that trying to delete a s3 that doesn't exist raises an error """ bucket, key = 'glance', 'no_exist' fake_s3_client = botocore.session.get_session().create_client('s3') with stub.Stubber(fake_s3_client) as stubber: stubber.add_client_error(method='head_object', service_error_code='404', service_message=''' The specified key does not exist. 
                                     ''',
                                     expected_params={
                                         'Bucket': bucket,
                                         'Key': key
                                     })
            fake_s3_client.head_bucket = mock.MagicMock()
            mock_client.return_value = fake_s3_client

            uri = "s3://user:key@auth_address/%s/%s" % (bucket, key)
            loc = location.get_location_from_uri(uri, conf=self.conf)
            self.assertRaises(exceptions.NotFound, self.store.delete, loc)

    def _do_test_get_s3_location(self, host, loc):
        self.assertEqual(s3.get_s3_location(host), loc)
        self.assertEqual(s3.get_s3_location(host + '/'), loc)
        self.assertEqual(s3.get_s3_location(host + ':80'), loc)
        self.assertEqual(s3.get_s3_location(host + ':80/'), loc)
        self.assertEqual(s3.get_s3_location('http://' + host), loc)
        self.assertEqual(s3.get_s3_location('http://' + host + '/'), loc)
        self.assertEqual(s3.get_s3_location('http://' + host + ':80'), loc)
        self.assertEqual(s3.get_s3_location('http://' + host + ':80/'), loc)
        self.assertEqual(s3.get_s3_location('https://' + host), loc)
        self.assertEqual(s3.get_s3_location('https://' + host + '/'), loc)
        self.assertEqual(s3.get_s3_location('https://' + host + ':80'), loc)
        self.assertEqual(s3.get_s3_location('https://' + host + ':80/'), loc)

    def test_get_s3_good_location(self):
        """Test that the s3 location can be derived from the host"""
        good_locations = [
            ('s3.amazonaws.com', ''),
            ('s3-us-east-1.amazonaws.com', 'us-east-1'),
            ('s3-us-east-2.amazonaws.com', 'us-east-2'),
            ('s3-us-west-1.amazonaws.com', 'us-west-1'),
            ('s3-us-west-2.amazonaws.com', 'us-west-2'),
            ('s3-ap-east-1.amazonaws.com', 'ap-east-1'),
            ('s3-ap-south-1.amazonaws.com', 'ap-south-1'),
            ('s3-ap-northeast-1.amazonaws.com', 'ap-northeast-1'),
            ('s3-ap-northeast-2.amazonaws.com', 'ap-northeast-2'),
            ('s3-ap-northeast-3.amazonaws.com', 'ap-northeast-3'),
            ('s3-ap-southeast-1.amazonaws.com', 'ap-southeast-1'),
            ('s3-ap-southeast-2.amazonaws.com', 'ap-southeast-2'),
            ('s3-ca-central-1.amazonaws.com', 'ca-central-1'),
            ('s3-cn-north-1.amazonaws.com.cn', 'cn-north-1'),
            ('s3-cn-northwest-1.amazonaws.com.cn', 'cn-northwest-1'),
            ('s3-eu-central-1.amazonaws.com', 'eu-central-1'),
            ('s3-eu-west-1.amazonaws.com', 'eu-west-1'),
            ('s3-eu-west-2.amazonaws.com', 'eu-west-2'),
            ('s3-eu-west-3.amazonaws.com', 'eu-west-3'),
            ('s3-eu-north-1.amazonaws.com', 'eu-north-1'),
            ('s3-sa-east-1.amazonaws.com', 'sa-east-1'),
        ]
        for (url, expected) in good_locations:
            self._do_test_get_s3_location(url, expected)

    def test_get_my_object_storage_location(self):
        """Test that a non-AWS object storage host converts to ''"""
        my_object_storage_locations = [
            ('my-object-storage.com', ''),
            ('s3-my-object.jp', ''),
            ('192.168.100.12', ''),
        ]
        for (url, expected) in my_object_storage_locations:
            self._do_test_get_s3_location(url, expected)

# ----------------------------------------------------------------------------
# glance_store-4.8.1/glance_store/tests/unit/test_store_base.py
# ----------------------------------------------------------------------------
# Copyright 2011-2013 OpenStack Foundation.  All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# see http://www.apache.org/licenses/LICENSE-2.0 for the full text.
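# NOTE: a minimal sketch of the multi-store wiring exercised below (option
# names and values taken from the tests themselves):
#
#     conf.register_opt(cfg.DictOpt('enabled_backends'))
#     conf.set_override('enabled_backends',
#                       {'fast': 'file', 'cheap': 'file'})
#     store.register_store_opts(conf)
#     store.create_multi_stores(conf)
#     fast = multi_backend.get_store_from_store_identifier('fast')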
from unittest import mock

from oslo_config import cfg

import glance_store as store
from glance_store import backend
from glance_store import location
from glance_store import multi_backend
from glance_store.tests import base


class TestStoreBase(base.StoreBaseTest):

    def setUp(self):
        super(TestStoreBase, self).setUp()
        self.config(default_store='file', group='glance_store')

    @mock.patch.object(store.driver, 'LOG')
    def test_configure_does_not_raise_on_missing_driver_conf(self, mock_log):
        self.config(stores=['file'], group='glance_store')
        self.config(filesystem_store_datadir=None, group='glance_store')
        self.config(filesystem_store_datadirs=None, group='glance_store')
        for (__, store_instance) in backend._load_stores(self.conf):
            store_instance.configure()
            mock_log.warning.assert_called_once_with(
                "Failed to configure store correctly: Store filesystem "
                "could not be configured correctly. Reason: Specify "
                "at least 'filesystem_store_datadir' or "
                "'filesystem_store_datadirs' option Disabling add method.")


class TestMultiStoreBase(base.MultiStoreBaseTest):

    _CONF = multi_backend.CONF

    def setUp(self):
        super(TestMultiStoreBase, self).setUp()
        enabled_backends = {
            "fast": "file",
            "cheap": "file",
        }
        self.reserved_stores = {
            'consuming_service_reserved_store': 'file'
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf,
                                  reserved_stores=self.reserved_stores)
        self.config(default_backend='file1', group='glance_store')
        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}
        store.create_multi_stores(self.conf,
                                  reserved_stores=self.reserved_stores)
        self.addCleanup(setattr, location,
                        'SCHEME_TO_CLS_BACKEND_MAP', dict())
        self.addCleanup(self.conf.reset)

    def test_reserved_stores_loaded(self):
        # assert global map has reserved stores registered
        store = multi_backend.get_store_from_store_identifier(
            'consuming_service_reserved_store')
        self.assertIsNotNone(store)
        self.assertEqual(self.reserved_stores,
                         multi_backend._RESERVED_STORES)
        # verify that store config group in conf file is same as
        # reserved store name
        self.assertEqual('consuming_service_reserved_store',
                         store.backend_group)

# ----------------------------------------------------------------------------
# glance_store-4.8.1/glance_store/tests/unit/test_store_capabilities.py
# ----------------------------------------------------------------------------
# Copyright 2014 OpenStack Foundation.  All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# see http://www.apache.org/licenses/LICENSE-2.0 for the full text.
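# NOTE: capabilities are plain bitmasks, so the checks below reduce to
# bitwise arithmetic.  A sketch of the check (not the library's literal
# code):
#
#     def is_capable(current, *masks):
#         wanted = 0
#         for mask in masks:
#             wanted |= mask
#         return (current & wanted) == wanted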
from glance_store import capabilities as caps from glance_store.tests import base class FakeStoreWithStaticCapabilities(caps.StoreCapability): _CAPABILITIES = caps.BitMasks.READ_RANDOM | caps.BitMasks.DRIVER_REUSABLE class FakeStoreWithDynamicCapabilities(caps.StoreCapability): def __init__(self, *cap_list): super(FakeStoreWithDynamicCapabilities, self).__init__() if not cap_list: cap_list = [caps.BitMasks.READ_RANDOM, caps.BitMasks.DRIVER_REUSABLE] self.set_capabilities(*cap_list) class FakeStoreWithMixedCapabilities(caps.StoreCapability): _CAPABILITIES = caps.BitMasks.READ_RANDOM def __init__(self): super(FakeStoreWithMixedCapabilities, self).__init__() self.set_capabilities(caps.BitMasks.DRIVER_REUSABLE) class TestStoreCapabilitiesChecking(object): def test_store_capabilities_checked_on_io_operations(self): self.assertEqual('op_checker', self.store.add.__name__) self.assertEqual('op_checker', self.store.get.__name__) self.assertEqual('op_checker', self.store.delete.__name__) class TestStoreCapabilities(base.StoreBaseTest): def _verify_store_capabilities(self, store): # This function tested is_capable() as well. self.assertTrue(store.is_capable(caps.BitMasks.READ_RANDOM)) self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE)) self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS)) def test_static_capabilities_setup(self): self._verify_store_capabilities(FakeStoreWithStaticCapabilities()) def test_dynamic_capabilities_setup(self): self._verify_store_capabilities(FakeStoreWithDynamicCapabilities()) def test_mixed_capabilities_setup(self): self._verify_store_capabilities(FakeStoreWithMixedCapabilities()) def test_set_unset_capabilities(self): store = FakeStoreWithStaticCapabilities() self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS)) # Set and unset single capability on one time store.set_capabilities(caps.BitMasks.WRITE_ACCESS) self.assertTrue(store.is_capable(caps.BitMasks.WRITE_ACCESS)) store.unset_capabilities(caps.BitMasks.WRITE_ACCESS) self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS)) # Set and unset multiple capabilities on one time cap_list = [caps.BitMasks.WRITE_ACCESS, caps.BitMasks.WRITE_OFFSET] store.set_capabilities(*cap_list) self.assertTrue(store.is_capable(*cap_list)) store.unset_capabilities(*cap_list) self.assertFalse(store.is_capable(*cap_list)) def test_store_capabilities_property(self): store1 = FakeStoreWithDynamicCapabilities() self.assertTrue(hasattr(store1, 'capabilities')) store2 = FakeStoreWithMixedCapabilities() self.assertEqual(store1.capabilities, store2.capabilities) def test_cascaded_unset_capabilities(self): # Test read capability store = FakeStoreWithMixedCapabilities() self._verify_store_capabilities(store) store.unset_capabilities(caps.BitMasks.READ_ACCESS) cap_list = [caps.BitMasks.READ_ACCESS, caps.BitMasks.READ_OFFSET, caps.BitMasks.READ_CHUNK, caps.BitMasks.READ_RANDOM] for cap in cap_list: # To make sure all of them are unsetted. 
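            # (READ_OFFSET, READ_CHUNK and READ_RANDOM all contain the
            # READ_ACCESS bit, so unsetting READ_ACCESS cascades to every
            # derived read capability.)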
            self.assertFalse(store.is_capable(cap))
        self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))

        # Test write capability
        store = FakeStoreWithDynamicCapabilities(
            caps.BitMasks.WRITE_RANDOM, caps.BitMasks.DRIVER_REUSABLE)
        self.assertTrue(store.is_capable(caps.BitMasks.WRITE_RANDOM))
        self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))

        store.unset_capabilities(caps.BitMasks.WRITE_ACCESS)
        cap_list = [caps.BitMasks.WRITE_ACCESS, caps.BitMasks.WRITE_OFFSET,
                    caps.BitMasks.WRITE_CHUNK, caps.BitMasks.WRITE_RANDOM]
        for cap in cap_list:
            # Make sure all of them are unset.
            self.assertFalse(store.is_capable(cap))
        self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))


class TestStoreCapabilityConstants(base.StoreBaseTest):

    def test_one_single_capability_own_one_bit(self):
        cap_list = [
            caps.BitMasks.READ_ACCESS,
            caps.BitMasks.WRITE_ACCESS,
            caps.BitMasks.DRIVER_REUSABLE,
        ]
        for cap in cap_list:
            self.assertEqual(1, bin(cap).count('1'))

    def test_combined_capability_bits(self):
        check = caps.StoreCapability.contains
        check(caps.BitMasks.READ_OFFSET, caps.BitMasks.READ_ACCESS)
        check(caps.BitMasks.READ_CHUNK, caps.BitMasks.READ_ACCESS)
        check(caps.BitMasks.READ_RANDOM, caps.BitMasks.READ_CHUNK)
        check(caps.BitMasks.READ_RANDOM, caps.BitMasks.READ_OFFSET)
        check(caps.BitMasks.WRITE_OFFSET, caps.BitMasks.WRITE_ACCESS)
        check(caps.BitMasks.WRITE_CHUNK, caps.BitMasks.WRITE_ACCESS)
        check(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.WRITE_CHUNK)
        check(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.WRITE_OFFSET)
        check(caps.BitMasks.RW_ACCESS, caps.BitMasks.READ_ACCESS)
        check(caps.BitMasks.RW_ACCESS, caps.BitMasks.WRITE_ACCESS)
        check(caps.BitMasks.RW_OFFSET, caps.BitMasks.READ_OFFSET)
        check(caps.BitMasks.RW_OFFSET, caps.BitMasks.WRITE_OFFSET)
        check(caps.BitMasks.RW_CHUNK, caps.BitMasks.READ_CHUNK)
        check(caps.BitMasks.RW_CHUNK, caps.BitMasks.WRITE_CHUNK)
        check(caps.BitMasks.RW_RANDOM, caps.BitMasks.READ_RANDOM)
        check(caps.BitMasks.RW_RANDOM, caps.BitMasks.WRITE_RANDOM)

# ----------------------------------------------------------------------------
# glance_store-4.8.1/glance_store/tests/unit/test_swift_store.py
# ----------------------------------------------------------------------------
# Copyright 2011 OpenStack Foundation.  All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# see http://www.apache.org/licenses/LICENSE-2.0 for the full text.
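# NOTE: rather than contacting a real cluster, the tests below monkeypatch
# swiftclient with in-memory fakes keyed by 'container/object'.  A sketch
# of the fixture shape built in stub_out_swiftclient():
#
#     fixture_objects = {
#         'glance/%s' % FAKE_UUID: io.BytesIO(b"*" * FIVE_KB),
#     }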
"""Tests the Swift backend store""" import copy from unittest import mock import fixtures import hashlib import http.client import importlib import io import tempfile import uuid from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils.secretutils import md5 from oslo_utils import units import requests_mock import swiftclient from glance_store._drivers.swift import buffered from glance_store._drivers.swift import connection_manager as manager from glance_store._drivers.swift import store as swift from glance_store._drivers.swift import utils as sutils from glance_store import backend from glance_store import capabilities from glance_store import exceptions from glance_store import location from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities CONF = cfg.CONF FAKE_UUID = lambda: str(uuid.uuid4()) # noqa: E731 FAKE_UUID2 = lambda: str(uuid.uuid4()) # noqa: E731 FAKE_UUID3 = lambda: str(uuid.uuid4()) # noqa: E731 Store = swift.Store FIVE_KB = 5 * units.Ki FIVE_GB = 5 * units.Gi HASH_ALGO = 'sha256' MAX_SWIFT_OBJECT_SIZE = FIVE_GB SWIFT_PUT_OBJECT_CALLS = 0 SWIFT_CONF = {'swift_store_auth_address': 'localhost:8080', 'swift_store_container': 'glance', 'swift_store_user': 'user', 'swift_store_key': 'key', 'swift_store_retry_get_count': 1, 'default_swift_reference': 'ref1' } class SwiftTests(object): def mock_keystone_client(self): # mock keystone client functions to avoid dependency errors swift.ks_v3 = mock.MagicMock() swift.ks_session = mock.MagicMock() swift.ks_client = mock.MagicMock() def stub_out_swiftclient(self, swift_store_auth_version): fixture_containers = ['glance'] fixture_container_headers = {} fixture_headers = { 'glance/%s' % FAKE_UUID: { 'content-length': FIVE_KB, 'etag': 'c2e5db72bd7fd153f53ede5da5a06de3' }, 'glance/%s' % FAKE_UUID2: {'x-static-large-object': 'true', }, 'glance/%s' % FAKE_UUID3: { 'content-length': 0, 'etag': "doesn't really matter", }, } fixture_objects = { 'glance/%s' % FAKE_UUID: io.BytesIO(b"*" * FIVE_KB), 'glance/%s' % FAKE_UUID2: io.BytesIO(b"*" * FIVE_KB), 'glance/%s' % FAKE_UUID3: io.BytesIO(), } def fake_head_container(url, token, container, **kwargs): if container not in fixture_containers: msg = "No container %s found" % container status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) return fixture_container_headers def fake_put_container(url, token, container, **kwargs): fixture_containers.append(container) def fake_post_container(url, token, container, headers, **kwargs): for key, value in headers.items(): fixture_container_headers[key] = value def fake_put_object(url, token, container, name, contents, **kwargs): # PUT returns the ETag header for the newly-added object # Large object manifest... 
global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS += 1 CHUNKSIZE = 64 * units.Ki fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: if kwargs.get('headers'): manifest = kwargs.get('headers').get('X-Object-Manifest') etag = kwargs.get('headers') \ .get('ETag', md5( b'', usedforsecurity=False).hexdigest()) fixture_headers[fixture_key] = { 'manifest': True, 'etag': etag, 'x-object-manifest': manifest } fixture_objects[fixture_key] = None return etag if hasattr(contents, 'read'): fixture_object = io.BytesIO() read_len = 0 chunk = contents.read(CHUNKSIZE) checksum = md5(usedforsecurity=False) while chunk: fixture_object.write(chunk) read_len += len(chunk) checksum.update(chunk) chunk = contents.read(CHUNKSIZE) etag = checksum.hexdigest() else: fixture_object = io.BytesIO(contents) read_len = len(contents) etag = md5(fixture_object.getvalue(), usedforsecurity=False).hexdigest() if read_len > MAX_SWIFT_OBJECT_SIZE: msg = ('Image size:%d exceeds Swift max:%d' % (read_len, MAX_SWIFT_OBJECT_SIZE)) raise swiftclient.ClientException( msg, http_status=http.client.REQUEST_ENTITY_TOO_LARGE) fixture_objects[fixture_key] = fixture_object fixture_headers[fixture_key] = { 'content-length': read_len, 'etag': etag} return etag else: msg = ("Object PUT failed - Object with key %s already exists" % fixture_key) raise swiftclient.ClientException( msg, http_status=http.client.CONFLICT) def fake_get_object(conn, container, name, **kwargs): # GET returns the tuple (list of headers, file object) fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: msg = "Object GET failed" status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) byte_range = None headers = kwargs.get('headers', dict()) if headers is not None: headers = dict((k.lower(), v) for k, v in headers.items()) if 'range' in headers: byte_range = headers.get('range') fixture = fixture_headers[fixture_key] if 'manifest' in fixture: # Large object manifest... 
we return a file containing # all objects with prefix of this fixture key chunk_keys = sorted([k for k in fixture_headers.keys() if k.startswith(fixture_key) and k != fixture_key]) result = io.BytesIO() for key in chunk_keys: result.write(fixture_objects[key].getvalue()) else: result = fixture_objects[fixture_key] if byte_range is not None: start = int(byte_range.split('=')[1].strip('-')) result = io.BytesIO(result.getvalue()[start:]) fixture_headers[fixture_key]['content-length'] = len( result.getvalue()) return fixture_headers[fixture_key], result def fake_head_object(url, token, container, name, **kwargs): # HEAD returns the list of headers for an object try: fixture_key = "%s/%s" % (container, name) return fixture_headers[fixture_key] except KeyError: msg = "Object HEAD failed - Object does not exist" status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) def fake_delete_object(url, token, container, name, **kwargs): # DELETE returns nothing fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: msg = "Object DELETE failed - Object does not exist" status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) else: del fixture_headers[fixture_key] del fixture_objects[fixture_key] def fake_http_connection(*args, **kwargs): return None def fake_get_auth(url, user, key, auth_version, **kwargs): if url is None: return None, None if 'http' in url and '://' not in url: raise ValueError('Invalid url %s' % url) # Check the auth version against the configured value if swift_store_auth_version != auth_version: msg = 'AUTHENTICATION failed (version mismatch)' raise swiftclient.ClientException(msg) return None, None self.useFixture(fixtures.MockPatch( 'swiftclient.client.head_container', fake_head_container)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.put_container', fake_put_container)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.post_container', fake_post_container)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.put_object', fake_put_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.delete_object', fake_delete_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.head_object', fake_head_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.Connection.get_object', fake_get_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.get_auth', fake_get_auth)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.http_connection', fake_http_connection)) @property def swift_store_user(self): return 'tenant:user1' def test_get_size(self): """ Test that we can get the size of an object in the swift store """ uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) image_size = self.store.get_size(loc) self.assertEqual(5120, image_size) def test_get_size_with_multi_tenant_on(self): """Test that single-tenant URIs work with multi-tenant on.""" uri = ("swift://%s:key@auth_address/glance/%s" % (self.swift_store_user, FAKE_UUID)) self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) # NOTE(markwash): ensure the image is found ctxt = mock.MagicMock() size = backend.get_size_from_backend(uri, context=ctxt) self.assertEqual(5120, size) def test_multi_tenant_with_swift_config(self): """ Test that the store does not start when a config file is set in multi-tenant mode """ schemes = ['swift', 'swift+config'] for s in schemes:
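# A swift_store_config_file pins static account references, which
# conflicts with multi-tenant mode where each request authenticates
# with the caller's own token, so the store must refuse to start.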
self.config(default_store=s, swift_store_config_file='not/none', swift_store_multi_tenant=True) self.assertRaises(exceptions.BadStoreConfiguration, Store, self.conf) def test_get(self): """Test a "normal" retrieval of an image in chunks.""" uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) (image_swift, image_size) = self.store.get(loc) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_using_slice(self): """Test retrieval of an image using slicing.""" uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) (image_swift, image_size) = self.store.get(loc) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB self.assertEqual(expected_data, image_swift[:]) expected_data = b"*" * (FIVE_KB - 100) self.assertEqual(expected_data, image_swift[100:]) def test_get_empty_using_slice(self): """Test retrieval of a zero-byte image using slicing.""" uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID3) loc = location.get_location_from_uri(uri, conf=self.conf) (image_swift, image_size) = self.store.get(loc) self.assertEqual(0, image_size) self.assertEqual(b'', image_swift[0:]) def test_get_with_retry(self): """ Test a retrieval where Swift does not get the full image in a single request. """ uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() (image_swift, image_size) = self.store.get(loc, context=ctxt) resp_full = b''.join([chunk for chunk in image_swift.wrapped]) resp_half = resp_full[:len(resp_full) // 2] resp_half = io.BytesIO(resp_half) manager = self.store.get_manager(loc.store_location, ctxt) image_swift.wrapped = swift.swift_retry_iter(resp_half, image_size, self.store, loc.store_location, manager) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_with_http_auth(self): """ Test a retrieval from Swift with an HTTP authurl.
This is specified either via a Location header with swift+http:// or using http:// in the swift_store_auth_address config value """ loc = location.get_location_from_uri( "swift+http://%s:key@auth_address/glance/%s" % (self.swift_store_user, FAKE_UUID), conf=self.conf) ctxt = mock.MagicMock() (image_swift, image_size) = self.store.get(loc, context=ctxt) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_non_existing(self): """ Test that trying to retrieve a swift object that doesn't exist raises an error """ loc = location.get_location_from_uri( "swift://%s:key@authurl/glance/noexist" % (self.swift_store_user), conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_buffered_reader_opts(self): self.config(swift_buffer_on_upload=True) self.config(swift_upload_buffer_dir=self.test_dir) try: self.store = Store(self.conf) except exceptions.BadStoreConfiguration: self.fail("Buffered Reader exception raised when it " "should not have been") def test_buffered_reader_with_invalid_path(self): self.config(swift_buffer_on_upload=True) self.config(swift_upload_buffer_dir="/some/path") self.store = Store(self.conf) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure) def test_buffered_reader_with_no_path_given(self): self.config(swift_buffer_on_upload=True) self.store = Store(self.conf) self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_add(self): """Test that we can add an image via the swift backend.""" importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256( expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = "swift+https://tenant%%3Auser1:key@localhost:8080/glance/%s" expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) # Expecting a single object to be created on Swift, i.e. no chunking.
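# FIVE_KB is far below the store's large_object_size threshold, so the
# driver should issue exactly one put_object call; chunked uploads are
# covered separately by test_add_large_object below.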
self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_multi_store(self): conf = copy.deepcopy(SWIFT_CONF) conf['default_swift_reference'] = 'store_2' self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 loc = 'swift+config://store_2/glance/%s' expected_location = loc % (expected_image_id) location, size, checksum, multihash, arg = self.store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO) self.assertEqual(expected_location, location) def test_add_raises_storage_full(self): conf = copy.deepcopy(SWIFT_CONF) conf['default_swift_reference'] = 'store_2' self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() def fake_put_object_entity_too_large(*args, **kwargs): msg = "Test Out of Quota" raise swiftclient.ClientException( msg, http_status=http.client.REQUEST_ENTITY_TOO_LARGE) self.useFixture(fixtures.MockPatch( 'swiftclient.client.put_object', fake_put_object_entity_too_large)) expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) image_swift = io.BytesIO(expected_swift_contents) self.assertRaises(exceptions.StorageFull, self.store.add, expected_image_id, image_swift, expected_swift_size, HASH_ALGO) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_multi_tenant_image_add_uses_users_context(self): expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) expected_container = 'container_' + expected_image_id loc = 'swift+https://some_endpoint/%s/%s' expected_location = loc % (expected_container, expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.config(swift_store_container='container') self.config(swift_store_create_container_on_put=True) self.config(swift_store_multi_tenant=True) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] ctxt = mock.MagicMock( user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) store = swift.MultiTenantStore(self.conf) store.configure() loc, size, checksum, multihash, _ = store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO, context=ctxt) # ensure that image add uses user's context self.assertEqual(expected_location, loc) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_auth_url_variations(self): """ Test that we can add an image via the swift backend with a variety of different 
auth_address values """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) variations = { 'store_4': 'swift+config://store_4/glance/%s', 'store_5': 'swift+config://store_5/glance/%s', 'store_6': 'swift+config://store_6/glance/%s' } for variation, expected_location in variations.items(): image_id = str(uuid.uuid4()) expected_location = expected_location % image_id expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = \ md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_multihash = \ hashlib.sha256(expected_swift_contents).hexdigest() image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf['default_swift_reference'] = variation self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() loc, size, checksum, multihash, _ = self.store.add( image_id, image_swift, expected_swift_size, HASH_ALGO) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_no_container_no_create(self): """ Tests that adding an image with a non-existing container raises an appropriate exception """ conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = False conf['swift_store_container'] = 'noexist' self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() image_swift = io.BytesIO(b"nevergonnamakeit") global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # We check the exception text to ensure the container # missing text is found in it, otherwise, we would have # simply used self.assertRaises here exception_caught = False try: self.store.add(str(uuid.uuid4()), image_swift, 0, HASH_ALGO) except exceptions.BackendException as e: exception_caught = True self.assertIn("container noexist does not exist in Swift", encodeutils.exception_to_unicode(e)) self.assertTrue(exception_caught) self.assertEqual(0, SWIFT_PUT_OBJECT_CALLS) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_create(self): """ Tests that adding an image with a non-existing container creates the container automatically if flag is set """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_multihash = \ hashlib.sha256(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/noexist/%s' expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = True conf['swift_store_container'] = 'noexist' self.config(**conf) 
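# Reloading the driver module resets any module-level state patched by
# earlier tests (e.g. the mocked keystone clients) before the store is
# rebuilt from self.conf.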
importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_multiple_containers_create(self): """ Tests that adding an image with a non-existing container while using multi containers will create the container automatically if flag is set """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_multihash = \ hashlib.sha256(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) container = 'randomname_' + expected_image_id[:2] loc = 'swift+config://ref1/%s/%s' expected_location = loc % (container, expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = True conf['swift_store_container'] = 'randomname' conf['swift_store_multiple_containers_seed'] = 2 self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_multiple_containers_no_create(self): """ Tests that adding an image with a non-existing container while using multiple containers raises an appropriate exception """ conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = False conf['swift_store_container'] = 'randomname' conf['swift_store_multiple_containers_seed'] = 2 self.config(**conf) importlib.reload(swift) self.mock_keystone_client() expected_image_id = str(uuid.uuid4()) expected_container = 'randomname_' + expected_image_id[:2] self.store = Store(self.conf) self.store.configure() image_swift = io.BytesIO(b"nevergonnamakeit") global SWIFT_PUT_OBJECT_CALLS 
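# With swift_store_multiple_containers_seed=2 the driver derives the
# container name from the first two characters of the image UUID
# ('randomname_' + image_id[:2]), spreading images across containers.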
SWIFT_PUT_OBJECT_CALLS = 0 # We check the exception text to ensure the container # missing text is found in it, otherwise, we would have # simply used self.assertRaises here exception_caught = False try: self.store.add(expected_image_id, image_swift, 0, HASH_ALGO) except exceptions.BackendException as e: exception_caught = True expected_msg = "container %s does not exist in Swift" expected_msg = expected_msg % expected_container self.assertIn(expected_msg, encodeutils.exception_to_unicode(e)) self.assertTrue(exception_caught) self.assertEqual(0, SWIFT_PUT_OBJECT_CALLS) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_with_verifier(self): """Test that the verifier is updated when verifier is provided.""" swift_size = FIVE_KB base_byte = b"12345678" swift_contents = base_byte * (swift_size // 8) image_id = str(uuid.uuid4()) image_swift = io.BytesIO(swift_contents) self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size custom_size = units.Ki verifier = mock.MagicMock(name='mock_verifier') try: self.store.large_object_size = custom_size self.store.large_object_chunk_size = custom_size self.store.add(image_id, image_swift, swift_size, HASH_ALGO, verifier=verifier) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size # Confirm verifier update called expected number of times self.assertEqual(2 * swift_size / custom_size, verifier.update.call_count) # define one chunk of the contents swift_contents_piece = base_byte * (custom_size // 8) # confirm all expected calls to update have occurred calls = [mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b'')] verifier.update.assert_has_calls(calls) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_with_verifier_small(self): """Test that the verifier is updated for smaller images.""" swift_size = FIVE_KB base_byte = b"12345678" swift_contents = base_byte * (swift_size // 8) image_id = str(uuid.uuid4()) image_swift = io.BytesIO(swift_contents) self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size custom_size = 6 * units.Ki verifier = mock.MagicMock(name='mock_verifier') try: self.store.large_object_size = custom_size self.store.large_object_chunk_size = custom_size self.store.add(image_id, image_swift, swift_size, HASH_ALGO, verifier=verifier) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size # Confirm verifier update called expected number of times self.assertEqual(2, verifier.update.call_count) # define one chunk of the contents swift_contents_piece = base_byte * (swift_size // 8) # confirm all expected calls to update have occurred calls = [mock.call(swift_contents_piece), mock.call(b'')] verifier.update.assert_has_calls(calls) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_multi_container_doesnt_impact_multi_tenant_add(self): expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size 
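# Multi-tenant mode gives each image its own container, named
# '<swift_store_container>_<image_id>' in the user's own project; the
# multiple-containers seed set below must not change that naming.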
expected_image_id = str(uuid.uuid4()) expected_container = 'container_' + expected_image_id loc = 'swift+https://some_endpoint/%s/%s' expected_location = loc % (expected_container, expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.config(swift_store_container='container') self.config(swift_store_create_container_on_put=True) self.config(swift_store_multiple_containers_seed=2) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] ctxt = mock.MagicMock( user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) store = swift.MultiTenantStore(self.conf) store.configure() location, size, checksum, multihash, _ = store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO, context=ctxt) self.assertEqual(expected_location, location) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_large_object(self): """ Tests adding a very large image. We simulate the large object by setting store.large_object_size to a small number and then verify that there have been the expected number of calls to put_object(). """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_multihash = \ hashlib.sha256(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/glance/%s' expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size try: self.store.large_object_size = units.Ki self.store.large_object_chunk_size = units.Ki loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_swift, expected_swift_size, HASH_ALGO) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) # Expecting 6 objects to be created on Swift -- 5 chunks and 1 # manifest. self.assertEqual(6, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_contents) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_large_object_zero_size(self): """ Tests that adding an image to Swift which both has an unknown size and exceeds Swift's maximum object size of 5GB is correctly uploaded. We avoid the overhead of creating a 5GB object for this test by temporarily setting MAX_SWIFT_OBJECT_SIZE to 1KB, and then adding an object of 5KB.
Bug lp:891738 """ # Set up a 'large' image of 5KB expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_multihash = \ hashlib.sha256(expected_swift_contents).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/glance/%s' expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # Temporarily set Swift MAX_SWIFT_OBJECT_SIZE to 1KB and add our image, # explicitly setting the image_length to 0 self.store = Store(self.conf) self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size global MAX_SWIFT_OBJECT_SIZE orig_max_swift_object_size = MAX_SWIFT_OBJECT_SIZE try: MAX_SWIFT_OBJECT_SIZE = units.Ki self.store.large_object_size = units.Ki self.store.large_object_chunk_size = units.Ki loc, size, checksum, multihash, _ = self.store.add( expected_image_id, image_swift, 0, HASH_ALGO) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size MAX_SWIFT_OBJECT_SIZE = orig_max_swift_object_size self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) # Expecting 6 calls to put_object -- 5 chunks, and the manifest. self.assertEqual(6, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri(expected_location, conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_contents) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_already_existing(self): """ Tests that adding an image with an existing identifier raises an appropriate exception """ self.store = Store(self.conf) self.store.configure() image_swift = io.BytesIO(b"nevergonnamakeit") self.assertRaises(exceptions.Duplicate, self.store.add, FAKE_UUID, image_swift, 0, HASH_ALGO) def _option_required(self, key): conf = self.getConfig() conf[key] = None try: self.config(**conf) self.store = Store(self.conf) return not self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS) except Exception: return False def test_no_store_credentials(self): """ Tests that options without valid credentials disable the add method """ self.store = Store(self.conf) self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'user': '', 'key': ''}} self.store.configure() self.assertFalse(self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS)) def test_no_auth_address(self): """ Tests that options without an auth address disable the add method """ self.store = Store(self.conf) self.store.ref_params = {'ref1': {'auth_address': '', 'user': 'user1', 'key': 'key1'}} self.store.configure() self.assertFalse(self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS)) def test_delete(self): """ Test that we can delete an existing image in the swift store """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound,
self.store.get, loc) @mock.patch.object(swiftclient.client, 'delete_object') def test_delete_slo(self, mock_del_obj): """ Test that we can delete an existing image stored as an SLO (static large object) """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) importlib.reload(swift) self.store = Store(self.conf) self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % (self.swift_store_user, FAKE_UUID2) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertEqual(1, mock_del_obj.call_count) _, kwargs = mock_del_obj.call_args self.assertEqual('multipart-manifest=delete', kwargs.get('query_string')) @mock.patch.object(swiftclient.client, 'delete_object') def test_delete_nonslo_not_deleted_as_slo(self, mock_del_obj): """ Test that non-SLO objects are not deleted the SLO way """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % (self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertEqual(1, mock_del_obj.call_count) _, kwargs = mock_del_obj.call_args self.assertIsNone(kwargs.get('query_string')) def test_delete_with_reference_params(self): """ Test that we can delete an existing image in the swift store """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) importlib.reload(swift) # mock client because v3 uses it to receive auth_info self.mock_keystone_client() self.store = Store(self.conf) self.store.configure() uri = "swift+config://ref1/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_delete_non_existing(self): """ Test that trying to delete a swift object that doesn't exist raises an error """ conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) importlib.reload(swift) self.store = Store(self.conf) self.store.configure() loc = location.get_location_from_uri( "swift://%s:key@authurl/glance/noexist" % (self.swift_store_user), conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.delete, loc) def test_delete_with_some_segments_failing(self): """ Tests that delete of a segmented object recovers from error(s) while deleting one or more segments. To test this we add a segmented object first and then delete it, while simulating errors on one or more segments.
""" test_image_id = str(uuid.uuid4()) def fake_head_object(container, object_name): object_manifest = '/'.join([container, object_name]) + '-' return {'x-object-manifest': object_manifest} def fake_get_container(container, **kwargs): # Returning 5 fake segments return None, [{'name': '%s-%03d' % (test_image_id, x)} for x in range(1, 6)] def fake_delete_object(container, object_name): # Simulate error on 1st and 3rd segments global SWIFT_DELETE_OBJECT_CALLS SWIFT_DELETE_OBJECT_CALLS += 1 if object_name.endswith('-001') or object_name.endswith('-003'): raise swiftclient.ClientException('Object DELETE failed') else: pass conf = copy.deepcopy(SWIFT_CONF) self.config(**conf) importlib.reload(swift) self.store = Store(self.conf) self.store.configure() loc_uri = "swift+https://%s:key@localhost:8080/glance/%s" loc_uri = loc_uri % (self.swift_store_user, test_image_id) loc = location.get_location_from_uri(loc_uri) conn = self.store.get_connection(loc.store_location) conn.delete_object = fake_delete_object conn.head_object = fake_head_object conn.get_container = fake_get_container global SWIFT_DELETE_OBJECT_CALLS SWIFT_DELETE_OBJECT_CALLS = 0 self.store.delete(loc, connection=conn) # Expecting 6 delete calls, 5 for the segments and 1 for the manifest self.assertEqual(6, SWIFT_DELETE_OBJECT_CALLS) def test_read_acl_public(self): """ Test that we can set a public read acl. """ self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() store.set_acls(loc, public=True, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual("*:*", container_headers['X-Container-Read']) def test_read_acl_tenants(self): """ Test that we can set read acl for tenants. """ self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri(uri, conf=self.conf) read_tenants = ['matt', 'mark'] ctxt = mock.MagicMock() store.set_acls(loc, read_tenants=read_tenants, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual('matt:*,mark:*', container_headers[ 'X-Container-Read']) def test_write_acls(self): """ Test that we can set write acl for tenants. """ self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri(uri, conf=self.conf) read_tenants = ['frank', 'jim'] ctxt = mock.MagicMock() store.set_acls(loc, write_tenants=read_tenants, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual('frank:*,jim:*', container_headers[ 'X-Container-Write']) @mock.patch("glance_store._drivers.swift." "connection_manager.MultiTenantConnectionManager") def test_get_connection_manager_multi_tenant(self, manager_class): manager = mock.MagicMock() manager_class.return_value = manager self.config(swift_store_config_file=None) self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() loc = mock.MagicMock() self.assertEqual(store.get_manager(loc), manager) @mock.patch("glance_store._drivers.swift." 
"connection_manager.SingleTenantConnectionManager") def test_get_connection_manager_single_tenant(self, manager_class): manager = mock.MagicMock() manager_class.return_value = manager store = Store(self.conf) store.configure() loc = mock.MagicMock() self.assertEqual(store.get_manager(loc), manager) def test_get_connection_manager_failed(self): store = swift.BaseStore(mock.MagicMock()) loc = mock.MagicMock() self.assertRaises(NotImplementedError, store.get_manager, loc) def test_init_client_multi_tenant(self): """Test that keystone client was initialized correctly""" self._init_client(verify=True, swift_store_multi_tenant=True, swift_store_config_file=None) def test_init_client_multi_tenant_swift_cacert(self): """Test that keystone client was initialized with swift cacert""" self._init_client(verify='/foo/bar', swift_store_multi_tenant=True, swift_store_config_file=None, swift_store_cacert='/foo/bar') def test_init_client_multi_tenant_insecure(self): """ Test that keystone client was initialized correctly with no certificate verification. """ self._init_client(verify=False, swift_store_multi_tenant=True, swift_store_auth_insecure=True, swift_store_config_file=None) @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") @mock.patch("glance_store._drivers.swift.store.ks_client") def _init_client(self, mock_client, mock_session, mock_identity, verify, **kwargs): # initialize store and connection parameters self.config(**kwargs) store = Store(self.conf) store.configure() ref_params = sutils.SwiftParams(self.conf).params default_ref = self.conf.glance_store.default_swift_reference default_swift_reference = ref_params.get(default_ref) # prepare client and session trustee_session = mock.MagicMock() trustor_session = mock.MagicMock() main_session = mock.MagicMock() trustee_client = mock.MagicMock() trustee_client.session.get_user_id.return_value = 'fake_user' trustor_client = mock.MagicMock() trustor_client.session.auth.get_auth_ref.return_value = { 'roles': [{'name': 'fake_role'}] } trustor_client.trusts.create.return_value = mock.MagicMock( id='fake_trust') main_client = mock.MagicMock() mock_session.Session.side_effect = [trustor_session, trustee_session, main_session] mock_client.Client.side_effect = [trustor_client, trustee_client, main_client] # initialize client ctxt = mock.MagicMock() client = store.init_client(location=mock.MagicMock(), context=ctxt) # test trustor usage mock_identity.V3Token.assert_called_once_with( auth_url=default_swift_reference.get('auth_address'), token=ctxt.auth_token, project_id=ctxt.project_id ) mock_session.Session.assert_any_call(auth=mock_identity.V3Token(), verify=verify) mock_client.Client.assert_any_call(session=trustor_session) # test trustee usage and trust creation tenant_name, user = default_swift_reference.get('user').split(':') mock_identity.V3Password.assert_any_call( auth_url=default_swift_reference.get('auth_address'), username=user, password=default_swift_reference.get('key'), project_name=tenant_name, user_domain_id=default_swift_reference.get('user_domain_id'), user_domain_name=default_swift_reference.get('user_domain_name'), project_domain_id=default_swift_reference.get('project_domain_id'), project_domain_name=default_swift_reference.get( 'project_domain_name') ) mock_session.Session.assert_any_call(auth=mock_identity.V3Password(), verify=verify) mock_client.Client.assert_any_call(session=trustee_session) trustor_client.trusts.create.assert_called_once_with( 
trustee_user='fake_user', trustor_user=ctxt.user_id, project=ctxt.project_id, impersonation=True, role_names=['fake_role'] ) mock_identity.V3Password.assert_any_call( auth_url=default_swift_reference.get('auth_address'), username=user, password=default_swift_reference.get('key'), trust_id='fake_trust', user_domain_id=default_swift_reference.get('user_domain_id'), user_domain_name=default_swift_reference.get('user_domain_name'), project_domain_id=default_swift_reference.get('project_domain_id'), project_domain_name=default_swift_reference.get( 'project_domain_name') ) mock_client.Client.assert_any_call(session=main_session) self.assertEqual(main_client, client) class TestStoreAuthV1(base.StoreBaseTest, SwiftTests, test_store_capabilities.TestStoreCapabilitiesChecking): _CONF = cfg.CONF def getConfig(self): conf = SWIFT_CONF.copy() conf['swift_store_auth_version'] = '1' conf['swift_store_user'] = 'tenant:user1' return conf def setUp(self): """Establish a clean test environment.""" super(TestStoreAuthV1, self).setUp() conf = self.getConfig() conf_file = 'glance-swift.conf' self.swift_config_file = self.copy_data_file(conf_file, self.test_dir) conf.update({'swift_store_config_file': self.swift_config_file}) self.stub_out_swiftclient(conf['swift_store_auth_version']) self.mock_keystone_client() self.store = Store(self.conf) self.config(**conf) self.store.configure() self.register_store_schemes(self.store, 'swift') self.addCleanup(self.conf.reset) class TestStoreAuthV2(TestStoreAuthV1): def getConfig(self): conf = super(TestStoreAuthV2, self).getConfig() conf['swift_store_auth_version'] = '2' conf['swift_store_user'] = 'tenant:user1' return conf def test_v2_with_no_tenant(self): uri = "swift://failme:key@auth_address/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.assertRaises(exceptions.BadStoreUri, self.store.get, loc) def test_v2_multi_tenant_location(self): conf = self.getConfig() conf['swift_store_multi_tenant'] = True uri = "swift://auth_address/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) self.assertEqual('swift', loc.store_name) class TestStoreAuthV3(TestStoreAuthV1): def getConfig(self): conf = super(TestStoreAuthV3, self).getConfig() conf['swift_store_auth_version'] = '3' conf['swift_store_user'] = 'tenant:user1' return conf @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") @mock.patch("glance_store._drivers.swift.store.ks_client") def test_init_client_single_tenant(self, mock_client, mock_session, mock_identity): """Test that keystone client was initialized correctly""" # initialize client store = Store(self.conf) store.configure() uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() store.init_client(location=loc.store_location, context=ctxt) mock_identity.V3Password.assert_called_once_with( auth_url=loc.store_location.swift_url + '/', username="user1", password="key", project_name="tenant", project_domain_id='default', project_domain_name=None, user_domain_id='default', user_domain_name=None,) mock_session.Session.assert_called_once_with( auth=mock_identity.V3Password(), verify=True) mock_client.Client.assert_called_once_with( session=mock_session.Session()) @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") 
@mock.patch("glance_store._drivers.swift.store.ks_client") def test_init_client_single_tenant_with_domain_ids(self, mock_client, mock_session, mock_identity): """Test that keystone client was initialized correctly""" # initialize client conf = self.getConfig() conf['default_swift_reference'] = 'ref4' self.config(**conf) store = Store(self.conf) store.configure() uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() store.init_client(location=loc.store_location, context=ctxt) mock_identity.V3Password.assert_called_once_with( auth_url=loc.store_location.swift_url + '/', username="user1", password="key", project_name="tenant", project_domain_id='projdomainid', project_domain_name=None, user_domain_id='userdomainid', user_domain_name=None,) mock_session.Session.assert_called_once_with( auth=mock_identity.V3Password(), verify=True) mock_client.Client.assert_called_once_with( session=mock_session.Session()) @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") @mock.patch("glance_store._drivers.swift.store.ks_client") def test_init_client_single_tenant_with_domain_names(self, mock_client, mock_session, mock_identity): """Test that keystone client was initialized correctly""" # initialize client conf = self.getConfig() conf['default_swift_reference'] = 'ref5' self.config(**conf) store = Store(self.conf) store.configure() uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri(uri, conf=self.conf) ctxt = mock.MagicMock() store.init_client(location=loc.store_location, context=ctxt) mock_identity.V3Password.assert_called_once_with( auth_url=loc.store_location.swift_url + '/', username="user1", password="key", project_name="tenant", project_domain_id=None, project_domain_name='projdomain', user_domain_id=None, user_domain_name='userdomain',) mock_session.Session.assert_called_once_with( auth=mock_identity.V3Password(), verify=True) mock_client.Client.assert_called_once_with( session=mock_session.Session()) class FakeConnection(object): def __init__(self, authurl=None, user=None, key=None, retries=5, preauthurl=None, preauthtoken=None, starting_backoff=1, tenant_name=None, os_options=None, auth_version="1", insecure=False, ssl_compression=True, cacert=None): if os_options is None: os_options = {} self.authurl = authurl self.user = user self.key = key self.preauthurl = preauthurl self.preauthtoken = preauthtoken self.tenant_name = tenant_name self.os_options = os_options self.auth_version = auth_version self.insecure = insecure self.cacert = cacert class TestSingleTenantStoreConnections(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestSingleTenantStoreConnections, self).setUp() self.useFixture( fixtures.MockPatch('swiftclient.Connection', FakeConnection)) self.store = swift.SingleTenantStore(self.conf) self.store.configure() specs = {'scheme': 'swift', 'auth_or_store_url': 'example.com/v2/', 'user': 'tenant:user1', 'key': 'key1', 'container': 'cont', 'obj': 'object'} self.location = swift.StoreLocation(specs, self.conf) self.addCleanup(self.conf.reset) def test_basic_connection(self): connection = self.store.get_connection(self.location) self.assertEqual('https://example.com/v2/', connection.authurl) self.assertEqual('2', connection.auth_version) self.assertEqual('user1', connection.user) self.assertEqual('tenant', connection.tenant_name) 
self.assertEqual('key1', connection.key) self.assertIsNone(connection.preauthurl) self.assertFalse(connection.insecure) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_conf_endpoint(self): ctx = mock.MagicMock(user='tenant:user1', tenant='tenant') self.config(swift_store_endpoint='https://internal.com') self.store.configure() connection = self.store.get_connection(self.location, context=ctx) self.assertEqual('https://example.com/v2/', connection.authurl) self.assertEqual('2', connection.auth_version) self.assertEqual('user1', connection.user) self.assertEqual('tenant', connection.tenant_name) self.assertEqual('key1', connection.key) self.assertEqual('https://internal.com', connection.preauthurl) self.assertFalse(connection.insecure) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_conf_endpoint_no_context(self): self.config(swift_store_endpoint='https://internal.com') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual('https://example.com/v2/', connection.authurl) self.assertEqual('2', connection.auth_version) self.assertEqual('user1', connection.user) self.assertEqual('tenant', connection.tenant_name) self.assertEqual('key1', connection.key) self.assertEqual('https://internal.com', connection.preauthurl) self.assertFalse(connection.insecure) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) @mock.patch("keystoneauth1.session.Session.get_endpoint") @mock.patch("keystoneauth1.session.Session.get_auth_headers", new=mock.Mock()) def _test_connection_manager_authv3_conf_endpoint( self, mock_ep, expected_endpoint="https://from-catalog.com"): self.config(swift_store_auth_version='3') mock_ep.return_value = "https://from-catalog.com" ctx = mock.MagicMock() self.store.configure() connection_manager = manager.SingleTenantConnectionManager( store=self.store, store_location=self.location, context=ctx ) conn = connection_manager._init_connection() self.assertEqual(expected_endpoint, conn.preauthurl) def test_connection_manager_authv3_without_conf_endpoint(self): self._test_connection_manager_authv3_conf_endpoint() def test_connection_manager_authv3_with_conf_endpoint(self): self.config(swift_store_endpoint='http://localhost') self._test_connection_manager_authv3_conf_endpoint( expected_endpoint='http://localhost') def test_connection_with_no_trailing_slash(self): self.location.auth_or_store_url = 'example.com/v2' connection = self.store.get_connection(self.location) self.assertEqual('https://example.com/v2/', connection.authurl) def test_connection_insecure(self): self.config(swift_store_auth_insecure=True) self.store.configure() connection = self.store.get_connection(self.location) self.assertTrue(connection.insecure) def test_connection_with_auth_v1(self): self.config(swift_store_auth_version='1') self.store.configure() self.location.user = 'auth_v1_user' connection = self.store.get_connection(self.location) self.assertEqual('1', connection.auth_version) self.assertEqual('auth_v1_user', connection.user) self.assertIsNone(connection.tenant_name) def test_connection_invalid_user(self): self.store.configure() self.location.user = 'invalid:format:user' self.assertRaises(exceptions.BadStoreUri, self.store.get_connection, self.location) def test_connection_missing_user(self): self.store.configure() self.location.user = None 
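# With no user in the location there are no usable credentials, so
# get_connection must raise BadStoreUri rather than guess a default.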
self.assertRaises(exceptions.BadStoreUri, self.store.get_connection, self.location) def test_connection_with_region(self): self.config(swift_store_region='Sahara') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual({'region_name': 'Sahara', 'service_type': 'object-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_service_type(self): self.config(swift_store_service_type='shoe-store') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual({'service_type': 'shoe-store', 'endpoint_type': 'publicURL'}, connection.os_options) def test_connection_with_endpoint_type(self): self.config(swift_store_endpoint_type='internalURL') self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'internalURL'}, connection.os_options) def test_bad_location_uri(self): self.store.configure() self.location.uri = 'http://bad_uri://' self.assertRaises(exceptions.BadStoreUri, self.location.parse_uri, self.location.uri) def test_bad_location_uri_invalid_credentials(self): self.store.configure() self.location.uri = 'swift://bad_creds@uri/cont/obj' self.assertRaises(exceptions.BadStoreUri, self.location.parse_uri, self.location.uri) def test_bad_location_uri_invalid_object_path(self): self.store.configure() self.location.uri = 'swift://user:key@uri/cont' self.assertRaises(exceptions.BadStoreUri, self.location.parse_uri, self.location.uri) def test_ref_overrides_defaults(self): self.config(swift_store_auth_version='2', swift_store_user='testuser', swift_store_key='testpass', swift_store_auth_address='testaddress', swift_store_endpoint_type='internalURL', swift_store_config_file='somefile') self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'auth_version': '3', 'user': 'user:pass', 'user_domain_id': 'default', 'user_domain_name': 'ignored', 'project_domain_id': 'default', 'project_domain_name': 'ignored'}} self.store.configure() self.assertEqual('user:pass', self.store.user) self.assertEqual('3', self.store.auth_version) self.assertEqual('authurl.com', self.store.auth_address) self.assertEqual('default', self.store.user_domain_id) self.assertEqual('ignored', self.store.user_domain_name) self.assertEqual('default', self.store.project_domain_id) self.assertEqual('ignored', self.store.project_domain_name) def test_with_v3_auth(self): self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'auth_version': '3', 'user': 'user:pass', 'key': 'password', 'user_domain_id': 'default', 'user_domain_name': 'ignored', 'project_domain_id': 'default', 'project_domain_name': 'ignored'}} self.store.configure() connection = self.store.get_connection(self.location) self.assertEqual('3', connection.auth_version) self.assertEqual({'service_type': 'object-store', 'endpoint_type': 'publicURL', 'user_domain_id': 'default', 'user_domain_name': 'ignored', 'project_domain_id': 'default', 'project_domain_name': 'ignored'}, connection.os_options) class TestMultiTenantStoreConnections(base.StoreBaseTest): def setUp(self): super(TestMultiTenantStoreConnections, self).setUp() self.useFixture( fixtures.MockPatch('swiftclient.Connection', FakeConnection)) self.context = mock.MagicMock( user='tenant:user1', tenant='tenant', auth_token='0123') self.store = swift.MultiTenantStore(self.conf) specs = {'scheme': 'swift', 'auth_or_store_url': 'example.com', 'container': 'cont', 'obj': 'object'} self.location = swift.StoreLocation(specs, 
self.conf) self.addCleanup(self.conf.reset) def test_basic_connection(self): self.store.configure() connection = self.store.get_connection(self.location, context=self.context) self.assertIsNone(connection.authurl) self.assertEqual('1', connection.auth_version) self.assertIsNone(connection.user) self.assertIsNone(connection.tenant_name) self.assertIsNone(connection.key) self.assertEqual('https://example.com', connection.preauthurl) self.assertEqual('0123', connection.preauthtoken) self.assertEqual({}, connection.os_options) def test_connection_does_not_use_endpoint_from_catalog(self): self.store.configure() self.context.service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'region': 'RegionOne', 'publicURL': 'https://scexample.com', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] connection = self.store.get_connection(self.location, context=self.context) self.assertIsNone(connection.authurl) self.assertEqual('1', connection.auth_version) self.assertIsNone(connection.user) self.assertIsNone(connection.tenant_name) self.assertIsNone(connection.key) self.assertNotEqual('https://scexample.com', connection.preauthurl) self.assertEqual('https://example.com', connection.preauthurl) self.assertEqual('0123', connection.preauthtoken) self.assertEqual({}, connection.os_options) def test_connection_manager_does_not_use_endpoint_from_catalog(self): self.store.configure() self.context.service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'region': 'RegionOne', 'publicURL': 'https://scexample.com', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] connection_manager = manager.MultiTenantConnectionManager( store=self.store, store_location=self.location, context=self.context ) conn = connection_manager._init_connection() self.assertNotEqual('https://scexample.com', conn.preauthurl) self.assertEqual('https://example.com', conn.preauthurl) class TestMultiTenantStoreContext(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): """Establish a clean test environment.""" super(TestMultiTenantStoreContext, self).setUp() conf = SWIFT_CONF.copy() self.store = Store(self.conf) self.config(**conf) self.store.configure() self.register_store_schemes(self.store, 'swift') service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'region': 'RegionOne', 'publicURL': 'http://127.0.0.1:0', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] self.ctx = mock.MagicMock( service_catalog=service_catalog, user='tenant:user1', tenant='tenant', auth_token='0123') self.addCleanup(self.conf.reset) @requests_mock.mock() def test_download_context(self, m): """Verify context (ie token) is passed to swift on download.""" self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() uri = "swift+http://127.0.0.1/glance_123/123" loc = location.get_location_from_uri(uri, conf=self.conf) m.get("http://127.0.0.1/glance_123/123", headers={'Content-Length': '0'}) store.get(loc, context=self.ctx) self.assertEqual(b'0123', m.last_request.headers['X-Auth-Token']) @requests_mock.mock() def test_upload_context(self, m): """Verify context (ie token) is passed to swift on upload.""" head_req = m.head("http://127.0.0.1/glance_123", text='Some data', status_code=201) put_req = m.put("http://127.0.0.1/glance_123/123") self.config(swift_store_multi_tenant=True) store = Store(self.conf) store.configure() content = b'Some data' pseudo_file = io.BytesIO(content) store.add('123', pseudo_file, len(content), HASH_ALGO, context=self.ctx) self.assertEqual(b'0123', 
head_req.last_request.headers['X-Auth-Token']) self.assertEqual(b'0123', put_req.last_request.headers['X-Auth-Token']) class TestCreatingLocations(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestCreatingLocations, self).setUp() conf = copy.deepcopy(SWIFT_CONF) self.store = Store(self.conf) self.config(**conf) importlib.reload(swift) self.addCleanup(self.conf.reset) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] self.ctxt = mock.MagicMock(user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) def test_single_tenant_location(self): conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_container'] = 'container' conf_file = "glance-swift.conf" self.swift_config_file = self.copy_data_file(conf_file, self.test_dir) conf.update({'swift_store_config_file': self.swift_config_file}) conf['default_swift_reference'] = 'ref1' self.config(**conf) importlib.reload(swift) store = swift.SingleTenantStore(self.conf) store.configure() location = store.create_location('image-id') self.assertEqual('swift+https', location.scheme) self.assertEqual('https://example.com', location.swift_url) self.assertEqual('container', location.container) self.assertEqual('image-id', location.obj) self.assertEqual('tenant:user1', location.user) self.assertEqual('key1', location.key) def test_single_tenant_location_http(self): conf_file = "glance-swift.conf" test_dir = self.useFixture(fixtures.TempDir()).path self.swift_config_file = self.copy_data_file(conf_file, test_dir) self.config(swift_store_container='container', default_swift_reference='ref2', swift_store_config_file=self.swift_config_file) store = swift.SingleTenantStore(self.conf) store.configure() location = store.create_location('image-id') self.assertEqual('swift+http', location.scheme) self.assertEqual('http://example.com', location.swift_url) def test_multi_tenant_location(self): self.config(swift_store_container='container') store = swift.MultiTenantStore(self.conf) store.configure() location = store.create_location('image-id', context=self.ctxt) self.assertEqual('swift+https', location.scheme) self.assertEqual('https://some_endpoint', location.swift_url) self.assertEqual('container_image-id', location.container) self.assertEqual('image-id', location.obj) self.assertIsNone(location.user) self.assertIsNone(location.key) def test_multi_tenant_location_http(self): store = swift.MultiTenantStore(self.conf) store.configure() self.ctxt.service_catalog[0]['endpoints'][0]['publicURL'] = \ 'http://some_endpoint' location = store.create_location('image-id', context=self.ctxt) self.assertEqual('swift+http', location.scheme) self.assertEqual('http://some_endpoint', location.swift_url) def test_multi_tenant_location_with_region(self): self.config(swift_store_region='WestCarolina') store = swift.MultiTenantStore(self.conf) store.configure() self.ctxt.service_catalog[0]['endpoints'][0]['region'] = 'WestCarolina' self.assertEqual('https://some_endpoint', store._get_endpoint(self.ctxt)) def test_multi_tenant_location_custom_service_type(self): self.config(swift_store_service_type='toy-store') self.ctxt.service_catalog[0]['type'] = 'toy-store' store = swift.MultiTenantStore(self.conf) store.configure() store._get_endpoint(self.ctxt) self.assertEqual('https://some_endpoint', store._get_endpoint(self.ctxt)) def 
test_multi_tenant_location_custom_endpoint_type(self): self.config(swift_store_endpoint_type='internalURL') store = swift.MultiTenantStore(self.conf) store.configure() self.assertEqual('https://some_internal_endpoint', store._get_endpoint(self.ctxt)) class TestChunkReader(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestChunkReader, self).setUp() conf = copy.deepcopy(SWIFT_CONF) Store(self.conf) self.config(**conf) def test_read_all_data(self): """ Replicate what goes on in the Swift driver with the repeated creation of the ChunkReader object """ CHUNKSIZE = 100 data = b'*' * units.Ki expected_checksum = md5(data, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(data).hexdigest() data_file = tempfile.NamedTemporaryFile() data_file.write(data) data_file.flush() infile = open(data_file.name, 'rb') bytes_read = 0 checksum = md5(usedforsecurity=False) os_hash_value = hashlib.sha256() while True: cr = swift.ChunkReader(infile, checksum, os_hash_value, CHUNKSIZE) chunk = cr.read(CHUNKSIZE) if len(chunk) == 0: self.assertEqual(True, cr.is_zero_size) break bytes_read += len(chunk) self.assertEqual(units.Ki, bytes_read) self.assertEqual(expected_checksum, cr.checksum.hexdigest()) self.assertEqual(expected_multihash, cr.os_hash_value.hexdigest()) data_file.close() infile.close() def test_read_zero_size_data(self): """ Replicate what goes on in the Swift driver with the repeated creation of the ChunkReader object """ expected_checksum = md5(b'', usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(b'').hexdigest() CHUNKSIZE = 100 checksum = md5(usedforsecurity=False) os_hash_value = hashlib.sha256() data_file = tempfile.NamedTemporaryFile() infile = open(data_file.name, 'rb') bytes_read = 0 while True: cr = swift.ChunkReader(infile, checksum, os_hash_value, CHUNKSIZE) chunk = cr.read(CHUNKSIZE) if len(chunk) == 0: break bytes_read += len(chunk) self.assertEqual(True, cr.is_zero_size) self.assertEqual(0, bytes_read) self.assertEqual(expected_checksum, cr.checksum.hexdigest()) self.assertEqual(expected_multihash, cr.os_hash_value.hexdigest()) data_file.close() infile.close() class TestMultipleContainers(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestMultipleContainers, self).setUp() self.config(swift_store_multiple_containers_seed=3) self.store = swift.SingleTenantStore(self.conf) self.store.configure() def test_get_container_name_happy_path_with_seed_three(self): test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_fda' self.assertEqual(expected, actual) def test_get_container_name_with_negative_seed(self): self.assertRaises(ValueError, self.config, swift_store_multiple_containers_seed=-1) def test_get_container_name_with_seed_beyond_max(self): self.assertRaises(ValueError, self.config, swift_store_multiple_containers_seed=33) def test_get_container_name_with_max_seed(self): self.config(swift_store_multiple_containers_seed=32) self.store = swift.SingleTenantStore(self.conf) test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_' + test_image_id self.assertEqual(expected, actual) def test_get_container_name_with_dash(self): self.config(swift_store_multiple_containers_seed=10) self.store = swift.SingleTenantStore(self.conf) test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = 
self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_' + 'fdae39a1-ba' self.assertEqual(expected, actual) def test_get_container_name_with_min_seed(self): self.config(swift_store_multiple_containers_seed=1) self.store = swift.SingleTenantStore(self.conf) test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container_' + 'f' self.assertEqual(expected, actual) def test_get_container_name_with_multiple_containers_turned_off(self): self.config(swift_store_multiple_containers_seed=0) self.store.configure() test_image_id = 'random_id' actual = self.store.get_container_name(test_image_id, 'default_container') expected = 'default_container' self.assertEqual(expected, actual) class TestBufferedReader(base.StoreBaseTest): _CONF = cfg.CONF def setUp(self): super(TestBufferedReader, self).setUp() self.config(swift_upload_buffer_dir=self.test_dir) s = b'1234567890' self.infile = io.BytesIO(s) self.infile.seek(0) self.checksum = md5(usedforsecurity=False) self.hash_algo = HASH_ALGO self.os_hash_value = hashlib.sha256() self.verifier = mock.MagicMock(name='mock_verifier') total = 7 # not the full 10 byte string - defines segment boundary self.reader = buffered.BufferedReader(self.infile, self.checksum, self.os_hash_value, total, self.verifier) self.addCleanup(self.conf.reset) def tearDown(self): super(TestBufferedReader, self).tearDown() self.reader.__exit__(None, None, None) def test_buffer(self): self.reader.read(4) self.assertTrue(self.reader._buffered) # test buffer position self.assertEqual(4, self.reader.tell()) # also test buffer contents buf = self.reader._tmpfile buf.seek(0) self.assertEqual(b'1234567', buf.read()) def test_read(self): buf = self.reader.read(4) # buffer and return 1234 self.assertEqual(b'1234', buf) buf = self.reader.read(4) # return 567 self.assertEqual(b'567', buf) self.assertEqual(7, self.reader.tell()) def test_read_limited(self): # read should not exceed the segment boundary described # by 'total' self.assertEqual(b'1234567', self.reader.read(100)) def test_reset(self): # test a reset like what swiftclient would do # if a segment upload failed. self.assertEqual(0, self.reader.tell()) self.reader.read(4) self.assertEqual(4, self.reader.tell()) self.reader.seek(0) self.assertEqual(0, self.reader.tell()) # confirm a read after reset self.assertEqual(b'1234', self.reader.read(4)) def test_partial_reset(self): # reset, but not all the way to the beginning self.reader.read(4) self.reader.seek(2) self.assertEqual(b'34567', self.reader.read(10)) def test_checksums(self): # checksums are updated only once on a full segment read expected_csum = md5(usedforsecurity=False) expected_csum.update(b'1234567') expected_multihash = hashlib.sha256() expected_multihash.update(b'1234567') self.reader.read(7) self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) def test_checksum_updated_only_once_w_full_segment_read(self): # Test that checksums are updated only once when a full segment read # is followed by a seek and partial reads. 
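# A minimal sketch of the bookkeeping these assertions rely on (an
# assumption about BufferedReader internals, not a quote of the driver):
# the reader keeps a high-water mark of bytes already hashed, so data
# replayed from the buffer after a seek() is never fed to the digests a
# second time. Roughly:
#
#     if self.tell() + len(chunk) > hashed_upto:
#         fresh = chunk[hashed_upto - self.tell():]
#         checksum.update(fresh)
#         os_hash_value.update(fresh)
#         hashed_upto += len(fresh)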
expected_csum = md5(usedforsecurity=False) expected_csum.update(b'1234567') expected_multihash = hashlib.sha256() expected_multihash.update(b'1234567') self.reader.read(7) # attempted read of the entire chunk self.reader.seek(4) # seek back due to possible partial failure self.reader.read(1) # read one more byte # checksum was updated just once during the first attempted full read self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) def test_checksum_updates_during_partial_segment_reads(self): # Test to check that checksums are updated with only the bytes # not seen when the number of bytes being read is changed expected_csum = md5(usedforsecurity=False) expected_multihash = hashlib.sha256() self.reader.read(4) expected_csum.update(b'1234') expected_multihash.update(b'1234') self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) self.reader.seek(0) # possible failure self.reader.read(2) self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) self.reader.read(4) # checksum missing two bytes expected_csum.update(b'56') expected_multihash.update(b'56') # checksum updated with only the bytes it did not see self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) def test_checksum_rolling_calls(self): # Test that the checksum continues on to the next segment expected_csum = md5(usedforsecurity=False) expected_multihash = hashlib.sha256() self.reader.read(7) expected_csum.update(b'1234567') expected_multihash.update(b'1234567') self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) # another reader to complete reading the image file reader1 = buffered.BufferedReader(self.infile, self.checksum, self.os_hash_value, 3, self.reader.verifier) reader1.read(3) expected_csum.update(b'890') expected_multihash.update(b'890') self.assertEqual(expected_csum.hexdigest(), self.checksum.hexdigest()) self.assertEqual(expected_multihash.hexdigest(), self.os_hash_value.hexdigest()) def test_verifier(self): # Test that the verifier is updated only once on a full segment read. self.reader.read(7) self.verifier.update.assert_called_once_with(b'1234567') def test_verifier_updated_only_once_w_full_segment_read(self): # Test that the verifier is updated only once when a full segment read # is followed by a seek and partial reads. 
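# The verifier (typically an image-signature verifier) must follow the
# same high-water-mark rule as the checksums above: feeding a replayed
# byte range to update() twice would corrupt the running digest and make
# signature verification fail after a retried segment upload.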
self.reader.read(7) # attempted read of the entire chunk self.reader.seek(4) # seek back due to possible partial failure self.reader.read(5) # continue reading # verifier was updated just once during the first attempted full read self.verifier.update.assert_called_once_with(b'1234567') def test_verifier_updates_during_partial_segment_reads(self): # Test to check that verifier is updated with only the bytes it has # not seen when the number of bytes being read is changed self.reader.read(4) self.verifier.update.assert_called_once_with(b'1234') self.reader.seek(0) # possible failure self.reader.read(2) # verifier knows ahead self.verifier.update.assert_called_once_with(b'1234') self.reader.read(4) # verify missing 2 bytes # verifier updated with only the bytes it did not see self.verifier.update.assert_called_with(b'56') # verifier updated self.assertEqual(2, self.verifier.update.call_count) def test_verifier_rolling_calls(self): # Test that the verifier continues on to the next segment self.reader.read(7) self.verifier.update.assert_called_once_with(b'1234567') self.assertEqual(1, self.verifier.update.call_count) # another reader to complete reading the image file reader1 = buffered.BufferedReader(self.infile, self.checksum, self.os_hash_value, 3, self.reader.verifier) reader1.read(3) self.verifier.update.assert_called_with(b'890') self.assertEqual(2, self.verifier.update.call_count) def test_light_buffer(self): # eventlet nonblocking fds means sometimes the buffer won't fill. # simulate testing where there is less in the buffer than a # full segment s = b'12' infile = io.BytesIO(s) infile.seek(0) total = 7 checksum = md5(usedforsecurity=False) os_hash_value = hashlib.sha256() self.reader = buffered.BufferedReader( infile, checksum, os_hash_value, total) self.reader.read(0) # read into buffer self.assertEqual(b'12', self.reader.read(7)) self.assertEqual(2, self.reader.tell()) def test_context_exit(self): # should close tempfile on context exit with self.reader: pass # file objects are not required to have a 'close' attribute if getattr(self.reader._tmpfile, 'close', None): self.assertTrue(self.reader._tmpfile.closed) def test_read_all_data(self): """ Replicate what goes on in the Swift driver with the repeated creation of the BufferedReader object """ CHUNKSIZE = 100 data = b'*' * units.Ki expected_checksum = md5(data, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(data).hexdigest() data_file = tempfile.NamedTemporaryFile() data_file.write(data) data_file.flush() infile = open(data_file.name, 'rb') bytes_read = 0 checksum = md5(usedforsecurity=False) os_hash_value = hashlib.sha256() while True: cr = buffered.BufferedReader(infile, checksum, os_hash_value, CHUNKSIZE) chunk = cr.read(CHUNKSIZE) if len(chunk) == 0: self.assertEqual(True, cr.is_zero_size) break else: self.assertEqual(False, cr.is_zero_size) bytes_read += len(chunk) self.assertEqual(units.Ki, bytes_read) self.assertEqual(expected_checksum, cr.checksum.hexdigest()) self.assertEqual(expected_multihash, cr.os_hash_value.hexdigest()) data_file.close() infile.close()

glance_store-4.8.1/glance_store/tests/unit/test_swift_store_multibackend.py

# Copyright 2018 RedHat Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests the Swift backend store""" import copy from unittest import mock import fixtures import hashlib import http.client import importlib import io import tempfile import uuid from oslo_config import cfg from oslo_utils import encodeutils from oslo_utils.secretutils import md5 from oslo_utils import units import requests_mock import swiftclient from glance_store._drivers.swift import connection_manager as manager from glance_store._drivers.swift import store as swift from glance_store._drivers.swift import utils as sutils from glance_store import capabilities from glance_store import exceptions from glance_store import location import glance_store.multi_backend as store from glance_store.tests import base from glance_store.tests.unit import test_store_capabilities CONF = cfg.CONF FAKE_UUID = lambda: str(uuid.uuid4()) # noqa: E731 FAKE_UUID2 = lambda: str(uuid.uuid4()) # noqa: E731 Store = swift.Store FIVE_KB = 5 * units.Ki FIVE_GB = 5 * units.Gi MAX_SWIFT_OBJECT_SIZE = FIVE_GB SWIFT_PUT_OBJECT_CALLS = 0 SWIFT_CONF = {'swift_store_auth_address': 'localhost:8080', 'swift_store_container': 'glance', 'swift_store_user': 'user', 'swift_store_key': 'key', 'swift_store_retry_get_count': 1, 'default_swift_reference': 'ref1' } class SwiftTests(object): def mock_keystone_client(self): # mock keystone client functions to avoid dependency errors swift.ks_v3 = mock.MagicMock() swift.ks_session = mock.MagicMock() swift.ks_client = mock.MagicMock() def stub_out_swiftclient(self, swift_store_auth_version): fixture_containers = ['glance'] fixture_container_headers = {} fixture_headers = { 'glance/%s' % FAKE_UUID: { 'content-length': FIVE_KB, 'etag': 'c2e5db72bd7fd153f53ede5da5a06de3' }, 'glance/%s' % FAKE_UUID2: {'x-static-large-object': 'true', }, } fixture_objects = { 'glance/%s' % FAKE_UUID: io.BytesIO(b"*" * FIVE_KB), 'glance/%s' % FAKE_UUID2: io.BytesIO(b"*" * FIVE_KB), } def fake_head_container(url, token, container, **kwargs): if container not in fixture_containers: msg = "No container %s found" % container status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) return fixture_container_headers def fake_put_container(url, token, container, **kwargs): fixture_containers.append(container) def fake_post_container(url, token, container, headers, **kwargs): for key, value in headers.items(): fixture_container_headers[key] = value def fake_put_object(url, token, container, name, contents, **kwargs): # PUT returns the ETag header for the newly-added object # Large object manifest... 
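# A sketch of the real Swift behaviour this fake mimics: a PUT carrying
# an X-Object-Manifest header creates a (zero-byte) DLO manifest that
# points at a segment prefix, while an ordinary PUT stores the body and
# returns its MD5 as the ETag; an oversized body draws a 413.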
global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS += 1 CHUNKSIZE = 64 * units.Ki fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: if kwargs.get('headers'): manifest = kwargs.get('headers').get('X-Object-Manifest') etag = kwargs.get('headers') \ .get('ETag', md5( b'', usedforsecurity=False).hexdigest()) fixture_headers[fixture_key] = { 'manifest': True, 'etag': etag, 'x-object-manifest': manifest } fixture_objects[fixture_key] = None return etag if hasattr(contents, 'read'): fixture_object = io.BytesIO() read_len = 0 chunk = contents.read(CHUNKSIZE) checksum = md5(usedforsecurity=False) while chunk: fixture_object.write(chunk) read_len += len(chunk) checksum.update(chunk) chunk = contents.read(CHUNKSIZE) etag = checksum.hexdigest() else: fixture_object = io.BytesIO(contents) read_len = len(contents) etag = md5(fixture_object.getvalue(), usedforsecurity=False).hexdigest() if read_len > MAX_SWIFT_OBJECT_SIZE: msg = ('Image size:%d exceeds Swift max:%d' % (read_len, MAX_SWIFT_OBJECT_SIZE)) raise swiftclient.ClientException( msg, http_status=http.client.REQUEST_ENTITY_TOO_LARGE) fixture_objects[fixture_key] = fixture_object fixture_headers[fixture_key] = { 'content-length': read_len, 'etag': etag} return etag else: msg = ("Object PUT failed - Object with key %s already exists" % fixture_key) raise swiftclient.ClientException( msg, http_status=http.client.CONFLICT) def fake_get_object(conn, container, name, **kwargs): # GET returns the tuple (list of headers, file object) fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: msg = "Object GET failed" status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) byte_range = None headers = kwargs.get('headers', dict()) if headers is not None: headers = dict((k.lower(), v) for k, v in headers.items()) if 'range' in headers: byte_range = headers.get('range') fixture = fixture_headers[fixture_key] if 'manifest' in fixture: # Large object manifest... 
we return a file containing # all objects with prefix of this fixture key chunk_keys = sorted([k for k in fixture_headers.keys() if k.startswith(fixture_key) and k != fixture_key]) result = io.BytesIO() for key in chunk_keys: result.write(fixture_objects[key].getvalue()) else: result = fixture_objects[fixture_key] if byte_range is not None: start = int(byte_range.split('=')[1].strip('-')) result = io.BytesIO(result.getvalue()[start:]) fixture_headers[fixture_key]['content-length'] = len( result.getvalue()) return fixture_headers[fixture_key], result def fake_head_object(url, token, container, name, **kwargs): # HEAD returns the list of headers for an object try: fixture_key = "%s/%s" % (container, name) return fixture_headers[fixture_key] except KeyError: msg = "Object HEAD failed - Object does not exist" status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) def fake_delete_object(url, token, container, name, **kwargs): # DELETE returns nothing fixture_key = "%s/%s" % (container, name) if fixture_key not in fixture_headers: msg = "Object DELETE failed - Object does not exist" status = http.client.NOT_FOUND raise swiftclient.ClientException(msg, http_status=status) else: del fixture_headers[fixture_key] del fixture_objects[fixture_key] def fake_http_connection(*args, **kwargs): return None def fake_get_auth(url, user, key, auth_version, **kwargs): if url is None: return None, None if 'http' in url and '://' not in url: raise ValueError('Invalid url %s' % url) # Check the auth version against the configured value if swift_store_auth_version != auth_version: msg = 'AUTHENTICATION failed (version mismatch)' raise swiftclient.ClientException(msg) return None, None self.useFixture(fixtures.MockPatch( 'swiftclient.client.head_container', fake_head_container)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.put_container', fake_put_container)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.post_container', fake_post_container)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.put_object', fake_put_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.delete_object', fake_delete_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.head_object', fake_head_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.Connection.get_object', fake_get_object)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.get_auth', fake_get_auth)) self.useFixture(fixtures.MockPatch( 'swiftclient.client.http_connection', fake_http_connection)) @property def swift_store_user(self): return 'tenant:user1' def test_get_size(self): """ Test that we can get the size of an object in the swift store """ uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) image_size = self.store.get_size(loc) self.assertEqual(5120, image_size) @mock.patch.object(store, 'get_store_from_store_identifier') def test_get_size_with_multi_tenant_on(self, mock_get): """Test that single tenant uris work with multi tenant on.""" mock_get.return_value = self.store uri = ("swift://%s:key@auth_address/glance/%s" % (self.swift_store_user, FAKE_UUID)) self.config(group="swift1", swift_store_config_file=None) self.config(group="swift1", swift_store_multi_tenant=True) # NOTE(markwash): ensure the image is found ctxt = mock.MagicMock() size = store.get_size_from_uri_and_backend( uri, "swift1", context=ctxt) self.assertEqual(5120, size) def 
test_multi_tenant_with_swift_config(self): """ Test that Glance does not start when a config file is set on multi-tenant mode """ schemes = ['swift', 'swift+config'] for s in schemes: self.config(group='glance_store', default_backend="swift1") self.config(group="swift1", swift_store_config_file='not/none', swift_store_multi_tenant=True) self.assertRaises(exceptions.BadStoreConfiguration, Store, self.conf, backend="swift1") def test_get(self): """Test a "normal" retrieval of an image in chunks.""" uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) (image_swift, image_size) = self.store.get(loc) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_with_retry(self): """ Test a retrieval where Swift does not get the full image in a single request. """ uri = "swift://%s:key@auth_address/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) ctxt = mock.MagicMock() (image_swift, image_size) = self.store.get(loc, context=ctxt) resp_full = b''.join([chunk for chunk in image_swift.wrapped]) resp_half = resp_full[:len(resp_full) // 2] resp_half = io.BytesIO(resp_half) manager = self.store.get_manager(loc.store_location, ctxt) image_swift.wrapped = swift.swift_retry_iter(resp_half, image_size, self.store, loc.store_location, manager) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_with_http_auth(self): """ Test a retrieval from Swift with an HTTP authurl. 
This is specified either via a Location header with swift+http:// or using http:// in the swift_store_auth_address config value """ loc = location.get_location_from_uri_and_backend( "swift+http://%s:key@auth_address/glance/%s" % (self.swift_store_user, FAKE_UUID), "swift1", conf=self.conf) ctxt = mock.MagicMock() (image_swift, image_size) = self.store.get(loc, context=ctxt) self.assertEqual(5120, image_size) expected_data = b"*" * FIVE_KB data = b"" for chunk in image_swift: data += chunk self.assertEqual(expected_data, data) def test_get_non_existing(self): """ Test that trying to retrieve a swift that doesn't exist raises an error """ loc = location.get_location_from_uri_and_backend( "swift://%s:key@authurl/glance/noexist" % (self.swift_store_user), "swift1", conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_buffered_reader_opts(self): self.config(group="swift1", swift_buffer_on_upload=True) self.config(group="swift1", swift_upload_buffer_dir=self.test_dir) try: self.store = Store(self.conf, backend="swift1") except exceptions.BadStoreConfiguration: self.fail("Buffered Reader exception raised when it " "should not have been") def test_buffered_reader_with_invalid_path(self): self.config(group="swift1", swift_buffer_on_upload=True) self.config(group="swift1", swift_upload_buffer_dir="/some/path") self.store = Store(self.conf, backend="swift1") self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure) def test_buffered_reader_with_no_path_given(self): self.config(group="swift1", swift_buffer_on_upload=True) self.store = Store(self.conf, backend="swift1") self.assertRaises(exceptions.BadStoreConfiguration, self.store.configure) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_add(self): """Test that we can add an image via the swift backend.""" importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_image_id = str(uuid.uuid4()) loc = "swift+https://tenant%%3Auser1:key@localhost:8080/glance/%s" expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 loc, size, checksum, metadata = self.store.add( expected_image_id, image_swift, expected_swift_size) self.assertEqual("swift1", metadata["store"]) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) # Expecting a single object to be created on Swift i.e. no chunking. 
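# (Chunking only starts once the image exceeds store.large_object_size,
# and FIVE_KB is far below any sane setting of that threshold;
# test_add_large_object below lowers the threshold explicitly to force
# the chunked path.)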
self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri_and_backend( expected_location, "swift1", conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_multi_store(self): conf = copy.deepcopy(SWIFT_CONF) conf['default_swift_reference'] = 'store_2' self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 loc = 'swift+config://store_2/glance/%s' expected_location = loc % (expected_image_id) location, size, checksum, arg = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual("swift1", arg['store']) self.assertEqual(expected_location, location) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_multi_tenant_image_add_uses_users_context(self): expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) expected_container = 'container_' + expected_image_id loc = 'swift+https://some_endpoint/%s/%s' expected_location = loc % (expected_container, expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.config(group='swift1', swift_store_container='container') self.config(group='swift1', swift_store_create_container_on_put=True) self.config(group='swift1', swift_store_multi_tenant=True) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] ctxt = mock.MagicMock( user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) store = swift.MultiTenantStore(self.conf, backend='swift1') store.configure() loc, size, checksum, metadata = store.add(expected_image_id, image_swift, expected_swift_size, context=ctxt) self.assertEqual("swift1", metadata['store']) # ensure that image add uses user's context self.assertEqual(expected_location, loc) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_auth_url_variations(self): """ Test that we can add an image via the swift backend with a variety of different auth_address values """ conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) variations = { 'store_4': 'swift+config://store_4/glance/%s', 'store_5': 'swift+config://store_5/glance/%s', 'store_6': 'swift+config://store_6/glance/%s' } for variation, expected_location in variations.items(): image_id = str(uuid.uuid4()) expected_location = expected_location % image_id expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = \ md5(expected_swift_contents, usedforsecurity=False).hexdigest() image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS 
SWIFT_PUT_OBJECT_CALLS = 0 conf['default_swift_reference'] = variation self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() loc, size, checksum, metadata = self.store.add(image_id, image_swift, expected_swift_size) self.assertEqual("swift1", metadata['store']) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri_and_backend( expected_location, "swift1", conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_no_container_no_create(self): """ Tests that adding an image with a non-existing container raises an appropriate exception """ conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = False conf['swift_store_container'] = 'noexist' self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend='swift1') self.store.configure() image_swift = io.BytesIO(b"nevergonnamakeit") global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # We check the exception text to ensure the container # missing text is found in it, otherwise, we would have # simply used self.assertRaises here exception_caught = False try: self.store.add(str(uuid.uuid4()), image_swift, 0) except exceptions.BackendException as e: exception_caught = True self.assertIn("container noexist does not exist in Swift", encodeutils.exception_to_unicode(e)) self.assertTrue(exception_caught) self.assertEqual(0, SWIFT_PUT_OBJECT_CALLS) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_create(self): """ Tests that adding an image with a non-existing container creates the container automatically if flag is set """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/noexist/%s' expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = True conf['swift_store_container'] = 'noexist' self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() loc, size, checksum, metadata = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual("swift1", metadata['store']) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri_and_backend( expected_location, "swift1", conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) 
self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_multiple_containers_create(self): """ Tests that adding an image with a non-existing container while using multi containers will create the container automatically if flag is set """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_image_id = str(uuid.uuid4()) container = 'randomname_' + expected_image_id[:2] loc = 'swift+config://ref1/%s/%s' expected_location = loc % (container, expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = True conf['swift_store_container'] = 'randomname' conf['swift_store_multiple_containers_seed'] = 2 self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() loc, size, checksum, metadata = self.store.add(expected_image_id, image_swift, expected_swift_size) self.assertEqual("swift1", metadata['store']) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(1, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri_and_backend( expected_location, "swift1", conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_swift) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_no_container_and_multiple_containers_no_create(self): """ Tests that adding an image with a non-existing container while using multiple containers raises an appropriate exception """ conf = copy.deepcopy(SWIFT_CONF) conf['swift_store_user'] = 'tenant:user' conf['swift_store_create_container_on_put'] = False conf['swift_store_container'] = 'randomname' conf['swift_store_multiple_containers_seed'] = 2 self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() expected_image_id = str(uuid.uuid4()) expected_container = 'randomname_' + expected_image_id[:2] self.store = Store(self.conf, backend="swift1") self.store.configure() image_swift = io.BytesIO(b"nevergonnamakeit") global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # We check the exception text to ensure the container # missing text is found in it, otherwise, we would have # simply used self.assertRaises here exception_caught = False try: self.store.add(expected_image_id, image_swift, 0) except exceptions.BackendException as e: exception_caught = True expected_msg = "container %s does not exist in Swift" expected_msg = expected_msg % expected_container self.assertIn(expected_msg, encodeutils.exception_to_unicode(e)) self.assertTrue(exception_caught) self.assertEqual(0, SWIFT_PUT_OBJECT_CALLS) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) 
def test_add_with_verifier(self): """Test that the verifier is updated when verifier is provided.""" swift_size = FIVE_KB base_byte = b"12345678" swift_contents = base_byte * (swift_size // 8) image_id = str(uuid.uuid4()) image_swift = io.BytesIO(swift_contents) self.store = Store(self.conf, backend="swift1") self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size custom_size = units.Ki verifier = mock.MagicMock(name='mock_verifier') try: self.store.large_object_size = custom_size self.store.large_object_chunk_size = custom_size self.store.add(image_id, image_swift, swift_size, verifier=verifier) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size # Confirm verifier update called expected number of times self.assertEqual(2 * swift_size / custom_size, verifier.update.call_count) # define one chunk of the contents swift_contents_piece = base_byte * (custom_size // 8) # confirm all expected calls to update have occurred calls = [mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b''), mock.call(swift_contents_piece), mock.call(b'')] verifier.update.assert_has_calls(calls) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_with_verifier_small(self): """Test that the verifier is updated for smaller images.""" swift_size = FIVE_KB base_byte = b"12345678" swift_contents = base_byte * (swift_size // 8) image_id = str(uuid.uuid4()) image_swift = io.BytesIO(swift_contents) self.store = Store(self.conf, backend="swift1") self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size custom_size = 6 * units.Ki verifier = mock.MagicMock(name='mock_verifier') try: self.store.large_object_size = custom_size self.store.large_object_chunk_size = custom_size self.store.add(image_id, image_swift, swift_size, verifier=verifier) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size # Confirm verifier update called expected number of times self.assertEqual(2, verifier.update.call_count) # define one chunk of the contents swift_contents_piece = base_byte * (swift_size // 8) # confirm all expected calls to update have occurred calls = [mock.call(swift_contents_piece), mock.call(b'')] verifier.update.assert_has_calls(calls) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=False)) def test_multi_container_doesnt_impact_multi_tenant_add(self): expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_image_id = str(uuid.uuid4()) expected_container = 'container_' + expected_image_id loc = 'swift+https://some_endpoint/%s/%s' expected_location = loc % (expected_container, expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.config(group="swift1", swift_store_container='container') self.config(group="swift1", swift_store_create_container_on_put=True) self.config(group="swift1", swift_store_multiple_containers_seed=2) service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'https://some_admin_endpoint', 'region': 'RegionOne', 'internalURL': 'https://some_internal_endpoint', 'publicURL': 
'https://some_endpoint', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] ctxt = mock.MagicMock( user='user', tenant='tenant', auth_token='123', service_catalog=service_catalog) store = swift.MultiTenantStore(self.conf, backend="swift1") store.configure() location, size, checksum, metadata = store.add(expected_image_id, image_swift, expected_swift_size, context=ctxt) self.assertEqual("swift1", metadata['store']) self.assertEqual(expected_location, location) @mock.patch('glance_store._drivers.swift.utils' '.is_multiple_swift_store_accounts_enabled', mock.Mock(return_value=True)) def test_add_large_object(self): """ Tests that adding a very large image. We simulate the large object by setting store.large_object_size to a small number and then verify that there have been a number of calls to put_object()... """ expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/glance/%s' expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 self.store = Store(self.conf, backend="swift1") self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size try: self.store.large_object_size = units.Ki self.store.large_object_chunk_size = units.Ki loc, size, checksum, metadata = self.store.add(expected_image_id, image_swift, expected_swift_size) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size self.assertEqual("swift1", metadata['store']) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) # Expecting 6 objects to be created on Swift -- 5 chunks and 1 # manifest. self.assertEqual(6, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri_and_backend( expected_location, "swift1", conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_contents) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_add_large_object_zero_size(self): """ Tests that adding an image to Swift which has both an unknown size and exceeds Swift's maximum limit of 5GB is correctly uploaded. We avoid the overhead of creating a 5GB object for this test by temporarily setting MAX_SWIFT_OBJECT_SIZE to 1KB, and then adding an object of 5KB. 
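With image_length reported as 0 the driver cannot size the upload in
advance, so it is expected to read the stream in
large_object_chunk_size pieces until EOF and finish with a manifest,
exactly as for a known oversized image.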
Bug lp:891738 """ # Set up a 'large' image of 5KB expected_swift_size = FIVE_KB expected_swift_contents = b"*" * expected_swift_size expected_checksum = md5(expected_swift_contents, usedforsecurity=False).hexdigest() expected_image_id = str(uuid.uuid4()) loc = 'swift+config://ref1/glance/%s' expected_location = loc % (expected_image_id) image_swift = io.BytesIO(expected_swift_contents) global SWIFT_PUT_OBJECT_CALLS SWIFT_PUT_OBJECT_CALLS = 0 # Temporarily set Swift MAX_SWIFT_OBJECT_SIZE to 1KB and add our image, # explicitly setting the image_length to 0 self.store = Store(self.conf, backend="swift1") self.store.configure() orig_max_size = self.store.large_object_size orig_temp_size = self.store.large_object_chunk_size global MAX_SWIFT_OBJECT_SIZE orig_max_swift_object_size = MAX_SWIFT_OBJECT_SIZE try: MAX_SWIFT_OBJECT_SIZE = units.Ki self.store.large_object_size = units.Ki self.store.large_object_chunk_size = units.Ki loc, size, checksum, metadata = self.store.add(expected_image_id, image_swift, 0) finally: self.store.large_object_chunk_size = orig_temp_size self.store.large_object_size = orig_max_size MAX_SWIFT_OBJECT_SIZE = orig_max_swift_object_size self.assertEqual("swift1", metadata['store']) self.assertEqual(expected_location, loc) self.assertEqual(expected_swift_size, size) self.assertEqual(expected_checksum, checksum) # Expecting 6 calls to put_object -- 5 chunks, and the manifest. self.assertEqual(6, SWIFT_PUT_OBJECT_CALLS) loc = location.get_location_from_uri_and_backend( expected_location, "swift1", conf=self.conf) (new_image_swift, new_image_size) = self.store.get(loc) new_image_contents = b''.join([chunk for chunk in new_image_swift]) new_image_swift_size = len(new_image_contents) self.assertEqual(expected_swift_contents, new_image_contents) self.assertEqual(expected_swift_size, new_image_swift_size) def test_location_url_prefix_is_set(self): self.store = Store(self.conf, backend="swift1") self.store.configure() expected_url_prefix = "swift+config://ref1/glance/" self.assertEqual(expected_url_prefix, self.store.url_prefix) def test_add_already_existing(self): """ Tests that adding an image with an existing identifier raises an appropriate exception """ self.store = Store(self.conf, backend="swift1") self.store.configure() image_swift = io.BytesIO(b"nevergonnamakeit") self.assertRaises(exceptions.Duplicate, self.store.add, FAKE_UUID, image_swift, 0) def _option_required(self, key): conf = self.getConfig() conf[key] = None try: self.config(group="swift1", **conf) self.store = Store(self.conf, backend="swift1") return not self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS) except Exception: return False def test_no_store_credentials(self): """ Tests that options without a valid credentials disables the add method """ self.store = Store(self.conf, backend="swift1") self.store.ref_params = {'ref1': {'auth_address': 'authurl.com', 'user': '', 'key': ''}} self.store.configure() self.assertFalse(self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS)) def test_no_auth_address(self): """ Tests that options without auth address disables the add method """ self.store = Store(self.conf, backend="swift1") self.store.ref_params = {'ref1': {'auth_address': '', 'user': 'user1', 'key': 'key1'}} self.store.configure() self.assertFalse(self.store.is_capable( capabilities.BitMasks.WRITE_ACCESS)) def test_delete(self): """ Test we can delete an existing image in the swift store """ conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) importlib.reload(swift) 
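# (keystone client mocked for the same reason as in
# test_delete_with_reference_params below: the v3 auth path would
# otherwise reach for a real keystone session during configure())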
self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % ( self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch.object(swiftclient.client, 'delete_object') def test_delete_slo(self, mock_del_obj): """ Test we can delete an existing image stored as SLO, static large object """ conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) importlib.reload(swift) self.store = Store(self.conf, backend="swift1") self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % (self.swift_store_user, FAKE_UUID2) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) self.store.delete(loc) self.assertEqual(1, mock_del_obj.call_count) _, kwargs = mock_del_obj.call_args self.assertEqual('multipart-manifest=delete', kwargs.get('query_string')) @mock.patch.object(swiftclient.client, 'delete_object') def test_delete_nonslo_not_deleted_as_slo(self, mock_del_obj): """ Test that non-SLOs are not being deleted the SLO way """ conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) importlib.reload(swift) self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() uri = "swift://%s:key@authurl/glance/%s" % (self.swift_store_user, FAKE_UUID) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) self.store.delete(loc) self.assertEqual(1, mock_del_obj.call_count) _, kwargs = mock_del_obj.call_args self.assertIsNone(kwargs.get('query_string')) def test_delete_with_reference_params(self): """ Test we can delete an existing image in the swift store """ conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) importlib.reload(swift) # mock client because v3 uses it to receive auth_info self.mock_keystone_client() self.store = Store(self.conf, backend="swift1") self.store.configure() uri = "swift+config://ref1/glance/%s" % (FAKE_UUID) loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) self.store.delete(loc) self.assertRaises(exceptions.NotFound, self.store.get, loc) def test_delete_non_existing(self): """ Test that trying to delete a swift that doesn't exist raises an error """ conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) importlib.reload(swift) self.store = Store(self.conf, backend="swift1") self.store.configure() loc = location.get_location_from_uri_and_backend( "swift://%s:key@authurl/glance/noexist" % (self.swift_store_user), "swift1", conf=self.conf) self.assertRaises(exceptions.NotFound, self.store.delete, loc) def test_delete_with_some_segments_failing(self): """ Tests that delete of a segmented object recovers from error(s) while deleting one or more segments. To test this we add a segmented object first and then delete it, while simulating errors on one or more segments. 
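The fakes below model a DLO: head_object advertises an
X-Object-Manifest prefix, get_container lists five segments named
<image-id>-001 .. -005, delete_object fails for -001 and -003, and the
store is still expected to attempt every segment plus the manifest.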
""" test_image_id = str(uuid.uuid4()) def fake_head_object(container, object_name): object_manifest = '/'.join([container, object_name]) + '-' return {'x-object-manifest': object_manifest} def fake_get_container(container, **kwargs): # Returning 5 fake segments return None, [{'name': '%s-%03d' % (test_image_id, x)} for x in range(1, 6)] def fake_delete_object(container, object_name): # Simulate error on 1st and 3rd segments global SWIFT_DELETE_OBJECT_CALLS SWIFT_DELETE_OBJECT_CALLS += 1 if object_name.endswith('-001') or object_name.endswith('-003'): raise swiftclient.ClientException('Object DELETE failed') else: pass conf = copy.deepcopy(SWIFT_CONF) self.config(group="swift1", **conf) importlib.reload(swift) self.store = Store(self.conf, backend="swift1") self.store.configure() loc_uri = "swift+https://%s:key@localhost:8080/glance/%s" loc_uri = loc_uri % (self.swift_store_user, test_image_id) loc = location.get_location_from_uri_and_backend( loc_uri, "swift1", conf=self.conf) conn = self.store.get_connection(loc.store_location) conn.delete_object = fake_delete_object conn.head_object = fake_head_object conn.get_container = fake_get_container global SWIFT_DELETE_OBJECT_CALLS SWIFT_DELETE_OBJECT_CALLS = 0 self.store.delete(loc, connection=conn) # Expecting 6 delete calls, 5 for the segments and 1 for the manifest self.assertEqual(6, SWIFT_DELETE_OBJECT_CALLS) def test_read_acl_public(self): """ Test that we can set a public read acl. """ self.config(group="swift1", swift_store_config_file=None) self.config(group="swift1", swift_store_multi_tenant=True) store = Store(self.conf, backend="swift1") store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) ctxt = mock.MagicMock() store.set_acls(loc, public=True, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual("*:*", container_headers['X-Container-Read']) def test_read_acl_tenants(self): """ Test that we can set read acl for tenants. """ self.config(group="swift1", swift_store_config_file=None) self.config(group="swift1", swift_store_multi_tenant=True) store = Store(self.conf, backend="swift1") store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) read_tenants = ['matt', 'mark'] ctxt = mock.MagicMock() store.set_acls(loc, read_tenants=read_tenants, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual('matt:*,mark:*', container_headers[ 'X-Container-Read']) def test_write_acls(self): """ Test that we can set write acl for tenants. """ self.config(group="swift1", swift_store_config_file=None) self.config(group="swift1", swift_store_multi_tenant=True) store = Store(self.conf, backend="swift1") store.configure() uri = "swift+http://storeurl/glance/%s" % FAKE_UUID loc = location.get_location_from_uri_and_backend( uri, "swift1", conf=self.conf) read_tenants = ['frank', 'jim'] ctxt = mock.MagicMock() store.set_acls(loc, write_tenants=read_tenants, context=ctxt) container_headers = swiftclient.client.head_container('x', 'y', 'glance') self.assertEqual('frank:*,jim:*', container_headers[ 'X-Container-Write']) @mock.patch("glance_store._drivers.swift." 
"connection_manager.MultiTenantConnectionManager") def test_get_connection_manager_multi_tenant(self, manager_class): manager = mock.MagicMock() manager_class.return_value = manager self.config(group="swift1", swift_store_config_file=None) self.config(group="swift1", swift_store_multi_tenant=True) store = Store(self.conf, backend="swift1") store.configure() loc = mock.MagicMock() self.assertEqual(store.get_manager(loc), manager) @mock.patch("glance_store._drivers.swift." "connection_manager.SingleTenantConnectionManager") def test_get_connection_manager_single_tenant(self, manager_class): manager = mock.MagicMock() manager_class.return_value = manager store = Store(self.conf, backend="swift1") store.configure() loc = mock.MagicMock() self.assertEqual(store.get_manager(loc), manager) def test_get_connection_manager_failed(self): store = swift.BaseStore(mock.MagicMock()) loc = mock.MagicMock() self.assertRaises(NotImplementedError, store.get_manager, loc) def test_init_client_multi_tenant(self): """Test that keystone client was initialized correctly""" with mock.patch.object(swift.MultiTenantStore, '_set_url_prefix'): self._init_client(verify=True, swift_store_multi_tenant=True, swift_store_config_file=None) def test_init_client_multi_tenant_swift_cacert(self): """Test that keystone client was initialized with swift cacert""" with mock.patch.object(swift.MultiTenantStore, '_set_url_prefix'): self._init_client(verify='/foo/bar', swift_store_multi_tenant=True, swift_store_config_file=None, swift_store_cacert='/foo/bar') def test_init_client_multi_tenant_insecure(self): """ Test that keystone client was initialized correctly with no certificate verification. """ with mock.patch.object(swift.MultiTenantStore, '_set_url_prefix'): self._init_client(verify=False, swift_store_multi_tenant=True, swift_store_auth_insecure=True, swift_store_config_file=None) @mock.patch("glance_store._drivers.swift.store.ks_identity") @mock.patch("glance_store._drivers.swift.store.ks_session") @mock.patch("glance_store._drivers.swift.store.ks_client") def _init_client(self, mock_client, mock_session, mock_identity, verify, **kwargs): # initialize store and connection parameters self.config(group="swift1", **kwargs) store = Store(self.conf, backend="swift1") store.configure() ref_params = sutils.SwiftParams(self.conf, backend="swift1").params default_ref = getattr(self.conf, "swift1").default_swift_reference default_swift_reference = ref_params.get(default_ref) # prepare client and session trustee_session = mock.MagicMock() trustor_session = mock.MagicMock() main_session = mock.MagicMock() trustee_client = mock.MagicMock() trustee_client.session.get_user_id.return_value = 'fake_user' trustor_client = mock.MagicMock() trustor_client.session.auth.get_auth_ref.return_value = { 'roles': [{'name': 'fake_role'}] } trustor_client.trusts.create.return_value = mock.MagicMock( id='fake_trust') main_client = mock.MagicMock() mock_session.Session.side_effect = [trustor_session, trustee_session, main_session] mock_client.Client.side_effect = [trustor_client, trustee_client, main_client] # initialize client ctxt = mock.MagicMock() client = store.init_client(location=mock.MagicMock(), context=ctxt) # test trustor usage mock_identity.V3Token.assert_called_once_with( auth_url=default_swift_reference.get('auth_address'), token=ctxt.auth_token, project_id=ctxt.project_id ) mock_session.Session.assert_any_call(auth=mock_identity.V3Token(), verify=verify) mock_client.Client.assert_any_call(session=trustor_session) # test trustee usage and 
        # trust creation
        tenant_name, user = default_swift_reference.get('user').split(':')
        mock_identity.V3Password.assert_any_call(
            auth_url=default_swift_reference.get('auth_address'),
            username=user,
            password=default_swift_reference.get('key'),
            project_name=tenant_name,
            user_domain_id=default_swift_reference.get('user_domain_id'),
            user_domain_name=default_swift_reference.get('user_domain_name'),
            project_domain_id=default_swift_reference.get('project_domain_id'),
            project_domain_name=default_swift_reference.get(
                'project_domain_name')
        )
        mock_session.Session.assert_any_call(auth=mock_identity.V3Password(),
                                             verify=verify)
        mock_client.Client.assert_any_call(session=trustee_session)
        trustor_client.trusts.create.assert_called_once_with(
            trustee_user='fake_user', trustor_user=ctxt.user_id,
            project=ctxt.project_id, impersonation=True,
            role_names=['fake_role']
        )
        mock_identity.V3Password.assert_any_call(
            auth_url=default_swift_reference.get('auth_address'),
            username=user,
            password=default_swift_reference.get('key'),
            trust_id='fake_trust',
            user_domain_id=default_swift_reference.get('user_domain_id'),
            user_domain_name=default_swift_reference.get('user_domain_name'),
            project_domain_id=default_swift_reference.get('project_domain_id'),
            project_domain_name=default_swift_reference.get(
                'project_domain_name')
        )
        mock_client.Client.assert_any_call(session=main_session)
        self.assertEqual(main_client, client)


class TestStoreAuthV1(base.MultiStoreBaseTest, SwiftTests,
                      test_store_capabilities.TestStoreCapabilitiesChecking):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def getConfig(self):
        conf = SWIFT_CONF.copy()
        conf['swift_store_auth_version'] = '1'
        conf['swift_store_user'] = 'tenant:user1'
        return conf

    def setUp(self):
        """Establish a clean test environment."""
        super(TestStoreAuthV1, self).setUp()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path

        config = self.getConfig()

        conf_file = 'glance-swift.conf'
        self.swift_config_file = self.copy_data_file(conf_file,
                                                     self.test_dir)
        config.update({'swift_store_config_file': self.swift_config_file})

        self.stub_out_swiftclient(config['swift_store_auth_version'])
        self.mock_keystone_client()
        self.store = Store(self.conf, backend="swift1")
        self.config(group="swift1", **config)
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'swift', 'swift1')
        self.addCleanup(self.conf.reset)


class TestStoreAuthV2(TestStoreAuthV1):

    def getConfig(self):
        config = super(TestStoreAuthV2, self).getConfig()
        config['swift_store_auth_version'] = '2'
        config['swift_store_user'] = 'tenant:user1'
        return config

    def test_v2_with_no_tenant(self):
        uri = "swift://failme:key@auth_address/glance/%s" % (FAKE_UUID)
        loc = location.get_location_from_uri_and_backend(
            uri, "swift1", conf=self.conf)
        self.assertRaises(exceptions.BadStoreUri, self.store.get, loc)

    def test_v2_multi_tenant_location(self):
        config = self.getConfig()
        config['swift_store_multi_tenant'] = True
        self.config(group="swift1", **config)
        uri = "swift://auth_address/glance/%s" % (FAKE_UUID)
        loc = location.get_location_from_uri_and_backend(
            uri, "swift1", conf=self.conf)
        self.assertEqual('swift', loc.store_name)


class TestStoreAuthV3(TestStoreAuthV1):

    def getConfig(self):
        config = super(TestStoreAuthV3, self).getConfig()
        config['swift_store_auth_version'] = '3'
        config['swift_store_user'] = 'tenant:user1'
        return config

    @mock.patch("glance_store._drivers.swift.store.ks_identity")
    @mock.patch("glance_store._drivers.swift.store.ks_session")
    @mock.patch("glance_store._drivers.swift.store.ks_client")
    def test_init_client_single_tenant(self, mock_client, mock_session,
                                       mock_identity):
        """Test that keystone client was initialized correctly"""
        # initialize client
        store = Store(self.conf, backend="swift1")
        store.configure()
        uri = "swift://%s:key@auth_address/glance/%s" % (
            self.swift_store_user, FAKE_UUID)
        loc = location.get_location_from_uri_and_backend(
            uri, "swift1", conf=self.conf)
        ctxt = mock.MagicMock()
        store.init_client(location=loc.store_location, context=ctxt)
        mock_identity.V3Password.assert_called_once_with(
            auth_url=loc.store_location.swift_url + '/',
            username="user1",
            password="key",
            project_name="tenant",
            project_domain_id='default',
            project_domain_name=None,
            user_domain_id='default',
            user_domain_name=None,)
        mock_session.Session.assert_called_once_with(
            auth=mock_identity.V3Password(), verify=True)
        mock_client.Client.assert_called_once_with(
            session=mock_session.Session())

    @mock.patch("glance_store._drivers.swift.store.ks_identity")
    @mock.patch("glance_store._drivers.swift.store.ks_session")
    @mock.patch("glance_store._drivers.swift.store.ks_client")
    def test_init_client_single_tenant_with_domain_ids(self,
                                                       mock_client,
                                                       mock_session,
                                                       mock_identity):
        """Test that keystone client was initialized correctly"""
        conf = self.getConfig()
        conf['default_swift_reference'] = 'ref4'
        self.config(group="swift1", **conf)
        store = Store(self.conf, backend="swift1")
        store.configure()
        uri = "swift://%s:key@auth_address/glance/%s" % (
            self.swift_store_user, FAKE_UUID)
        loc = location.get_location_from_uri_and_backend(
            uri, "swift1", conf=self.conf)
        ctxt = mock.MagicMock()
        store.init_client(location=loc.store_location, context=ctxt)
        mock_identity.V3Password.assert_called_once_with(
            auth_url=loc.store_location.swift_url + '/',
            username="user1",
            password="key",
            project_name="tenant",
            project_domain_id='projdomainid',
            project_domain_name=None,
            user_domain_id='userdomainid',
            user_domain_name=None)
        mock_session.Session.assert_called_once_with(
            auth=mock_identity.V3Password(), verify=True)
        mock_client.Client.assert_called_once_with(
            session=mock_session.Session())

    @mock.patch("glance_store._drivers.swift.store.ks_identity")
    @mock.patch("glance_store._drivers.swift.store.ks_session")
    @mock.patch("glance_store._drivers.swift.store.ks_client")
    def test_init_client_single_tenant_with_domain_names(self,
                                                         mock_client,
                                                         mock_session,
                                                         mock_identity):
        """Test that keystone client was initialized correctly"""
        conf = self.getConfig()
        conf['default_swift_reference'] = 'ref5'
        self.config(group="swift1", **conf)
        store = Store(self.conf, backend="swift1")
        store.configure()
        uri = "swift://%s:key@auth_address/glance/%s" % (
            self.swift_store_user, FAKE_UUID)
        loc = location.get_location_from_uri_and_backend(
            uri, "swift1", conf=self.conf)
        ctxt = mock.MagicMock()
        store.init_client(location=loc.store_location, context=ctxt)
        mock_identity.V3Password.assert_called_once_with(
            auth_url=loc.store_location.swift_url + '/',
            username="user1",
            password="key",
            project_name="tenant",
            project_domain_id=None,
            project_domain_name='projdomain',
            user_domain_id=None,
            user_domain_name='userdomain')
        mock_session.Session.assert_called_once_with(
            auth=mock_identity.V3Password(), verify=True)
        mock_client.Client.assert_called_once_with(
            session=mock_session.Session())


class FakeConnection(object):
    def __init__(self, authurl=None, user=None, key=None, retries=5,
                 preauthurl=None, preauthtoken=None, starting_backoff=1,
                 tenant_name=None, os_options=None, auth_version="1",
                 insecure=False, ssl_compression=True, cacert=None):
        if os_options is None:
            os_options = {}

        self.authurl = authurl
        self.user = user
        self.key = key
        self.preauthurl = preauthurl
        self.preauthtoken = preauthtoken
        self.tenant_name = tenant_name
        self.os_options = os_options
        self.auth_version = auth_version
        self.insecure = insecure
        self.cacert = cacert


class TestSingleTenantStoreConnections(base.MultiStoreBaseTest):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        super(TestSingleTenantStoreConnections, self).setUp()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        self.useFixture(fixtures.MockPatch(
            'swiftclient.Connection', FakeConnection))
        self.store = swift.SingleTenantStore(self.conf, backend="swift1")
        self.store.configure()
        specs = {'scheme': 'swift',
                 'auth_or_store_url': 'example.com/v2/',
                 'user': 'tenant:user1',
                 'key': 'key1',
                 'container': 'cont',
                 'obj': 'object'}
        self.location = swift.StoreLocation(specs, self.conf,
                                            backend_group="swift1")
        self.register_store_backend_schemes(self.store, 'swift', 'swift1')
        self.addCleanup(self.conf.reset)

    def test_basic_connection(self):
        connection = self.store.get_connection(self.location)
        self.assertEqual('https://example.com/v2/', connection.authurl)
        self.assertEqual('2', connection.auth_version)
        self.assertEqual('user1', connection.user)
        self.assertEqual('tenant', connection.tenant_name)
        self.assertEqual('key1', connection.key)
        self.assertIsNone(connection.preauthurl)
        self.assertFalse(connection.insecure)
        self.assertEqual({'service_type': 'object-store',
                          'endpoint_type': 'publicURL'},
                         connection.os_options)

    def test_connection_with_conf_endpoint(self):
        ctx = mock.MagicMock(user='tenant:user1', tenant='tenant')
        self.config(group="swift1",
                    swift_store_endpoint='https://internal.com')
        self.store.configure()
        connection = self.store.get_connection(self.location, context=ctx)
        self.assertEqual('https://example.com/v2/', connection.authurl)
        self.assertEqual('2', connection.auth_version)
        self.assertEqual('user1', connection.user)
        self.assertEqual('tenant', connection.tenant_name)
        self.assertEqual('key1', connection.key)
        self.assertEqual('https://internal.com', connection.preauthurl)
        self.assertFalse(connection.insecure)
        self.assertEqual({'service_type': 'object-store',
                          'endpoint_type': 'publicURL'},
                         connection.os_options)

    def test_connection_with_conf_endpoint_no_context(self):
        self.config(group="swift1",
                    swift_store_endpoint='https://internal.com')
        self.store.configure()
        connection = self.store.get_connection(self.location)
        self.assertEqual('https://example.com/v2/', connection.authurl)
        self.assertEqual('2', connection.auth_version)
        self.assertEqual('user1', connection.user)
        self.assertEqual('tenant', connection.tenant_name)
        self.assertEqual('key1', connection.key)
        self.assertEqual('https://internal.com', connection.preauthurl)
        self.assertFalse(connection.insecure)
        self.assertEqual({'service_type': 'object-store',
                          'endpoint_type': 'publicURL'},
                         connection.os_options)

    def test_connection_with_no_trailing_slash(self):
        self.location.auth_or_store_url = 'example.com/v2'
        connection = self.store.get_connection(self.location)
        self.assertEqual('https://example.com/v2/', connection.authurl)

    def test_connection_insecure(self):
        self.config(group="swift1", swift_store_auth_insecure=True)
        self.store.configure()
        connection = self.store.get_connection(self.location)
        self.assertTrue(connection.insecure)

    def test_connection_with_auth_v1(self):
        self.config(group="swift1", swift_store_auth_version='1')
        self.store.configure()
        self.location.user = 'auth_v1_user'
        connection = self.store.get_connection(self.location)
        self.assertEqual('1', connection.auth_version)
        self.assertEqual('auth_v1_user', connection.user)
        self.assertIsNone(connection.tenant_name)

    def test_connection_invalid_user(self):
        self.store.configure()
        self.location.user = 'invalid:format:user'
        self.assertRaises(exceptions.BadStoreUri,
                          self.store.get_connection, self.location)

    def test_connection_missing_user(self):
        self.store.configure()
        self.location.user = None
        self.assertRaises(exceptions.BadStoreUri,
                          self.store.get_connection, self.location)

    def test_connection_with_region(self):
        self.config(group="swift1", swift_store_region='Sahara')
        self.store.configure()
        connection = self.store.get_connection(self.location)
        self.assertEqual({'region_name': 'Sahara',
                          'service_type': 'object-store',
                          'endpoint_type': 'publicURL'},
                         connection.os_options)

    def test_connection_with_service_type(self):
        self.config(group="swift1", swift_store_service_type='shoe-store')
        self.store.configure()
        connection = self.store.get_connection(self.location)
        self.assertEqual({'service_type': 'shoe-store',
                          'endpoint_type': 'publicURL'},
                         connection.os_options)

    def test_connection_with_endpoint_type(self):
        self.config(group="swift1", swift_store_endpoint_type='internalURL')
        self.store.configure()
        connection = self.store.get_connection(self.location)
        self.assertEqual({'service_type': 'object-store',
                          'endpoint_type': 'internalURL'},
                         connection.os_options)

    def test_bad_location_uri(self):
        self.store.configure()
        self.location.uri = 'http://bad_uri://'
        self.assertRaises(exceptions.BadStoreUri,
                          self.location.parse_uri, self.location.uri)

    def test_bad_location_uri_invalid_credentials(self):
        self.store.configure()
        self.location.uri = 'swift://bad_creds@uri/cont/obj'
        self.assertRaises(exceptions.BadStoreUri,
                          self.location.parse_uri, self.location.uri)

    def test_bad_location_uri_invalid_object_path(self):
        self.store.configure()
        self.location.uri = 'swift://user:key@uri/cont'
        self.assertRaises(exceptions.BadStoreUri,
                          self.location.parse_uri, self.location.uri)

    def test_ref_overrides_defaults(self):
        self.config(group="swift1",
                    swift_store_auth_version='2',
                    swift_store_user='testuser',
                    swift_store_key='testpass',
                    swift_store_auth_address='testaddress',
                    swift_store_endpoint_type='internalURL',
                    swift_store_config_file='somefile')
        self.store.ref_params = {'ref1': {'auth_address': 'authurl.com',
                                          'auth_version': '3',
                                          'user': 'user:pass',
                                          'user_domain_id': 'default',
                                          'user_domain_name': 'ignored',
                                          'project_domain_id':
                                              'default',
                                          'project_domain_name': 'ignored'}}
        self.store.configure()

        self.assertEqual('user:pass', self.store.user)
        self.assertEqual('3', self.store.auth_version)
        self.assertEqual('authurl.com', self.store.auth_address)
        self.assertEqual('default', self.store.user_domain_id)
        self.assertEqual('ignored', self.store.user_domain_name)
        self.assertEqual('default', self.store.project_domain_id)
        self.assertEqual('ignored', self.store.project_domain_name)

    def test_with_v3_auth(self):
        self.store.ref_params = {'ref1': {'auth_address': 'authurl.com',
                                          'auth_version': '3',
                                          'user': 'user:pass',
                                          'key': 'password',
                                          'user_domain_id': 'default',
                                          'user_domain_name': 'ignored',
                                          'project_domain_id': 'default',
                                          'project_domain_name': 'ignored'}}
        self.store.configure()
        connection = self.store.get_connection(self.location)
        self.assertEqual('3', connection.auth_version)
        self.assertEqual({'service_type': 'object-store',
                          'endpoint_type': 'publicURL',
                          'user_domain_id': 'default',
                          'user_domain_name': 'ignored',
                          'project_domain_id': 'default',
                          'project_domain_name': 'ignored'},
                         connection.os_options)


class TestMultiTenantStoreConnections(base.MultiStoreBaseTest):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        super(TestMultiTenantStoreConnections, self).setUp()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        self.useFixture(fixtures.MockPatch(
            'swiftclient.Connection', FakeConnection))
        self.context = mock.MagicMock(
            user='tenant:user1', tenant='tenant', auth_token='0123')
        self.store = swift.MultiTenantStore(self.conf, backend="swift1")
        specs = {'scheme': 'swift',
                 'auth_or_store_url': 'example.com',
                 'container': 'cont',
                 'obj': 'object'}
        self.location = swift.StoreLocation(specs, self.conf,
                                            backend_group="swift1")
        self.addCleanup(self.conf.reset)

    def test_basic_connection(self):
        self.store.configure()
        connection = self.store.get_connection(self.location,
                                               context=self.context)
        self.assertIsNone(connection.authurl)
        self.assertEqual('1', connection.auth_version)
        self.assertIsNone(connection.user)
        self.assertIsNone(connection.tenant_name)
        self.assertIsNone(connection.key)
        self.assertEqual('https://example.com', connection.preauthurl)
        self.assertEqual('0123', connection.preauthtoken)
        self.assertEqual({}, connection.os_options)

    def test_connection_does_not_use_endpoint_from_catalog(self):
        self.store.configure()
        self.context.service_catalog = [
            {
                'endpoint_links': [],
                'endpoints': [
                    {
                        'region': 'RegionOne',
                        'publicURL': 'https://scexample.com',
                    },
                ],
                'type': 'object-store',
                'name': 'Object Storage Service',
            }
        ]
        connection = self.store.get_connection(self.location,
                                               context=self.context)
        self.assertIsNone(connection.authurl)
        self.assertEqual('1', connection.auth_version)
        self.assertIsNone(connection.user)
        self.assertIsNone(connection.tenant_name)
        self.assertIsNone(connection.key)
        self.assertNotEqual('https://scexample.com', connection.preauthurl)
        self.assertEqual('https://example.com', connection.preauthurl)
        self.assertEqual('0123', connection.preauthtoken)
        self.assertEqual({}, connection.os_options)

    def test_connection_manager_does_not_use_endpoint_from_catalog(self):
        self.store.configure()
        self.context.service_catalog = [
            {
                'endpoint_links': [],
                'endpoints': [
                    {
                        'region': 'RegionOne',
                        'publicURL': 'https://scexample.com',
                    },
                ],
                'type': 'object-store',
                'name': 'Object Storage Service',
            }
        ]
        connection_manager = manager.MultiTenantConnectionManager(
            store=self.store,
            store_location=self.location,
            context=self.context
        )
        conn = connection_manager._init_connection()
        self.assertNotEqual('https://scexample.com', conn.preauthurl)
        self.assertEqual('https://example.com', conn.preauthurl)


class TestMultiTenantStoreContext(base.MultiStoreBaseTest):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        """Establish a clean test environment."""
        super(TestMultiTenantStoreContext, self).setUp()
        config = SWIFT_CONF.copy()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        self.store = Store(self.conf, backend="swift1")
        self.config(group="swift1", **config)
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'swift', 'swift1')
        service_catalog = [
            {
                'endpoint_links': [],
                'endpoints': [
                    {
                        'region': 'RegionOne',
                        'publicURL': 'http://127.0.0.1:0',
                    },
                ],
                'type': 'object-store',
                'name': 'Object Storage Service',
            }
        ]
        self.ctx = mock.MagicMock(
            service_catalog=service_catalog, user='tenant:user1',
            tenant='tenant', auth_token='0123')
        self.addCleanup(self.conf.reset)

    @requests_mock.mock()
    def test_download_context(self, m):
        """Verify context (ie token) is passed to swift on download."""
        self.config(group="swift1", swift_store_multi_tenant=True)
        store = Store(self.conf, backend="swift1")
        store.configure()
        uri = "swift+http://127.0.0.1/glance_123/123"
        loc = location.get_location_from_uri_and_backend(
            uri, "swift1", conf=self.conf)
        m.get("http://127.0.0.1/glance_123/123",
              headers={'Content-Length': '0'})
        store.get(loc, context=self.ctx)
        self.assertEqual(b'0123', m.last_request.headers['X-Auth-Token'])

    @requests_mock.mock()
    def test_upload_context(self, m):
        """Verify context (ie token) is passed to swift on upload."""
        head_req = m.head("http://127.0.0.1/glance_123",
                          text='Some data',
                          status_code=201)
        put_req = m.put("http://127.0.0.1/glance_123/123")
        self.config(group="swift1", swift_store_multi_tenant=True)
        store = Store(self.conf, backend="swift1")
        store.configure()
        content = b'Some data'
        pseudo_file = io.BytesIO(content)
        store.add('123', pseudo_file, len(content), context=self.ctx)
        self.assertEqual(b'0123',
                         head_req.last_request.headers['X-Auth-Token'])
        self.assertEqual(b'0123',
                         put_req.last_request.headers['X-Auth-Token'])


class TestCreatingLocations(base.MultiStoreBaseTest):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        super(TestCreatingLocations, self).setUp()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        config = copy.deepcopy(SWIFT_CONF)
        self.store = Store(self.conf, backend="swift1")
        self.config(group="swift1", **config)
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'swift', 'swift1')
        importlib.reload(swift)
        self.addCleanup(self.conf.reset)

        service_catalog = [
            {
                'endpoint_links': [],
                'endpoints': [
                    {
                        'adminURL': 'https://some_admin_endpoint',
                        'region': 'RegionOne',
                        'internalURL': 'https://some_internal_endpoint',
                        'publicURL': 'https://some_endpoint',
                    },
                ],
                'type': 'object-store',
                'name': 'Object Storage Service',
            }
        ]
        self.ctxt = mock.MagicMock(user='user', tenant='tenant',
                                   auth_token='123',
                                   service_catalog=service_catalog)

    def test_single_tenant_location(self):
        conf = copy.deepcopy(SWIFT_CONF)
        conf['swift_store_container'] = 'container'
        conf_file = "glance-swift.conf"
        self.swift_config_file = self.copy_data_file(conf_file,
                                                     self.test_dir)
        conf.update({'swift_store_config_file': self.swift_config_file})
        conf['default_swift_reference'] = 'ref1'
        self.config(group="swift1", **conf)
        importlib.reload(swift)

        store = swift.SingleTenantStore(self.conf, backend="swift1")
        store.configure()
        location = store.create_location('image-id')
        self.assertEqual('swift+https', location.scheme)
        self.assertEqual('https://example.com', location.swift_url)
        self.assertEqual('container', location.container)
        self.assertEqual('image-id', location.obj)
        self.assertEqual('tenant:user1', location.user)
        self.assertEqual('key1', location.key)

    def test_single_tenant_location_http(self):
        conf_file = "glance-swift.conf"
        test_dir = self.useFixture(fixtures.TempDir()).path
        self.swift_config_file = self.copy_data_file(conf_file, test_dir)
        self.config(group="swift1",
                    swift_store_container='container',
                    default_swift_reference='ref2',
                    swift_store_config_file=self.swift_config_file)

        store = swift.SingleTenantStore(self.conf, backend="swift1")
        store.configure()
        location = store.create_location('image-id')
        self.assertEqual('swift+http', location.scheme)
        self.assertEqual('http://example.com', location.swift_url)

    def test_multi_tenant_location(self):
        self.config(group="swift1", swift_store_container='container')
        store = swift.MultiTenantStore(self.conf, backend="swift1")
        store.configure()
        location = store.create_location('image-id', context=self.ctxt)
        self.assertEqual('swift+https', location.scheme)
        self.assertEqual('https://some_endpoint', location.swift_url)
        self.assertEqual('container_image-id', location.container)
        self.assertEqual('image-id', location.obj)
        self.assertIsNone(location.user)
        self.assertIsNone(location.key)

    def test_multi_tenant_location_http(self):
        store = swift.MultiTenantStore(self.conf, backend="swift1")
        store.configure()
        self.ctxt.service_catalog[0]['endpoints'][0]['publicURL'] = \
            'http://some_endpoint'
        location = store.create_location('image-id', context=self.ctxt)
        self.assertEqual('swift+http', location.scheme)
        self.assertEqual('http://some_endpoint', location.swift_url)

    def test_multi_tenant_location_with_region(self):
        self.config(group="swift1", swift_store_region='WestCarolina')
        store = swift.MultiTenantStore(self.conf, backend="swift1")
        store.configure()
        self.ctxt.service_catalog[0]['endpoints'][0]['region'] = \
            'WestCarolina'
        self.assertEqual('https://some_endpoint',
                         store._get_endpoint(self.ctxt))

    def test_multi_tenant_location_custom_service_type(self):
        self.config(group="swift1", swift_store_service_type='toy-store')
        self.ctxt.service_catalog[0]['type'] = 'toy-store'
        store = swift.MultiTenantStore(self.conf, backend="swift1")
        store.configure()
        store._get_endpoint(self.ctxt)
        self.assertEqual('https://some_endpoint',
                         store._get_endpoint(self.ctxt))

    def test_multi_tenant_location_custom_endpoint_type(self):
        self.config(group="swift1", swift_store_endpoint_type='internalURL')
        store = swift.MultiTenantStore(self.conf, backend="swift1")
        store.configure()
        self.assertEqual('https://some_internal_endpoint',
                         store._get_endpoint(self.ctxt))


class TestChunkReader(base.MultiStoreBaseTest):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        super(TestChunkReader, self).setUp()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        config = copy.deepcopy(SWIFT_CONF)
        self.store = Store(self.conf, backend="swift1")
        self.config(group="swift1", **config)
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'swift', 'swift1')
        self.addCleanup(self.conf.reset)

    def test_read_all_data(self):
        """
        Replicate what goes on in the Swift driver with the
        repeated creation of the ChunkReader object
        """
        CHUNKSIZE = 100
        data = b'*' * units.Ki
        expected_checksum = md5(data, usedforsecurity=False).hexdigest()
        expected_multihash = hashlib.sha256(data).hexdigest()
        data_file = tempfile.NamedTemporaryFile()
        data_file.write(data)
        data_file.flush()
        infile = open(data_file.name, 'rb')
        bytes_read = 0
        checksum = md5(usedforsecurity=False)
        os_hash_value = hashlib.sha256()
        while True:
            cr = swift.ChunkReader(infile, checksum, os_hash_value,
                                   CHUNKSIZE)
            chunk = cr.read(CHUNKSIZE)
            if len(chunk) == 0:
                self.assertEqual(True, cr.is_zero_size)
                break
            bytes_read += len(chunk)
        self.assertEqual(units.Ki, bytes_read)
        self.assertEqual(expected_checksum, cr.checksum.hexdigest())
        self.assertEqual(expected_multihash, cr.os_hash_value.hexdigest())
        data_file.close()
        infile.close()

    def test_read_zero_size_data(self):
        """
        Replicate what goes on in the Swift driver with the
        repeated creation of the ChunkReader object
        """
        expected_checksum = md5(b'', usedforsecurity=False).hexdigest()
        expected_multihash = hashlib.sha256(b'').hexdigest()
        CHUNKSIZE = 100
        checksum = md5(usedforsecurity=False)
        os_hash_value = hashlib.sha256()
        data_file = tempfile.NamedTemporaryFile()
        infile = open(data_file.name, 'rb')
        bytes_read = 0
        while True:
            cr = swift.ChunkReader(infile, checksum, os_hash_value,
                                   CHUNKSIZE)
            chunk = cr.read(CHUNKSIZE)
            if len(chunk) == 0:
                break
            bytes_read += len(chunk)
        self.assertEqual(True, cr.is_zero_size)
        self.assertEqual(0, bytes_read)
        self.assertEqual(expected_checksum, cr.checksum.hexdigest())
        self.assertEqual(expected_multihash, cr.os_hash_value.hexdigest())
        data_file.close()
        infile.close()


class TestMultipleContainers(base.MultiStoreBaseTest):

    # NOTE(flaper87): temporary until we
    # can move to a fully-local lib.
    # (Swift store's fault)
    _CONF = cfg.ConfigOpts()

    def setUp(self):
        super(TestMultipleContainers, self).setUp()
        enabled_backends = {
            "swift1": "swift",
            "swift2": "swift",
        }
        self.conf = self._CONF
        self.conf(args=[])
        self.conf.register_opt(cfg.DictOpt('enabled_backends'))
        self.config(enabled_backends=enabled_backends)
        store.register_store_opts(self.conf)
        self.config(default_backend='swift1', group='glance_store')

        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_BACKEND_MAP = {}

        store.create_multi_stores(self.conf)
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_BACKEND_MAP',
                        dict())
        self.test_dir = self.useFixture(fixtures.TempDir()).path

        self.config(group="swift1", swift_store_multiple_containers_seed=3)
        self.store = swift.SingleTenantStore(self.conf, backend="swift1")
        self.store.configure()
        self.register_store_backend_schemes(self.store, 'swift', 'swift1')
        self.addCleanup(self.conf.reset)

    def test_get_container_name_happy_path_with_seed_three(self):
        test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848'
        actual = self.store.get_container_name(test_image_id,
                                               'default_container')
        expected = 'default_container_fda'
        self.assertEqual(expected, actual)

    def test_get_container_name_with_negative_seed(self):
        self.assertRaises(ValueError, self.config, group="swift1",
                          swift_store_multiple_containers_seed=-1)

    def test_get_container_name_with_seed_beyond_max(self):
        self.assertRaises(ValueError, self.config, group="swift1",
                          swift_store_multiple_containers_seed=33)

    def test_get_container_name_with_max_seed(self):
        self.config(group="swift1", swift_store_multiple_containers_seed=32)
        self.store = swift.SingleTenantStore(self.conf, backend="swift1")

        test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848'
        actual = self.store.get_container_name(test_image_id,
                                               'default_container')
        expected = 'default_container_' + test_image_id
        self.assertEqual(expected, actual)

    def test_get_container_name_with_dash(self):
        self.config(group="swift1", swift_store_multiple_containers_seed=10)
        self.store = swift.SingleTenantStore(self.conf, backend="swift1")

        test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848'
        actual = self.store.get_container_name(test_image_id,
                                               'default_container')
        expected = 'default_container_' + 'fdae39a1-ba'
        self.assertEqual(expected, actual)

    def test_get_container_name_with_min_seed(self):
        self.config(group="swift1", swift_store_multiple_containers_seed=1)
        self.store = swift.SingleTenantStore(self.conf, backend="swift1")

        test_image_id = 'fdae39a1-bac5-4238-aba4-69bcc726e848'
        actual = self.store.get_container_name(test_image_id,
                                               'default_container')
        expected = 'default_container_' + 'f'
        self.assertEqual(expected, actual)

    def test_get_container_name_with_multiple_containers_turned_off(self):
        self.config(group="swift1", swift_store_multiple_containers_seed=0)
        self.store.configure()

        test_image_id = 'random_id'
        actual = self.store.get_container_name(test_image_id,
                                               'default_container')
        expected = 'default_container'
        self.assertEqual(expected, actual)
glance_store-4.8.1/glance_store/tests/unit/test_swift_store_utils.py

# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import fixtures

from glance_store._drivers.swift import utils as sutils
from glance_store import exceptions
from glance_store.tests import base


class TestSwiftParams(base.StoreBaseTest):

    def setUp(self):
        super(TestSwiftParams, self).setUp()
        conf_file = "glance-swift.conf"
        test_dir = self.useFixture(fixtures.TempDir()).path
        self.swift_config_file = self.copy_data_file(conf_file, test_dir)
        self.config(swift_store_config_file=self.swift_config_file)

    def test_multiple_swift_account_enabled(self):
        self.config(swift_store_config_file="glance-swift.conf")
        self.assertTrue(
            sutils.is_multiple_swift_store_accounts_enabled(self.conf))

    def test_multiple_swift_account_disabled(self):
        self.config(swift_store_config_file=None)
        self.assertFalse(
            sutils.is_multiple_swift_store_accounts_enabled(self.conf))

    def test_swift_config_file_doesnt_exist(self):
        self.config(swift_store_config_file='fake-file.conf')
        self.assertRaises(exceptions.BadStoreConfiguration,
                          sutils.SwiftParams, self.conf)

    def test_swift_config_uses_default_values_multiple_account_disabled(self):
        default_user = 'user_default'
        default_key = 'key_default'
        default_auth_address = 'auth@default.com'
        default_account_reference = 'ref_default'
        conf = {'swift_store_config_file': None,
                'swift_store_user': default_user,
                'swift_store_key': default_key,
                'swift_store_auth_address': default_auth_address,
                'default_swift_reference': default_account_reference}
        self.config(**conf)
        swift_params = sutils.SwiftParams(self.conf).params
        self.assertEqual(1, len(swift_params.keys()))
        self.assertEqual(default_user,
                         swift_params[default_account_reference]['user'])
        self.assertEqual(default_key,
                         swift_params[default_account_reference]['key'])
        self.assertEqual(
            default_auth_address,
            swift_params[default_account_reference]['auth_address'])

    def test_swift_store_config_validates_for_creds_auth_address(self):
        swift_params = sutils.SwiftParams(self.conf).params
        self.assertEqual('tenant:user1', swift_params['ref1']['user'])
        self.assertEqual('key1', swift_params['ref1']['key'])
        self.assertEqual('example.com',
                         swift_params['ref1']['auth_address'])
        self.assertEqual('user2', swift_params['ref2']['user'])
        self.assertEqual('key2', swift_params['ref2']['key'])
        self.assertEqual('http://example.com',
                         swift_params['ref2']['auth_address'])

    def test_swift_store_config_validates_quotes_removal(self):
        swift_params = sutils.SwiftParams(self.conf).params
        self.assertEqual('user3', swift_params['ref3']['user'])
        self.assertEqual('key3', swift_params['ref3']['key'])
        self.assertEqual('http://example.com',
                         swift_params['ref3']['auth_address'])

    def test_swift_store_config_without_domain(self):
        swift_params = sutils.SwiftParams(self.conf).params
        self.assertEqual('default',
                         swift_params['ref1']['project_domain_id'])
        self.assertIsNone(swift_params['ref1']['project_domain_name'])
        self.assertEqual('default', swift_params['ref1']['user_domain_id'])
        self.assertIsNone(swift_params['ref1']['user_domain_name'])

    def test_swift_store_config_with_domain_ids(self):
        swift_params = sutils.SwiftParams(self.conf).params
        self.assertEqual('projdomainid',
                         swift_params['ref4']['project_domain_id'])
        self.assertIsNone(swift_params['ref4']['project_domain_name'])
        self.assertEqual('userdomainid',
                         swift_params['ref4']['user_domain_id'])
        self.assertIsNone(swift_params['ref4']['user_domain_name'])

    def test_swift_store_config_with_domain_names(self):
        swift_params = sutils.SwiftParams(self.conf).params
        self.assertIsNone(swift_params['ref5']['project_domain_id'])
        self.assertEqual('projdomain',
                         swift_params['ref5']['project_domain_name'])
        self.assertIsNone(swift_params['ref5']['user_domain_id'])
        self.assertEqual('userdomain',
                         swift_params['ref5']['user_domain_name'])


class TestSwiftConfigParser(base.StoreBaseTest):

    def setUp(self):
        super(TestSwiftConfigParser, self).setUp()
        self.method = sutils.SwiftConfigParser._process_quotes

    def test_quotes_processor(self):
        self.assertEqual('user', self.method('user'))
        self.assertEqual('user', self.method('"user"'))
        self.assertEqual("user", self.method("'user'"))
        self.assertEqual("user'", self.method("user'"))
        self.assertEqual('user"', self.method('user"'))

    def test_quotes_processor_negative(self):
        negative_values = [
            '\'user"', '"user\'',
            '\'user', '"user\'',
            "'user", '"user',
            '"', "'",
        ]
        for value in negative_values:
            self.assertRaises(ValueError, self.method, value)

glance_store-4.8.1/glance_store/tests/unit/test_test_utils.py

# Copyright 2020 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance_store.tests import base
from glance_store.tests import utils as test_utils


class TestFakeData(base.StoreBaseTest):

    def test_via_read(self):
        fd = test_utils.FakeData(1024)
        data = []
        for i in range(0, 1025, 256):
            chunk = fd.read(256)
            data.append(chunk)
            if not chunk:
                break
        self.assertEqual(5, len(data))
        # Make sure we got a zero-length final read
        self.assertEqual(b'', data[-1])
        # Make sure we only got 1024 bytes
        self.assertEqual(1024, len(b''.join(data)))

    def test_via_iter(self):
        data = b''.join(list(test_utils.FakeData(1024)))
        self.assertEqual(1024, len(data))

glance_store-4.8.1/glance_store/tests/unit/test_vmware_store.py

# Copyright 2014 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests the VMware Datastore backend store"""

import hashlib
import io
from unittest import mock
import uuid

from oslo_utils import secretutils
from oslo_utils import units
from oslo_vmware import api
from oslo_vmware import exceptions as vmware_exceptions
from oslo_vmware.objects import datacenter as oslo_datacenter
from oslo_vmware.objects import datastore as oslo_datastore

import glance_store._drivers.vmware_datastore as vm_store
from glance_store import backend
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
from glance_store.tests import utils


FAKE_UUID = str(uuid.uuid4())
FIVE_KB = 5 * units.Ki

VMWARE_DS = {
    'debug': True,
    'known_stores': ['vmware_datastore'],
    'default_store': 'vsphere',
    'vmware_server_host': '127.0.0.1',
    'vmware_server_username': 'username',
    'vmware_server_password': 'password',
    'vmware_store_image_dir': '/openstack_glance',
    'vmware_insecure': 'True',
    'vmware_datastores': ['a:b:0'],
}


def format_location(host_ip, folder_name, image_id, datastores):
    """
    Helper method that returns a VMware Datastore store URI given
    the component pieces.
    """
    scheme = 'vsphere'
    (datacenter_path, datastore_name, weight) = datastores[0].split(':')
    return ("%s://%s/folder%s/%s?dcPath=%s&dsName=%s"
            % (scheme, host_ip, folder_name,
               image_id, datacenter_path, datastore_name))


def fake_datastore_obj(*args, **kwargs):
    dc_obj = oslo_datacenter.Datacenter(ref='fake-ref',
                                        name='fake-name')
    dc_obj.path = args[0]
    return oslo_datastore.Datastore(ref='fake-ref',
                                    datacenter=dc_obj,
                                    name=args[1])


class TestStore(base.StoreBaseTest,
                test_store_capabilities.TestStoreCapabilitiesChecking):

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch('oslo_vmware.api.VMwareAPISession')
    def setUp(self, mock_api_session, mock_get_datastore):
        """Establish a clean test environment."""
        super(TestStore, self).setUp()

        vm_store.Store.CHUNKSIZE = 2

        default_store = VMWARE_DS['default_store']
        self.config(default_store=default_store, stores=['vmware'])
        backend.register_opts(self.conf)
        self.config(group='glance_store',
                    vmware_server_username='admin',
                    vmware_server_password='admin',
                    vmware_server_host=VMWARE_DS['vmware_server_host'],
                    vmware_insecure=VMWARE_DS['vmware_insecure'],
                    vmware_datastores=VMWARE_DS['vmware_datastores'])

        mock_get_datastore.side_effect = fake_datastore_obj

        backend.create_stores(self.conf)

        self.store = backend.get_store_from_scheme('vsphere')
        self.store.store_image_dir = (
            VMWARE_DS['vmware_store_image_dir'])
        self.hash_algo = 'sha256'

    def _mock_http_connection(self):
        return mock.patch('http.client.HTTPConnection')

    @mock.patch('oslo_vmware.api.VMwareAPISession')
    def test_get(self, mock_api_session):
        """Test a "normal" retrieval of an image in chunks."""
        expected_image_size = 31
        expected_returns = ['I am a teapot, short and stout\n']
        loc = location.get_location_from_uri(
            "vsphere://127.0.0.1/folder/openstack_glance/%s"
            "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
        with mock.patch('requests.Session.request') as HttpConn:
            HttpConn.return_value = utils.fake_response()
            (image_file, image_size) = self.store.get(loc)
            self.assertEqual(expected_image_size, image_size)
            chunks = [c for c in image_file]
            self.assertEqual(expected_returns, chunks)

    @mock.patch('oslo_vmware.api.VMwareAPISession')
    def test_get_non_existing(self, mock_api_session):
        """
        Test that trying to retrieve an image that doesn't exist
        raises an error
        """
        loc = location.get_location_from_uri(
            "vsphere://127.0.0.1/folder/openstack_glan"
            "ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
        with mock.patch('requests.Session.request') as HttpConn:
            HttpConn.return_value = utils.fake_response(status_code=404)
            self.assertRaises(exceptions.NotFound, self.store.get, loc)

    @mock.patch.object(vm_store.Store, '_build_vim_cookie_header')
    @mock.patch.object(vm_store.Store, 'select_datastore')
    @mock.patch.object(vm_store._Reader, 'size')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_add(self, fake_api_session, fake_size, fake_select_datastore,
                 fake_cookie):
        """Test that we can add an image via the VMware backend."""
        fake_select_datastore.return_value = self.store.datastores[0][0]
        expected_image_id = str(uuid.uuid4())
        expected_size = FIVE_KB
        expected_contents = b"*" * expected_size
        hash_code = secretutils.md5(expected_contents,
                                    usedforsecurity=False)
        expected_checksum = hash_code.hexdigest()
        sha256_code = hashlib.sha256(expected_contents)
        expected_multihash = sha256_code.hexdigest()
        fake_size.__get__ = mock.Mock(return_value=expected_size)
        expected_cookie = 'vmware_soap_session=fake-uuid'
        fake_cookie.return_value = expected_cookie
        expected_headers = {'Content-Length': str(expected_size),
                            'Cookie': expected_cookie}
        with mock.patch('hashlib.md5') as md5:
            with mock.patch('hashlib.new') as fake_new:
                md5.return_value = hash_code
                fake_new.return_value = sha256_code
                expected_location = format_location(
                    VMWARE_DS['vmware_server_host'],
                    VMWARE_DS['vmware_store_image_dir'],
                    expected_image_id,
                    VMWARE_DS['vmware_datastores'])
                image = io.BytesIO(expected_contents)
                with mock.patch('requests.Session.request') as HttpConn:
                    HttpConn.return_value = utils.fake_response()
                    location, size, checksum, multihash, _ = self.store.add(
                        expected_image_id, image, expected_size,
                        self.hash_algo)
                    _, kwargs = HttpConn.call_args
                    self.assertEqual(expected_headers, kwargs['headers'])
                    self.assertEqual(
                        utils.sort_url_by_qs_keys(expected_location),
                        utils.sort_url_by_qs_keys(location))
                    self.assertEqual(expected_size, size)
                    self.assertEqual(expected_checksum, checksum)
                    self.assertEqual(expected_multihash, multihash)

    @mock.patch.object(vm_store.Store, 'select_datastore')
    @mock.patch.object(vm_store._Reader, 'size')
    @mock.patch('oslo_vmware.api.VMwareAPISession')
    def test_add_size_zero(self, mock_api_session, fake_size,
                           fake_select_datastore):
        """
        Test that when specifying size zero for the image to add,
        the actual size of the image is returned.
""" fake_select_datastore.return_value = self.store.datastores[0][0] expected_image_id = str(uuid.uuid4()) expected_size = FIVE_KB expected_contents = b"*" * expected_size hash_code = secretutils.md5(expected_contents, usedforsecurity=False) expected_checksum = hash_code.hexdigest() sha256_code = hashlib.sha256(expected_contents) expected_multihash = sha256_code.hexdigest() fake_size.__get__ = mock.Mock(return_value=expected_size) with mock.patch('hashlib.md5') as md5: with mock.patch('hashlib.new') as fake_new: md5.return_value = hash_code fake_new.return_value = sha256_code expected_location = format_location( VMWARE_DS['vmware_server_host'], VMWARE_DS['vmware_store_image_dir'], expected_image_id, VMWARE_DS['vmware_datastores']) image = io.BytesIO(expected_contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() location, size, checksum, multihash, _ = self.store.add( expected_image_id, image, 0, self.hash_algo) self.assertEqual(utils.sort_url_by_qs_keys(expected_location), utils.sort_url_by_qs_keys(location)) self.assertEqual(expected_size, size) self.assertEqual(expected_checksum, checksum) self.assertEqual(expected_multihash, multihash) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch('glance_store._drivers.vmware_datastore._Reader') def test_add_with_verifier(self, fake_reader, fake_select_datastore): """Test that the verifier is passed to the _Reader during add.""" verifier = mock.MagicMock(name='mock_verifier') image_id = str(uuid.uuid4()) size = FIVE_KB contents = b"*" * size image = io.BytesIO(contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() self.store.add(image_id, image, size, self.hash_algo, verifier=verifier) fake_reader.assert_called_with(image, self.hash_algo, verifier) @mock.patch.object(vm_store.Store, 'select_datastore') @mock.patch('glance_store._drivers.vmware_datastore._Reader') def test_add_with_verifier_size_zero(self, fake_reader, fake_select_ds): """Test that the verifier is passed to the _ChunkReader during add.""" verifier = mock.MagicMock(name='mock_verifier') image_id = str(uuid.uuid4()) size = FIVE_KB contents = b"*" * size image = io.BytesIO(contents) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() self.store.add(image_id, image, 0, self.hash_algo, verifier=verifier) fake_reader.assert_called_with(image, self.hash_algo, verifier) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_delete(self, mock_api_session): """Test we can delete an existing image in the VMware store.""" loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s?" "dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() vm_store.Store._service_content = mock.Mock() self.store.delete(loc) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_delete_non_existing(self, mock_api_session): """ Test that trying to delete an image that doesn't exist raises an error """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s?" 
"dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch.object(self.store.session, 'wait_for_task') as mock_task: mock_task.side_effect = vmware_exceptions.FileNotFoundException self.assertRaises(exceptions.NotFound, self.store.delete, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_size(self, mock_api_session): """ Test we can get the size of an existing image in the VMware store """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response() image_size = self.store.get_size(loc) self.assertEqual(image_size, 31) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_get_size_non_existing(self, mock_api_session): """ Test that trying to retrieve an image size that doesn't exist raises an error """ loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glan" "ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=404) self.assertRaises(exceptions.NotFound, self.store.get_size, loc) def test_reader_full(self): content = b'XXX' image = io.BytesIO(content) expected_checksum = secretutils.md5(content, usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(content).hexdigest() reader = vm_store._Reader(image, self.hash_algo) ret = reader.read() self.assertEqual(content, ret) self.assertEqual(expected_checksum, reader.checksum.hexdigest()) self.assertEqual(expected_multihash, reader.os_hash_value.hexdigest()) self.assertEqual(len(content), reader.size) def test_reader_partial(self): content = b'XXX' image = io.BytesIO(content) expected_checksum = secretutils.md5(b'X', usedforsecurity=False).hexdigest() expected_multihash = hashlib.sha256(b'X').hexdigest() reader = vm_store._Reader(image, self.hash_algo) ret = reader.read(1) self.assertEqual(b'X', ret) self.assertEqual(expected_checksum, reader.checksum.hexdigest()) self.assertEqual(expected_multihash, reader.os_hash_value.hexdigest()) self.assertEqual(1, reader.size) def test_reader_with_verifier(self): content = b'XXX' image = io.BytesIO(content) verifier = mock.MagicMock(name='mock_verifier') reader = vm_store._Reader(image, self.hash_algo, verifier) reader.read() verifier.update.assert_called_with(content) def test_sanity_check_api_retry_count(self): """Test that sanity check raises if api_retry_count is <= 0.""" self.store.conf.glance_store.vmware_api_retry_count = -1 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_api_retry_count = 0 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_api_retry_count = 1 try: self.store._sanity_check() except exceptions.BadStoreConfiguration: self.fail() def test_sanity_check_task_poll_interval(self): """Test that sanity check raises if task_poll_interval is <= 0.""" self.store.conf.glance_store.vmware_task_poll_interval = -1 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_task_poll_interval = 0 self.assertRaises(exceptions.BadStoreConfiguration, self.store._sanity_check) self.store.conf.glance_store.vmware_task_poll_interval = 1 try: self.store._sanity_check() except exceptions.BadStoreConfiguration: self.fail() def 
    def test_sanity_check_multiple_datastores(self):
        self.store.conf.glance_store.vmware_api_retry_count = 1
        self.store.conf.glance_store.vmware_task_poll_interval = 1
        self.store.conf.glance_store.vmware_datastores = ['a:b:0', 'a:d:0']
        try:
            self.store._sanity_check()
        except exceptions.BadStoreConfiguration:
            self.fail()

    def test_parse_datastore_info_and_weight_less_opts(self):
        datastore = 'a'
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._parse_datastore_info_and_weight,
                          datastore)

    def test_parse_datastore_info_and_weight_invalid_weight(self):
        datastore = 'a:b:c'
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._parse_datastore_info_and_weight,
                          datastore)

    def test_parse_datastore_info_and_weight_empty_opts(self):
        datastore = 'a: :0'
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._parse_datastore_info_and_weight,
                          datastore)
        datastore = ':b:0'
        self.assertRaises(exceptions.BadStoreConfiguration,
                          self.store._parse_datastore_info_and_weight,
                          datastore)

    def test_parse_datastore_info_and_weight(self):
        datastore = 'a:b:100'
        parts = self.store._parse_datastore_info_and_weight(datastore)
        self.assertEqual('a', parts[0])
        self.assertEqual('b', parts[1])
        self.assertEqual(100, parts[2])

    def test_parse_datastore_info_and_weight_default_weight(self):
        datastore = 'a:b'
        parts = self.store._parse_datastore_info_and_weight(datastore)
        self.assertEqual('a', parts[0])
        self.assertEqual('b', parts[1])
        self.assertEqual(0, parts[2])

    @mock.patch.object(vm_store.Store, 'select_datastore')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_unexpected_status(self, mock_api_session,
                               mock_select_datastore):
        expected_image_id = str(uuid.uuid4())
        expected_size = FIVE_KB
        expected_contents = b"*" * expected_size
        image = io.BytesIO(expected_contents)
        self.session = mock.Mock()
        with mock.patch('requests.Session.request') as HttpConn:
            HttpConn.return_value = utils.fake_response(status_code=401)
            self.assertRaises(exceptions.BackendException,
                              self.store.add,
                              expected_image_id, image, expected_size,
                              self.hash_algo)

    @mock.patch.object(vm_store.Store, 'select_datastore')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_unexpected_status_no_response_body(self, mock_api_session,
                                                mock_select_datastore):
        expected_image_id = str(uuid.uuid4())
        expected_size = FIVE_KB
        expected_contents = b"*" * expected_size
        image = io.BytesIO(expected_contents)
        self.session = mock.Mock()
        with self._mock_http_connection() as HttpConn:
            HttpConn.return_value = utils.fake_response(
                status_code=500, no_response_body=True)
            self.assertRaises(exceptions.BackendException,
                              self.store.add,
                              expected_image_id, image, expected_size,
                              self.hash_algo)

    @mock.patch.object(api, 'VMwareAPISession')
    def test_reset_session(self, mock_api_session):
        self.store.reset_session()
        self.assertTrue(mock_api_session.called)

    @mock.patch.object(api, 'VMwareAPISession')
    def test_build_vim_cookie_header_active(self, mock_api_session):
        self.store.session.is_current_session_active = mock.Mock()
        self.store.session.is_current_session_active.return_value = True
        self.store._build_vim_cookie_header(True)
        self.assertFalse(mock_api_session.called)

    @mock.patch.object(api, 'VMwareAPISession')
    def test_build_vim_cookie_header_expired(self, mock_api_session):
        self.store.session.is_current_session_active = mock.Mock()
        self.store.session.is_current_session_active.return_value = False
        self.store._build_vim_cookie_header(True)
        self.assertTrue(mock_api_session.called)

    @mock.patch.object(api, 'VMwareAPISession')
    def test_build_vim_cookie_header_expired_noverify(self,
                                                      mock_api_session):
        self.store.session.is_current_session_active = mock.Mock()
        self.store.session.is_current_session_active.return_value = False
        self.store._build_vim_cookie_header()
        self.assertFalse(mock_api_session.called)

    @mock.patch.object(vm_store.Store, 'select_datastore')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_add_ioerror(self, mock_api_session, mock_select_datastore):
        mock_select_datastore.return_value = self.store.datastores[0][0]
        expected_image_id = str(uuid.uuid4())
        expected_size = FIVE_KB
        expected_contents = b"*" * expected_size
        image = io.BytesIO(expected_contents)
        self.session = mock.Mock()
        with mock.patch('requests.Session.request') as HttpConn:
            HttpConn.request.side_effect = IOError
            self.assertRaises(exceptions.BackendException,
                              self.store.add,
                              expected_image_id, image, expected_size,
                              self.hash_algo)

    def test_qs_sort_with_literal_question_mark(self):
        url = 'scheme://example.com/path?key2=val2&key1=val1?sort=true'
        exp_url = 'scheme://example.com/path?key1=val1%3Fsort%3Dtrue&key2=val2'
        self.assertEqual(exp_url, utils.sort_url_by_qs_keys(url))

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_build_datastore_weighted_map(self, mock_api_session,
                                          mock_ds_obj):
        datastores = ['a:b:100', 'c:d:100', 'e:f:200']
        mock_ds_obj.side_effect = fake_datastore_obj
        ret = self.store._build_datastore_weighted_map(datastores)
        ds = ret[200]
        self.assertEqual('e', ds[0].datacenter.path)
        self.assertEqual('f', ds[0].name)
        ds = ret[100]
        self.assertEqual(2, len(ds))

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_build_datastore_weighted_map_equal_weight(self,
                                                       mock_api_session,
                                                       mock_ds_obj):
        datastores = ['a:b:200', 'a:b:200']
        mock_ds_obj.side_effect = fake_datastore_obj
        ret = self.store._build_datastore_weighted_map(datastores)
        ds = ret[200]
        self.assertEqual(2, len(ds))

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(api, 'VMwareAPISession')
    def test_build_datastore_weighted_map_empty_list(self, mock_api_session,
                                                     mock_ds_ref):
        datastores = []
        ret = self.store._build_datastore_weighted_map(datastores)
        self.assertEqual({}, ret)

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(vm_store.Store, '_get_freespace')
    def test_select_datastore_insufficient_freespace(self,
                                                     mock_get_freespace,
                                                     mock_ds_ref):
        datastores = ['a:b:100', 'c:d:100', 'e:f:200']
        image_size = 10
        self.store.datastores = (
            self.store._build_datastore_weighted_map(datastores))
        freespaces = [5, 5, 5]

        def fake_get_fp(*args, **kwargs):
            return freespaces.pop(0)
        mock_get_freespace.side_effect = fake_get_fp
        self.assertRaises(exceptions.StorageFull,
                          self.store.select_datastore, image_size)

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(vm_store.Store, '_get_freespace')
    def test_select_datastore_insufficient_fs_one_ds(self,
                                                     mock_get_freespace,
                                                     mock_ds_ref):
        # Tests if fs is updated with just one datastore.
        datastores = ['a:b:100']
        image_size = 10
        self.store.datastores = (
            self.store._build_datastore_weighted_map(datastores))
        freespaces = [5]

        def fake_get_fp(*args, **kwargs):
            return freespaces.pop(0)
        mock_get_freespace.side_effect = fake_get_fp
        self.assertRaises(exceptions.StorageFull,
                          self.store.select_datastore, image_size)

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(vm_store.Store, '_get_freespace')
    def test_select_datastore_equal_freespace(self, mock_get_freespace,
                                              mock_ds_obj):
        datastores = ['a:b:100', 'c:d:100', 'e:f:200']
        image_size = 10
        mock_ds_obj.side_effect = fake_datastore_obj
        self.store.datastores = (
            self.store._build_datastore_weighted_map(datastores))
        freespaces = [11, 11, 11]

        def fake_get_fp(*args, **kwargs):
            return freespaces.pop(0)
        mock_get_freespace.side_effect = fake_get_fp
        ds = self.store.select_datastore(image_size)
        self.assertEqual('e', ds.datacenter.path)
        self.assertEqual('f', ds.name)

    @mock.patch.object(vm_store.Store, '_get_datastore')
    @mock.patch.object(vm_store.Store, '_get_freespace')
    def test_select_datastore_contention(self, mock_get_freespace,
                                         mock_ds_obj):
        datastores = ['a:b:100', 'c:d:100', 'e:f:200']
        image_size = 10
        mock_ds_obj.side_effect = fake_datastore_obj
        self.store.datastores = (
            self.store._build_datastore_weighted_map(datastores))
        freespaces = [5, 11, 12]

        def fake_get_fp(*args, **kwargs):
            return freespaces.pop(0)
        mock_get_freespace.side_effect = fake_get_fp
        ds = self.store.select_datastore(image_size)
        self.assertEqual('c', ds.datacenter.path)
        self.assertEqual('d', ds.name)

    def test_select_datastore_empty_list(self):
        datastores = []
        self.store.datastores = (
            self.store._build_datastore_weighted_map(datastores))
        self.assertRaises(exceptions.StorageFull,
                          self.store.select_datastore, 10)

    @mock.patch('oslo_vmware.api.VMwareAPISession')
    def test_get_datacenter_ref(self, mock_api_session):
        datacenter_path = 'Datacenter1'
        self.store._get_datacenter(datacenter_path)
        self.store.session.invoke_api.assert_called_with(
            self.store.session.vim,
            'FindByInventoryPath',
            self.store.session.vim.service_content.searchIndex,
            inventoryPath=datacenter_path)

    @mock.patch('oslo_vmware.api.VMwareAPISession')
    def test_http_get_redirect(self, mock_api_session):
        # Add two layers of redirects to the response stack, which will
        # return the default 200 OK with the expected data after resolving
        # both redirects.
redirect1 = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} redirect2 = {"location": "https://example.com?dsName=ds2&dcPath=dc2"} responses = [utils.fake_response(), utils.fake_response(status_code=302, headers=redirect1), utils.fake_response(status_code=301, headers=redirect2)] def getresponse(*args, **kwargs): return responses.pop() expected_image_size = 31 expected_returns = ['I am a teapot, short and stout\n'] loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.side_effect = getresponse (image_file, image_size) = self.store.get(loc) self.assertEqual(expected_image_size, image_size) chunks = [c for c in image_file] self.assertEqual(expected_returns, chunks) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_max_redirects(self, mock_api_session): redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} responses = ([utils.fake_response(status_code=302, headers=redirect)] * (vm_store.MAX_REDIRECTS + 1)) def getresponse(*args, **kwargs): return responses.pop() loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.side_effect = getresponse self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get, loc) @mock.patch('oslo_vmware.api.VMwareAPISession') def test_http_get_redirect_invalid(self, mock_api_session): redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"} loc = location.get_location_from_uri( "vsphere://127.0.0.1/folder/openstack_glance/%s" "?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf) with mock.patch('requests.Session.request') as HttpConn: HttpConn.return_value = utils.fake_response(status_code=307, headers=redirect) self.assertRaises(exceptions.BadStoreUri, self.store.get, loc) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/glance_store/tests/utils.py0000664000175000017500000000730400000000000021541 0ustar00zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import io import urllib.parse from oslo_utils import units import requests def sort_url_by_qs_keys(url): # NOTE(kragniz): this only sorts the keys of the query string of a url. # For example, an input of '/v2/tasks?sort_key=id&sort_dir=asc&limit=10' # returns '/v2/tasks?limit=10&sort_dir=asc&sort_key=id'. This is to prevent # non-deterministic ordering of the query string causing problems with unit # tests. parsed = urllib.parse.urlparse(url) # In python2.6, for arbitrary url schemes, query string # is not parsed from url. 
    # http://bugs.python.org/issue9374
    path = parsed.path
    query = parsed.query
    if not query:
        path, query = parsed.path.split('?', 1)
    queries = urllib.parse.parse_qsl(query, True)
    sorted_query = sorted(queries, key=lambda x: x[0])
    encoded_sorted_query = urllib.parse.urlencode(sorted_query, True)
    url_parts = (parsed.scheme, parsed.netloc, path,
                 parsed.params, encoded_sorted_query,
                 parsed.fragment)
    return urllib.parse.urlunparse(url_parts)


class FakeHTTPResponse(object):
    def __init__(self, status=200, headers=None, data=None, *args, **kwargs):
        data = data or 'I am a teapot, short and stout\n'
        self.data = io.StringIO(data)
        # NOTE: this instance attribute shadows the read() method defined
        # below, so reads are delegated directly to the underlying StringIO.
        self.read = self.data.read
        self.status = status
        self.headers = headers or {'content-length': len(data)}
        if not kwargs.get('no_response_body', False):
            self.body = None

    def getheader(self, name, default=None):
        return self.headers.get(name.lower(), default)

    def getheaders(self):
        return self.headers or {}

    def read(self, amt):
        return self.data.read(amt)

    def release_conn(self):
        pass

    def close(self):
        self.data.close()


def fake_response(status_code=200, headers=None, content=None, **kwargs):
    r = requests.models.Response()
    r.status_code = status_code
    r.headers = headers or {}
    r.raw = FakeHTTPResponse(status_code, headers, content, kwargs)
    return r


class FakeData(object):
    """Generate a bunch of data without storing it in memory.

    This acts like a read-only file object which generates fake data
    in chunks when read() is called or it is used as a generator.
    It can generate an arbitrary amount of data without storing it
    in memory.

    :param length: The number of bytes to generate
    :param chunk_size: The chunk size to return in iteration mode, or when
                       read() is called unbounded
    """

    def __init__(self, length, chunk_size=64 * units.Ki):
        self._max = length
        self._chunk_size = chunk_size
        self._len = 0

    def read(self, length=None):
        if length is None:
            length = self._chunk_size
        length = min(length, self._max - self._len)
        self._len += length
        if length == 0:
            return b''
        else:
            return b'0' * length

    def __iter__(self):
        return self

    def __next__(self):
        r = self.read()
        if len(r) == 0:
            raise StopIteration()
        else:
            return r
glance_store-4.8.1/glance_store.egg-info/
glance_store-4.8.1/glance_store.egg-info/PKG-INFO
Metadata-Version: 2.1
Name: glance-store
Version: 4.8.1
Summary: OpenStack Image Service Store Library
Home-page: https://docs.openstack.org/glance_store/latest/
Author: OpenStack
Author-email: openstack-discuss@lists.openstack.org
License: UNKNOWN
Description: ========================
        Team and repository tags
        ========================

        .. image:: https://governance.openstack.org/tc/badges/glance_store.svg
            :target: https://governance.openstack.org/tc/reference/tags/index.html
            :alt: The following tags have been asserted for the Glance Store
                  Library: "project:official", "stable:follows-policy",
                  "vulnerability:managed".
                  Follow the link for an explanation of these tags.
        .. NOTE(rosmaita): the alt text above will have to be updated when
           additional tags are asserted for glance_store.  (The SVG in the
           governance repo is updated automatically.)
        ..
Change things from this point on Glance Store Library ==================== Glance's stores library This library has been extracted from the Glance source code for the specific use of the Glance and Glare projects. The API it exposes is not stable, has some shortcomings, and is not a general purpose interface. We would eventually like to change this, but for now using this library outside of Glance or Glare will not be supported by the core team. * License: Apache License, Version 2.0 * Documentation: https://docs.openstack.org/glance_store/latest/ * Source: https://opendev.org/openstack/glance_store/ * Bugs: https://bugs.launchpad.net/glance-store * Release notes: https://docs.openstack.org/releasenotes/glance_store/index.html Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: OpenStack Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Requires-Python: >=3.8 Provides-Extra: cinder Provides-Extra: s3 Provides-Extra: swift Provides-Extra: test Provides-Extra: vmware ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254696.0 glance_store-4.8.1/glance_store.egg-info/SOURCES.txt0000664000175000017500000002100100000000000022231 0ustar00zuulzuul00000000000000.stestr.conf .zuul.yaml AUTHORS ChangeLog LICENSE README.rst requirements.txt setup.cfg setup.py test-requirements.txt tox.ini doc/requirements.txt doc/source/conf.py doc/source/index.rst doc/source/reference/index.rst doc/source/user/drivers.rst doc/source/user/index.rst etc/glance/rootwrap.conf etc/glance/rootwrap.d/glance_cinder_store.filters glance_store/__init__.py glance_store/backend.py glance_store/capabilities.py glance_store/driver.py glance_store/exceptions.py glance_store/i18n.py glance_store/location.py glance_store/multi_backend.py glance_store.egg-info/PKG-INFO glance_store.egg-info/SOURCES.txt glance_store.egg-info/dependency_links.txt glance_store.egg-info/entry_points.txt glance_store.egg-info/not-zip-safe glance_store.egg-info/pbr.json glance_store.egg-info/requires.txt glance_store.egg-info/top_level.txt glance_store/_drivers/__init__.py glance_store/_drivers/filesystem.py glance_store/_drivers/http.py glance_store/_drivers/rbd.py glance_store/_drivers/s3.py glance_store/_drivers/vmware_datastore.py glance_store/_drivers/cinder/__init__.py glance_store/_drivers/cinder/base.py glance_store/_drivers/cinder/nfs.py glance_store/_drivers/cinder/scaleio.py glance_store/_drivers/cinder/store.py glance_store/_drivers/swift/__init__.py glance_store/_drivers/swift/buffered.py glance_store/_drivers/swift/connection_manager.py glance_store/_drivers/swift/store.py glance_store/_drivers/swift/utils.py glance_store/common/__init__.py glance_store/common/attachment_state_manager.py glance_store/common/cinder_utils.py glance_store/common/fs_mount.py glance_store/common/utils.py glance_store/locale/en_GB/LC_MESSAGES/glance_store.po 
glance_store/locale/ko_KR/LC_MESSAGES/glance_store.po glance_store/tests/__init__.py glance_store/tests/base.py glance_store/tests/fakes.py glance_store/tests/utils.py glance_store/tests/etc/glance-swift.conf glance_store/tests/functional/README.rst glance_store/tests/functional/__init__.py glance_store/tests/functional/base.py glance_store/tests/functional/filesystem/__init__.py glance_store/tests/functional/filesystem/test_functional_filesystem.py glance_store/tests/functional/swift/__init__.py glance_store/tests/functional/swift/test_functional_swift.py glance_store/tests/unit/__init__.py glance_store/tests/unit/test_backend.py glance_store/tests/unit/test_connection_manager.py glance_store/tests/unit/test_driver.py glance_store/tests/unit/test_exceptions.py glance_store/tests/unit/test_filesystem_store.py glance_store/tests/unit/test_http_store.py glance_store/tests/unit/test_location.py glance_store/tests/unit/test_multistore_filesystem.py glance_store/tests/unit/test_multistore_rbd.py glance_store/tests/unit/test_multistore_s3.py glance_store/tests/unit/test_multistore_vmware.py glance_store/tests/unit/test_opts.py glance_store/tests/unit/test_rbd_store.py glance_store/tests/unit/test_s3_store.py glance_store/tests/unit/test_store_base.py glance_store/tests/unit/test_store_capabilities.py glance_store/tests/unit/test_swift_store.py glance_store/tests/unit/test_swift_store_multibackend.py glance_store/tests/unit/test_swift_store_utils.py glance_store/tests/unit/test_test_utils.py glance_store/tests/unit/test_vmware_store.py glance_store/tests/unit/cinder/__init__.py glance_store/tests/unit/cinder/test_base.py glance_store/tests/unit/cinder/test_cinder_base.py glance_store/tests/unit/cinder/test_cinder_store.py glance_store/tests/unit/cinder/test_multistore_cinder.py glance_store/tests/unit/cinder/test_nfs.py glance_store/tests/unit/cinder/test_scaleio.py glance_store/tests/unit/common/__init__.py glance_store/tests/unit/common/test_attachment_state_manager.py glance_store/tests/unit/common/test_cinder_utils.py glance_store/tests/unit/common/test_fs_mount.py glance_store/tests/unit/common/test_utils.py releasenotes/notes/.placeholder releasenotes/notes/0.29.1-notes-ded2a1d473a306c7.yaml releasenotes/notes/Stein_final_release-c7df5838028b8c7e.yaml releasenotes/notes/add-store-weight-d443fbea8cc8d4c9.yaml releasenotes/notes/block-creating-encrypted-nfs-volumes-d0ff370ab762042e.yaml releasenotes/notes/bug-1620999-8b76a0ad14826197.yaml releasenotes/notes/bug-1820817-0ee70781918d232e.yaml releasenotes/notes/bug-1915602-fcc807a435d8a6bf.yaml releasenotes/notes/bug-1954883-3666d63a3c0233f1.yaml releasenotes/notes/bug-2004555-4fd67fce86c07461.yaml releasenotes/notes/cinder-fix-nfs-sparse-vol-create-76631ce05f86257c.yaml releasenotes/notes/cinder-nfs-block-qcow2-vol-4fed58b0afafc980.yaml releasenotes/notes/cinder-support-extend-in-use-volume-c6292f950ff75cca.yaml releasenotes/notes/deprecate-rados_connect_timeout-767ed1eaa026196e.yaml releasenotes/notes/deprecate-sheepdog-driver-1f9689c327f313d4.yaml releasenotes/notes/deprecate-store_add_to_backend-f419e5c4210613d2.yaml releasenotes/notes/deprecate-store_capabilities_update_min_interval-039389fa296e2494.yaml releasenotes/notes/deprecate-vmware-store-2f720c6074b843b0.yaml releasenotes/notes/drop-py-2-7-345cafc9c1d3f892.yaml releasenotes/notes/drop-python-3-6-and-3-7-41af87576c4fd7b1.yaml releasenotes/notes/fix-exception-logging-during-attach-9546e24189db83c4.yaml releasenotes/notes/fix-interval-in-retries-471155ff34d9f0e9.yaml 
releasenotes/notes/fix-ip-in-connector-info-36b95d9959f10f63.yaml releasenotes/notes/fix-legacy-image-update-49a149ec267dccb6.yaml releasenotes/notes/fix-rados_connect_timeout-39e5074bc1a3b65b.yaml releasenotes/notes/fix-rbd-lockup-3aa2bb86f7d29e19.yaml releasenotes/notes/fix-wait-device-resize-c282940b71a3748e.yaml releasenotes/notes/fs-drv-chunk-sz-a1b2f6a72fad92d5.yaml releasenotes/notes/handle-sparse-image-a3ecfc4ae1c00d48.yaml releasenotes/notes/improved-configuration-options-3635b56aba3072c9.yaml releasenotes/notes/lock_path-cef9d6f5f52c3211.yaml releasenotes/notes/move-rootwrap-config-f2cf435c548aab5c.yaml releasenotes/notes/multi-store-0c004fc8aba2a25d.yaml releasenotes/notes/multi-tenant-store-058b67ce5b7f3bd0.yaml releasenotes/notes/multiattach-volume-handling-1a8446a64463f2cf.yaml releasenotes/notes/multihash-support-629e9cbc283a8b47.yaml releasenotes/notes/pike-relnote-9f547df14184d18c.yaml releasenotes/notes/prevent-unauthorized-errors-ebb9cf2236595cd0.yaml releasenotes/notes/queens-relnote-5fa2d009d9a9e458.yaml releasenotes/notes/rbd-trash-snapshots-158a39da4248fb0c.yaml releasenotes/notes/release-1.0.0-7ab43e91523eb3c8.yaml releasenotes/notes/release-1.0.1-098b1487ac8cc9a1.yaml releasenotes/notes/release-1.2.0-8d239f01cd8ff0bf.yaml releasenotes/notes/releasenote-0.17.0-efee3f557ea2096a.yaml releasenotes/notes/remove-cinder-experimental-fbf9dea32c84dc9b.yaml releasenotes/notes/remove-gridfs-driver-09286e27613b4353.yaml releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml releasenotes/notes/remove-store-cap-update-min-interval-21fea4173ed4a09b.yaml releasenotes/notes/rethinking-filesystem-access-5ab872fd0c0d27db.yaml releasenotes/notes/rocky-bugfixes-adefa8f47db16a2d.yaml releasenotes/notes/set-documented-default-directory-for-filesystem-9b417a29416d3a94.yaml releasenotes/notes/sorted-drivers-for-configs-a905f07d3bf9c973.yaml releasenotes/notes/start-using-reno-73ef709807e37b74.yaml releasenotes/notes/support-cinder-multiple-stores-6cc8489f8f4f8ff3.yaml releasenotes/notes/support-cinder-upload-c85849d9c88bbd7e.yaml releasenotes/notes/support-cinder-user-domain-420c76053dd50534.yaml releasenotes/notes/support-s3-driver-a4158f9fa35931d5.yaml releasenotes/notes/update-stein-deprecations-3c2f6ffeab22b558.yaml releasenotes/notes/victoria-milestone-1-c1f9de5b90e8c326.yaml releasenotes/notes/vmware-store-requests-369485d2cfdb6175.yaml releasenotes/notes/volume-type-validation-check-011a400d7fb3b307.yaml releasenotes/notes/wallaby-final-release-00f0f851ff7d93ab.yaml releasenotes/notes/xena-final-release-3c6e19dfba43b40d.yaml releasenotes/source/2023.1.rst releasenotes/source/2023.2.rst releasenotes/source/2024.1.rst releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/queens.rst releasenotes/source/rocky.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/ussuri.rst releasenotes/source/victoria.rst releasenotes/source/wallaby.rst releasenotes/source/xena.rst releasenotes/source/yoga.rst releasenotes/source/zed.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder releasenotes/source/locale/de/LC_MESSAGES/releasenotes.po releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po releasenotes/source/locale/zh_CN/LC_MESSAGES/releasenotes.po 
tools/with_venv.sh././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254695.0 glance_store-4.8.1/glance_store.egg-info/dependency_links.txt0000664000175000017500000000000100000000000024421 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254695.0 glance_store-4.8.1/glance_store.egg-info/entry_points.txt0000664000175000017500000000177500000000000023663 0ustar00zuulzuul00000000000000[console_scripts] glance-rootwrap = oslo_rootwrap.cmd:main [glance_store.drivers] cinder = glance_store._drivers.cinder:Store file = glance_store._drivers.filesystem:Store glance.store.cinder.Store = glance_store._drivers.cinder:Store glance.store.filesystem.Store = glance_store._drivers.filesystem:Store glance.store.http.Store = glance_store._drivers.http:Store glance.store.rbd.Store = glance_store._drivers.rbd:Store glance.store.s3.Store = glance_store._drivers.s3:Store glance.store.swift.Store = glance_store._drivers.swift:Store glance.store.vmware_datastore.Store = glance_store._drivers.vmware_datastore:Store http = glance_store._drivers.http:Store no_conf = glance_store.tests.fakes:UnconfigurableStore rbd = glance_store._drivers.rbd:Store s3 = glance_store._drivers.s3:Store swift = glance_store._drivers.swift:Store vmware = glance_store._drivers.vmware_datastore:Store [oslo.config.opts] glance.multi_store = glance_store.multi_backend:_list_config_opts glance.store = glance_store.backend:_list_opts ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254695.0 glance_store-4.8.1/glance_store.egg-info/not-zip-safe0000664000175000017500000000000100000000000022601 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254695.0 glance_store-4.8.1/glance_store.egg-info/pbr.json0000664000175000017500000000005600000000000022032 0ustar00zuulzuul00000000000000{"git_version": "5552d59", "is_release": true}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254695.0 glance_store-4.8.1/glance_store.egg-info/requires.txt0000664000175000017500000000145100000000000022754 0ustar00zuulzuul00000000000000eventlet!=0.18.3,!=0.20.1,>=0.18.2 jsonschema>=3.2.0 keystoneauth1>=3.4.0 oslo.concurrency>=3.26.0 oslo.config>=5.2.0 oslo.i18n>=3.15.3 oslo.serialization!=2.19.1,>=2.18.0 oslo.utils>=4.7.0 python-keystoneclient>=3.8.0 requests>=2.14.2 stevedore>=1.20.0 [cinder] os-brick>=6.3.0 oslo.privsep>=1.23.0 oslo.rootwrap>=5.8.0 python-cinderclient>=4.1.0 [s3] boto3>=1.9.199 [swift] python-swiftclient>=3.2.0 [test] boto3>=1.9.199 coverage!=4.4,>=4.0 ddt>=1.4.4 doc8>=0.6.0 fixtures>=3.0.0 hacking<6.2.0,>=6.1.0 httplib2>=0.9.1 os-brick>=2.6.0 oslo.privsep>=1.23.0 oslo.rootwrap>=5.8.0 oslo.vmware>=3.6.0 oslotest>=3.2.0 python-cinderclient>=4.1.0 python-subunit>=1.0.0 python-swiftclient>=3.2.0 requests-mock>=1.2.0 retrying>=1.3.3 stestr>=2.0.0 testscenarios>=0.4 testtools>=2.2.0 [vmware] oslo.vmware>=3.6.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254695.0 glance_store-4.8.1/glance_store.egg-info/top_level.txt0000664000175000017500000000001500000000000023101 0ustar00zuulzuul00000000000000glance_store ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/releasenotes/0000775000175000017500000000000000000000000016705 
glance_store-4.8.1/releasenotes/notes/
glance_store-4.8.1/releasenotes/notes/.placeholder
glance_store-4.8.1/releasenotes/notes/0.29.1-notes-ded2a1d473a306c7.yaml
--- critical: - | glance_store 0.29.0 was released with backwards incompatible changes. There was no corresponding releasenote to mention this. 0.29.1 has reverted said change, which will be included in the 1.0.0 release later in the cycle. fixes: - | Following bugs were fixed and included after the 0.28.0 release: * Bug 1824533_: Do not include ETag when putting manifest in chunked uploads * Bug 1818915_: Python3: Fix return type on CooperativeReader.read * Bug 1805332_: Prevent unicode object error from zero-byte read .. _1824533: https://code.launchpad.net/bugs/1824533 .. _1818915: https://code.launchpad.net/bugs/1818915 .. _1805332: https://code.launchpad.net/bugs/1805332
glance_store-4.8.1/releasenotes/notes/Stein_final_release-c7df5838028b8c7e.yaml
--- prelude: This was a quiet development cycle for the ``glance_store`` library. One new feature was added to the Filesystem store driver. Several bugs were fixed and some code changes were committed to increase stability. features: - | A chunk size config option was added to the filesystem driver to allow some performance tweaking. The former hardcoded 64 KB is the current default value. deprecations: - | Removal of ``stores`` and ``default_store`` has been postponed until the Train cycle to allow time to move multiple backend stores out of EXPERIMENTAL status, due to some unresolved issues with the feature. fixes: - | * Bug 1785641_: Fix Defaults for ConfigParser * Bug 1808456_: Catch rbd NoSpace Exception * Bug 1813092_: Fix some types in the FS and VMware drivers * Bug 1815335_: Do not raise StopIteration * Bug 1816721_: Fix python3 compatibility of rbd get_fsid .. _1785641: https://code.launchpad.net/bugs/1785641 .. _1808456: https://code.launchpad.net/bugs/1808456 .. _1813092: https://code.launchpad.net/bugs/1813092 .. _1815335: https://code.launchpad.net/bugs/1815335 .. _1816721: https://code.launchpad.net/bugs/1816721
glance_store-4.8.1/releasenotes/notes/add-store-weight-d443fbea8cc8d4c9.yaml
--- features: - | A `weight` option has been added to the store configuration definition. This allows configuring stores with *relative* weights to each other for sorting when an image exists in multiple stores.
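(Editorial illustration, not part of the release note above: a minimal sketch of the ordering this enables, assuming hypothetical store objects that expose the configured weight as a numeric attribute.)

    def sort_stores_by_weight(stores):
        # Higher relative weight wins when an image exists in several
        # stores; ties keep their original order (sorted() is stable).
        return sorted(stores, key=lambda store: store.weight, reverse=True)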
glance_store-4.8.1/releasenotes/notes/block-creating-encrypted-nfs-volumes-d0ff370ab762042e.yaml
--- fixes: - | `Bug #1884482 <https://code.launchpad.net/bugs/1884482>`_: Blocked creation of images on encrypted nfs volumes when glance store is cinder.
glance_store-4.8.1/releasenotes/notes/bug-1620999-8b76a0ad14826197.yaml
--- fixes: - | Now the ``project_domain_name`` parameter and the ``user_domain_name`` parameter are properly used by swift backends. Previously these two parameters were ignored, and the ``*_domain_id`` parameters had to be set to use a keystone domain different from the default one.
glance_store-4.8.1/releasenotes/notes/bug-1820817-0ee70781918d232e.yaml
--- fixes: - | The Swift backend can now use a custom CA bundle to verify the SSL connection to Keystone without adding this bundle to the global system ones. For this it re-uses the CA bundle specified in the ``swift_store_cacert`` config option, so this bundle must verify the certificates of both the Swift and Keystone API endpoints. For more details see [`bug 1820817 <https://code.launchpad.net/bugs/1820817>`_].
glance_store-4.8.1/releasenotes/notes/bug-1915602-fcc807a435d8a6bf.yaml
--- fixes: - | Default value of the ``cinder_catalog_info`` parameter has been changed from ``volumev2::publicURL`` to ``volumev3::publicURL``, so that the current v3 API is used by default instead of the deprecated v2 API.
glance_store-4.8.1/releasenotes/notes/bug-1954883-3666d63a3c0233f1.yaml
--- fixes: - | * Bug 1954883_: [RBD] Image is unusable if deletion fails .. _1954883: https://code.launchpad.net/bugs/1954883 upgrade: - | Deployments which are using the Ceph v2 clone feature (i.e. the RBD backend for glance_store while the cinder driver is RBD or nova is using the RBD driver) and whose minimum ceph client version is greater than 'luminous' need to grant glance osd read access to the cinder and nova RBD pools.
glance_store-4.8.1/releasenotes/notes/bug-2004555-4fd67fce86c07461.yaml
security: - | Cinder glance_store driver: in order to avoid a situation where a leftover device could be mapped to a different volume than the one intended, the cinder glance_store driver now instructs the os-brick library to force detach volumes, which ensures that devices are removed from the host. See `Bug #2004555 <https://code.launchpad.net/bugs/2004555>`_ for more information about this issue.
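(Editorial illustration of the os-brick call pattern the note above describes; the connector type, root helper and empty properties dict are placeholder assumptions, not values taken from the driver.)

    from os_brick.initiator import connector

    conn = connector.InitiatorConnector.factory(
        'iscsi', root_helper='sudo', use_multipath=False)
    connection_properties = {}  # placeholder; normally built per attachment
    # force=True asks os-brick to remove the device from the host even if
    # flushing fails, so a leftover device cannot later be mapped to a
    # different volume.
    conn.disconnect_volume(connection_properties, device_info=None,
                           force=True, ignore_errors=True)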
glance_store-4.8.1/releasenotes/notes/cinder-fix-nfs-sparse-vol-create-76631ce05f86257c.yaml
--- fixes: - | `Bug #2000584 <https://code.launchpad.net/bugs/2000584>`_: Fixed image create with cinder NFS store when using sparse volumes.
glance_store-4.8.1/releasenotes/notes/cinder-nfs-block-qcow2-vol-4fed58b0afafc980.yaml
--- fixes: - | `Bug #1901138 <https://code.launchpad.net/bugs/1901138>`_: Blocked creation of images when glance store is cinder, cinder backend is nfs and volumes created are qcow2 format.
glance_store-4.8.1/releasenotes/notes/cinder-support-extend-in-use-volume-c6292f950ff75cca.yaml
--- features: - | Added support for extending in-use volumes in the cinder store. A new boolean config option ``cinder_do_extend_attached`` is added which allows operators to enable/disable extending in-use volume support when creating an image. By default, ``cinder_do_extend_attached`` will be ``False``, i.e. the old flow of detaching, extending and attaching will be used.
glance_store-4.8.1/releasenotes/notes/deprecate-rados_connect_timeout-767ed1eaa026196e.yaml
--- deprecations: - | The 'rados_connect_timeout' config option for the RBD store has been deprecated and will be removed in the future. It has been silently ignored for multiple releases. Users willing to set a timeout for the connection to the cluster can use Ceph's 'client_mount_timeout' option.
glance_store-4.8.1/releasenotes/notes/deprecate-sheepdog-driver-1f9689c327f313d4.yaml
--- deprecations: - | The Sheepdog driver is deprecated in this release and is subject to removal at the beginning of the 'U' development cycle, following the `OpenStack standard deprecation policy `_. The driver is being removed because `Sheepdog is not maintained upstream `_. Additionally, the Sheepdog driver is no longer tested in the OpenStack gate.
glance_store-4.8.1/releasenotes/notes/deprecate-store_add_to_backend-f419e5c4210613d2.yaml
--- deprecations: - | The glance_store function ``store_add_to_backend``, which is a wrapper around each store's ``add()`` method, is deprecated in this release and is subject to removal at the beginning of the Stein development cycle, following the `OpenStack standard deprecation policy `_. The function is replaced by ``store_add_to_backend_with_multihash``, which is a similar wrapper, but which takes an additional argument allowing a caller to specify a secure hashing algorithm. The hexdigest of this algorithm is returned as one of the multiple values returned by the function.
The function also returns the md5 checksum for backward compatibility.
glance_store-4.8.1/releasenotes/notes/deprecate-store_capabilities_update_min_interval-039389fa296e2494.yaml
--- deprecations: - | The glance_store configuration option ``store_capabilities_update_min_interval`` is deprecated in this release and is subject to removal at the beginning of the Stein development cycle, following the `OpenStack standard deprecation policy `_. The option configures a stub method that has not been implemented for any existing store drivers. Hence it is non-operational. Given that it has *never* been operational, it will not be missed. Its presence is confusing to operators and thus it is hereby deprecated for removal.
glance_store-4.8.1/releasenotes/notes/deprecate-vmware-store-2f720c6074b843b0.yaml
--- deprecations: - | The VMware Datastore store has been deprecated. The vmwareapi virt driver in nova was marked as experimental due to lack of CI and maintainers, and it may be removed in a future release.
glance_store-4.8.1/releasenotes/notes/drop-py-2-7-345cafc9c1d3f892.yaml
--- upgrade: - | Python 2.7 support has been dropped. The last release of glance_store to support py2.7 is OpenStack Train. The minimum version of Python now supported by glance_store is Python 3.6.
glance_store-4.8.1/releasenotes/notes/drop-python-3-6-and-3-7-41af87576c4fd7b1.yaml
--- upgrade: - | Python 3.6 & 3.7 support has been dropped. The minimum version of Python now supported is Python 3.8.
glance_store-4.8.1/releasenotes/notes/fix-exception-logging-during-attach-9546e24189db83c4.yaml
--- fixes: - | `Bug #1970698 <https://code.launchpad.net/bugs/1970698>`_: Cinder: Fixed exception logging when the image create operation fails due to failing to attach the volume to the glance host.
glance_store-4.8.1/releasenotes/notes/fix-interval-in-retries-471155ff34d9f0e9.yaml
--- fixes: - | `Bug #1969373 <https://code.launchpad.net/bugs/1969373>`_: Cinder Driver: Correct the retry interval from a fixed 1 second to exponential backoff for attaching a volume during the image create/save operation.
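(Editorial illustration of the retry pattern the fix above describes, exponential backoff instead of a fixed one-second sleep; the helper and its names are illustrative, not driver code.)

    import time

    def attach_with_backoff(do_attach, max_retries=5, base_delay=1):
        for attempt in range(max_retries):
            try:
                return do_attach()
            except Exception:
                if attempt == max_retries - 1:
                    raise
                # sleeps 1s, 2s, 4s, ... rather than a fixed 1s interval
                time.sleep(base_delay * (2 ** attempt))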
glance_store-4.8.1/releasenotes/notes/fix-ip-in-connector-info-36b95d9959f10f63.yaml
--- fixes: - | `Bug #1955668 <https://code.launchpad.net/bugs/1955668>`_: Fixed issue with glance cinder store passing hostname instead of IP address to os-brick while getting connector information.
glance_store-4.8.1/releasenotes/notes/fix-legacy-image-update-49a149ec267dccb6.yaml
--- fixes: - | `Bug #2056179 <https://code.launchpad.net/bugs/2056179>`_: Cinder Store: Fix issue when updating legacy image location. Previously only the user context's credentials were used to make requests to cinder; now the service credentials configured in the config file are used when available, falling back to the user context's credentials.
glance_store-4.8.1/releasenotes/notes/fix-rados_connect_timeout-39e5074bc1a3b65b.yaml
--- features: - | RBD driver: the ``rados_connect_timeout`` config option has been un-deprecated and its behavior has been improved. A value of ``0`` is now respected as disabling timeout in requests, while a value less than zero indicates that glance_store will not set a timeout but instead will use whatever timeouts are set in the Ceph configuration file. upgrade: - | RBD driver: the default value of the ``rados_connect_timeout`` option has been changed from 0 to -1, so that the RBD driver will by default use the timeout values defined in ``ceph.conf``. Be aware that setting this option to 0 disables timeouts (that is, the RBD driver will make requests with no timeout, and all requests wait forever), thereby overriding any timeouts that are set in the Ceph configuration file.
glance_store-4.8.1/releasenotes/notes/fix-rbd-lockup-3aa2bb86f7d29e19.yaml
--- fixes: - | A recent change to the RBD driver introduced a potential threading lockup when using native threads, and also a (process-)blocking call to an external library when using greenthreads. That change has been reverted until a better fix can be made.
glance_store-4.8.1/releasenotes/notes/fix-wait-device-resize-c282940b71a3748e.yaml
--- fixes: - | `Bug #1959913 <https://code.launchpad.net/bugs/1959913>`_: Added a wait between the volume being extended and the new size being detected while opening the volume device.
glance_store-4.8.1/releasenotes/notes/fs-drv-chunk-sz-a1b2f6a72fad92d5.yaml
--- fixes: - | The filesystem driver now uses a configurable chunk size. Increasing it may avoid bottlenecks.
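(Editorial illustration of why the chunk size matters: the driver copies image data in fixed-size reads, so a larger chunk means fewer read/write calls per image. The 64 KiB value mirrors the former hardcoded default; the helper below is a sketch, not driver code.)

    CHUNK_SIZE = 64 * 1024  # bytes; now configurable in the filesystem driver

    def copy_in_chunks(src, dst, chunk_size=CHUNK_SIZE):
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)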
glance_store-4.8.1/releasenotes/notes/handle-sparse-image-a3ecfc4ae1c00d48.yaml
--- features: - | Add new configuration options ``rbd_thin_provisioning`` and ``filesystem_thin_provisioning`` to the rbd and filesystem stores to enable or disable sparse upload; both default to False. A sparse file means that we do not actually write null byte sequences, but only the data itself at a given offset; the "holes" which can appear will automatically be interpreted by the storage backend as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic, in addition to saving space in the backend, as null byte sequences are not sent over the network.
glance_store-4.8.1/releasenotes/notes/improved-configuration-options-3635b56aba3072c9.yaml
--- prelude: > Improved configuration options for glance_store. Please refer to the ``other`` section for more information. other: - The glance_store configuration options have been improved with detailed help texts, defaults for sample configuration files, explicit choices of values for operators to choose from, and a strict range defined with ``min`` and ``max`` boundaries. It is to be noted that the configuration options that take integer values now have a strict range defined with "min" and/or "max" boundaries where appropriate. This renders the configuration options incapable of taking certain values that may have been accepted before but were actually invalid. For example, configuration options specifying counts, where a negative value was undefined, would have still accepted the supplied negative value. Such options will no longer accept negative values. However, options where a negative value was previously defined (for example, -1 to mean unlimited) will remain unaffected by this change. Values that do not comply with the appropriate restrictions will prevent the service from starting. The logs will contain a message indicating the problematic configuration option and the reason why the supplied value has been rejected.
glance_store-4.8.1/releasenotes/notes/lock_path-cef9d6f5f52c3211.yaml
--- features: - | When using the cinder backend, a custom os-brick file lock location can be specified using the ``lock_path`` configuration option in the ``[os_brick]`` configuration section. This is helpful when Glance is deployed on the same host as the Cinder service. upgrade: - | When running Cinder and Glance with the Cinder backend on the same host, a shared os-brick lock location can be configured using the ``lock_path`` option in the ``[os_brick]`` configuration section.
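(Editorial illustration of the underlying mechanism, using oslo.concurrency directly rather than os-brick's internals: external file locks serialize work across processes that share the same lock_path, which is why Glance and Cinder on one host should point at the same directory. The path below is an assumption.)

    from oslo_concurrency import lockutils

    lockutils.set_defaults(lock_path='/var/lib/openstack/locks')

    @lockutils.synchronized('connect_volume', external=True)
    def attach_volume():
        # the decorator holds a file lock under lock_path while this runs
        pass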
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/move-rootwrap-config-f2cf435c548aab5c.yaml0000664000175000017500000000031100000000000027352 0ustar00zuulzuul00000000000000--- upgrade: - Packagers should be aware that the rootwrap configuration files have been moved from etc/ to etc/glance/ in order to be consistent with where other projects place these files. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/multi-store-0c004fc8aba2a25d.yaml0000664000175000017500000000125600000000000025540 0ustar00zuulzuul00000000000000--- prelude: > This release contains the base work for multiple back-ends changing how the back-ends get configured. Please note that in Rocky release the work is still experimental. Thus it's not advised to utilize the new configs in production environments before Stein release even though old config options are deprecated for removal. features: - | EXPERIMENTAL: Multiple back-end stores deprecations: - | 'stores' and 'default_store' config options have been deprecated for removal. As the replacements are experimental for Rocky release, migration away from these options in production environments is not advised before Stein release. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/multi-tenant-store-058b67ce5b7f3bd0.yaml0000664000175000017500000000060100000000000026767 0ustar00zuulzuul00000000000000--- upgrade: - If using Swift in the multi-tenant mode for storing images in Glance, please note that the configuration options ``swift_store_multi_tenant`` and ``swift_store_config_file`` are now mutually exclusive and cannot be configured together. If you intend to use multi-tenant store, please make sure that you have not set a swift configuration file. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/multiattach-volume-handling-1a8446a64463f2cf.yaml0000664000175000017500000000057300000000000030472 0ustar00zuulzuul00000000000000--- prelude: > This release adds support for handling cinder's multiattach volumes in glance cinder store. features: - | Glance cinder store now supports handling of multiattach volumes. fixes: - | `Bug #1904546 `_: Fixed creating multiple instances/volumes from image if multiattach volumes are used.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/multihash-support-629e9cbc283a8b47.yaml0000664000175000017500000000117400000000000026663 0ustar00zuulzuul00000000000000--- prelude: > This release adds support for Glance multihash computation. features: - | A new function, ``store_add_to_backend_with_multihash``, has been added. This function wraps each store's ``add`` method to provide consumers with a constant interface. It is similar to the existing ``store_add_to_backend`` function but requires the caller to specify an additional ``hashing_algo`` argument whose value is a hashlib algorithm identifier. The function returns a 5-tuple containing a ``multihash`` value, which is a hexdigest of the stored data computed using the specified hashing algorithm. 
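(Editorial illustration for the multihash note above: the multihash is simply the hexdigest of the image data under the configured hashing algorithm, computed alongside the legacy md5 checksum; sha512 below is an assumption matching Glance's usual default.)

    import hashlib

    data = b'image payload'  # stands in for the stored image data
    multihash = hashlib.sha512(data).hexdigest()
    checksum = hashlib.md5(data).hexdigest()  # legacy value, still returned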
glance_store-4.8.1/releasenotes/notes/pike-relnote-9f547df14184d18c.yaml
--- prelude: > This was a quiet development cycle for the ``glance_store`` library. No new features were added. Several bugs were fixed and some code changes were committed to increase stability. fixes: - | The following bugs were fixed during the Pike release cycle. * Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+ * Bug 1668848_: PBR 2.0.0 will break projects not using constraints * Bug 1657710_: Unit test passes only because it is launched as non-root user * Bug 1686063_: RBD driver can't delete image with unprotected snapshot * Bug 1691132_: Fixed tests failing due to updated oslo.config * Bug 1693670_: Fix doc generation for Python3 * Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume * Bug 1620214_: Sheepdog: command execution failure .. _1618666: https://code.launchpad.net/bugs/1618666 .. _1668848: https://code.launchpad.net/bugs/1668848 .. _1657710: https://code.launchpad.net/bugs/1657710 .. _1686063: https://code.launchpad.net/bugs/1686063 .. _1691132: https://code.launchpad.net/bugs/1691132 .. _1693670: https://code.launchpad.net/bugs/1693670 .. _1643516: https://code.launchpad.net/bugs/1643516 .. _1620214: https://code.launchpad.net/bugs/1620214 other: - | The following improvements were made during the Pike release cycle. * `Fixed string formatting in log message `_ * `Correct error msg variable that could be unassigned `_ * `Use HostAddressOpt for store opts that accept IP and hostnames `_ * `Replace six.iteritems() with .items() `_ * `Add python 3.5 in classifier and envlist `_ * `Initialize privsep root_helper command `_ * `Documentation was reorganized according to the new standard layout `_
glance_store-4.8.1/releasenotes/notes/prevent-unauthorized-errors-ebb9cf2236595cd0.yaml
--- prelude: > Prevent Unauthorized errors during uploading or downloading data to the Swift store. features: - Allow glance_store to refresh the token when uploading or downloading data to the Swift store. glance_store identifies whether the token is going to expire soon when executing a request to Swift and refreshes the token. For the multi-tenant swift store glance_store uses trusts; for the single-tenant swift store glance_store uses credentials from the swift store configuration. Please also note that this feature is enabled if and only if the Keystone V3 API is available and enabled.
glance_store-4.8.1/releasenotes/notes/queens-relnote-5fa2d009d9a9e458.yaml
--- prelude: > This was a quiet development cycle for the ``glance_store`` library. One new feature was added to the Swift store driver. Several bugs were fixed and some code changes were committed to increase stability. features: - | A `BufferedReader`_ has been added to the Swift store driver in order to enable better recovery from errors during uploads of large image files. Because this reader buffers image data, it could cause Glance to use a much larger amount of disk space, and so the Buffered Reader is *not* enabled by default.
To use the new reader with the Swift store, you must do the following: * Set the ``glance_store`` configuration option ``swift_buffer_on_upload`` to ``True`` * Set the ``glance_store`` configuration option ``swift_upload_buffer_dir`` to a string value representing an absolute directory path. This directory will be used to hold the buffered data. The Buffered Reader works by taking advantage of the way Swift stores large objects by segmenting them into discrete chunks. Thus, the amount of disk space a Glance API node will require for buffering is a function of the ``swift_store_large_object_chunk_size`` setting and the number of worker threads (configured in **glance-api.conf** as the value of ``workers``). Disk utilization will cap at the following value swift_store_large_object_chunk_size * workers * 1000 Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the Glance image cache, which may affect overall performance. For more information, see the `Buffered Reader for Swift Driver`_ spec. .. _BufferedReader: https://opendev.org/openstack/glance_store/commit/2e0024c85ca2ddf380014e44213be4fb876f680e .. _Buffered Reader for Swift Driver: http://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/buffered-reader-for-swift-driver.html fixes: - | * Bug 1738331_: Fix BufferedReader writing zero size chunks * Bug 1733502_: Use cached auth_ref instead of getting a new one each time .. _1738331: https://code.launchpad.net/bugs/1738331 .. _1733502: https://code.launchpad.net/bugs/1733502 upgrade: - | Two new configuration options, ``swift_buffer_on_upload`` and ``swift_upload_buffer_dir`` have been introduced. These apply only to users of the Swift store and their use is optional. See the New Features section for more information. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/rbd-trash-snapshots-158a39da4248fb0c.yaml0000664000175000017500000000077100000000000027046 0ustar00zuulzuul00000000000000--- features: - | The RBD driver now moves images to the trash if they cannot be deleted immediately due to having snapshots. This fixes the long-standing issue where base images are unable to be deleted until/unless all snapshots of it are also deleted. Moving the image to the trash allows Glance to proceed with the deletion of the image (as far as it is concerned), mark the RBD image for deletion, which will happen once the last snapshot that uses it has been deleted. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/release-1.0.0-7ab43e91523eb3c8.yaml0000664000175000017500000000406000000000000025207 0ustar00zuulzuul00000000000000--- prelude: > The Glance Project Team is excited to announce the version 1.0.0 release of the glance_store library. This release marks the finalization of changes introduced on an experimental basis in previous releases beginning with 0.25.0 to support the Glance `Multi-store backend support `_ feature. features: - | Multiple backend stores may be configured using the ``glance_store.multi_backend`` module. See the documentation of the ``create_multi_stores`` function in the `glance_store Reference Guide `_ for details. deprecations: - | The 'stores' and 'default_store' configuration options have been deprecated for removal since the OpenStack Rocky release. 
They are subject to removal early in the 'U' development cycle. When these options are removed, the ``glance_store.backend`` module, which depends on them, will be removed as well. upgrade: - | Consuming services should begin the transition away from the ``glance_store.backend`` module and instead use the ``glance_store.multi_backend`` module. The ``backend`` module is expected to be removed during the 'U' development cycle. issues: - | The responses from some functions in the ``glance_store.multi_backend`` module, which was EXPERIMENTAL until this release, have changed. In particular, the dictionary of storage system specific information returned as the last element of the tuple from the ``glance_store.driver.Store.add`` function no longer contains a 'backend' key. Instead, this key is named 'store'. This change extends to any convenience functions that wrap ``Store.add``. Consumers relying upon the EXPERIMENTAL behavior should not upgrade past version 0.29.1. Now that the ``multi_backend`` module is fully supported in release 1.0.0, it will not undergo any more backward-incompatible changes.
glance_store-4.8.1/releasenotes/notes/release-1.0.1-098b1487ac8cc9a1.yaml
--- other: - | In this version, the way filesystem configuration options are registered for reserved stores was refactored. Consumers just need to pass a key:value pair to glance_store, where the key is the name of the reserved store and the value is the actual store driver. issues: - | At the moment, use of reserved stores is limited to the filesystem store driver. Also, the default ``filesystem_store_datadir`` path for these stores is set to ``/var/lib/glance/``, so if you are using devstack for the deployment, you need to make sure you have appropriate permissions to create these reserved store directories.
glance_store-4.8.1/releasenotes/notes/release-1.2.0-8d239f01cd8ff0bf.yaml
--- fixes: - | Following bugs were fixed and included after the 1.0.1 release: * Bug 1820817_: Swift backend can not use custom CA bundle to verify server SSL certs when those are not added to global system certs * Bug 1866966_: define mount_point for ``*fs`` drivers in glance cinder store * Bug 1863691_: When the image size is greater than the chunk size and the glance buffered upload for swift is enabled, glance just puts 0 bytes * Bug 1839778_: Python3 swift config quotes removal * Bug 1863983_: Image upload is failing with NoFibreChannelVolumeDeviceFound after configuring Cinder (HP 3PAR FC storage) as glance backend .. _1820817: https://bugs.launchpad.net/glance-store/+bug/1820817 .. _1866966: https://bugs.launchpad.net/glance-store/+bug/1866966 .. _1863691: https://bugs.launchpad.net/fuel/+bug/1863691 .. _1839778: https://bugs.launchpad.net/glance-store/+bug/1839778 .. _1863983: https://bugs.launchpad.net/glance-store/+bug/1863983 other: - | The following improvements were made during the Ussuri release cycle: * Partial refactoring of the cinder driver of glance store to use cinderclient version 3; some methods have been moved to class level rather than being used at module level. * Dropped support for python 2.7 and testing for the same.
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/release-1.0.1-098b1487ac8cc9a1.yaml0000664000175000017500000000126500000000000025221 0ustar00zuulzuul00000000000000--- other: - | This version refactors how the registering of filesystem configuration options for reserved stores works. Consumers just need to pass a key:value pair to glance_store, where the key represents the name of the reserved store and the value represents the actual store driver. issues: - | At the moment, use of reserved stores is limited to the filesystem store driver. Also, the default ``filesystem_store_datadir`` path for these stores is set to ``/var/lib/glance/``, so if you are using devstack for the deployment, you need to make sure you have appropriate permissions to create these reserved store directories. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/release-1.2.0-8d239f01cd8ff0bf.yaml0000664000175000017500000000255000000000000025360 0ustar00zuulzuul00000000000000--- fixes: - | The following bugs were fixed and included after the 1.0.1 release: * Bug 1820817_: Swift backend can not use custom CA bundle to verify server SSL certs when those are not added to global system certs * Bug 1866966_: define mount_point for ``*fs`` drivers in glance cinder store * Bug 1863691_: When Image size greater than the chunk size and the glance buffered upload for swift is enabled, glance just put 0 bytes * Bug 1839778_: Python3 swift config quotes removal * Bug 1863983_: Image upload is failing with NoFibreChannelVolumeDeviceFound after configuring Cinder(HP3Par FC storage) as glance backend .. _1820817: https://bugs.launchpad.net/glance-store/+bug/1820817 .. _1866966: https://bugs.launchpad.net/glance-store/+bug/1866966 .. _1863691: https://bugs.launchpad.net/fuel/+bug/1863691 .. _1839778: https://bugs.launchpad.net/glance-store/+bug/1839778 .. _1863983: https://bugs.launchpad.net/glance-store/+bug/1863983 other: - | The following improvements were made during the Ussuri release cycle: * Partial refactoring of the cinder driver of glance_store to use cinderclient version 3; some methods have been moved to class level rather than being used at module level. * Dropped support for Python 2.7 and testing for the same. * Dropped support for tempest-full ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/releasenote-0.17.0-efee3f557ea2096a.yaml0000664000175000017500000000116300000000000026335 0ustar00zuulzuul00000000000000--- prelude: > Some deprecated exceptions have been removed. See the upgrade section for more details. upgrade: - The following exceptions have been deprecated since the 0.10.0 release -- ``Conflict``, ``ForbiddenPublicImage``, ``ProtectedImageDelete``, ``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, ``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, ``ImageDataNotFound``, ``InvalidParameterValue``, ``InvalidImageStatusTransition``. This release removes these exceptions, so any remaining consumers of them must stop using them. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/remove-cinder-experimental-fbf9dea32c84dc9b.yaml0000664000175000017500000000054100000000000030677 0ustar00zuulzuul00000000000000--- prelude: > From glance_store release 0.26.0 onwards, the Cinder driver is no longer considered experimental. other: - | During the Rocky cycle, a number of issues that still warranted experimental status on the Cinder back-end driver were addressed. The team considers the driver stable and production ready from the Rocky release onwards (0.26.0). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/remove-gridfs-driver-09286e27613b4353.yaml0000664000175000017500000000033200000000000026700 0ustar00zuulzuul00000000000000--- prelude: > glance_store._drivers.gridfs deprecations: - The gridfs driver has been removed from the tree. Environments using this driver that were not migrated will stop working after the upgrade.././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/remove-s3-driver-f432afa1f53ecdf8.yaml0000664000175000017500000000126500000000000026473 0ustar00zuulzuul00000000000000--- prelude: > glance_store._drivers.s3 removed from tree. upgrade: - The S3 driver has been removed completely from the glance_store source tree. All environments running and/or using this S3 driver code that have not been migrated will stop working after the upgrade. We recommend you use a different storage backend that is still supported by Glance. The standard deprecation path has been used to remove this. The process requiring store driver maintainers was initiated at http://lists.openstack.org/pipermail/openstack-dev/2015-December/081966.html . Since the S3 driver did not get any maintainer, it was decided to remove it. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/remove-store-cap-update-min-interval-21fea4173ed4a09b.yaml0000664000175000017500000000045400000000000032263 0ustar00zuulzuul00000000000000--- upgrade: - The ``store_capabilities_update_min_interval`` configuration option, deprecated since the Rocky release, has been removed. The option configured a capability that was not implemented by any glance_store drivers. Thus its removal will have no impact on any deployments. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/rethinking-filesystem-access-5ab872fd0c0d27db.yaml0000664000175000017500000000065500000000000031060 0ustar00zuulzuul00000000000000--- features: - | Added a keyword argument to the ``register_store_opts`` and ``create_multi_stores`` calls so that the consuming service can configure reserved stores. This feature allows a mix of operator-configured stores, set via the enabled_backends configuration option in the [glance_store] section of the consuming service's configuration file, and stores that are reserved for use by the consuming service.
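A minimal sketch of how a consuming service might combine operator-configured and reserved stores using the keyword argument described above; the reserved store name and its backing driver are illustrative assumptions, not defaults::

    from oslo_config import cfg

    from glance_store import multi_backend

    CONF = cfg.CONF

    # Hypothetical reserved store: the key is the service-private store
    # name and the value is the store driver that should back it.
    RESERVED_STORES = {'my_service_staging_store': 'file'}

    # Register option definitions for the operator-configured backends
    # (listed in enabled_backends) plus the reserved stores, then
    # instantiate all of the stores.
    multi_backend.register_store_opts(CONF, reserved_stores=RESERVED_STORES)
    multi_backend.create_multi_stores(CONF, reserved_stores=RESERVED_STORES)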
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/rocky-bugfixes-adefa8f47db16a2d.yaml0000664000175000017500000000070400000000000026376 0ustar00zuulzuul00000000000000--- fixes: - | * Bug 1606268_: Failure to upload to swift when keystone uses "insecure" SSL * Bug 1779455_: cinder backend causes BadRequest due to NULL mountpoint * Bug 1764200_: Glance Cinder backed images & multiple regions .. _1606268: https://bugs.launchpad.net/glance-store/+bug/1606268 .. _1779455: https://bugs.launchpad.net/glance-store/+bug/1779455 .. _1764200: https://bugs.launchpad.net/glance-store/+bug/1764200 ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=glance_store-4.8.1/releasenotes/notes/set-documented-default-directory-for-filesystem-9b417a29416d3a94.yaml 22 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/set-documented-default-directory-for-filesystem-9b417a29416d3a0000664000175000017500000000036200000000000033176 0ustar00zuulzuul00000000000000--- other: - For years, `/var/lib/glance/images` has been presented as the default directory for the filesystem store. It was not part of the default value until now. New deployments and people overriding config files should watch for this. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/sorted-drivers-for-configs-a905f07d3bf9c973.yaml0000664000175000017500000000116100000000000030335 0ustar00zuulzuul00000000000000--- prelude: > Return the list of store drivers in sorted order when generating configs. More info in the ``Upgrade Notes`` and ``Bug Fixes`` sections. upgrade: - This version of glance_store results in Glance generating the configs in a sorted (deterministic) order. Preferably, store releases on or after this one should be used for generating any new configs if the mismatched ordering of the configs causes an issue in your environment. fixes: - Bug 1619487, which caused the configs in Glance to be generated in random order, is fixed. See the ``upgrade`` section for more details. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/start-using-reno-73ef709807e37b74.yaml0000664000175000017500000000007100000000000026235 0ustar00zuulzuul00000000000000--- other: - Start using reno to manage release notes. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/support-cinder-multiple-stores-6cc8489f8f4f8ff3.yaml0000664000175000017500000000053300000000000031401 0ustar00zuulzuul00000000000000--- features: - | Added support for cinder multiple stores. Operators can now configure multiple cinder stores by configuring a unique ``cinder_volume_type`` for each cinder store. upgrade: - | Legacy images will be moved to specific stores as per their current volume's type, and the location URL will be updated accordingly.
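For example, a hypothetical ``glance-api.conf`` fragment defining two cinder stores, each bound to its own volume type; the store names and volume type names below are placeholders::

    [DEFAULT]
    enabled_backends = fast:cinder, slow:cinder

    [glance_store]
    default_backend = fast

    [fast]
    # Volume type that must already exist in Cinder (placeholder name)
    cinder_volume_type = ssd

    [slow]
    cinder_volume_type = hdd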
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/support-cinder-upload-c85849d9c88bbd7e.yaml0000664000175000017500000000063400000000000027512 0ustar00zuulzuul00000000000000--- features: - Implemented image uploading, downloading and deletion for the cinder store. It also supports new settings to put image volumes into a specific project to hide them from users and to control them based on the ACL of the images. Note that the cinder store is currently considered experimental, so deployers should be aware that using it in production right now may be risky. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/support-cinder-user-domain-420c76053dd50534.yaml0000664000175000017500000000067300000000000030112 0ustar00zuulzuul00000000000000--- features: - | For the Cinder store, if using an internal user to store images, it is now possible to have the internal user and the internal project in Keystone domains other than the ``Default`` one. Two new config options, ``cinder_store_user_domain_name`` and ``cinder_store_project_domain_name``, are added (both default to ``Default``) and can now be used in the configuration of the Cinder store.
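A sketch of the relevant ``glance_store`` options, assuming the internal user and project live in a Keystone domain named ``service_domain``; every value below is a placeholder::

    [glance_store]
    cinder_store_auth_address = http://keystone.example.com:5000/v3
    cinder_store_user_name = glance
    cinder_store_password = secret
    cinder_store_user_domain_name = service_domain
    cinder_store_project_name = service
    cinder_store_project_domain_name = service_domain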
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/support-s3-driver-a4158f9fa35931d5.yaml0000664000175000017500000000033500000000000026416 0ustar00zuulzuul00000000000000--- features: - | Implemented an S3 driver to use Amazon S3 or S3-compatible storage as a Glance backend. This is a revival of the S3 driver supported up to Mitaka, with the addition of multiple store support. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/update-stein-deprecations-3c2f6ffeab22b558.yaml0000664000175000017500000000030300000000000030352 0ustar00zuulzuul00000000000000--- upgrade: - | Removal of the ``stores`` and ``default_store`` configuration options, which were deprecated in Rocky, has been postponed until the Train development cycle. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/victoria-milestone-1-c1f9de5b90e8c326.yaml0000664000175000017500000000045400000000000027203 0ustar00zuulzuul00000000000000--- fixes: - | * Bug 1875281_: API returns 503 if one of the stores is mis-configured * Bug 1870289_: Add lock per share for cinder nfs mount/umount .. _1875281: https://bugs.launchpad.net/glance-store/+bug/1875281 .. _1870289: https://bugs.launchpad.net/glance-store/+bug/1870289 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/vmware-store-requests-369485d2cfdb6175.yaml0000664000175000017500000000047400000000000027402 0ustar00zuulzuul00000000000000--- security: - Previously, the VMware Datastore used HTTPS connections from httplib, which do not verify the connection. By switching to the requests library, the VMware storage backend now verifies the HTTPS connection to the vCenter server and thus addresses the vulnerabilities described in OSSN-0033. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/volume-type-validation-check-011a400d7fb3b307.yaml0000664000175000017500000000102200000000000030513 0ustar00zuulzuul00000000000000--- upgrade: - | Previously, during service startup, the check to validate volume types used to raise ``BackendException`` or ``BadStoreConfiguration`` exceptions when an invalid volume type was configured, hence failing the service startup. It now logs a warning and the glance service starts normally. fixes: - | `Bug #1915163 `_: Added handling to log and raise a proper exception during image create when an invalid volume type is configured. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/wallaby-final-release-00f0f851ff7d93ab.yaml0000664000175000017500000000045400000000000027361 0ustar00zuulzuul00000000000000--- prelude: > This was a quiet development cycle for the ``glance_store`` library. Several bugs were fixed and some code changes were committed to increase stability. fixes: - | * Bug 1915602_: Cinder store: Use v3 API by default .. _1915602: https://code.launchpad.net/bugs/1915602././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/notes/xena-final-release-3c6e19dfba43b40d.yaml0000664000175000017500000000107500000000000026732 0ustar00zuulzuul00000000000000--- prelude: > This was a quiet development cycle for the ``glance_store`` library. Several bugs were fixed and some code changes were committed to increase stability. fixes: - | * Bug 1926404_: HTTP 413 : Failed to add object to Swift. Got error from Swift * Bug 1885651_: swift_store_endpoint doesn't override keystone catalog * Bug 1934849_: s3 backend takes time exponentially .. _1926404: https://code.launchpad.net/bugs/1926404 .. _1885651: https://code.launchpad.net/bugs/1885651 .. _1934849: https://code.launchpad.net/bugs/1934849 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.197218 glance_store-4.8.1/releasenotes/source/0000775000175000017500000000000000000000000020205 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/2023.1.rst0000664000175000017500000000020200000000000021466 0ustar00zuulzuul00000000000000=========================== 2023.1 Series Release Notes =========================== .. release-notes:: :branch: stable/2023.1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/2023.2.rst0000664000175000017500000000020200000000000021467 0ustar00zuulzuul00000000000000=========================== 2023.2 Series Release Notes =========================== .. release-notes:: :branch: stable/2023.2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/2024.1.rst0000664000175000017500000000020200000000000021467 0ustar00zuulzuul00000000000000=========================== 2024.1 Series Release Notes =========================== ..
release-notes:: :branch: stable/2024.1 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.197218 glance_store-4.8.1/releasenotes/source/_static/0000775000175000017500000000000000000000000021633 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/_static/.placeholder0000664000175000017500000000000000000000000024104 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.197218 glance_store-4.8.1/releasenotes/source/_templates/0000775000175000017500000000000000000000000022342 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/_templates/.placeholder0000664000175000017500000000000000000000000024613 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/conf.py0000664000175000017500000002141400000000000021506 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Glance_store Release Notes documentation build configuration file # # Modified from corresponding configuration file in Glance. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options openstackdocs_repo_name = 'openstack/glance_store' openstackdocs_auto_name = False openstackdocs_bug_project = 'glance-store' openstackdocs_bug_tag = '' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = 'Glance_store Release Notes' copyright = '2015, Openstack Foundation' # Release notes are unversioned, so we don't need to set version or release version = '' release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. 
# language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 
# html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'GlanceStoreReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'GlanceStoreReleaseNotes.tex', 'Glance_store Release Notes Documentation', 'Glance_store Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'glancestorereleasenotes', 'Glance_store Release Notes Documentation', ['Glance_store Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'GlanceStoreReleaseNotes', 'Glance_store Release Notes Documentation', 'Glance_store Developers', 'GlanceStoreReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/index.rst0000664000175000017500000000045600000000000022053 0ustar00zuulzuul00000000000000============================ Glance_store Release Notes ============================ .. 
toctree:: :maxdepth: 1 unreleased 2024.1 2023.2 2023.1 zed yoga xena wallaby victoria ussuri train stein rocky queens pike ocata newton mitaka liberty ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/liberty.rst0000664000175000017500000000022200000000000022405 0ustar00zuulzuul00000000000000============================== Liberty Series Release Notes ============================== .. release-notes:: :branch: origin/stable/liberty ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/releasenotes/source/locale/0000775000175000017500000000000000000000000021444 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/releasenotes/source/locale/de/0000775000175000017500000000000000000000000022034 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.197218 glance_store-4.8.1/releasenotes/source/locale/de/LC_MESSAGES/0000775000175000017500000000000000000000000023621 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/locale/de/LC_MESSAGES/releasenotes.po0000664000175000017500000000476700000000000026670 0ustar00zuulzuul00000000000000# Andreas Jaeger , 2019. #zanata msgid "" msgstr "" "Project-Id-Version: glance_store\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2019-12-19 00:11+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2019-10-01 06:30+0000\n" "Last-Translator: Andreas Jaeger \n" "Language-Team: German\n" "Language: de\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "0.11.0" msgstr "0.11.0" msgid "0.12.0" msgstr "0.12.0" msgid "0.16.0" msgstr "0.16.0" msgid "0.17.0" msgstr "0.17.0" msgid "0.18.0-5" msgstr "0.18.0-5" msgid "0.19.0" msgstr "0.19.0" msgid "0.21.0" msgstr "0.21.0" msgid "0.23.0" msgstr "0.23.0" msgid "0.25.0" msgstr "0.25.0" msgid "0.26.0" msgstr "0.26.0" msgid "0.26.1" msgstr "0.26.1" msgid "0.28.0" msgstr "0.28.0" msgid "0.29.0" msgstr "0.29.0" msgid "0.29.1" msgstr "0.29.1" msgid "1.0.0" msgstr "1.0.0" msgid "1.0.1" msgstr "1.0.1" msgid "Bug Fixes" msgstr "Fehlerkorrekturen" msgid "Critical Issues" msgstr "Kritische Probleme" msgid "Current Series Release Notes" msgstr "Aktuelle Serie Releasenotes" msgid "Deprecation Notes" msgstr "Ablaufwarnungen" msgid "Glance_store Release Notes" msgstr "Glance_store Releasenotes" msgid "Known Issues" msgstr "Bekannte Probleme" msgid "Liberty Series Release Notes" msgstr "Liberty Serie Releasenotes" msgid "Mitaka Series Release Notes" msgstr "Mitaka Serie Releasenotes" msgid "New Features" msgstr "Neue Funktionen" msgid "Newton Series Release Notes" msgstr "Newton Serie Releasenotes" msgid "Ocata Series Release Notes" msgstr "Ocata Serie Releasenotes" msgid "Other Notes" msgstr "Andere Notizen" msgid "Pike Series Release Notes" msgstr "Pike Serie Releasenotes" msgid "Prelude" msgstr "Einleitung" msgid "Queens Series Release Notes" msgstr "Queens Serie Releasenotes" msgid "Rocky Series Release Notes" msgstr "Rocky Serie Releasenotes" msgid "Security Issues" msgstr "Sicherheitsrelevante Probleme" msgid "Start using reno to manage release 
notes." msgstr "Reno wird für die Verwaltung der Releasenotes verwendet." msgid "Stein Series Release Notes" msgstr "Stein Serie Releasenotes" msgid "Train Series Release Notes" msgstr "Train Serie Releasenotes" msgid "Upgrade Notes" msgstr "Aktualisierungsnotizen" msgid "glance_store._drivers.gridfs" msgstr "glance_store._drivers.gridfs" msgid "glance_store._drivers.s3 removed from tree." msgstr "glance_store._drivers.s3 removed from tree." ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/releasenotes/source/locale/en_GB/0000775000175000017500000000000000000000000022416 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.197218 glance_store-4.8.1/releasenotes/source/locale/en_GB/LC_MESSAGES/0000775000175000017500000000000000000000000024203 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/locale/en_GB/LC_MESSAGES/releasenotes.po0000664000175000017500000017013500000000000027243 0ustar00zuulzuul00000000000000# Andi Chandler , 2016. #zanata # Andi Chandler , 2017. #zanata # Andi Chandler , 2018. #zanata # Andi Chandler , 2019. #zanata # Andi Chandler , 2020. #zanata # Andi Chandler , 2021. #zanata # Andi Chandler , 2022. #zanata # Andi Chandler , 2023. #zanata # Andi Chandler , 2024. #zanata msgid "" msgstr "" "Project-Id-Version: Glance_store Release Notes\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2024-04-30 11:23+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2024-05-08 01:29+0000\n" "Last-Translator: Andi Chandler \n" "Language-Team: English (United Kingdom)\n" "Language: en_GB\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" msgid "" "'stores' and 'default_store' config options have been deprecated for " "removal. As the replacements are experimental for Rocky release, migration " "away from these options in production environments is not advised before " "Stein release." msgstr "" "'stores' and 'default_store' config options have been deprecated for " "removal. As the replacements are experimental for Rocky release, migration " "away from these options in production environments is not advised before " "Stein release." 
msgid "0.11.0" msgstr "0.11.0" msgid "0.12.0" msgstr "0.12.0" msgid "0.16.0" msgstr "0.16.0" msgid "0.17.0" msgstr "0.17.0" msgid "0.18.0-5" msgstr "0.18.0-5" msgid "0.19.0" msgstr "0.19.0" msgid "0.21.0" msgstr "0.21.0" msgid "0.23.0" msgstr "0.23.0" msgid "0.25.0" msgstr "0.25.0" msgid "0.26.0" msgstr "0.26.0" msgid "0.26.1" msgstr "0.26.1" msgid "0.28.0" msgstr "0.28.0" msgid "0.29.0" msgstr "0.29.0" msgid "0.29.1" msgstr "0.29.1" msgid "1.0.0" msgstr "1.0.0" msgid "1.0.1" msgstr "1.0.1" msgid "1.1.0" msgstr "1.1.0" msgid "2.0.0" msgstr "2.0.0" msgid "2.3.0" msgstr "2.3.0" msgid "2.5.0" msgstr "2.5.0" msgid "2.5.1" msgstr "2.5.1" msgid "2.7.0" msgstr "2.7.0" msgid "2.7.1" msgstr "2.7.1" msgid "2.7.1-2" msgstr "2.7.1-2" msgid "2023.1 Series Release Notes" msgstr "2023.1 Series Release Notes" msgid "2023.2 Series Release Notes" msgstr "2023.2 Series Release Notes" msgid "2024.1 Series Release Notes" msgstr "2024.1 Series Release Notes" msgid "3.0.0" msgstr "3.0.0" msgid "3.0.1" msgstr "3.0.1" msgid "4.1.0" msgstr "4.1.0" msgid "4.1.1" msgstr "4.1.1" msgid "4.1.1-3" msgstr "4.1.1-3" msgid "4.3.0" msgstr "4.3.0" msgid "4.3.1" msgstr "4.3.1" msgid "4.3.2" msgstr "4.3.2" msgid "4.4.0" msgstr "4.4.0" msgid "4.5.0" msgstr "4.5.0" msgid "4.6.0" msgstr "4.6.0" msgid "4.7.0" msgstr "4.7.0" msgid "4.8.0" msgstr "4.8.0" msgid "" "A `BufferedReader`_ has been added to the Swift store driver in order to " "enable better recovery from errors during uploads of large image files. " "Because this reader buffers image data, it could cause Glance to use a much " "larger amount of disk space, and so the Buffered Reader is *not* enabled by " "default." msgstr "" "A `BufferedReader`_ has been added to the Swift store driver in order to " "enable better recovery from errors during uploads of large image files. " "Because this reader buffers image data, it could cause Glance to use a much " "larger amount of disk space, and so the Buffered Reader is *not* enabled by " "default." msgid "" "A `weight` option has been added to the store configuration definition. This " "allows configuring stores with *relative* weights to each other for sorting " "when an image exists in multiple stores." msgstr "" "A `weight` option has been added to the store configuration definition. This " "allows configuring stores with *relative* weights to each other for sorting " "when an image exists in multiple stores." msgid "" "A chunk size config option was added to the filesystem driver to allow some " "performance tweaking. The former hardcoded 64 KB is the current default " "value." msgstr "" "A chunk size config option was added to the filesystem driver to allow some " "performance tweaking. The former hardcoded 64 KB is the current default " "value." msgid "" "A new function, ``store_add_to_backend_with_multihash``, has been added. " "This function wraps each store's ``add`` method to provide consumers with a " "constant interface. It is similar to the existing ``store_add_to_backend`` " "function but requires the caller to specify an additional ``hashing_algo`` " "argument whose value is a hashlib algorithm identifier. The function " "returns a 5-tuple containing a ``multihash`` value, which is a hexdigest of " "the stored data computed using the specified hashing algorithm." msgstr "" "A new function, ``store_add_to_backend_with_multihash``, has been added. " "This function wraps each store's ``add`` method to provide consumers with a " "constant interface. 
It is similar to the existing ``store_add_to_backend`` " "function but requires the caller to specify an additional ``hashing_algo`` " "argument whose value is a hashlib algorithm identifier. The function " "returns a 5-tuple containing a ``multihash`` value, which is a hexdigest of " "the stored data computed using the specified hashing algorithm." msgid "" "A recent change to the RBD driver introduced a potential threading lockup " "when using native threads, and also a (process-)blocking call to an external " "library when using greenthreads. That change has been reverted until a " "better fix can be made." msgstr "" "A recent change to the RBD driver introduced a potential threading lockup " "when using native threads, and also a (process-)blocking call to an external " "library when using green threads. That change has been reverted until a " "better fix can be made." msgid "" "A sparse file means that we do not actually write null byte sequences but " "only the data itself at a given offset, the \"holes\" which can appear will " "automatically be interpreted by the storage backend as null bytes, and do " "not really consume your storage." msgstr "" "A sparse file means that we do not actually write null byte sequences but " "only the data itself at a given offset, the \"holes\" which can appear will " "automatically be interpreted by the storage backend as null bytes, and do " "not really consume your storage." msgid "" "Add new configuration option ``rbd_thin_provisioning`` and " "``filesystem_thin_provisioning`` to rbd and filesystem store to enable or " "not sparse upload, default are False." msgstr "" "Add new configuration option ``rbd_thin_provisioning`` and " "``filesystem_thin_provisioning`` to rbd and filesystem store to enable or " "not sparse upload, default are False." msgid "" "Added keyword argument to ``register_store_opts`` and " "``create_multi_stores`` calls to configure reserved stores by the consuming " "service. This feature will allow a mix of operator-configured stores via " "enabled_backends configuration option set in the [glance_store] section of " "the consuming service's configuration file, and stores that are reserved for " "use by the consuming service." msgstr "" "Added keyword argument to ``register_store_opts`` and " "``create_multi_stores`` calls to configure reserved stores by the consuming " "service. This feature will allow a mix of operator-configured stores via " "enabled_backends configuration option set in the [glance_store] section of " "the consuming service's configuration file, and stores that are reserved for " "use by the consuming service." msgid "" "Added support for cinder multiple stores. Operators can now configure " "multiple cinder stores by configuring a unique cinder_volume_type for each " "cinder store." msgstr "" "Added support for cinder multiple stores. Operators can now configure " "multiple cinder stores by configuring a unique cinder_volume_type for each " "Cinder store." msgid "" "Added support for extending in-use volumes in cinder store. A new boolean " "config option ``cinder_do_extend_attached`` is added which allows operators " "to enable/disable extending in-use volume support when creating an image. By " "default, ``cinder_do_extend_attached`` will be ``False`` i.e. old flow of " "detaching, extending and attaching will be used." msgstr "" "Added support for extending in-use volumes in Cinder store. 
A new boolean " "config option ``cinder_do_extend_attached`` is added which allows operators " "to enable/disable extending in-use volume support when creating an image. By " "default, ``cinder_do_extend_attached`` will be ``False`` i.e. old flow of " "detaching, extending and attaching will be used." msgid "" "Allow glance_store to refresh token when upload or download data to Swift " "store. glance_store identifies if token is going to expire soon when " "executing request to Swift and refresh the token. For multi-tenant swift " "store glance_store uses trusts, for single-tenant swift store glance_store " "uses credentials from swift store configurations. Please also note that this " "feature is enabled if and only if Keystone V3 API is available and enabled." msgstr "" "Allow glance_store to refresh token when upload or download data to Swift " "store. glance_store identifies if token is going to expire soon when " "executing request to Swift and refresh the token. For multi-tenant swift " "store glance_store uses trusts, for single-tenant swift store glance_store " "uses credentials from swift store configurations. Please also note that this " "feature is enabled if and only if Keystone V3 API is available and enabled." msgid "" "At the moment use of reserved stores is only limited to filesystem store " "driver. Also default ``filesystem_store_datadir`` path for these stores is " "set to ``/var/lib/glance/``, so with if you are using devstack " "for the deployment, you need to make sure you have appropriate permissions " "to create these reserved stores directories." msgstr "" "At the moment use of reserved stores is only limited to filesystem store " "driver. Also default ``filesystem_store_datadir`` path for these stores is " "set to ``/var/lib/glance/``, so with if you are using devstack " "for the deployment, you need to make sure you have appropriate permissions " "to create these reserved stores directories." msgid "" "Be aware that depending upon how the file system is configured, the disk " "space used for buffering may decrease the actual disk space available for " "the Glance image cache, which may affect overall performance." msgstr "" "Be aware that depending upon how the file system is configured, the disk " "space used for buffering may decrease the actual disk space available for " "the Glance image cache, which may affect overall performance." msgid "" "Bug 1606268_: Failure to upload to swift when keystone uses \"insecure\" SSL" msgstr "" "Bug 1606268_: Failure to upload to swift when keystone uses \"insecure\" SSL" msgid "Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+" msgstr "Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+" msgid "" "Bug 1619487 is fixed which was causing random order of the generation of " "configs in Glance. See ``upgrade`` section for more details." msgstr "" "Bug 1619487 is fixed which was causing random order of the generation of " "configs in Glance. See ``upgrade`` section for more details." 
msgid "Bug 1620214_: Sheepdog: command execution failure" msgstr "Bug 1620214_: Sheepdog: command execution failure" msgid "Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume" msgstr "Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume" msgid "" "Bug 1657710_: Unit test passes only because is launched as non-root user" msgstr "" "Bug 1657710_: Unit test passes only because is launched as non-root user" msgid "Bug 1668848_: PBR 2.0.0 will break projects not using constraints" msgstr "Bug 1668848_: PBR 2.0.0 will break projects not using constraints" msgid "Bug 1686063_: RBD driver can't delete image with unprotected snapshot" msgstr "Bug 1686063_: RBD driver can't delete image with unprotected snapshot" msgid "Bug 1691132_: Fixed tests failing due to updated oslo.config" msgstr "Bug 1691132_: Fixed tests failing due to updated oslo.config" msgid "Bug 1693670_: Fix doc generation for Python3" msgstr "Bug 1693670_: Fix doc generation for Python3" msgid "" "Bug 1733502_: Use cached auth_ref instead of getting a new one each time" msgstr "" "Bug 1733502_: Use cached auth_ref instead of getting a new one each time" msgid "Bug 1738331_: Fix BufferedReader writing zero size chunks" msgstr "Bug 1738331_: Fix BufferedReader writing zero size chunks" msgid "Bug 1764200_: Glance Cinder backed images & multiple regions" msgstr "Bug 1764200_: Glance Cinder backed images & multiple regions" msgid "Bug 1779455_: cinder backend causes BadRequest due to NULL mountpoint" msgstr "Bug 1779455_: cinder backend causes BadRequest due to NULL mountpoint" msgid "Bug 1784420_: Interface function for multihash not wrapped correctly" msgstr "Bug 1784420_: Interface function for multihash not wrapped correctly" msgid "Bug 1785641_: Fix Defaults for ConfigParser" msgstr "Bug 1785641_: Fix Defaults for ConfigParser" msgid "Bug 1808456_: Catch rdb NoSpace Exception" msgstr "Bug 1808456_: Catch rdb NoSpace Exception" msgid "Bug 1813092_: Fix some types in the FS and VMware drivers" msgstr "Bug 1813092_: Fix some types in the FS and VMware drivers" msgid "Bug 1815335_: Do not raise StopIteration" msgstr "Bug 1815335_: Do not raise StopIteration" msgid "Bug 1816721_: Fix python3 compatibility of rbd get_fsid" msgstr "Bug 1816721_: Fix python3 compatibility of rbd get_fsid" msgid "" "Bug 1820817_: Swift backend can not use custom CA bundle to verify server " "SSL certs when those are not added to global system certs" msgstr "" "Bug 1820817_: Swift backend can not use custom CA bundle to verify server " "SSL certs when those are not added to global system certs" msgid "Bug 1839778_: Python3 swift config quotes removal" msgstr "Bug 1839778_: Python3 swift config quotes removal" msgid "" "Bug 1863691_: When Image size greater than the chunk size and the glance " "buffered upload for swift is enabled, glance just put 0 bytes" msgstr "" "Bug 1863691_: When Image size greater than the chunk size and the glance " "buffered upload for swift is enabled, glance just put 0 bytes" msgid "" "Bug 1863983_: Image upload is failing with NoFibreChannelVolumeDeviceFound " "after configuring Cinder(HP3Par FC storage) as glance backend" msgstr "" "Bug 1863983_: Image upload is failing with NoFibreChannelVolumeDeviceFound " "after configuring Cinder(HP3Par FC storage) as Glance backend" msgid "" "Bug 1866966_: define mount_point for ``*fs`` drivers in glance cinder store" msgstr "" "Bug 1866966_: define mount_point for ``*fs`` drivers in glance cinder store" msgid "Bug 1885651_: swift_store_endpoint doesn't override keystone 
catalog" msgstr "Bug 1885651_: swift_store_endpoint doesn't override Keystone catalogue" msgid "Bug 1915602_: Cinder store: Use v3 API by default" msgstr "Bug 1915602_: Cinder store: Use v3 API by default" msgid "" "Bug 1926404_: HTTP 413 : Failed to add object to Swift. Got error from Swift" msgstr "" "Bug 1926404_: HTTP 413 : Failed to add object to Swift. Got error from Swift" msgid "Bug 1934849_: s3 backend takes time exponentially" msgstr "Bug 1934849_: s3 backend takes time exponentially" msgid "Bug 1954883_: [RBD] Image is unusable if deletion fails" msgstr "Bug 1954883_: [RBD] Image is unusable if deletion fails" msgid "Bug Fixes" msgstr "Bug Fixes" msgid "" "Cinder glance_store driver: in order to avoid a situation where a leftover " "device could be mapped to a different volume than the one intended, the " "cinder glance_store driver now instructs the os-brick library to force " "detach volumes, which ensures that devices are removed from the host." msgstr "" "Cinder glance_store driver: in order to avoid a situation where a leftover " "device could be mapped to a different volume than the one intended, the " "Cinder glance_store driver now instructs the os-brick library to force " "detach volumes, which ensures that devices are removed from the host." msgid "" "Consumers relying upon the EXPERIMENTAL behavior should not upgrade past " "version 0.29.1. Now that the ``multi_backend`` module is fully supported in " "release 1.0.0, it will not undergo any more backward-incompatible changes." msgstr "" "Consumers relying upon the EXPERIMENTAL behavior should not upgrade past " "version 0.29.1. Now that the ``multi_backend`` module is fully supported in " "release 1.0.0, it will not undergo any more backward-incompatible changes." msgid "" "Consuming services should begin the transition away from the ``glance_store." "backend`` module and instead use the ``glance_store.multi_backend`` module. " "The ``backend`` module is expected to be removed during the 'U' development " "cycle." msgstr "" "Consuming services should begin the transition away from the ``glance_store." "backend`` module and instead use the ``glance_store.multi_backend`` module. " "The ``backend`` module is expected to be removed during the 'U' development " "cycle." msgid "Critical Issues" msgstr "Critical Issues" msgid "Current Series Release Notes" msgstr "Current Series Release Notes" msgid "" "Default value of the ``cinder_catalog_info`` parameter has been changed from " "``volumev2::publicURL`` to ``volumev3::publicURL``, so that the current v3 " "API is used by default instead of the deprecated v2 API." msgstr "" "Default value of the ``cinder_catalog_info`` parameter has been changed from " "``volumev2::publicURL`` to ``volumev3::publicURL``, so that the current v3 " "API is used by default instead of the deprecated v2 API." msgid "" "Deployments which are using Ceph V2 clone feature (i.e. RBD backend for " "glance_store as well as cinder driver is RBD or nova is using RBD driver) " "and minimum ceph client version is greater than 'luminous' need to grant " "glance osd read access to the cinder and nova RBD pool." msgstr "" "Deployments which are using Ceph V2 clone feature (i.e. RBD backend for " "glance_store as well as Cinder driver is RBD or Nova is using the RBD " "driver) and minimum Ceph client version is greater than 'luminous' need to " "grant Glance OSD read access to the Cinder and Nova RBD pool." 
msgid "Deprecation Notes" msgstr "Deprecation Notes" msgid "Drop support for tempest-full" msgstr "Drop support for tempest-full" msgid "Droped support for python 2.7 and testing for the same." msgstr "Dropped support for Python 2.7 and testing for the same." msgid "" "During Rocky cycle number of issues still warranting experimental status on " "Cinder back-end driver was addressed. The team considers the driver stable " "and production ready from Rocky release onwards (0.26.0)." msgstr "" "During Rocky cycle number of issues still warranting experimental status on " "Cinder back-end driver was addressed. The team considers the driver stable " "and production ready from Rocky release onwards (0.26.0)." msgid "EXPERIMENTAL: Multiple back-end stores" msgstr "EXPERIMENTAL: Multiple back-end stores" msgid "" "Enabling this feature will also speed up image upload and save network " "traffic in addition to save space in the backend, as null bytes sequences " "are not sent over the network." msgstr "" "Enabling this feature will also speed up image upload and save network " "traffic in addition to save space in the backend, as null bytes sequences " "are not sent over the network." msgid "" "Fixed creating multiple instances/volumes from image if multiattach volumes " "are used." msgstr "" "Fixed creating multiple instances/volumes from image if multiattach volumes " "are used." msgid "" "Following bugs were fixed and included after 0.28.0 release: * Bug 1824533_: " "Do not include ETag when puting manifest in chunked uploads * Bug 1818915_: " "Python3: Fix return type on CooperativeReader.read * Bug 1805332_: Prevent " "unicode object error from zero-byte read" msgstr "" "Following bugs were fixed and included after 0.28.0 release: * Bug 1824533_: " "Do not include ETag when puting manifest in chunked uploads * Bug 1818915_: " "Python3: Fix return type on CooperativeReader.read * Bug 1805332_: Prevent " "unicode object error from zero-byte read" msgid "Following bugs were fixed and included after 1.0.1 release:" msgstr "Following bugs were fixed and included after 1.0.1 release:" msgid "" "For more details see [`bug 1820817 `_]." msgstr "" "For more details see [`bug 1820817 `_]." msgid "For more information, see the `Buffered Reader for Swift Driver`_ spec." msgstr "" "For more information, see the `Buffered Reader for Swift Driver`_ spec." msgid "" "For years, `/var/lib/glance/images` has been presented as the default dir " "for the filesystem store. It was not part of the default value until now. " "New deployments and ppl overriding config files should watch for this." msgstr "" "For years, `/var/lib/glance/images` has been presented as the default dir " "for the filesystem store. It was not part of the default value until now. " "New deployments and people overriding config files should watch for this." msgid "" "From glance_store release 0.26.0 onwards Cinder driver is no longer " "considered as experimental." msgstr "" "From glance_store release 0.26.0 onwards Cinder driver is no longer " "considered as experimental." msgid "Glance cinder store now supports handling of multiattach volumes." msgstr "Glance Cinder store now supports handling of multiattach volumes." msgid "Glance_store Release Notes" msgstr "Glance_store Release Notes" msgid "" "If using Swift in the multi-tenant mode for storing images in Glance, please " "note that the configuration options ``swift_store_multi_tenant`` and " "``swift_store_config_file`` are now mutually exclusive and cannot be " "configured together. 
If you intend to use multi-tenant store, please make " "sure that you have not set a swift configuration file." msgstr "" "If using Swift in the multi-tenant mode for storing images in Glance, please " "note that the configuration options ``swift_store_multi_tenant`` and " "``swift_store_config_file`` are now mutually exclusive and cannot be " "configured together. If you intend to use multi-tenant store, please make " "sure that you have not set a swift configuration file." msgid "" "Implemented S3 driver to use Amazon S3 or S3 compatible storage as Glance " "backend. This is a revival of the S3 driver supported up to Mitaka, with the " "addition of a multiple store support." msgstr "" "Implemented S3 driver to use Amazon S3 or S3 compatible storage as Glance " "backend. This is a revival of the S3 driver supported up to Mitaka, with the " "addition of a multiple store support." msgid "" "Implemented image uploading, downloading and deletion for cinder store. It " "also supports new settings to put image volumes into a specific project to " "hide them from users and to control them based on ACL of the images. Note " "that cinder store is currently considered experimental, so current deployers " "should be aware that the use of it in production right now may be risky." msgstr "" "Implemented image uploading, downloading and deletion for Cinder store. It " "also supports new settings to put image volumes into a specific project to " "hide them from users and to control them based on ACL of the images. Note " "that Cinder store is currently considered experimental, so current deployers " "should be aware that the use of it in production right now may be risky." msgid "" "Improved configuration options for glance_store. Please refer to the " "``other`` section for more information." msgstr "" "Improved configuration options for glance_store. Please refer to the " "``other`` section for more information." msgid "" "In this version, refactor was made how registering of filesystem " "configuration options for reserved stores works. Consumer just need to pass " "the key:value pair where key represents the name of the reserved store and " "value represents the actual store driver, to the glance_store." msgstr "" "In this version, refactor was made how registering of filesystem " "configuration options for reserved stores works. Consumer just need to pass " "the key:value pair where key represents the name of the reserved store and " "value represents the actual store driver, to the glance_store." msgid "Known Issues" msgstr "Known Issues" msgid "" "Legacy images will be moved to specific stores as per their current volume's " "type and the location URL will be updated respectively." msgstr "" "Legacy images will be moved to specific stores as per their current volume's " "type and the location URL will be updated respectively." msgid "Liberty Series Release Notes" msgstr "Liberty Series Release Notes" msgid "Mitaka Series Release Notes" msgstr "Mitaka Series Release Notes" msgid "" "Multiple backend stores may be configured using the ``glance_store." "multi_backend`` module. See the documentation of the " "``create_multi_stores`` function in the `glance_store Reference Guide " "`_ for details." msgstr "" "Multiple backend stores may be configured using the ``glance_store." "multi_backend`` module. See the documentation of the " "``create_multi_stores`` function in the `glance_store Reference Guide " "`_ for details." 
msgid "New Features" msgstr "New Features" msgid "Newton Series Release Notes" msgstr "Newton Series Release Notes" msgid "" "Now the ``project_domain_name`` parameter and the ``user_domain_name`` " "parameter are properly used by swift backends. Previously these two " "parameters were ignored and the ``*_domain_id`` parameters should be set to " "use a keystone domain different from the default one." msgstr "" "Now the ``project_domain_name`` parameter and the ``user_domain_name`` " "parameter are properly used by Swift backends. Previously these two " "parameters were ignored and the ``*_domain_id`` parameters should be set to " "use a Keystone domain different from the default one." msgid "Ocata Series Release Notes" msgstr "Ocata Series Release Notes" msgid "Other Notes" msgstr "Other Notes" msgid "" "Packagers should be aware that the rootwrap configuration files have been " "moved from etc/ to etc/glance/ in order to be consistent with where other " "projects place these files." msgstr "" "Packagers should be aware that the rootwrap configuration files have been " "moved from etc/ to etc/glance/ in order to be consistent with where other " "projects place these files." msgid "" "Partial refactoring of cinder driver of glance store to use cinderclient " "version 3 and some methods have been moved to class level rather than use " "them as module level." msgstr "" "Partial refactoring of Cinder driver of Glance store to use cinderclient " "version 3 and some methods have been moved to class level rather than use " "them as module level." msgid "Pike Series Release Notes" msgstr "Pike Series Release Notes" msgid "Prelude" msgstr "Prelude" msgid "" "Prevent Unauthorized errors during uploading or donwloading data to Swift " "store." msgstr "" "Prevent Unauthorised errors during uploading or downloading data to Swift " "store." msgid "" "Previously the VMWare Datastore was using HTTPS Connections from httplib " "which do not verify the connection. By switching to using requests library " "the VMware storage backend now verifies HTTPS connection to vCenter server " "and thus addresses the vulnerabilities described in OSSN-0033." msgstr "" "Previously the VMware Datastore was using HTTPS Connections from httplib " "which do not verify the connection. By switching to using requests library " "the VMware storage backend now verifies HTTPS connection to vCenter server " "and thus addresses the vulnerabilities described in OSSN-0033." msgid "" "Previously, during service startup, the check to validate volume types used " "to raise ``BackendException`` or ``BadStoreConfiguration`` exceptions when " "an invalid volume type was configured hence failing the service startup. It " "now logs a warning and the glance service starts normally." msgstr "" "Previously, during service startup, the check to validate volume types used " "to raise ``BackendException`` or ``BadStoreConfiguration`` exceptions when " "an invalid volume type was configured hence failing the service startup. It " "now logs a warning and the Glance service starts normally." msgid "" "Python 2.7 support has been dropped. Last release of glance_store to support " "py2.7 is OpenStack Train. The minimum version of Python now supported by " "glance_store is Python 3.6." msgstr "" "Python 2.7 support has been dropped. Last release of glance_store to support " "py2.7 is OpenStack Train. The minimum version of Python now supported by " "glance_store is Python 3.6." 
msgid "Queens Series Release Notes" msgstr "Queens Series Release Notes" msgid "" "RBD driver: the ``rados_connect_timeout`` config option has been un-" "deprecated and its behavior has been improved. A value of ``0`` is now " "respected as disabling timeout in requests, while a value less than zero " "indicates that glance_store will not set a timeout but instead will use " "whatever timeouts are set in the Ceph configuration file." msgstr "" "RBD driver: the ``rados_connect_timeout`` config option has been un-" "deprecated and its behaviour has been improved. A value of ``0`` is now " "respected as disabling timeout in requests, while a value less than zero " "indicates that glance_store will not set a timeout but instead will use " "whatever timeouts are set in the Ceph configuration file." msgid "" "RBD driver: the default value of the ``rados_connect_timeout`` option has " "been changed from 0 to -1, so that the RBD driver will by default use the " "timeout values defined in ``ceph.conf``. Be aware that setting this option " "to 0 disables timeouts (that is, the RBD driver will make requests with a " "timeout of zero, and all requests wait forever), thereby overriding any " "timeouts that are set in the Ceph configuration file." msgstr "" "RBD driver: the default value of the ``rados_connect_timeout`` option has " "been changed from 0 to -1, so that the RBD driver will by default use the " "timeout values defined in ``ceph.conf``. Be aware that setting this option " "to 0 disables timeouts (that is, the RBD driver will make requests with a " "timeout of zero, and all requests wait forever), thereby overriding any " "timeouts that are set in the Ceph configuration file." msgid "" "Removal of ``stores`` and ``default_store`` has been postponed until Train " "cycle to allow time to move multiple backends stores from being EXPERIMENTAL " "due to some unresolved issues with the feature." msgstr "" "Removal of ``stores`` and ``default_store`` has been postponed until Train " "cycle to allow time to move multiple backends stores from being EXPERIMENTAL " "due to some unresolved issues with the feature." msgid "" "Removal of the ``stores`` and ``default_store`` configuration options, which " "were deprecated in Rocky, has been postponed until during the Train " "development cycle." msgstr "" "Removal of the ``stores`` and ``default_store`` configuration options, which " "were deprecated in Rocky, has been postponed until during the Train " "development cycle." msgid "" "Return list of store drivers in sorted order for generating configs. More " "info in ``Upgrade Notes`` and ``Bug Fixes`` section." msgstr "" "Return list of store drivers in sorted order for generating configs. More " "info in ``Upgrade Notes`` and ``Bug Fixes`` section." msgid "Rocky Series Release Notes" msgstr "Rocky Series Release Notes" msgid "Security Issues" msgstr "Security Issues" msgid "" "See `Bug #2004555 `_ " "for more information about this issue." msgstr "" "See `Bug #2004555 `_ " "for more information about this issue." msgid "" "Set the ``glance_store`` configuration option ``swift_buffer_on_upload`` to " "``True``" msgstr "" "Set the ``glance_store`` configuration option ``swift_buffer_on_upload`` to " "``True``" msgid "" "Set the ``glance_store`` configuration option ``swift_upload_buffer_dir`` to " "a string value representing an absolute directory path. This directory will " "be used to hold the buffered data." 
msgstr "" "Set the ``glance_store`` configuration option ``swift_upload_buffer_dir`` to " "a string value representing an absolute directory path. This directory will " "be used to hold the buffered data." msgid "" "Some deprecated exceptions have been removed. See upgrade section for more " "details." msgstr "" "Some deprecated exceptions have been removed. See upgrade section for more " "details." msgid "Start using reno to manage release notes." msgstr "Start using reno to manage release notes." msgid "Stein Series Release Notes" msgstr "Stein Series Release Notes" msgid "" "Swift backend now can use custom CA bundle to verify SSL connection to " "Keystone without adding this bundle to global system ones. For this it re-" "uses the CA bundle specified as ``swift_store_cacert`` config option, so " "this bundle must verify both certificates of Swift and Keysotne API " "endpoints." msgstr "" "Swift backend now can use custom CA bundle to verify SSL connection to " "Keystone without adding this bundle to global system ones. For this it re-" "uses the CA bundle specified as ``swift_store_cacert`` config option, so " "this bundle must verify both certificates of Swift and Keystone API " "endpoints." msgid "" "The 'rados_connect_timeout' config option for the RBD store has been " "deprecated and will be removed in the future. It has been silently ignored " "for multiple releases. Users willing to set a timeout for the connection to " "the cluster can use Ceph's 'client_mount_timeout' option." msgstr "" "The 'rados_connect_timeout' config option for the RBD store has been " "deprecated and will be removed in the future. It has been silently ignored " "for multiple releases. Users willing to set a timeout for the connection to " "the cluster can use Ceph's 'client_mount_timeout' option." msgid "" "The 'stores' and 'default_store' configuration options have been deprecated " "for removal since the OpenStack Rocky release. They are subject to removal " "early in the 'U' development cycle. When these options are removed, the " "``glance_store.backend`` module, that depends on them, will be removed as " "well." msgstr "" "The 'stores' and 'default_store' configuration options have been deprecated " "for removal since the OpenStack Rocky release. They are subject to removal " "early in the 'U' development cycle. When these options are removed, the " "``glance_store.backend`` module, that depends on them, will be removed as " "well." msgid "" "The Buffered Reader works by taking advantage of the way Swift stores large " "objects by segmenting them into discrete chunks. Thus, the amount of disk " "space a Glance API node will require for buffering is a function of the " "``swift_store_large_object_chunk_size`` setting and the number of worker " "threads (configured in **glance-api.conf** as the value of ``workers``). " "Disk utilization will cap at the following value" msgstr "" "The Buffered Reader works by taking advantage of the way Swift stores large " "objects by segmenting them into discrete chunks. Thus, the amount of disk " "space a Glance API node will require for buffering is a function of the " "``swift_store_large_object_chunk_size`` setting and the number of worker " "threads (configured in **glance-api.conf** as the value of ``workers``). " "Disk utilisation will cap at the following value" msgid "" "The Glance Project Team is excited to announce the version 1.0.0 release of " "the glance_store library. 
This release marks the finalization of changes " "introduced on an experimental basis in previous releases beginning with " "0.25.0 to support the Glance `Multi-store backend support `_ feature." msgstr "" "The Glance Project Team is excited to announce the version 1.0.0 release of " "the glance_store library. This release marks the finalisation of changes " "introduced on an experimental basis in previous releases beginning with " "0.25.0 to support the Glance `Multi-store backend support `_ feature." msgid "" "The RBD driver now moves images to the trash if they cannot be deleted " "immediately due to having snapshots. This fixes the long-standing issue " "where base images are unable to be deleted until/unless all snapshots of it " "are also deleted. Moving the image to the trash allows Glance to proceed " "with the deletion of the image (as far as it is concerned), mark the RBD " "image for deletion, which will happen once the last snapshot that uses it " "has been deleted." msgstr "" "The RBD driver now moves images to the Rubbish Bin if they cannot be deleted " "immediately due to having snapshots. This fixes the long-standing issue " "where base images are unable to be deleted until/unless all snapshots of it " "are also deleted. Moving the image to the Rubbish Bin allows Glance to " "proceed with the deletion of the image (as far as it is concerned), mark the " "RBD image for deletion, which will happen once the last snapshot that uses " "it has been deleted." msgid "" "The Rocky release of glance_store contains support for computing secure hash " "values of stored data, but the function called by Glance to store data was " "not wrapped correctly, thereby making the computed secure hash value " "unavailable to Glance." msgstr "" "The Rocky release of glance_store contains support for computing secure hash " "values of stored data, but the function called by Glance to store data was " "not wrapped correctly, thereby making the computed secure hash value " "unavailable to Glance." msgid "" "The S3 driver has been removed completely from the glance_store source tree. " "All environments running and (or) using this s3-driver piece of code and " "have not been migrated will stop working after the upgrade. We recommend you " "use a different storage backend that is still being supported by Glance. The " "standard deprecation path has been used to remove this. The proces requiring " "store driver maintainers was initiated at http://lists.openstack.org/" "pipermail/openstack-dev/2015-December/081966.html . Since, S3 driver did not " "get any maintainer, it was decided to remove it." msgstr "" "The S3 driver has been removed completely from the glance_store source tree. " "All environments running and (or) using this s3-driver piece of code and " "have not been migrated will stop working after the upgrade. We recommend you " "use a different storage backend that is still being supported by Glance. The " "standard deprecation path has been used to remove this. The process " "requiring store driver maintainers was initiated at http://lists.openstack." "org/pipermail/openstack-dev/2015-December/081966.html . Since, S3 driver did " "not get any maintainer, it was decided to remove it." msgid "" "The Sheepdog driver is deprecated in this release and is subject to removal " "at the beginning of the 'U' development cycle, following the `OpenStack " "standard deprecation policy `_." 
msgstr "" "The Sheepdog driver is deprecated in this release and is subject to removal " "at the beginning of the 'U' development cycle, following the `OpenStack " "standard deprecation policy `_." msgid "" "The VMWare Datastore has been deprecated. The vmwareapi virt driver in nova " "was marked as experimental due to lack of CI and maintainers and it may be " "removed in a future release." msgstr "" "The VMWare Datastore has been deprecated. The vmwareapi Virt driver in Nova " "was marked as experimental due to lack of CI and maintainers and it may be " "removed in a future release." msgid "" "The ``store_capabilities_update_min_interval`` configuration option, " "deprecated since the Rocky release, has been removed. The option configured " "a capability that was not implemented by any glance_store drivers. Thus its " "removal will have no impact on any deployments." msgstr "" "The ``store_capabilities_update_min_interval`` configuration option, " "deprecated since the Rocky release, has been removed. The option configured " "a capability that was not implemented by any glance_store drivers. Thus its " "removal will have no impact on any deployments." msgid "" "The driver is being removed because `Sheepdog is not maintained upstream " "`_. " "Additionally, the Sheepdog driver is no longer tested in the OpenStack gate." msgstr "" "The driver is being removed because `Sheepdog is not maintained upstream " "`_. " "Additionally, the Sheepdog driver is no longer tested in the OpenStack gate." msgid "" "The filesystem driver is now using a configurable chunk size. Increasing it " "may avoid bottlenecks." msgstr "" "The filesystem driver is now using a configurable chunk size. Increasing it " "may avoid bottlenecks." msgid "The following bugs were fixed during the Pike release cycle." msgstr "The following bugs were fixed during the Pike release cycle." msgid "The following improvements were made during the Pike release cycle." msgstr "The following improvements were made during the Pike release cycle." msgid "The following improvements were made during the Ussuri release cycle:" msgstr "The following improvements were made during the Ussuri release cycle:" msgid "" "The following list of exceptions have been deprecated since 0.10.0 release " "-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``. This release removes these exceptions so " "any remnant consumption of the same must be avoided/removed." msgstr "" "The following list of exceptions have been deprecated since 0.10.0 release " "-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``. This release removes these exceptions so " "any remnant consumption of the same must be avoided/removed." msgid "" "The function is replaced by ``store_add_to_backend_with_multihash``, which " "is a similar wrapper, but which takes an additional argument allowing a " "caller to specify an secure hashing algorithm. The hexdigest of this " "algorithm is returned as one of the multiple values returned by the " "function. 
The function also returns the md5 checksum for backward " "compatability." msgstr "" "The function is replaced by ``store_add_to_backend_with_multihash``, which " "is a similar wrapper, but which takes an additional argument allowing a " "caller to specify an secure hashing algorithm. The hexdigest of this " "algorithm is returned as one of the multiple values returned by the " "function. The function also returns the md5 checksum for backward " "compatibility." msgid "" "The glance_store configuration option " "``store_capabilities_update_min_interval`` is deprecated in this release and " "is subject to removal at the beginning of the Stein development cycle, " "following the `OpenStack standard deprecation policy `_." msgstr "" "The glance_store configuration option " "``store_capabilities_update_min_interval`` is deprecated in this release and " "is subject to removal at the beginning of the Stein development cycle, " "following the `OpenStack standard deprecation policy `_." msgid "" "The glance_store configuration options have been improved with detailed help " "texts, defaults for sample configuration files, explicit choices of values " "for operators to choose from, and a strict range defined with ``min`` and " "``max`` boundaries. It is to be noted that the configuration options that " "take integer values now have a strict range defined with \"min\" and/or \"max" "\" boundaries where appropriate. This renders the configuration options " "incapable of taking certain values that may have been accepted before but " "were actually invalid. For example, configuration options specifying counts, " "where a negative value was undefined, would have still accepted the supplied " "negative value. Such options will no longer accept negative values. However, " "options where a negative value was previously defined (for example, -1 to " "mean unlimited) will remain unaffected by this change. Values that do not " "comply with the appropriate restrictions will prevent the service from " "starting. The logs will contain a message indicating the problematic " "configuration option and the reason why the supplied value has been rejected." msgstr "" "The glance_store configuration options have been improved with detailed help " "texts, defaults for sample configuration files, explicit choices of values " "for operators to choose from, and a strict range defined with ``min`` and " "``max`` boundaries. It is to be noted that the configuration options that " "take integer values now have a strict range defined with \"min\" and/or \"max" "\" boundaries where appropriate. This renders the configuration options " "incapable of taking certain values that may have been accepted before but " "were actually invalid. For example, configuration options specifying counts, " "where a negative value was undefined, would have still accepted the supplied " "negative value. Such options will no longer accept negative values. However, " "options where a negative value was previously defined (for example, -1 to " "mean unlimited) will remain unaffected by this change. Values that do not " "comply with the appropriate restrictions will prevent the service from " "starting. The logs will contain a message indicating the problematic " "configuration option and the reason why the supplied value has been rejected." 
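
# As an illustration of the bounded integer options described above (a
# sketch only; the option name, default, and bounds are invented, not
# glance_store's actual definitions), an oslo.config option with min/max
# constraints is declared like this:
#
#     from oslo_config import cfg
#
#     opts = [
#         cfg.IntOpt('example_worker_count',
#                    default=4,
#                    min=1,    # values below 1 now abort service startup
#                    max=128,  # values above 128 now abort service startup
#                    help='Illustrative bounded configuration option.'),
#     ]
#
#     cfg.CONF.register_opts(opts, group='glance_store')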
msgid "" "The glance_store function ``store_add_to_backend``, which is a wrapper " "around each store's ``add()`` method, is deprecated in this release and is " "subject to removal at the beginning of the Stein development cycle, " "following the `OpenStack standard deprecation policy `_." msgstr "" "The glance_store function ``store_add_to_backend``, which is a wrapper " "around each store's ``add()`` method, is deprecated in this release and is " "subject to removal at the beginning of the Stein development cycle, " "following the `OpenStack standard deprecation policy `_." msgid "" "The gridfs driver has been removed from the tree. The environments using " "this driver that were not migrated will stop working after the upgrade." msgstr "" "The gridfs driver has been removed from the tree. The environments using " "this driver that were not migrated will stop working after the upgrade." msgid "" "The option configures a stub method that has not been implemented for any " "existing store drivers. Hence it is non-operational. Given that it has " "*never* been operational, it will not be missed. Its presence is confusing " "to operators and thus it is hereby deprecated for removal." msgstr "" "The option configures a stub method that has not been implemented for any " "existing store drivers. Hence it is non-operational. Given that it has " "*never* been operational, it will not be missed. Its presence is confusing " "to operators and thus it is hereby deprecated for removal." msgid "" "The responses from some functions in the ``glance_store.multi_backend`` " "module, which was EXPERIMENTAL until this release, have changed. In " "particular, the ``glance_store.driver.Store.add`` function which returns a " "tuple whose last element is a dictionary of storage system specific " "information, no longer contains a 'backend' key. Instead, this key is named " "'store'. This change extends to any convenience functions that wrap ``Store." "add``." msgstr "" "The responses from some functions in the ``glance_store.multi_backend`` " "module, which was EXPERIMENTAL until this release, have changed. In " "particular, the ``glance_store.driver.Store.add`` function which returns a " "tuple whose last element is a dictionary of storage system specific " "information, no longer contains a 'backend' key. Instead, this key is named " "'store'. This change extends to any convenience functions that wrap ``Store." "add``." msgid "This release adds support for Glance multihash computation." msgstr "This release adds support for Glance multihash computation." msgid "" "This release adds support for handling cinder's multiattach volumes in " "glance cinder store." msgstr "" "This release adds support for handling cinder's multiattach volumes in " "Glance Cinder store." msgid "" "This release contains the base work for multiple back-ends changing how the " "back-ends get configured. Please note that in Rocky release the work is " "still experimental. Thus it's not advised to utilize the new configs in " "production environments before Stein release even though old config options " "are deprecated for removal." msgstr "" "This release contains the base work for multiple back-ends changing how the " "back-ends get configured. Please note that in Rocky release the work is " "still experimental. Thus it's not advised to utilise the new configs in " "production environments before Stein release even though old config options " "are deprecated for removal." 
msgid "" "This version of glance_store will result in Glance generating the configs in " "a sorted (deterministic) order. So, preferably store releases on or after " "this should be used for generating any new configs if the mismatched " "ordering of the configs results in an issue in your environment." msgstr "" "This version of glance_store will result in Glance generating the configs in " "a sorted (deterministic) order. So, preferably store releases on or after " "this should be used for generating any new configs if the mismatched " "ordering of the configs results in an issue in your environment." msgid "" "This was a quiet development cycle for the ``glance_store`` library. No new " "features were added. Several bugs were fixed and some code changes were " "committed to increase stability." msgstr "" "This was a quiet development cycle for the ``glance_store`` library. No new " "features were added. Several bugs were fixed and some code changes were " "committed to increase stability." msgid "" "This was a quiet development cycle for the ``glance_store`` library. One new " "feature was added to the Filesystem store driver. Several bugs were fixed " "and some code changes were committed to increase stability." msgstr "" "This was a quiet development cycle for the ``glance_store`` library. One new " "feature was added to the Filesystem store driver. Several bugs were fixed " "and some code changes were committed to increase stability." msgid "" "This was a quiet development cycle for the ``glance_store`` library. One new " "feature was added to the Swift store driver. Several bugs were fixed and " "some code changes were committed to increase stability." msgstr "" "This was a quiet development cycle for the ``glance_store`` library. One new " "feature was added to the Swift store driver. Several bugs were fixed and " "some code changes were committed to increase stability." msgid "" "This was a quiet development cycle for the ``glance_store`` library. Several " "bugs were fixed and some code changes were committed to increase stability." msgstr "" "This was a quiet development cycle for the ``glance_store`` library. Several " "bugs were fixed and some code changes were committed to increase stability." msgid "To use the new reader with the Swift store, you must do the following:" msgstr "To use the new reader with the Swift store, you must do the following:" msgid "Train Series Release Notes" msgstr "Train Series Release Notes" msgid "" "Two new configuration options, ``swift_buffer_on_upload`` and " "``swift_upload_buffer_dir`` have been introduced. These apply only to users " "of the Swift store and their use is optional. See the New Features section " "for more information." msgstr "" "Two new configuration options, ``swift_buffer_on_upload`` and " "``swift_upload_buffer_dir`` have been introduced. These apply only to users " "of the Swift store and their use is optional. See the New Features section " "for more information." 
msgid "Upgrade Notes" msgstr "Upgrade Notes" msgid "Ussuri Series Release Notes" msgstr "Ussuri Series Release Notes" msgid "Victoria Series Release Notes" msgstr "Victoria Series Release Notes" msgid "Wallaby Series Release Notes" msgstr "Wallaby Series Release Notes" msgid "Xena Series Release Notes" msgstr "Xena Series Release Notes" msgid "Yoga Series Release Notes" msgstr "Yoga Series Release Notes" msgid "Zed Series Release Notes" msgstr "Zed Series Release Notes" msgid "" "`Add python 3.5 in classifier and envlist `_" msgstr "" "`Add python 3.5 in classifier and envlist `_" msgid "" "`Bug #1901138 `_: " "Blocked creation of images when glance store is cinder, cinder backend is " "nfs and volumes created are qcow2 format." msgstr "" "`Bug #1901138 `_: " "Blocked creation of images when Glance store is Cinder, Cinder backend is " "NFS and volumes created are qcow2 format." msgid "`Bug #1904546 `_:" msgstr "`Bug #1904546 `_:" msgid "" "`Bug #1915163 `_: " "Added handling to log and raise proper exception during image create when an " "invalid volume type is configured." msgstr "" "`Bug #1915163 `_: " "Added handling to log and raise proper exception during image create when an " "invalid volume type is configured." msgid "" "`Bug #1955668 `_: " "Fixed issue with glance cinder store passing hostname instead of IP address " "to os-brick while getting connector information." msgstr "" "`Bug #1955668 `_: " "Fixed issue with glance cinder store passing hostname instead of IP address " "to os-brick while getting connector information." msgid "" "`Bug #1959913 `_: " "Added wait between the volume being extended and the new size being detected " "while opening the volume device." msgstr "" "`Bug #1959913 `_: " "Added wait between the volume being extended and the new size being detected " "while opening the volume device." msgid "" "`Bug #1969373 `_: " "Cinder Driver: Correct the retry interval from fixed 1 second to exponential " "backoff for attaching a volume during image create/save operation." msgstr "" "`Bug #1969373 `_: " "Cinder Driver: Correct the retry interval from fixed 1 second to exponential " "backoff for attaching a volume during image create/save operation." msgid "" "`Bug #1970698 `_: " "Cinder: Fixed exception logging when the image create operation fails due to " "failing to attach volume to glance host." msgstr "" "`Bug #1970698 `_: " "Cinder: Fixed exception logging when the image create operation fails due to " "failing to attach the volume to a Glance host." msgid "" "`Bug #2000584 `_: " "Fixed image create with cinder NFS store when using sparse volumes." msgstr "" "`Bug #2000584 `_: " "Fixed image create with Cinder NFS store when using sparse volumes." msgid "" "`Bug #2056179 `_: " "Cinder Store: Fix issue when updating legacy image location. Previously we " "only used the user context's credentials to make request to cinder which we " "have now updated to use the service credentials configured in the config " "file else use the user context's credentials." msgstr "" "`Bug #2056179 `_: " "Cinder Store: Fix issue when updating legacy image location. Previously we " "only used the user context's credentials to make requests to Cinder which we " "have now updated to use the service credentials configured in the config " "file else use the user context's credentials." 
msgid "" "`Correct error msg variable that could be unassigned `_" msgstr "" "`Correct error msg variable that could be unassigned `_" msgid "" "`Documentation was reorganized according to the new standard layout `_" msgstr "" "`Documentation was reorganised according to the new standard layout `_" msgid "" "`Fixed string formatting in log message `_" msgstr "" "`Fixed string formatting in log message `_" msgid "" "`Initialize privsep root_helper command `_" msgstr "" "`Initialize privsep root_helper command `_" msgid "" "`Replace six.iteritems() with .items() `_" msgstr "" "`Replace six.iteritems() with .items() `_" msgid "" "`Use HostAddressOpt for store opts that accept IP and hostnames `_" msgstr "" "`Use HostAddressOpt for store opts that accept IP and hostnames `_" msgid "" "glance_store 0.29.0 was released with backwards incompatible changes. There " "was no corresponding releasenote to mention this. 0.29.1 has reverted said " "change which will be included to 1.0.0 release later on the cycle." msgstr "" "glance_store 0.29.0 was released with backwards incompatible changes. There " "was no corresponding releasenote to mention this. 0.29.1 has reverted said " "change which will be included to 1.0.0 release later on the cycle." msgid "glance_store._drivers.gridfs" msgstr "glance_store._drivers.gridfs" msgid "glance_store._drivers.s3 removed from tree." msgstr "glance_store._drivers.s3 removed from tree." msgid "swift_store_large_object_chunk_size * workers * 1000" msgstr "swift_store_large_object_chunk_size * workers * 1000" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1724254696.1612084 glance_store-4.8.1/releasenotes/source/locale/zh_CN/0000775000175000017500000000000000000000000022445 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.197218 glance_store-4.8.1/releasenotes/source/locale/zh_CN/LC_MESSAGES/0000775000175000017500000000000000000000000024232 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/locale/zh_CN/LC_MESSAGES/releasenotes.po0000664000175000017500000000456400000000000027274 0ustar00zuulzuul00000000000000# zzxwill , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: Glance_store Release Notes\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2018-02-28 18:24+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-08-23 02:05+0000\n" "Last-Translator: zzxwill \n" "Language-Team: Chinese (China)\n" "Language: zh_CN\n" "X-Generator: Zanata 4.3.3\n" "Plural-Forms: nplurals=1; plural=0\n" msgid "0.11.0" msgstr "0.11.0" msgid "0.12.0" msgstr "0.12.0" msgid "0.16.0" msgstr "0.16.0" msgid "0.17.0" msgstr "0.17.0" msgid "Current Series Release Notes" msgstr "当前版本发布说明" msgid "Deprecation Notes" msgstr "弃用说明" msgid "Glance_store Release Notes" msgstr "Glance_store发布说明" msgid "Liberty Series Release Notes" msgstr "Liberty版本发布说明" msgid "Mitaka Series Release Notes" msgstr "Mitaka 版本发布说明" msgid "New Features" msgstr "新特性" msgid "Other Notes" msgstr "其他说明" msgid "Security Issues" msgstr "安全问题" msgid "Start using reno to manage release notes." 
msgstr "开始使用reno管理发布说明。" msgid "" "The following list of exceptions have been deprecated since 0.10.0 release " "-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``. This release removes these exceptions so " "any remnant consumption of the same must be avoided/removed." msgstr "" "以下的异常列表自0.10.0版本后已经弃用了 ——``Conflict``, " "``ForbiddenPublicImage`` ``ProtectedImageDelete``, " "``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, " "``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, " "``ImageDataNotFound``, ``InvalidParameterValue``, " "``InvalidImageStatusTransition``。该版本移除了这些异常,所以任何遗留的相同的" "使用方式必须避免或去掉。" msgid "Upgrade Notes" msgstr "升级说明" msgid "glance_store._drivers.gridfs" msgstr "glance_store._drivers.gridfs" msgid "glance_store._drivers.s3 removed from tree." msgstr "glance_store._drivers.s3从树上移除了。" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/mitaka.rst0000664000175000017500000000023200000000000022202 0ustar00zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/newton.rst0000664000175000017500000000023200000000000022246 0ustar00zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. release-notes:: :branch: origin/stable/newton ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/ocata.rst0000664000175000017500000000023000000000000022021 0ustar00zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/pike.rst0000664000175000017500000000021700000000000021667 0ustar00zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/queens.rst0000664000175000017500000000022300000000000022234 0ustar00zuulzuul00000000000000=================================== Queens Series Release Notes =================================== .. release-notes:: :branch: stable/queens ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/rocky.rst0000664000175000017500000000022100000000000022061 0ustar00zuulzuul00000000000000=================================== Rocky Series Release Notes =================================== .. 
release-notes:: :branch: stable/rocky ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/stein.rst0000664000175000017500000000022100000000000022054 0ustar00zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/train.rst0000664000175000017500000000017600000000000022060 0ustar00zuulzuul00000000000000========================== Train Series Release Notes ========================== .. release-notes:: :branch: stable/train ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/unreleased.rst0000664000175000017500000000016000000000000023063 0ustar00zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/ussuri.rst0000664000175000017500000000020200000000000022263 0ustar00zuulzuul00000000000000=========================== Ussuri Series Release Notes =========================== .. release-notes:: :branch: stable/ussuri ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/victoria.rst0000664000175000017500000000020700000000000022556 0ustar00zuulzuul00000000000000============================= Victoria Series Release Notes ============================= .. release-notes:: :branch: victoria-eom ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/wallaby.rst0000664000175000017500000000020300000000000022365 0ustar00zuulzuul00000000000000============================ Wallaby Series Release Notes ============================ .. release-notes:: :branch: wallaby-eom ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/xena.rst0000664000175000017500000000016700000000000021676 0ustar00zuulzuul00000000000000========================= Xena Series Release Notes ========================= .. release-notes:: :branch: xena-eom ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/yoga.rst0000664000175000017500000000016700000000000021702 0ustar00zuulzuul00000000000000========================= Yoga Series Release Notes ========================= .. release-notes:: :branch: yoga-eom ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/releasenotes/source/zed.rst0000664000175000017500000000016300000000000021521 0ustar00zuulzuul00000000000000======================== Zed Series Release Notes ======================== .. 
release-notes:: :branch: zed-eom ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/requirements.txt0000664000175000017500000000060000000000000017474 0ustar00zuulzuul00000000000000oslo.config>=5.2.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.utils>=4.7.0 # Apache-2.0 oslo.concurrency>=3.26.0 # Apache-2.0 stevedore>=1.20.0 # Apache-2.0 eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT jsonschema>=3.2.0 # MIT keystoneauth1>=3.4.0 # Apache-2.0 python-keystoneclient>=3.8.0 # Apache-2.0 requests>=2.14.2 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1724254696.201219 glance_store-4.8.1/setup.cfg0000664000175000017500000000454200000000000016042 0ustar00zuulzuul00000000000000[metadata] name = glance_store summary = OpenStack Image Service Store Library description_file = README.rst author = OpenStack author_email = openstack-discuss@lists.openstack.org home_page = https://docs.openstack.org/glance_store/latest/ python_requires = >=3.8 classifier = Development Status :: 5 - Production/Stable Environment :: OpenStack Intended Audience :: Developers Intended Audience :: Information Technology License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: 3 :: Only Programming Language :: Python :: 3 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: 3.11 [files] packages = glance_store data_files = etc/glance = etc/glance/rootwrap.conf etc/glance/rootwrap.d = etc/glance/rootwrap.d/glance_cinder_store.filters [entry_points] glance_store.drivers = file = glance_store._drivers.filesystem:Store http = glance_store._drivers.http:Store swift = glance_store._drivers.swift:Store rbd = glance_store._drivers.rbd:Store cinder = glance_store._drivers.cinder:Store vmware = glance_store._drivers.vmware_datastore:Store s3 = glance_store._drivers.s3:Store no_conf = glance_store.tests.fakes:UnconfigurableStore glance.store.filesystem.Store = glance_store._drivers.filesystem:Store glance.store.http.Store = glance_store._drivers.http:Store glance.store.swift.Store = glance_store._drivers.swift:Store glance.store.rbd.Store = glance_store._drivers.rbd:Store glance.store.cinder.Store = glance_store._drivers.cinder:Store glance.store.vmware_datastore.Store = glance_store._drivers.vmware_datastore:Store glance.store.s3.Store = glance_store._drivers.s3:Store oslo.config.opts = glance.store = glance_store.backend:_list_opts glance.multi_store = glance_store.multi_backend:_list_config_opts console_scripts = glance-rootwrap = oslo_rootwrap.cmd:main [extras] vmware = oslo.vmware>=3.6.0 # Apache-2.0 swift = python-swiftclient>=3.2.0 # Apache-2.0 cinder = python-cinderclient>=4.1.0 # Apache-2.0 os-brick>=6.3.0 # Apache-2.0 oslo.rootwrap>=5.8.0 # Apache-2.0 oslo.privsep>=1.23.0 # Apache-2.0 s3 = boto3>=1.9.199 # Apache-2.0 [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1724254667.0 glance_store-4.8.1/setup.py0000664000175000017500000000127100000000000015727 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. 
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import setuptools

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)

glance_store-4.8.1/test-requirements.txt

hacking>=6.1.0,<6.2.0 # Apache-2.0

# Documentation style
doc8>=0.6.0 # Apache-2.0

# Packaging

# Unit testing
coverage!=4.4,>=4.0 # Apache-2.0
ddt>=1.4.4 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
python-subunit>=1.0.0 # Apache-2.0/BSD
requests-mock>=1.2.0 # Apache-2.0
retrying>=1.3.3
stestr>=2.0.0 # Apache-2.0
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
oslotest>=3.2.0 # Apache-2.0

# Dependencies for each of the optional stores
boto3>=1.9.199 # Apache-2.0
oslo.vmware>=3.6.0 # Apache-2.0
httplib2>=0.9.1 # MIT
python-swiftclient>=3.2.0 # Apache-2.0
python-cinderclient>=4.1.0 # Apache-2.0
os-brick>=2.6.0 # Apache-2.0
oslo.rootwrap>=5.8.0 # Apache-2.0
oslo.privsep>=1.23.0 # Apache-2.0

glance_store-4.8.1/tools/with_venv.sh

#!/bin/bash
TOOLS_PATH=${TOOLS_PATH:-$(dirname $0)}
VENV_PATH=${VENV_PATH:-${TOOLS_PATH}}
VENV_DIR=${VENV_NAME:-/../.venv}
TOOLS=${TOOLS_PATH}
VENV=${VENV:-${VENV_PATH}/${VENV_DIR}}
source ${VENV}/bin/activate && "$@"

glance_store-4.8.1/tox.ini

[tox]
minversion = 3.1.1
envlist = py39,py38,pep8
ignore_basepython_conflict = True

[testenv]
basepython = python3
setenv =
    VIRTUAL_ENV={envdir}
usedevelop = True
deps =
    -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
passenv = OS_TEST_*
commands = stestr run --slowest {posargs}

[testenv:docs]
deps =
    -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -W -b html doc/source doc/build/html

[testenv:releasenotes]
deps =
    -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/doc/requirements.txt
commands =
    sphinx-build -a -E -W -d releasenotes/build/.doctrees -b html releasenotes/source releasenotes/build/html

[testenv:pep8]
commands =
    flake8 {posargs}
    doc8 {posargs}

[testenv:cover]
setenv =
    PYTHON=coverage run --source glance_store --parallel-mode
commands =
    stestr run {posargs}
    coverage combine
    coverage html -d cover
    coverage xml -o cover/coverage.xml

[testenv:venv]
commands = {posargs}

# See glance_store/tests/functional/README.rst for information on writing or
# running functional tests.
[testenv:functional-swift]
sitepackages = True
commands =
    stestr run --slowest --test-path=./glance_store/tests/functional/swift

[testenv:functional-filesystem]
commands =
    stestr run --slowest --test-path=./glance_store/tests/functional/filesystem

[doc8]
ignore-path = .venv,.git,.tox,*glance_store/locale*,*lib/python*,glance_store.egg*,doc/build,*requirements.txt

[flake8]
# TODO(dmllr): Analyze or fix the warnings blacklisted below
# H301  one import per line
# H404  multi line docstring should start with a summary
# H405  multi line docstring summary not separated with an empty line
# W503  line break before binary operator
# W504  line break after binary operator
ignore = H301,H404,H405,W503,W504
exclude = .venv,.git,.tox,dist,doc,etc,*glance_store/locale*,*lib/python*,*egg,build